[Dovecot] A new director service in v2.0 for NFS installations
As http://wiki.dovecot.org/NFS describes, the main problem with NFS has always been caching: one NFS client changes two files, but another NFS client sees only one of the changes, which Dovecot then assumes is caused by corruption.
The recommended solution has always been to redirect the same user to only a single server at a time. The user doesn't have to be permanently assigned there, but as long as a server has some of the user's files cached, it should be the only server accessing that user's mailbox. Recently I was thinking about a way to make this possible with an SQL database: http://dovecot.org/list/dovecot/2010-January/046112.html
The company here in Italy didn't really like such an idea, so I thought about making it more transparent and simpler to manage. The result is a new "director" service, which does basically the same thing, except without the SQL database. The idea is that your load balancer can redirect connections to one or more Dovecot proxies, which internally then figure out where the user should go. So the proxies act kind of like a secondary load balancer layer.
When a connection from a newly seen user arrives, it gets assigned to a mail server according to a function:
host = vhosts[ md5(username) mod vhosts_count ]
This way all of the proxies assign the same user to the same host without having to talk to each other. The vhosts[] is basically an array of hosts, except each host is initially listed there 100 times (vhost count=100). This vhost count can then be increased or decreased as necessary to change the host's load, probably automatically in the future.
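For illustration only, the assignment looks roughly like this in pseudo-Python (not the actual director code; in particular, how the md5 digest gets reduced to an integer here is just an assumption):

import hashlib

def build_vhosts(hosts):
    # hosts: {ip: vhost_count}; each host appears vhost_count times in the array
    vhosts = []
    for ip, count in hosts.items():
        vhosts.extend([ip] * count)
    return vhosts

def assign_host(username, vhosts):
    digest = hashlib.md5(username.encode()).digest()
    return vhosts[int.from_bytes(digest, "big") % len(vhosts)]

vhosts = build_vhosts({"11.22.3.44": 100, "12.33.4.55": 50})
print(assign_host("someuser@example.com", vhosts))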
The problem is then of course that if (v)hosts are added or removed, the above function will return a different host than was previously used for the same user. That's why there is also an in-memory database that keeps track of username -> (hostname, timestamp) mappings. Every new connection from a user refreshes the timestamp. Existing connections also refresh the timestamp every n minutes. Once all connections are gone, the timestamp expires and the user is removed from the database.
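A toy model of that table (again only an illustration of the idea; the real timeout value and data structures are director internals):

import time

EXPIRE_SECS = 15 * 60          # made-up value, the text above only says "n minutes"
user_map = {}                  # username -> (host, last_seen)

def lookup_or_assign(username, hash_assign):
    # hash_assign is the hash function from the previous sketch
    now = time.time()
    entry = user_map.get(username)
    if entry and now - entry[1] < EXPIRE_SECS:
        host = entry[0]        # user is still "sticky" to its earlier host
    else:
        host = hash_assign(username)
    user_map[username] = (host, now)   # every connection refreshes the timestamp
    return host

def expire_idle_users():
    # called periodically: once all connections are gone and the timestamp is
    # old enough, the user is dropped from the table
    cutoff = time.time() - EXPIRE_SECS
    for username, (_, last_seen) in list(user_map.items()):
        if last_seen < cutoff:
            del user_map[username]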
The final problem then is how multiple proxies synchronize their state. The proxies connect to each other, forming a connection ring. For example, with four proxies the connections go A -> B -> C -> D -> A. Each time a user is added/refreshed, a notification is sent in both directions in the ring (e.g. B sends to A and C), and the receivers in turn forward it until it reaches a server that has already seen it. This way if a proxy dies (or just hangs for a few seconds), the other proxies still get the changes without waiting for it to time out. Host changes are replicated in the same way.
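A rough simulation of that forwarding rule (illustrative only; the node names and notification ids are made up):

class Director:
    def __init__(self, name):
        self.name = name
        self.seen = set()               # notification ids already applied
        self.left = self.right = None   # ring neighbours

    def receive(self, notify_id, payload, sender=None):
        if notify_id in self.seen:
            return                      # already seen it: stop forwarding
        self.seen.add(notify_id)
        print(f"{self.name} applied {payload}")
        for neighbour in (self.left, self.right):
            if neighbour is not sender:
                neighbour.receive(notify_id, payload, self)

# Four directors in a ring: A -> B -> C -> D -> A
ring = [Director(n) for n in "ABCD"]
for i, node in enumerate(ring):
    node.right = ring[(i + 1) % len(ring)]
    node.left = ring[(i - 1) % len(ring)]

# B sees a new user and sends the notification both ways around the ring.
ring[1].receive("user-added-1", ("someuser@example.com", "11.22.3.44"))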
It's possible that two connections from a user arrive at different proxies while (v)hosts are being added or removed. It's also possible that only one of the proxies has seen the host change. So the proxies could redirect the same user to different servers during that time. This can be prevented by doing a ring-wide sync, during which all proxies delay assigning hostnames to new users. This delay shouldn't be too bad because a) syncs should happen rarely, b) they should be over quickly, and c) users already in the database can still be redirected during the sync.
The main complexity here comes from handling proxy server failures in different situations. Those are less interesting to describe and I haven't implemented all of it yet, so let's just assume that in the future it all works perfectly. :) I was also thinking about writing a test program to simulate director failures to make sure it all works.
Finally, there are the doveadm commands that can be used to:
- List the director status:
doveadm director status
mail server ip  vhosts  users
11.22.3.44         100   1312
12.33.4.55          50   1424
- Add a new mail server (defaults are in dovecot.conf):
doveadm director add 1.2.3.4
- Change a mail server's vhost count to alter its connection count (also works during adding):
doveadm director add 1.2.3.4 50
- Remove a mail server completely (because it's down):
doveadm director remove 1.2.3.4
If you want to slowly move users away from a specific server, you can set its vhost count to 0 and wait for its user count to drop to zero. If the server is still working when "doveadm director remove" is called, new connections from that server's users go to other servers while the old ones are still being handled.
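For example, draining and then dropping a host (hypothetical address) could look like:

doveadm director add 1.2.3.4 0
(wait until "doveadm director status" shows its user count at 0)
doveadm director remove 1.2.3.4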
On Wed, 19 May 2010 10:51:06 +0200, Timo Sirainen tss@iki.fi wrote:
The company here in Italy didn't really like such an idea, so I thought about making it more transparent and simpler to manage. The result is a new "director" service, which does basically the same thing, except without the SQL database. The idea is that your load balancer can redirect connections to one or more Dovecot proxies, which internally then figure out where the user should go. So the proxies act kind of like a secondary load balancer layer.
As I understand it, the first load balancer is just an IP balancer, not a POP3/IMAP balancer, isn't it?
When a connection from a newly seen user arrives, it gets assigned to a mail server according to a function:
host = vhosts[ md5(username) mod vhosts_count ]
This way all of the proxies assign the same user to the same host without having to talk to each other. The vhosts[] is basically an array of hosts, except each host is initially listed there 100 times (vhost count=100). This vhost count can then be increased or decreased as necessary to change the host's load, probably automatically in the future.
The problem is then of course that if (v)hosts are added or removed, the above function will return a different host than was previously used for the same user. That's why there is also an in-memory database that keeps track of username -> (hostname, timestamp) mappings. Every new connection from a user refreshes the timestamp. Existing connections also refresh the timestamp every n minutes. Once all connections are gone, the timestamp expires and the user is removed from the database.
I have implemented a similar scheme here with an imap/pop3 proxy (nginx) in front of the dovecot servers. What I have found to work best (for my conditions) as a hashing scheme is some sort of weighted constant hash. Here is the algorithm I use:
On init, server add, or server remove, you initialize a ring:
- For every server:
  - seed the random number generator with crc32(IP of the server)
  - get N random numbers (where N = server weight) and put them in an array. Put random_number => IP in another map/hash structure.
- Sort the array. This is the ring.
For every redirect request:
1. Get the crc32 number of the mailbox.
2. Traverse the ring until you find a number that is bigger than the crc32 number and was not yet visited.
3. Mark that number as visited.
4. Look up whether it is already marked dead. If it is, go to step 2.
5. Look up the number in the map/hash; that gives you the IP of the server.
6. Redirect the client to that server.
7. If that server is not responding, mark it as dead and go to step 2.
This way you do not need to synchronize state between balancers and proxies. If you add or remove servers, very few clients get reallocated (roughly num active clients / num servers). If one server is not responding, the clients that should be directed to it are all redirected to the same other server, again without any need to sync state between servers.
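In pseudo-code the scheme looks roughly like this (a simplified sketch, not my actual proxy code; the addresses, weights and wrap-around details are just for illustration):

import random
import zlib

def build_ring(servers):
    # servers: {ip: weight}; each server contributes `weight` points on the ring,
    # generated from an RNG seeded with crc32(ip)
    point_to_ip = {}
    for ip, weight in servers.items():
        rng = random.Random(zlib.crc32(ip.encode()))
        for _ in range(weight):
            point_to_ip[rng.getrandbits(32)] = ip
    return sorted(point_to_ip), point_to_ip

def pick_server(mailbox, points, point_to_ip, dead, try_connect):
    key = zlib.crc32(mailbox.encode())
    # walk the points bigger than the key first, then wrap around
    ordered = [p for p in points if p > key] + [p for p in points if p <= key]
    for point in ordered:
        ip = point_to_ip[point]
        if ip in dead:
            continue                 # already marked dead, keep walking
        if try_connect(ip):
            return ip                # redirect the client here
        dead.add(ip)                 # not responding: mark dead and keep walking
    return None

points, point_to_ip = build_ring({"10.0.0.1": 100, "10.0.0.2": 100, "10.0.0.3": 50})
print(pick_server("someuser@example.com", points, point_to_ip, set(), lambda ip: True))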
This scheme also has some disadvantages: under certain circumstances, different sessions to one mailbox can be handled by different servers in parallel. My tests showed that this causes some performance degradation, but no index corruption here (using OCFS2, not NFS).
So my choice was to trade correctness (no parallel sessions to different servers) for simplicity (no state synchronization between servers).
Finally, there are the doveadm commands that can be used to:
- List the director status:
doveadm director status
mail server ip  vhosts  users
11.22.3.44         100   1312
12.33.4.55          50   1424
- Add a new mail server (defaults are in dovecot.conf):
doveadm director add 1.2.3.4
- Change a mail server's vhost count to alter its connection count (also works during adding):
doveadm director add 1.2.3.4 50
- Remove a mail server completely (because it's down):
doveadm director remove 1.2.3.4
If you want to slowly move users away from a specific server, you can set its vhost count to 0 and wait for its user count to drop to zero. If the server is still working when "doveadm director remove" is called, new connections from that server's users go to other servers while the old ones are still being handled.
This is a nice admin interface. Also, I have a question: what kind of sessions does your implementation balance? I suppose imap/pop3. Is there a plan for similar redirecting of LMTP connections based on delivery address?
Best regards and thanks for the great work,
luben
On 19.5.2010, at 16.16, luben karavelov wrote:
On Wed, 19 May 2010 10:51:06 +0200, Timo Sirainen tss@iki.fi wrote:
The company here in Italy didn't really like such an idea, so I thought about making it more transparent and simpler to manage. The result is a new "director" service, which does basically the same thing, except without the SQL database. The idea is that your load balancer can redirect connections to one or more Dovecot proxies, which internally then figure out where the user should go. So the proxies act kind of like a secondary load balancer layer.
As I understand it, the first load balancer is just an IP balancer, not a POP3/IMAP balancer, isn't it?
Right.
I have implemented a similar scheme here with an imap/pop3 proxy (nginx) in front of the dovecot servers. What I have found to work best (for my conditions) as a hashing scheme is some sort of weighted constant hash.
I guess you meant a consistent hash? Yeah, I thought about that first too, but a simpler hash seemed .. simpler. :)
This way you do not need to synchronize state between balancers and proxies. If you add or remove servers, very few clients get reallocated (roughly num active clients / num servers). .. This scheme also has some disadvantages: under certain circumstances, different sessions to one mailbox can be handled by different servers in parallel.
That's the main thing I wanted to prevent with the director service, so I don't think consistent hashing would have made implementing it easier. Although it might have helped make the caching work a bit better when servers were added/removed.
So my choice was to trade correctness (no parallel sessions to different servers) for simplicity (no state synchronization between servers).
I would have liked to avoid the state sync too, but I prefer the idea of having it work 100% perfectly in all conditions. :)
Also, I have a question: what kind of sessions does your implementation balance? I suppose imap/pop3. Is there a plan for similar redirecting of LMTP connections based on delivery address?
It's a pretty generic service. Currently imap and pop3 use it, but it would be pretty easy to make LMTP proxy support it too. The main difference is that for imap/pop3 it needs to emulate being an auth socket, while for lmtp it would need to emulate being a userdb socket. I'd guess it would need less than 50 lines of code total. I guess I should do it before v2.0.0.
Timo,
-----Original Message-----
The company here in Italy didn't really like such an idea, so I thought about making it more transparent and simpler to manage. The result is a new "director" service, which does basically the same thing, except without the SQL database. The idea is that your load balancer can redirect connections to one or more Dovecot proxies, which internally then figure out where the user should go. So the proxies act kind of like a secondary load balancer layer.
This looks very cool! We run a basic two-site active-active configuration with 6 Dovecot hosts in each location, with an Active/Standby load balancer cluster in front and a cluster of geographically distributed NFS servers in the back. I'm sure I've described it before. We'd like to keep failover as simple as possible while also avoiding single points of failure. I have some questions about the suggested configuration, as well as the current implementation.
- Does this work for POP3 as well as IMAP?
- Is there any reason not to use all 12 of our servers as proxies as well as mailbox servers, and let the director communication route connections to the appropriate endpoint?
- Does putting a host into 'directed proxy' mode prevent it from servicing local mailbox requests?
- How is initial synchronization handled? If a new host is added, is it sent a full copy of the user->host mapping database?
- What would you think about using multicast for the notifications instead of a ring structure? If we did set up all 12 hosts in a ring, it would be conceivable that a site failure plus failure of a single host at the surviving site would segment the ring. Multicast would prevent this, as well as (conceivably) simplifying dynamic resizing of the pool.
Thanks!
-Brad
On 25.5.2010, at 20.32, Brad Davidson wrote:
I have some questions about the suggested configuration, as well as the current implementation.
- Does this work for POP3 as well as IMAP?
Yes.
- Is there any reason not to use all 12 of our servers as proxies as well as mailbox servers, and let the director communication route connections to the appropriate endpoint?
So instead of having separate proxies and mail servers, have only hybrids everywhere? I guess it would almost work, except proxy_maybe isn't yet compatible with director. That's actually a bit annoying to implement.. You could of course run two separate Dovecot instances, but that also can be a bit annoying.
- Does putting a host into 'directed proxy' mode prevent it from servicing local mailbox requests?
No. The director service simply adds a "host" field to auth lookup replies if the original reply had proxy=y but didn't have a host field.
- How is initial synchronization handled? If a new host is added, is it sent a full copy of the user->host mapping database?
Yes. So the connections between the proxies should be pretty fast. I think the maximum bytes transferred per user is 38.
- What would you think about using multicast for the notifications instead of a ring structure? If we did set up all 12 hosts in a ring, it would be conceivable that a site failure plus failure of a single host at the surviving site would segment the ring. Multicast would prevent this, as well as (conceivably) simplifying dynamic resizing of the pool.
The proxies always try to keep connecting to the next available server (i.e. if the next server won't connect, it tries one further away until it finally connects to something or reaches itself). So segmentation could only happen if there was no network connection between the two segments.
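Roughly this logic, as an illustrative sketch (not the actual director code):

def choose_neighbour(ring, self_index, is_reachable):
    # walk onwards from ourselves until some director answers,
    # or give up and connect back to ourselves
    n = len(ring)
    for step in range(1, n):
        candidate = ring[(self_index + step) % n]
        if is_reachable(candidate):
            return candidate
    return ring[self_index]

# Example: with C down, B ends up connected to D instead of C.
ring = ["A", "B", "C", "D"]
up = {"A", "B", "D"}
print(choose_neighbour(ring, ring.index("B"), lambda host: host in up))   # -> D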
I don't know much about how to do multicasting, except I've heard it can be problematic. Also, not everything about the ring is simply about broadcasting user states. Most importantly, the ring synchronization step during mail server adds/removes, which prevents a user from being assigned to different servers by different proxies, wouldn't work with multicasting.
Timo,
After straightening out some issues with Axel's spec file, I'm back to poking at this.
On 5/25/10 3:14 PM, "Timo Sirainen" tss@iki.fi wrote:
So instead of having separate proxies and mail servers, have only hybrids everywhere? I guess it would almost work, except proxy_maybe isn't yet compatible with director. That's actually a bit annoying to implement.. You could of course run two separate Dovecot instances, but that also can be a bit annoying.
Would I have to run two separate instances, or could I just set up multiple login services on different ports; one set to proxy (forwarding the password to the remote server) and one set to not? I suppose each login service would have to use a different authdb, which I don't know how to do.
No. The director service simply adds a "host" field to auth lookup replies if the original reply had proxy=y but didn't have a host field.
Interesting. It sounds like proxying requires a database query that will return 'proxy=y' as part of the auth lookup. It would be nice to have a static password authdb for proxying that didn't require a real database backend. I'm using PAM now, and don't see a good way to enable proxying.
The wiki also says that there's a way to let the proxy backend handle authentication, but I don't see an example of that anywhere.
Yes. So the connections between the proxies should be pretty fast. I think the maximum bytes transferred per user is 38.
Cool.
The proxies always try to keep connecting to the next available server (i.e. if the next server won't connect, it tries one further away until it finally connects to something or reaches itself). So segmentation could only happen if there was no network connection between the two segments.
Ahh, OK - good to know. That sounds like a good way to do it. Can I confirm my understanding of a few other things?
It looks like the mailserver list is initially populated from director_mail_servers, but can be changed by discovering hosts from other directors or by adding/removing hosts with doveadm. Since the initial host list is not written back into the config file, changes made with doveadm are not persistent across service restarts. Does 'doveadm director' need to be run against each director individually, or will the changes be sent around the ring? If a new host comes up with a mailserver in its list that has been removed by doveadm, will the handshake remove it from the list?
The list of director servers used to build the ring is read from director_servers, and cannot be changed at runtime. A host finds its position within the ring based on its order within the list, and connects to hosts to its left and right until it has a connection on either side and can successfully send a test message around the ring.
Is that all correct? What happens if some hosts have only a subset, or different subsets, of a group of hosts in their mail server or director server list?
Thanks!
-Brad
On Mon, 2010-05-31 at 05:02 -0700, Brandon Davidson wrote:
So instead of having separate proxies and mail servers, have only hybrids everywhere? I guess it would almost work, except proxy_maybe isn't yet compatible with director. That's actually a bit annoying to implement.. You could of course run two separate Dovecot instances, but that also can be a bit annoying.
Would I have to run two separate instances, or could I just set up multiple login services on different ports; one set to proxy (forwarding the password to the remote server) and one set to not? I suppose each login service would have to use a different authdb, which I don't know how to do.
Well .. maybe you could use separate services. Have the proxy listen on public IP and the backend listen on localhost. Then you can do:
local_ip 127.0.0.1 { passdb { .. } }
and things like that. I think it would work, but I haven't actually tried.
No. The director service simply adds a "host" field to auth lookup replies if the original reply had proxy=y but didn't have a host field.
Interesting. It sounds like proxying requires a database query that will return 'proxy=y' as part of the auth lookup. It would be nice to have a static password authdb for proxying that didn't require a real database backend. I'm using PAM now, and don't see a good way to enable proxying.
The wiki also says that there's a way to let the proxy backend handle authentication, but I don't see an example of that anywhere.
There's not yet a static passdb .. perhaps there should be. But you could use e.g. sqlite backend for the proxy and use:
password_query = select null as password, 'Y' as nopassword, 'Y' as proxy
It looks like the mailserver list is initially populated from director_mail_servers, but can be changed by discovering hosts from other directors or by adding/removing hosts with doveadm. Since the initial host list is not written back into the config file, changes made with doveadm are not persistent across service restarts.
Right. I considered using a more permanent database for them, but .. I don't know if that's a good idea.
Does 'doveadm director' need to be run against each director individually, or will the changes be sent around the ring?
The changes are sent around the ring.
If a new host comes up with a mailserver in its list that has been removed by doveadm, will the handshake remove it from the list?
If a new director joins an existing ring, it ignores its own mail server list and uses the ring's.
The list of director servers used to build the ring is read from director_servers, and cannot be changed at runtime. A host finds its position within the ring based on its order within the list, and connects to hosts to its left and right until it has a connection on either side and can successfully send a test message around the ring.
The way directors can be added currently is:
- Add the new host to one director's config.
- Reload the config; this should restart the director process and have it reconnect to the ring, announcing the new host to everyone.
- Start up the new host; it'll insert itself into the ring.
That's the theory anyway, I haven't tried it yet. :) Of course it's also a good idea to add the host to all directors' configs, but only one of them needs to be restarted to reload its config.
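In config terms, step 1 just means extending the director_servers list; a hypothetical sketch with made-up addresses (the fourth director being the new one):

director_servers = 10.1.1.1 10.1.1.2 10.1.1.3 10.1.1.4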
Is that all correct? What happens if some hosts have only a subset, or different subsets, of a group of hosts in their mail server or director server list?
That should never happen (well, except temporarily).
This feature sounds very interesting...
Is this proxy working for LMTP too? Is there a roadmap for when this will be released and considered stable?
On 31.5.2010, at 22.31, stefan novak wrote:
Is this proxy working for LMTP too?
Not yet, but it's probably less than 50 lines of code.
Is there a roadmap for when this will be released and considered stable?
Hopefully by the time v2.0.0 is released. It should already be usable with two proxy servers.
Timo,
On 5/31/10 6:04 AM, "Timo Sirainen" tss@iki.fi wrote:
Well .. maybe you could use separate services. Have the proxy listen on public IP and the backend listen on localhost. Then you can do:
local_ip 127.0.0.1 { passdb { .. } }
and things like that. I think it would work, but I haven't actually tried.
It doesn't seem to be honoring the passdb setting within the local block. I've got a single host set up with director, and itself listed as a mail server:
director_servers = 128.223.142.138
director_mail_servers = 128.223.142.138

userdb {
  driver = passwd
}
passdb {
  driver = sql
  args = /etc/dovecot/proxy-sqlite.conf
}
local 127.0.0.1 {
  passdb {
    driver = pam
  }
}
If I telnet to localhost and attempt to log in, the logs show:
May 31 14:39:34 cc-popmap7 dovecot: auth: Debug: client in: AUTH 1 PLAIN service=imap secured lip=127.0.0.1 rip=127.0.0.1 lport=143 rport=60417 resp=<hidden>
May 31 14:39:34 cc-popmap7 dovecot: auth: Debug: sql(brandond,127.0.0.1): query: SELECT null AS password, 'Y' AS nopassword, 'Y' AS proxy
May 31 14:39:34 cc-popmap7 dovecot: auth: Debug: client out: OK 1 user=brandond proxy pass=<hidden>
May 31 14:39:34 cc-popmap7 dovecot: auth: Debug: client in: AUTH 1 PLAIN service=imap secured lip=128.223.142.138 rip=128.223.142.138 lport=143 rport=44453 resp=<hidden>
May 31 14:39:34 cc-popmap7 dovecot: auth: Debug: sql(brandond,128.223.142.138): query: SELECT null AS password, 'Y' AS nopassword, 'Y' AS proxy
May 31 14:39:34 cc-popmap7 dovecot: auth: Debug: client out: OK 1 user=brandond proxy pass=<hidden>
May 31 14:39:34 cc-popmap7 dovecot: imap-login: Error: Proxying loops to itself: user=<brandond>, method=PLAIN, rip=128.223.142.138, lip=128.223.142.138, secured, mailpid=0
May 31 14:39:34 cc-popmap7 dovecot: auth: Debug: new auth connection: pid=4700
May 31 14:39:34 cc-popmap7 dovecot: imap-login: Disconnected (auth failed, 1 attempts): user=<brandond>, method=PLAIN, rip=128.223.142.138, lip=128.223.142.138, secured, mailpid=0
Even if the alternate passdb worked, how would I get it to connect to the backend on localhost? It looks like the proxy connection comes in over the external IP even if it's to itself, as the external address is what's specified as the proxy destination by the director.
I do have a private network that I run NFS over; I suppose I could run the proxy on the external, backend on the internal, and use only the internal IPs in the mailserver list. I've also tried that, but it doesn't seem to work either due to the passdb setting not being honored within local|remote blocks.
Even if it did, wouldn't it still complain about the proxy looping back to itself, since both lip and rip would be local addresses? Unless the loopback check just compares to see if they're the same... Either way, it seems like having proxy_maybe work with the director service would make the whole setup a lot simpler.
There's not yet a static passdb .. perhaps there should be. But you could use e.g. sqlite backend for the proxy and use:
password_query = select null as password, 'Y' as nopassword, 'Y' as proxy
That seems to work well enough, with the major caveat noted above.
On 31.5.2010, at 23.59, Brandon Davidson wrote:
You need to put the other passdb/userdb under the external IP:
local 1.2.3.4 {
  userdb { driver = passwd }
  passdb {
    driver = sql
    args = /etc/dovecot/proxy-sqlite.conf
  }
}
Even if the alternate passdb worked, how would I get it to connect to the backend on localhost? It looks like the proxy connection comes in over the external IP even if it's to itself, as the external address is what's specified as the proxy destination by the director.
Yeah, you're right.. You could have it listen on a different port. But there is no local_port {} block yet. Well, unless you changed the sqlite config so that it has ".. where %a=143". Then also return "14300 as port" or something.
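If I understand that right, the query would become something like this (untested sketch; %a expands to the local port, and 143/14300 are just the example ports mentioned above):

password_query = SELECT null AS password, 'Y' AS nopassword, 'Y' AS proxy, 14300 AS port WHERE '%a' = '143'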
I do have a private network that I run NFS over; I suppose I could run the proxy on the external, backend on the internal, and use only the internal IPs in the mailserver list. I've also tried that, but it doesn't seem to work either due to the passdb setting not being honored within local|remote blocks.
I guess that'd work too.
Even if it did, wouldn't it still complain about the proxy looping back to itself, since both lip and rip would be local addresses? Unless the loopback check just compares to see if they're the same...
That's what it does: lip=rip and lport=rport are both required.
Either way, it seems like having proxy_maybe work with the director service would make the whole setup a lot simpler.
Hmm. I guess it could work like:
- Director forwards the auth lookup
- Receives proxy=y with no host (the auth process forgets about this request - the user can't log in with it)
- Gets the remote IP
- Figures out that it is the same IP and port where the client connected
- Either it has the username+password or master user+username+master password
- It does another auth lookup, also giving some input parameter so that the auth process will be forced to ignore the proxy and keep the request around so the user can log in
- Director forwards the reply to the login process, dropping the proxy stuff
The main thing to be implemented would be the "some input parameter". Maybe it could be just that it sets rip=lip and rport=lport and you could compare those (or just lip=rip) in the sql query.
Timo,
On 5/31/10 4:13 PM, "Timo Sirainen" tss@iki.fi wrote:
You need to put the other passdb/userdb under the external IP:
local 1.2.3.4 {
  userdb { driver = passwd }
  passdb {
    driver = sql
    args = /etc/dovecot/proxy-sqlite.conf
  }
}
It still doesn't seem to work. I tried this, with no userdb/passdb outside a local block:
local 128.223.142.138 {
  userdb { driver = passwd }
  passdb {
    driver = sql
    args = /etc/dovecot/proxy-sqlite.conf
  }
}
local 10.142.0.162 {
  userdb { driver = passwd }
  passdb { driver = pam }
}
But I got this error in the log file upon connecting to the external IP:
May 31 16:20:42 cc-popmap7 dovecot: auth: Fatal: No passdbs specified in configuration file. PLAIN mechanism needs one
May 31 16:20:42 cc-popmap7 dovecot: master: Error: service(auth): command startup failed, throttling
May 31 16:20:42 cc-popmap7 dovecot: master: Error: service(director): child 5339 killed with signal 11 (core dumps disabled)
May 31 16:20:42 cc-popmap7 dovecot: master: Error: service(director): command startup failed, throttling
So I added a global passdb/userdb:
userdb { driver = passwd }
passdb { driver = pam }
local 128.223.142.138 {
  userdb { driver = passwd }
  passdb {
    driver = sql
    args = /etc/dovecot/proxy-sqlite.conf
  }
}
local 10.142.0.162 {
  userdb { driver = passwd }
  passdb { driver = pam }
}
And again it uses the global passdb for all requests, ignoring the contents of the local blocks.
-Brad
On 1.6.2010, at 0.30, Brandon Davidson wrote:
May 31 16:20:42 cc-popmap7 dovecot: auth: Fatal: No passdbs specified in configuration file. PLAIN mechanism needs one
Hmm. Maybe this check should also check the local etc. blocks..
So I added a global passdb/userdb:
The passdbs and userdbs are checked in the order they're defined. You could add them at the bottom. Or probably more easily:
local 128.223.143.138 {
  passdb {
    driver = sql
    args = ..
  }
}
passdb { driver = pam }
userdb { driver = passwd }
This should work because all proxy lookups get caught by the sql lookup, while non-proxy connections go directly to pam. And only non-proxy connections care about userdb.
Timo,
On 5/31/10 4:36 PM, "Timo Sirainen" tss@iki.fi wrote:
The passdbs and userdbs are checked in the order they're defined. You could add them at the bottom. Or probably more easily:
local 128.223.143.138 {
  passdb {
    driver = sql
    args = ..
  }
}
passdb { driver = pam }
userdb { driver = passwd }
Ahh, OK. For some reason I was assuming that the best match was used. Unfortunately that doesn't seem to work either. I've got it set up just as you recommended:
[root@cc-popmap7 ~]# cat /etc/dovecot/dovecot.conf | nl | grep -B1 -A4 passdb
35 local 128.223.142.138 {
36 passdb {
37 driver = sql
38 args = /etc/dovecot/proxy-sqlite.conf
39 }
40 }
41 passdb {
42 driver = pam
43 }
44 userdb {
45 driver = passwd
It still doesn't respect the driver for that local block, and uses PAM for everything:
May 31 16:48:16 cc-popmap7 dovecot: auth: Debug: client in: AUTH 1 PLAIN service=imap secured lip=128.223.142.138 rip=128.223.162.22 lport=993 rport=57067 resp=<hidden>
May 31 16:48:16 cc-popmap7 dovecot: auth: Debug: pam(brandond,128.223.162.22): lookup service=dovecot
May 31 16:48:16 cc-popmap7 dovecot: auth: Debug: pam(brandond,128.223.162.22): #1/1 style=1 msg=Password:
May 31 16:48:16 cc-popmap7 dovecot: auth: Debug: pam(brandond,128.223.162.22): #1/1 style=1 msg=LDAP Password:
May 31 16:48:16 cc-popmap7 dovecot: auth: Debug: client out: OK 1 user=brandond
May 31 16:48:16 cc-popmap7 dovecot: auth: Debug: master in: REQUEST 1 5652 1 d19a5592fd2206241cfc0ca658020b0b
May 31 16:48:16 cc-popmap7 dovecot: auth: Debug: passwd(brandond,128.223.162.22): lookup
May 31 16:48:16 cc-popmap7 dovecot: auth: Debug: master out: USER 1 brandond system_groups_user=brandond uid=41027 gid=91 home=/home10/brandond
May 31 16:48:16 cc-popmap7 dovecot: imap-login: Login: user=<brandond>, method=PLAIN, rip=128.223.162.22, lip=128.223.142.138, TLS, mailpid=5667
Interestingly enough, if I run 'doveconf -n' it doesn't seem to be retaining the order I specified. The local section is dropped down to the very end:
[root@cc-popmap7 ~]# doveconf -n | nl | grep -B1 -A4 passdb
31 }
32 passdb {
33 driver = pam
34 }
35 plugin {
36 quota = fs:user:inode_per_mail
82 local 128.223.142.138 {
83 passdb {
84 args = /etc/dovecot/proxy-sqlite.conf
85 driver = sql
86 }
87 }
Ideas?
-Brad
On 1.6.2010, at 0.59, Brandon Davidson wrote:
Interestingly enough, if I run 'doveconf -n' it doesn't seem to be retaining the order I specified. The local section is dropped down to the very end:
Right .. it doesn't work exactly like that I guess. Or I don't remember :) Easiest to test with:
doveconf -f lip=128.223.142.138 -n
That should show the wanted order.
Timo,
On 5/31/10 5:09 PM, "Timo Sirainen" tss@iki.fi wrote:
Right .. it doesn't work exactly like that I guess. Or I don't remember :) Easiest to test with:
doveconf -f lip=128.223.142.138 -n
That looks better:
[root@cc-popmap7 ~]# doveconf -f lip=128.223.142.138 -h |grep -B1 -A7 passdb
}
passdb {
  args = /etc/dovecot/proxy-sqlite.conf
  deny = no
  driver = sql
  master = no
  pass = no
}
passdb {
  args =
  deny = no
  driver = pam
  master = no
  pass = no
}
plugin {
local 128.223.142.138 {
  passdb {
    args = /etc/dovecot/proxy-sqlite.conf
    driver = sql
  }
}
Still not sure why it's not proxying though. The config looks good but it's still using PAM even for the external IP.
-Brad
Timo,
On 5/31/10 5:34 PM, "Brandon Davidson" brandond@uoregon.edu wrote:
Still not sure why it's not proxying though. The config looks good but it's still using PAM even for the external IP.
I played with subnet masks instead of IPs and using remote instead of local, as well as setting auth_cache_size = 0, but no dice. It still seems to ignore the block and only use the global definition, even if doveconf -f lip=<ip> shows that it's expanding it properly.
-Brad
On 1.6.2010, at 2.44, Brandon Davidson wrote:
It still seems to ignore the block and only use the global definition, even if doveconf -f lip=<ip> shows that it's expanding it properly.
Oh, you're right. For auth settings, currently only protocol blocks work. It was a bit too much trouble to make local/remote blocks work. :)
Timo,
On 5/31/10 6:56 PM, "Timo Sirainen" tss@iki.fi wrote:
Oh, you're right. For auth settings, currently only protocol blocks work. It was a bit too much trouble to make local/remote blocks work. :)
That's too bad! Any hope of getting support for this and director+proxy_maybe anytime soon?
-Brad
On Mon, 2010-05-31 at 19:09 -0700, Brandon Davidson wrote:
Timo,
On 5/31/10 6:56 PM, "Timo Sirainen" tss@iki.fi wrote:
Oh, you're right. For auth settings, currently only protocol blocks work. It was a bit too much trouble to make local/remote blocks work. :)
That's too bad! Any hope of getting support for this
I wasn't really planning on implementing it soon.
and director+proxy_maybe anytime soon?
I tried looking into it today, but it's an annoyingly difficult change, so probably won't happen soon either.
Timo,
-----Original Message-----
That's too bad! Any hope of getting support for this
I wasn't really planning on implementing it soon.
and director+proxy_maybe anytime soon?
I tried looking into it today, but it's an annoyingly difficult change, so probably won't happen soon either.
That's too bad! I guess I'll see what I can do within the current constraints. You'd gotten my hopes up though ;)
I was just playing around a bit, and came up with something like:
passdb {
  driver = sql
  args = /etc/dovecot/proxy-sqlite.conf
}
passdb {
  driver = pam
}
And then in /etc/dovecot/proxy-sqlite.conf:
driver = sqlite
connect = /dev/null
password_query = SELECT null AS password, 'Y' AS nopassword, 'Y' AS proxy WHERE '%{lip}' LIKE '10.142.0.%%' AND '%{lip}' != '%{rip}'
This is a hack, but should be roughly equivalent to proxy_maybe and a local block around the sql passdb, right?
-Brad
On 19.05.2010 10:51, Timo Sirainen wrote:
As http://wiki.dovecot.org/NFS describes, the main problem with NFS has always been caching: one NFS client changes two files, but another NFS client sees only one of the changes, which Dovecot then assumes is caused by corruption.
host = vhosts[ md5(username) mod vhosts_count ]
Hello Timo, I am currently playing around with the new director service and I am really looking forward to it in 2.0. Wouldn't it be better to use a consistent hash function instead of the md5? That way you would only get a new assignment for users belonging to the failed server and not a "complete remapping". With such a setup it might be possible to store local indexes in an NFS backend setting, as users stay kind of sticky to their server. And there would also be no need to distribute the currently active mappings within the ring, maybe only the state of the servers.
Regards, Oliver
On Wed, 2010-06-16 at 15:19 +0200, Oliver Eales wrote:
On 19.05.2010 10:51, Timo Sirainen wrote:
As http://wiki.dovecot.org/NFS describes, the main problem with NFS has always been caching: one NFS client changes two files, but another NFS client sees only one of the changes, which Dovecot then assumes is caused by corruption.
host = vhosts[ md5(username) mod vhosts_count ]
Hello Timo, I am currently playing around with the new director service and I am really looking forward to it in 2.0. Wouldn't it be better to use a consistent hash function instead of the md5? That way you would only get a new assignment for users belonging to the failed server and not a "complete remapping".
I thought about it at first, but then thought it would only make it more complex without much benefit.
With such a setup it might be possible to store local indexes in an NFS backend setting, as users stay kind of sticky to their server.
Well, this would be a benefit that I didn't think of. :) Maybe it could be done in the future.
And there would also be no need to distribute the currently active mappings within the ring, maybe only the state of the servers.
I think it's still needed. Maybe not all mappings, but at least when new servers are added. For example, say server2 handles users A and B, and a new server3 is added, so that the hash now assigns A to server2 and B to server3. But server1 would still need to know that server2 still has to handle B until the mapping times out (user B can't just be moved to server3 without killing its existing connections, which doesn't sound like a good idea).
participants (6):
- Brad Davidson
- Brandon Davidson
- luben karavelov
- Oliver Eales
- stefan novak
- Timo Sirainen