[Dovecot] Dovecot, LVS and the issues I have with it.

Thanos Chatziathanassiou tchatzi at arx.net
Mon Apr 6 15:58:49 EEST 2009


neil wrote:
> We run around 5 dovecot (debian etch 1.0.rc15) POP/IMAP 'nodes' using 
> the LVS load balancer and an NFS-based SAN. It works pretty well, and 
> I love the robustness of load-balancing POP/IMAP. We push a reasonable 
> amount of throughput through these, especially at peak times, driving 
> our SAN to around 1.5k IOPS.
>
> We currently have two issues with this setup. The first is NFS index 
> corruption due to NFS/dovecot locking: the UIDL list or a .index file 
> gets corrupted, which forces a full re-indexing of the mailbox, or 
> leaves the mailbox broken until I delete the indexes. The UIDL-list 
> case tends to affect users who use POP rather than IMAP and insist on 
> keeping messages on the server: because the list is corrupt it gets 
> rebuilt one way or the other, and the user's mail client proceeds to 
> re-download the entire mailbox until the messages are marked as saved 
> again. This tends to annoy the user a lot. After a bit of testing we 
> do, however, expect this to be fixed by version 1.1; if anyone has any 
> comments on this I would certainly be interested.
>
> The other issue is a little tricky to describe, or even to log 
> effectively. Occasionally a node receives more connections than it is 
> able to handle, and the connection count goes through the roof: well 
> beyond 150 dovecot authentication processes and 100 or so active 
> POP/IMAP processes, with I/O wait, CPU and memory usage spiking at the 
> same time. The server reaches a tipping point where it can no longer 
> serve its POP/IMAP requests as fast as new connections arrive. I'd be 
> fine with this, but it creates some less than desirable symptoms:
>
> 1. We obviously reach the auth process cap eventually, so any new auth 
> requests get refused by the server. To the user this appears as an 
> Outlook/mail-client re-auth pop-up, which annoys them. Ideally, if the 
> server stops accepting auth requests it should fall off our load 
> balancer until it can consistently accept them again. Since LVS 
> detects a node failure by whether the TCP port is still open, this 
> doesn't happen: dovecot keeps the port open. This is obviously more an 
> LVS issue than one for this mailing list, I expect, unless anyone has 
> any config-tweaking tips?
Perhaps try the (Weighted) Least Connections algorithm? That way, every 
member of your cluster would get a roughly equal number of POP/IMAP 
requests/active connections. Also, give the persistence (-p) option a 
look. If you're still running out, then perhaps you need to throw more 
hardware/disk spindles/nfsd threads at the problem.
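For example, with ipvsadm directly (a minimal sketch only; the VIP 
192.0.2.10 and the real-server addresses below are made-up placeholders, 
and your ldirectord config would express the same thing declaratively):

```shell
# Virtual POP3 service on the VIP, weighted least-connections
# scheduler, with 5-minute client persistence (-p takes seconds):
ipvsadm -A -t 192.0.2.10:110 -s wlc -p 300

# Add the real servers (direct routing, equal weight 1):
ipvsadm -a -t 192.0.2.10:110 -r 10.0.0.11:110 -g -w 1
ipvsadm -a -t 192.0.2.10:110 -r 10.0.0.12:110 -g -w 1
```

Persistence keeps a given client pinned to one real server, which also 
helps with index files cached on each node.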

 > Since the LVS detects a node fail by whether the tcp port is still open

That's not entirely true: LVS itself does not detect anything whatsoever 
regarding the health of the nodes.
ldirectord works that way and instructs LVS via ipvsadm to add or remove 
nodes. Unfortunately, ldirectord is very HTTP-centered and falls back to 
a plain TCP connection check for all other services (https and ftp have 
their own check_ functions too).
Your options:
- It is fairly trivial to add something like ``check_imap'' or 
``check_pop3'' to ldirectord along the lines of ``check_http'' and 
``check_ftp'', and use that to monitor your actual service rather than 
the TCP socket status. The Perl modules Mail::POP3Client, Net::POP3, 
Mail::IMAPClient or Net::IMAP will probably come in handy.
- (untested) Use mon instead of ldirectord; it seems to have more 
monitoring options (http://mon.wiki.kernel.org/index.php/Monitors).
- (untested) Use heartbeat v2-style resource monitoring.
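To illustrate the idea of a service-level check (sketched here in Python 
rather than ldirectord's Perl; the addresses and credentials are 
placeholders, and a real check would live inside whichever monitor you 
pick):

```python
import poplib


def check_pop3(host, port=110, user=None, password=None, timeout=5):
    """Return True only if the backend completes a real POP3 session,
    not merely accepts the TCP connection (which is all a port check
    sees, and why an auth-exhausted node still looks 'up' to LVS)."""
    try:
        conn = poplib.POP3(host, port, timeout=timeout)
        if user is not None and password is not None:
            conn.user(user)       # log in if credentials were given, so
            conn.pass_(password)  # a node refusing auth also fails the check
        conn.quit()
        return True
    except (poplib.error_proto, OSError):
        return False


# A monitor would run this periodically and, on failure, tell LVS to
# drop the real server, e.g. (placeholder addresses):
#   ipvsadm -d -t 192.0.2.10:110 -r 10.0.0.11:110
```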

We're running a similar setup, and the one thing that troubled us was 
the index files. As soon as we set up each server to maintain its own 
index files on local disk (we could spare the disk space and CPU) 
rather than on NFS, we were fine. Mail still resides on NFS. YMMV, of 
course.
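The relevant knob is the INDEX parameter of mail_location; the paths 
below are just example placeholders, not our actual layout:

```
# dovecot.conf: mail stays on NFS, indexes go to fast local disk
mail_location = maildir:/nfs/mail/%u/Maildir:INDEX=/var/lib/dovecot/indexes/%u
```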

Best Regards,
Thanos Chatziathanassiou

>
> 2. Now here's my real gripe: dovecot does not handle running out of 
> resources very gracefully at all in our setup. It does start killing 
> processes after a while, and I get multiple "dovecot: child 17989 
> (login) killed with signal 9" messages. I'm not exactly sure what's 
> going on, because after this all I can see is the machine totally out 
> of memory while the kernel starts killing absolutely everything. All 
> services are killed (including ssh etc.), and when I plug a monitor 
> into the server I find the last few lines of the console listing init 
> and other rather important things having just been killed. At that 
> point it's a case of power-cycling the server, after which all is 
> back to normal again.
>
> I imagine there aren't a huge number of people using dovecot in this 
> way, but has anyone got any recommendations? I really like using 
> dovecot in this setup: it handles the load pretty well, and the 
> redundancy and functionality options it provides have been invaluable.
>
> Neil
>
>

