On Wed, Jan 18, 2012 at 9:58 AM, Timo Sirainen <tss@iki.fi> wrote:
> On 18.1.2012, at 19.54, Mark Moseley wrote:
>> I'm in the middle of working on a Maildir->mdbox migration as well, and likewise over NFS (all Netapps, but moving to Sun), and likewise with split LDA and IMAP/POP servers (both served out of pools). I was hoping that setting "mail_nfs_index = yes", "mmap_disable = yes", and "mail_fsync = always/optimized" would mitigate most of the risks of index corruption,
> They help, but they aren't 100% effective, and they also make performance worse.
In testing, it seemed very much like the benefit of reducing IOPS by up to a couple of orders of magnitude outweighed having to use those settings. Both in scripted testing and in day-to-day use of a mail UI with the NFS-ish settings, I didn't notice any lag, and things like checking a good-sized mailbox were at least as quick as with Maildir. And I'm hoping that reducing IOPS across the entire set of NFS servers will compound the benefits quite a bit.
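For reference, the NFS-related bits I've been testing boil down to a dovecot.conf sketch along these lines (treat it as an example rather than our exact production config; the choice between "optimized" and "always" for mail_fsync is still something we're weighing):

  # per the suggestions at http://wiki2.dovecot.org/NFS
  mmap_disable = yes
  mail_nfs_index = yes
  mail_fsync = optimized   # or "always", particularly on the LDA side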
>> as well as probably turning indexing off on the LDA side of things
> You can't turn off indexing with dbox.
Ah, too bad. I was hoping I could get away with the LDA not updating the index and just dropping the message into storage/m.#, with it still being seen on the IMAP/POP side--but I hadn't tested that. Guess that's not the case.
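(That makes sense in hindsight: as I understand the mdbox layout, the indexes are part of the storage format itself, so there's no "just drop the file in" step the way there is with Maildir. Roughly:

  ~/mdbox/storage/m.1, m.2, ...         # message data, multiple messages per file
  ~/mdbox/storage/dovecot.map.index*    # map of which message lives in which m.# file
  ~/mdbox/mailboxes/INBOX/dbox-Mails/   # per-mailbox index files

so a delivery that skipped the index update would effectively be invisible.)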
>> --i.e. all the suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not the case? Is there anything else (beyond moving to a director-based architecture) that can mitigate the risk of index corruption? In our case, incoming IMAP/POP connections are 'stuck' to servers via IP persistence for a given amount of time, but incoming LDA is randomly distributed.
> What's the problem with director-based architecture?
Nothing, per se. It's just that migrating to mdbox *and* to a director architecture adds quite a bit more complexity than migrating to mdbox alone.
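(My understanding is that the director piece itself is mostly a ring of proxies in front of the backends, configured along these lines in conf.d/10-director.conf -- the IPs here are just placeholders:

  # the director ring
  director_servers = 10.0.0.11 10.0.0.12
  # backend mail servers the directors map users onto
  director_mail_servers = 10.0.0.21 10.0.0.22
  # keep a user pinned to the same backend for a while after the last connection
  director_user_expire = 15 min

plus wiring the login and LMTP services into the director service -- and it's that extra layer of moving pieces, not the config itself, that I'd rather not take on in the same migration.)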
Hopefully I'm not hijacking this thread; this seems pretty pertinent to the OP as well.