[Dovecot] Please help to make decision
Ed W
lists at wildgooses.com
Thu Mar 28 22:34:16 EET 2013
I believe a variation on that theme is to "double" each machine
using DRBD so that machines are arranged in pairs: one can fail and the
other will take over its load, i.e. each pair of machines mirrors the
storage for the other. With this arrangement only warm failover is
usually required, and hence DRBD can run in async mode with low
performance impact.
Note that I don't use any of the above; it was a setup described by Timo
some years back.
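As a rough illustration, a DRBD resource for one such mirrored pair might look like the fragment below (hostnames, devices and addresses are purely hypothetical, not from Timo's setup; "protocol A" is DRBD's asynchronous replication mode mentioned above):

```
# Hypothetical DRBD resource for one half of a mirrored pair.
resource mail0 {
  protocol A;                # async replication: low write latency,
                             # suitable when only warm failover is needed
  device    /dev/drbd0;
  disk      /dev/sdb1;       # backing block device on each node
  meta-disk internal;
  on nodeA {
    address 10.0.0.1:7789;   # replication link endpoint on nodeA
  }
  on nodeB {
    address 10.0.0.2:7789;   # replication link endpoint on nodeB
  }
}
```

Each node would also carry a second resource mirroring its partner's storage, so that either machine can serve both datasets after a failover.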
Good luck
Ed W
On 25/03/2013 18:47, Thierry de Montaudry wrote:
> Hi Tigran,
>
> Managing a mail system for roughly 1M users, we ran for a few years on some high-end SAN systems (NetApp, then EMC), but were not happy with the performance; whatever we tried (dual heads, fibre, and so on), they just couldn't handle the I/O. I should add that at that time we were not using Dovecot.
>
> Then we moved to a completely different structure: 24 storage machines (plain CentOS as NFS servers), 7 frontends (webmail through IMAP + a POP3 server) and 5 MXs, with all frontend machines running Dovecot. That was a major improvement in system performance, but we still weren't happy with the 50T of total storage we had. There was huge traffic between the frontend and storage machines, and at the time I wasn't sure the switches were handling the load properly, not to mention the load on the frontend machines, which sometimes needed a hard reboot to recover from NFS timeouts, even after we tried heavy optimization all around, particularly of NFS.
>
> Then we looked at the Dovecot director, but, not sure how it would handle 1M users, we moved to the proxy solution instead: we now run Dovecot on the 24 storage machines, our webmail system connects over IMAP directly to the user's storage machine, as do the MXs via LMTP, and we only use the Dovecot proxy for POP3 access on the 7 frontend machines. And I must say, what a change. Since then the system has run smoothly, with no more worries about NFS timeouts; the loadavg on all machines is down to almost nothing, as is the internal traffic on the switches (and our stress). Most important, the feedback from our users told us that we did the right thing.
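The proxy arrangement Thierry describes is typically implemented in Dovecot by having the passdb lookup return `proxy=y` plus the `host` of the user's storage machine. A minimal sketch, assuming a SQL user database with a `host` column (table and column names are illustrative, not from the post):

```
# On a frontend POP3 proxy: the passdb lookup decides where to proxy.
passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
}

# In dovecot-sql.conf.ext (hypothetical schema):
# password_query = SELECT NULL AS password, 'Y' AS nopassword, \
#   host, 'Y' AS proxy FROM users WHERE userid = '%u'
```

With `nopassword` and `proxy` set, the frontend forwards the client's own credentials to the backend named in `host`, so authentication still happens on the storage machine.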
>
> Only trouble: now and then we have to move users around; if a machine gets full, the only solution is to move some data to one that has more space. But this is achieved easily with the dsync tool.
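Such a migration could be sketched with dsync's mirror mode, roughly as below (hostnames and the user are hypothetical; the exact invocation depends on the Dovecot 2.x version in use):

```
# Run on the destination storage machine: mirror one user's mailbox
# from the full machine (storage07) over SSH, then repeat until in sync.
dsync -u someuser@example.com mirror \
    ssh mailadmin@storage07 dsync -u someuser@example.com
```

After the final sync, the user's `host` entry in the user database is pointed at the new machine and the old copy can be removed.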
>
> This is just my experience, and it might not be the best, but with the (limited) budget we had we finally came up with a solution that can handle the load and got us away from SAN systems, which could never handle the I/O for mail access. For what it's worth, our storage machines each have only 4 x 1T SATA drives in RAID 10 and 16G of memory, which I was told would never do the job, but it just works. Thanks Timo.
>
> Hoping this will help in your decision,
>
> Regards,
>
> Thierry
>
>
> On 24 Mar 2013, at 18:12, Tigran Petrosyan <tpetrosy at gmail.com> wrote:
>
>> Hi
>> We are going to implement Dovecot for 1 million users, with more than
>> 100T of storage space. We are currently evaluating two solutions: NFS or
>> GFS2 (via Fibre Channel storage).
>> Can someone help us make a decision? What kind of storage solution can
>> we use to achieve good performance and scalability?