Sven Hartge sven@svenhartge.de wrote:
Sven Hartge sven@svenhartge.de wrote:
Stan Hoeppner stan@hardwarefreak.com wrote:
If an individual VMware node doesn't have sufficient RAM, you could build a VM-based Dovecot cluster, run these two VMs on separate nodes, and thin out the other VMs allowed to run on those nodes. Since you can't directly share XFS, build a tiny Debian NFS server VM, map the XFS LUN to it, and export the filesystem to the two Dovecot VMs. You could install the Dovecot director on this NFS server VM as well. Converting from maildir to mdbox should help eliminate the NFS locking problems. I would do the conversion before migrating to this VM setup with NFS.
Also, run the NFS server VM on the same physical node as one of the Dovecot servers. The NFS traffic will be a memory-to-memory copy instead of going over the GbE wire, decreasing IO latency and increasing performance for that Dovecot server. If it's possible to have Dovecot director or your favorite load balancer weight connections toward one Dovecot node, funnel 10-15% more connections to this one. (I'm no director guru; in fact, I haven't used it yet.)
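The export step above is plain NFS configuration; a minimal sketch of the NFS server VM's exports file might look like this (the mail path and the two Dovecot VM addresses are made-up examples, not from this thread):

```
# /etc/exports on the NFS server VM -- export the XFS filesystem
# to the two Dovecot backend VMs only
/srv/mail  10.0.0.11(rw,sync,no_subtree_check)  10.0.0.12(rw,sync,no_subtree_check)
```

After editing, `exportfs -ra` reloads the export table without restarting the NFS server.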
So, this reads like my idea in the first place.
Except you place all the mail on the NFS server, whereas my idea was to share just the shared folders from a central point and keep the normal user dirs local to the different nodes, thus reducing network impact for the far more common per-user access.
To be a bit more concrete on this one:
a) X backend servers to which my frontend (perdition or Dovecot director) redirects users, with fixed mappings, no random redirects.
I might start with 4 backend servers, but I can easily scale them, either vertically by adding more RAM or vCPUs or horizontally by adding more VMs and reshuffling some mailboxes during the night.
Why 4 and not 2? If I'm going to build a cluster, I already have to do the work to implement it anyway, and with 4 backends I can distribute the load even further without much additional administrative overhead. The load impact on each node drops as nodes are added, as long as I can spread my users evenly across those nodes (for example by MD5'ing the username and using the first 2 bits of the hash to determine which node the user resides on).
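The hash-based assignment described above can be sketched in a few lines; this is only an illustration of the idea (the `backend_for` function and the `backendN` naming are hypothetical), using the first byte of the MD5 digest modulo the backend count rather than literally the first two bits, which gives the same stable, roughly even spread:

```python
import hashlib

def backend_for(user: str, n_backends: int = 4) -> str:
    """Map a username to a fixed backend via an MD5 hash of the name."""
    # The digest is stable for a given username, so the same user always
    # lands on the same backend; the first byte modulo the backend count
    # spreads users roughly evenly across the nodes.
    digest = hashlib.md5(user.encode("utf-8")).digest()
    return f"backend{digest[0] % n_backends}"
```

Because the mapping is deterministic, both the frontend and any migration scripts can compute it independently and always agree.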
Ah, I forgot: I _already_ have the mechanisms in place to statically redirect/route users' accesses to different backends, since some of the users are already redirected to a different mail system at another location of my university.
So using this mechanism to also redirect/route users internal to _my_ location is no big deal.
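For the Dovecot-proxy variant of such static routing, one common shape (a sketch assuming Dovecot 2.x; the file names, passwords, and addresses here are invented examples, not the actual setup described in this thread) is a passdb whose extra fields tell the frontend where to proxy each user:

```
# Sketch for the frontend's dovecot.conf: look up each user in a
# passwd-file whose extra fields carry the proxy destination.
passdb {
  driver = passwd-file
  args = /etc/dovecot/proxy-users
}

# /etc/dovecot/proxy-users -- one line per user; the extra fields
# proxy=y and host= make the frontend proxy the session to that backend:
#   alice:{PLAIN}secret::::::proxy=y host=10.0.0.11
#   bob:{PLAIN}secret::::::proxy=y host=10.0.0.12
```

Users not listed (or listed with a different `host=`) can just as easily be pointed at the mail system at the other location.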
This is what got me into the idea of several independent backend storages without the need to share the _whole_ storage, just the shared folders for some users.
(Are my words making any sense? I got the feeling I'm writing German with English words and nobody is really understanding anything ...)
Regards, Sven.
-- Sigmentation fault. Core dumped.