[Dovecot] Providing shared folders with multiple backend servers

Sven Hartge sven at svenhartge.de
Sun Jan 8 22:15:22 EET 2012


Sven Hartge <sven at svenhartge.de> wrote:
> Stan Hoeppner <stan at hardwarefreak.com> wrote:

>> If an individual VMware node doesn't have sufficient RAM you could build a
>> VM based Dovecot cluster, run these two VMs on separate nodes, and thin
>> out the other VMs allowed to run on these nodes.  Since you can't
>> directly share XFS, build a tiny Debian NFS server VM and map the XFS
>> LUN to it, export the filesystem to the two Dovecot VMs.  You could
>> install the Dovecot director on this NFS server VM as well.  Converting
>> from maildir to mdbox should help eliminate the NFS locking problems.  I
>> would do the conversion before migrating to this VM setup with NFS.

>> Also, run the NFS server VM on the same physical node as one of the
>> Dovecot servers.  The NFS traffic will be a memory-memory copy instead
>> of going over the GbE wire, decreasing IO latency and increasing
>> performance for that Dovecot server.  If it's possible to have Dovecot
>> director or your fav load balancer weight more connections to one
>> Dovecot node, funnel 10-15% more connections to this one.  (I'm no
>> director guru, in fact haven't used it yet).

> So, this reads like my idea in the first place.

> Only you place all the mails on the NFS server, whereas my idea was to
> just share the shared folders from a central point and keep the normal
> user dirs local to the different nodes, thus reducing the network impact
> for the far more common access to the users' own mailboxes.

To be a bit more concrete on this one:

a) X backend servers to which my frontend (either perdition or dovecot
   director) redirects users, with a fixed mapping and no random
   redirects.

   I might start with 4 backend servers, but I can easily scale them,
   either vertically by adding more RAM or vCPUs, or horizontally by
   adding more VMs and reshuffling some mailboxes during the night.

   Why 4 and not 2? If I'm going to build a cluster, I already have to
   do the work to implement it, and with 4 backends I can distribute the
   load even further without much additional administrative overhead.
   The load impact on each node also gets lower with more nodes, as long
   as I can spread my users evenly across them (for example by md5'ing
   the username and using the first 2 bits of the hash to determine
   which node the user resides on; see the sketch after this list).

b) 1 backend server for the public shared mailboxes, exporting them via
   NFS to the user backend servers
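
As a rough sketch of the user-to-node mapping mentioned in (a): taking
the first 2 bits of the MD5 of the username yields four evenly sized
buckets, one per backend. This is only an illustration of the idea (the
backend names are made up), not something perdition or director does
out of the box:

,----
| import hashlib
|
| # Hypothetical names of the 4 user backend servers
| BACKENDS = ["mail-be1", "mail-be2", "mail-be3", "mail-be4"]
|
| def backend_for(username):
|     """Pick a backend from the first 2 bits of the username's MD5."""
|     digest = hashlib.md5(username.lower().encode("utf-8")).digest()
|     index = digest[0] >> 6   # top 2 bits of the first byte -> 0..3
|     return BACKENDS[index]
|
| print(backend_for("jdoe"))  # the same user always maps to the same node
`----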

The configuration for (b) would look like this, taken from
http://wiki2.dovecot.org/SharedMailboxes/Public:

,----
| # User's private mail location
| mail_location = mdbox:~/mdbox
|
| # When creating any namespaces, you must also have a private namespace:
| namespace {
|   type = private
|   separator = .
|   prefix = INBOX.
|   #location defaults to mail_location.
|   inbox = yes
| }
|
| namespace {
|   type = public
|   separator = .
|   prefix = #shared.
|   location = mdbox:/srv/shared/
|   subscriptions = no
| }
`----

Here /srv/shared is the NFS mountpoint from my central public
shared-mailbox server.
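
Just to make (b) concrete, the export on the central shared-mailbox
server and the mount on each user backend could look roughly like this
(hostnames and options are only assumptions on my part, adjust to
taste):

,----
| # /etc/exports on the central shared-mailbox server ("mail-shared")
| /srv/shared  mail-be*(rw,sync,no_subtree_check)
|
| # /etc/fstab on each user backend server
| mail-shared:/srv/shared  /srv/shared  nfs  rw,hard,intr  0  0
`----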

This setup would keep the amount of data transferred via NFS small:
only a tiny fraction of my 10,000 users have access to a shared folder,
mostly users in the IT team or in the university's administration.

Wouldn't such a setup be the best of both worlds? The main traffic would
go to local disks (RDMs, in this case), while shared folders could still
be provided to every user who needs them, without having to move all of
those users onto a single server.

Regards,
Sven.

-- 
Sigmentation fault. Core dumped.
