[Dovecot] mmap in GFS2 on RHEL 6.1

Aliet Santiesteban Sifontes alietsantiesteban at gmail.com
Sat Jun 11 07:24:18 EEST 2011


Hello list, we are continuing our tests of Dovecot on a RHEL 6.1 cluster
backend with GFS2, using Dovecot Director for user-to-node persistence.
Everything was fine until we started stress testing the solution with
imaptest: we hit many deadlocks, cluster filesystem corruptions, and hangs,
especially on the index filesystem. We configured the backend as if it were
an NFS-like setup, but that does not seem to work, at least on GFS2 on
RHEL 6.1.
We have a two-node cluster sharing two GFS2 filesystems:
- an index GFS2 filesystem to store user indexes
- a GFS2 filesystem for mailbox data

The NFS/cluster-filesystem-specific settings we used:

mmap_disable = yes
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes
fsync_disable = no
lock_method = fcntl
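
As a side note, we are on Dovecot 2.x (director requires it), and as far as
we understand fsync_disable is the old 1.x name that mail_fsync replaced, so
that line is probably redundant. The effective values can be double-checked
with:

doveconf -n | egrep 'mmap_disable|mail_fsync|mail_nfs|lock_method'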

Mail location:

mail_location = mdbox:/var/vmail/%d/%3n/%n/mdbox:INDEX=/var/indexes/%d/%3n/%n
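
For example, with the standard variable expansion (%d = domain, %n =
username without the domain, %3n = its first three characters), a
hypothetical user jdoe@example.com resolves to:

mail root: /var/vmail/example.com/jdo/jdoe/mdbox
index dir: /var/indexes/example.com/jdo/jdoe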

But this does not seem to work on GFS2 even with user-to-node persistence:
the maillog is plagued with errors and GFS2 hangs under stress testing with
imaptest, with many corrupted indexes, transaction logs, etc. At this point
we have many questions, starting with mmap.
In the Red Hat GFS2 docs we read the golden rules for performance:
- An inode is used in a read-only fashion across all nodes.
- An inode is written or modified from a single node only.

We have successfully achieved this using Dovecot Director.
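
In case it helps, this is roughly what our director setup looks like (a
minimal sketch in 10-director.conf style; the IPs are placeholders, not our
real addresses):

# Directors form a ring and consistently hash each user to one backend
director_servers = 10.1.0.1 10.1.0.2        # the director nodes
director_mail_servers = 10.2.0.1 10.2.0.2   # the two GFS2 backend nodes

service director {
  unix_listener login/director {
    mode = 0666
  }
  fifo_listener login/proxy-notify {
    mode = 0666
  }
  inet_listener {
    port = 9090   # ring communication between directors
  }
}
service imap-login {
  # route IMAP logins through the director so each user sticks to one backend
  executable = imap-login director
}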

Now, for mmap, Red Hat says:

... If you mmap() a file on GFS2 with a read/write mapping, but only read
from it, this only counts as a read. On GFS though, it counts as a write,
so GFS2 is much more scalable with mmap() I/O ...

But in our config we are using mmap_disable = yes. Do we have to use
mmap_disable = no with GFS2?

Also, how does Dovecot manage cache flushing on a GFS2 filesystem?

Why, if we are doing user-to-node persistence, do the Dovecot indexes get
corrupted?

What lock method should we use?

How should fsync be used?

We know we have many questions, but this is really complex stuff, and we
would appreciate any help you can give us.

Thank you all for the great work, especially Timo.
Best regards

