[Dovecot] distributed mdbox
Anyone know how to set up dovecot with mdbox so that it can be used through shared storage from multiple hosts? I've set up a gluster volume and am sharing it between 2 test clients. I'm using the postfix/dovecot LDA for delivery, and I'm using postal to send mail between 40 users. In doing this, I'm seeing these errors in the logs:
Mar 21 09:36:29 test-gluster-client2 dovecot: lda(testuser34): Error: Fixed index file /mnt/testuser34/mdbox/storage/dovecot.map.index: messages_count 272 -> 271
Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error: Log synchronization error at seq=4,offset=3768 for /mnt/testuser28/mdbox/storage/dovecot.map.index: Append with UID 516, but next_uid = 517
Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error: Log synchronization error at seq=4,offset=4220 for /mnt/testuser28/mdbox/storage/dovecot.map.index: Extension record update for invalid uid=517
Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error: Log synchronization error at seq=4,offset=5088 for /mnt/testuser28/mdbox/storage/dovecot.map.index: Extension record update for invalid uid=517
Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Warning: fscking index file /mnt/testuser28/mdbox/storage/dovecot.map.index
Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser34): Warning: fscking index file /mnt/testuser34/mdbox/storage/dovecot.map.index
This is my dovecot config currently:
jdevine@test-gluster-client2:~> dovecot -n
# 2.0.13: /etc/dovecot/dovecot.conf
# OS: Linux 3.0.0-13-server x86_64 Ubuntu 11.10
lock_method = dotlock
mail_fsync = always
mail_location = mdbox:~/mdbox
mail_nfs_index = yes
mail_nfs_storage = yes
mmap_disable = yes
passdb {
  driver = pam
}
protocols = " imap"
ssl_cert =
On 21.3.2012, at 17.56, James Devine wrote:
Dovecot assumes that the filesystem behaves the same way as regular local filesystems.
Looks like gluster doesn't fit that assumption. So, the solution is the same as with NFS: http://wiki2.dovecot.org/Director
On Wed, Mar 21, 2012 at 10:05 AM, Timo Sirainen tss@iki.fi wrote:
Looks like gluster doesn't fit that assumption. So, the solution is the same as with NFS: http://wiki2.dovecot.org/Director
What filesystem mechanisms might not be working in this case?
Also, I don't seem to get these errors with a single dovecot machine using the shared storage, and it looks like there are multiple simultaneous delivery processes running.
On Wed, Mar 21, 2012 at 10:25 AM, James Devine fxmulder@gmail.com wrote:
What filesystem mechanisms might not be working in this case?
The problem is most likely the same as with NFS: server A caches data -> server B modifies data -> server A modifies data using stale cached state -> corruption. Glusterfs works with FUSE, and FUSE has quite similar problems to NFS.
With director you guarantee that the same mailbox isn't accessed simultaneously by multiple servers, so this problem goes away.
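That guarantee can be pictured as a stable user-to-backend mapping. The toy sketch below is an illustration only, not Dovecot's actual director algorithm (the real director uses a shared ring across proxies and handles users moving and backends failing); hostnames are made up:

```python
import hashlib

def pick_backend(username, backends):
    """Toy director-style mapping: the same user always lands on the
    same backend, so only one server ever touches that user's indexes."""
    digest = hashlib.md5(username.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

backends = ["mail1.example.com", "mail2.example.com"]
# Every delivery and IMAP login for testuser28 is routed to one node,
# so no second node can work from stale cached index data.
assert pick_backend("testuser28", backends) == pick_backend("testuser28", backends)
```

With routing like this in front of the cluster, the filesystem's cache-coherency behavior stops mattering for a given mailbox, because only one node has it open at a time.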
On 3/21/2012 12:04 PM, Timo Sirainen wrote:
With director you guarantee that the same mailbox isn't accessed simultaneously by multiple servers, so this problem goes away.
If using "real" shared storage i.e. an FC or iSCSI SAN LUN, you could use a true cluster file system such as OCFS or GFS. Both will eliminate this problem, and without requiring Dovecot director. And you'll get better performance than with Gluster, which, BTW, isn't really suitable as a transactional filesystem, was not designed for such a use case.
-- Stan
On 03/22/2012 12:11 AM, Stan Hoeppner wrote:
With director you guarantee that the same mailbox isn't accessed simultaneously by multiple servers, so this problem goes away.
If using "real" shared storage i.e. an FC or iSCSI SAN LUN, you could use a true cluster file system such as OCFS or GFS. Both will eliminate this problem, and without requiring Dovecot director. And you'll get better performance than with Gluster, which, BTW, isn't really suitable as a transactional filesystem, was not designed for such a use case.
Speaking as an admin who has run Dovecot on top of GFS both with and without the director, I would never go back to a cluster without the director. The cluster performs *so* much better when glocks can be cached on a single node, and this can't happen if a single user has IMAP processes on separate nodes.
No, you don't strictly need the director if you have GFS, but if you can manage it, you'll be a lot happier.
Jim
On 3/22/2012 11:17 AM, Jim Lawson wrote:
Speaking as an admin who has run Dovecot on top of GFS both with and without the director, I would never go back to a cluster without the director. The cluster performs *so* much better when glocks can be cached on a single node, and this can't happen if a single user has IMAP processes on separate nodes.
No, you don't strictly need the director if you have GFS, but if you can manage it, you'll be a lot happier.
Did/do you see the Director/glock benefit with both maildir and mdbox, Jim? Do you see any noteworthy performance differences between the two formats on GFS, with and without Director? BTW, are you hitting FC or iSCSI LUNs?
-- Stan
On 3/23/12 3:13 AM, Stan Hoeppner wrote:
Did/do you see the Director/glock benefit with both maildir and mdbox, Jim? Do you see any noteworthy performance differences between the two formats on GFS, with and without Director? BTW, are you hitting FC or iSCSI LUNs?
Actually, we're all mbox. This primarily has to do with how users do self-service mail recovery from backup: one folder = one file.
I'd like to move to mdbox, but it would mean the recovery scripts will need to understand which files are associated with which folders, as well as restoring the associated index files. That's a to-do.
We're using fibrechannel (IBM v7000) storage, but I would expect to see the same thing with iSCSI. It's mostly about different nodes contending over locks on the same files (although I'm sure cache locality helps a great deal, too.) If you end up with imap processes for the same folder on different nodes, or mail delivery happening on one node and imap on the other, you will feel the lag in your IMAP client. "Oh, my INBOX has been unresponsive for 10 seconds, I must be getting a lot of mail right now!" That's an exaggeration, but not by much.
Jim
On 3/23/2012 7:13 AM, Jim Lawson wrote:
Actually, we're all mbox. This primarily has to do with how users do self-service mail recovery from backup: one folder = one file.
Yeah, mbox isn't as dead as some people contend, but it just doesn't have legs for newer deployment architectures.
I'd like to move to mdbox, but it would mean the recovery scripts will need to understand which files are associated with which folders, as well as restoring the associated index files. That's a to-do.
That's an easy weekend project. ;)
We're using fibrechannel (IBM v7000) storage, but I would expect to see the same thing with iSCSI. It's mostly about different nodes contending over locks on the same files (although I'm sure cache locality helps a great deal, too.) If you end up with imap processes for the same folder on different nodes, or mail delivery happening on one node and imap on the other, you will feel the lag in your IMAP client. "Oh, my INBOX has been unresponsive for 10 seconds, I must be getting a lot of mail right now!" That's an exaggeration, but not by much.
I was asking about your SAN storage unrelated to the locking issue. Just a curiosity thing. Note my email domain. ;) I'm an FC fan but iSCSI seems to be more popular in many circles, actually pretty much market wide these days. So when I come across another SAN user I'm naturally curious as to what hardware they use.
Just so nobody gets the wrong idea, I wasn't advocating against Director earlier in the thread. I think it's fantastic and solves some critical scalability problems. As in your case, it allows one to use his mail storage format of choice with a cluster filesystem while mostly avoiding the locking headaches. In the past one pretty much had to use maildir with a cluster FS to avoid the locking performance penalty, but one had to suffer the higher IOPS load on the storage. Not always a good tradeoff, especially for busy mail systems.
I assume you do still have some minor locking/performance issues with the INBOX, even with Director, when the LDA and the user's MUA are both hitting the INBOX index and mbox files. You'll still see this with mdbox, but probably to a lesser degree if you use a smallish mdbox_rotate_size value. To mitigate this INBOX locking you could go with dual namespaces, using maildir or sdbox for the INBOX and mdbox for the other user mail folders.
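A split like that might look roughly like the following in dovecot.conf. This is only a sketch: the prefix names and location paths are illustrative, and a mixed-format setup like this should be tested carefully before production use:

```
# INBOX in its own namespace, one-message-per-file (sdbox) to keep
# delivery/MUA lock contention on the hot mailbox short-lived
namespace inbox {
  prefix =
  separator = /
  location = sdbox:~/sdbox
  inbox = yes
}

# everything else in mdbox, for lower IOPS on the shared storage
namespace archive {
  prefix = Archive/
  separator = /
  location = mdbox:~/mdbox
}
```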
-- Stan
On 23.3.2012, at 19.11, Stan Hoeppner wrote:
I assume you do still have some minor locking/performance issues with the INBOX, even with Director, when LDA and the user MUA are both hitting the INBOX index and mbox files. You'll still see this with mdbox, but probably to a lesser degree if you use a smallish mdbox_rotate_size value. To mitigate this INBOX locking you could go with a dual namespaces, using maildir or sdbox for the INBOX and mdbox for the other user mail folders.
The biggest difference is that mbox requires read locks, while mdbox doesn't; mdbox lock waits are very similar to maildir's. Of course, I don't know about the cluster filesystems' internal locking, but I thought it was even worse with maildir than with mbox, because a read lock had to be taken for each file read. I guess this depends on the filesystem.
On 3/23/12 1:11 PM, Stan Hoeppner wrote:
On 3/23/2012 7:13 AM, Jim Lawson wrote:
I'd like to move to mdbox, but it would mean the recovery scripts will need to understand which files are associated with which folders, as well as restoring the associated index files. That's a to-do.
That's an easy weekend project. ;)
Out of curiosity, does anyone do self-service restoration of individual mdbox folders? If I'm going to write a script to do it, it'd be nice to avoid any pitfalls someone else has already run into. :-) We're already backing up from snapshots, so the synchronization issues are solved (at least at backup time...)
Jim
On Wed, 21 Mar 2012 09:56:12 -0600, James Devine fxmulder@gmail.com wrote:
Anyone know how to setup dovecot with mdbox so that it can be used through shared storage from multiple hosts? [...]
I was able to get dovecot working across a gluster cluster a few weeks ago and it worked just fine. I would recommend using the native gluster mount option (you need to install the gluster software on the clients) and using distributed-replicated as your replication mechanism. If you're running two gluster servers you should have a replica count of two with distributed-replicated. You should test first to make sure you can create a file in both mounts and see it from every mount point in the cluster, as well as interact with it. It's also very important to make sure your servers are running with synchronized clocks from an NTP server. Very bad things happen to a (dovecot or gluster) cluster out of sync with NTP.
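For a two-server setup like the one described, the volume creation and the native mount might look roughly like this (hostnames, volume name, and brick paths are examples, not anything from this thread):

```
# on one gluster server: a distributed-replicated volume, replica count 2
gluster volume create mailstore replica 2 \
    gluster1:/export/brick1 gluster2:/export/brick1
gluster volume start mailstore

# on each Dovecot client: the native FUSE mount (not NFS)
mount -t glusterfs gluster1:/mailstore /mnt
```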
On 23.3.2012, at 15.39, list@airstreamcomm.net wrote:
I was able to get dovecot working across a gluster cluster a few weeks ago and it worked just fine. I would recommend using the native gluster mount option (need to install gluster software on clients), and using distributed replicated as your replication mechanism.
Have you tried stress testing it with imaptest? Run in parallel for both servers:
imaptest host=gluster1 user=testuser pass=testpass
imaptest host=gluster2 user=testuser pass=testpass
And see if Dovecot logs any errors.
On Fri, 23 Mar 2012 16:06:25 +0200, Timo Sirainen tss@iki.fi wrote:
I did stress test it, but we have developed a "mail bot net" tool for the purpose. I should mention this was tested using dovecot 1.2, as this is our current production version (hopefully we will be upgrading soon). It's comprised of a control server that starts a bot network of client machines that create pop/imap connections (smtp as well) against our test cluster of dovecot (and postfix) servers. In my test I distributed the load across a two-node dovecot (/postfix) cluster backed by glusterfs, which has SAN storage attached to it. I actually didn't change my configuration from when I had a test NFS server connected to the test servers (mmap disabled, fcntl locking, etc.), because glusterfs was an afterthought when we were stress testing our new netapp system using NFS. We have everything in VMware, including the glusterfs servers. Using five bot servers and connecting 7 times a second from each server (35 connections per second) for both pop and imap (70 total connections per second), split between two dovecot servers, I was not seeing any big issues. The load average was low, and there were no errors to speak of in dovecot (or postfix). I was mounting the storage with the glusterfs native client, not NFS (which I have not tested). I would like to do a more thorough test of glusterfs using Dovecot 2.0 on some dedicated hardware and see how much further I can push the system.
On 23.3.2012, at 19.43, list@airstreamcomm.net wrote:
What did the bots do? Add messages and delete messages as fast as they could? I guess that's mostly enough to see if things work. imaptest anyway hammers the server as fast as it can with all kinds of commands.
On Fri, 23 Mar 2012 23:03:01 +0200, Timo Sirainen tss@iki.fi wrote:
What did the bots do? Add messages and delete messages as fast as they could? I guess that's mostly enough to see if things work. imaptest anyway hammers the server as fast as it can with all kinds of commands.
We created two python scripts on the bots: one listed all the messages in the inbox and then deleted them over pop, and the other did the same over imap. The bots were also sending messages to the server simultaneously to repopulate the inboxes. I didn't know about imaptest, thanks!
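For reference, the POP3 half of such a bot can be sketched with Python's stdlib poplib. This is a toy version under assumptions: the host and credentials are placeholders, and the real tool described above presumably adds concurrency and the SMTP repopulation side:

```python
import poplib

def parse_list_line(line):
    """Parse one POP3 LIST response line like b'1 1536' -> (msg_num, size)."""
    num, size = line.split()[:2]
    return int(num), int(size)

def pop_list_and_delete(host, user, password):
    """List every message in the mailbox, then mark them all deleted."""
    conn = poplib.POP3(host)
    conn.user(user)
    conn.pass_(password)
    _resp, listings, _octets = conn.list()
    msgs = [parse_list_line(line) for line in listings]
    for num, _size in msgs:
        conn.dele(num)   # deletions are applied by the server on QUIT
    conn.quit()
    return msgs

# usage (placeholder host/credentials pointing at one cluster node):
# pop_list_and_delete("gluster1", "testuser1", "testpass")
```

Running a loop of this from several client machines against both nodes at once is essentially what the bot net does; imaptest exercises a much wider command mix on the IMAP side.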
On Fri, Mar 23, 2012 at 7:39 AM, list@airstreamcomm.net wrote:
What storage method are you using? I'm able to produce errors within seconds of starting postal with more than one thread.
As it turns out, I can duplicate this problem with a single dovecot server and a single gluster server using mdbox, so maybe it's not caching? This being the case, I don't think director would help.
On 6.4.2012, at 18.39, James Devine wrote:
As it turns out I can duplicate this problem with a single dovecot server and a single gluster server using mdbox, so maybe not caching? This being the case I don't think director would help
Yeah, not caching then. I know the Glusterfs people implemented some fixes/workarounds to make Dovecot work better. I don't know if all of those fixes are in the public glusterfs.
participants (5)
- James Devine
- Jim Lawson
- list@airstreamcomm.net
- Stan Hoeppner
- Timo Sirainen