[Dovecot] GlusterFs - Any new progress reports?
GlusterFs always strikes me as being "the solution" (one day...). It's had a lot of growing pains, but a few on the list have had success using it already.
Given that some time has gone by since I last asked: has anyone got any more recent experience with it, and how has it worked out, with particular emphasis on Dovecot maildir storage? How has version 3 worked out for you?
Anyone had success using some other clustered/HA filestore with dovecot who can share their experience? (OCFS/GFS over DRBD, etc?)
My interest is more in bootstrapping a more highly available system from lower quality (commodity) components than very high end use.
Thanks
Ed W
Anyone had success using some other clustered/HA filestore with dovecot who can share their experience? (OCFS/GFS over DRBD, etc?)
My interest is more in bootstrapping a more highly available system from lower quality (commodity) components than very high end use
We use DRBD with ext3 in an active/passive setup for more than 10,000 mailboxes. Works like a charm!
I don't really trust cluster filesystems, and most cluster filesystems are not made for small files.
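In case it helps, here is a minimal sketch of what such a two-node active/passive resource looks like in drbd.conf (hostnames, disks and addresses below are placeholders, not our actual config; the ext3 filesystem on /dev/drbd0 is only ever mounted on the node that is currently primary, with failover handled by heartbeat/pacemaker or by hand):

resource mail {
  protocol C;
  on node-a {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}

# on the currently active node only:
drbdadm primary mail
mount -t ext3 /dev/drbd0 /var/mail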
Alex
-------- Original Message --------
Date: Wed, 17 Feb 2010 20:15:30 +0100
From: alex handle <alex.handle@gmail.com>
To: Dovecot Mailing List <dovecot@dovecot.org>
Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
Anyone had success using some other clustered/HA filestore with dovecot who can share their experience? (OCFS/GFS over DRBD, etc?)
My interest is more in bootstrapping a more highly available system from lower quality (commodity) components than very high end use
We use DRBD with ext3 in an active/passive setup for more than 10,000 mailboxes. Works like a charm!
I don't really trust cluster filesystems, and most cluster filesystems are not made for small files.
I use GlusterFS with Dovecot and it works without issues. The GlusterFS team has made huge progress since 2.0 and with the new 3.0 version they have again proved that GlusterFS can get better.
Alex
Steve
On Wed, Feb 17, 2010 at 11:55 AM, Steve <steeeeeveee@gmx.net> wrote:
-------- Original Message --------
Date: Wed, 17 Feb 2010 20:15:30 +0100
From: alex handle <alex.handle@gmail.com>
To: Dovecot Mailing List <dovecot@dovecot.org>
Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
Anyone had success using some other clustered/HA filestore with dovecot who can share their experience? (OCFS/GFS over DRBD, etc?)
My interest is more in bootstrapping a more highly available system from lower quality (commodity) components than very high end use
We use DRBD with ext3 in an active/passive setup for more than 10,000 mailboxes. Works like a charm!
I don't really trust cluster filesystems, and most cluster filesystems are not made for small files.
I use GlusterFS with Dovecot and it works without issues. The GlusterFS team has made huge progress since 2.0 and with the new 3.0 version they have again proved that GlusterFS can get better.
Alex
Steve
Hi Steve,
I was wondering if perhaps I might snag a copy of your glusterfs server/client configs to see what you are doing? I am interested in using it in our mail setup, but when I last tried, a little over a month ago, I got a bunch of corrupted mails. So far I am only using it for a web cluster and that seems to be working, but that's a different use case I guess.
Thanks!
Brandon
-------- Original Message --------
Date: Thu, 18 Feb 2010 08:36:36 -0800
From: Brandon Lamb <brandonlamb@gmail.com>
To: Dovecot Mailing List <dovecot@dovecot.org>
Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
On Wed, Feb 17, 2010 at 11:55 AM, Steve <steeeeeveee@gmx.net> wrote:
-------- Original Message --------
Date: Wed, 17 Feb 2010 20:15:30 +0100
From: alex handle <alex.handle@gmail.com>
To: Dovecot Mailing List <dovecot@dovecot.org>
Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
Anyone had success using some other clustered/HA filestore with dovecot who can share their experience? (OCFS/GFS over DRBD, etc?)
My interest is more in bootstrapping a more highly available system from lower quality (commodity) components than very high end use
We use DRBD with ext3 in an active/passive setup for more than 10,000 mailboxes. Works like a charm!
I don't really trust cluster filesystems, and most cluster filesystems are not made for small files.
I use GlusterFS with Dovecot and it works without issues. The GlusterFS team has made huge progress since 2.0 and with the new 3.0 version they have again proved that GlusterFS can get better.
Alex
Steve
Hi Steve,
I was wondering if perhaps I might snag a copy of your glusterfs server/client configs to see what you are doing? I am interested in using it in our mail setup, but when I last tried, a little over a month ago, I got a bunch of corrupted mails. So far I am only using it for a web cluster and that seems to be working, but that's a different use case I guess.
Server part:
volume gfs-srv-ds
  type storage/posix
  option directory /mnt/glusterfs/mailstore01
end-volume

volume gfs-srv-ds-locks
  type features/locks
  option mandatory-locks off
  subvolumes gfs-srv-ds
end-volume

volume gfs-srv-ds-remote
  type protocol/client
  option transport-type tcp
# option username
# option password
  option remote-host 192.168.0.142
  option remote-port 6998
  option frame-timeout 600
  option ping-timeout 10
  option remote-subvolume gfs-srv-ds-locks
end-volume

volume gfs-srv-ds-replicate
  type cluster/replicate
  option data-self-heal on
  option metadata-self-heal on
  option entry-self-heal on
# option read-subvolume gfs-srv-ds-locks
# option favorite-child
  option data-change-log on
  option metadata-change-log on
  option entry-change-log on
  option data-lock-server-count 1
  option metadata-lock-server-count 1
  option entry-lock-server-count 1
  subvolumes gfs-srv-ds-locks gfs-srv-ds-remote
end-volume

volume gfs-srv-ds-io-threads
  type performance/io-threads
  option thread-count 16
  subvolumes gfs-srv-ds-replicate
end-volume

volume gfs-srv-ds-write-back
  type performance/write-behind
  option cache-size 64MB
  option flush-behind on
# option disable-for-first-nbytes 1
# option enable-O_SYNC false
  subvolumes gfs-srv-ds-io-threads
end-volume

volume gfs-srv-ds-io-cache
  type performance/io-cache
  option cache-size 32MB
  option priority *:0
  option cache-timeout 2
  subvolumes gfs-srv-ds-write-back
end-volume

volume gfs-srv-ds-server
  type protocol/server
  option transport-type tcp
  option transport.socket.listen-port 6998
  option auth.addr.gfs-srv-ds-locks.allow 192.168.0.*,127.0.0.1
  option auth.addr.gfs-srv-ds-io-threads.allow 192.168.0.*,127.0.0.1
  option auth.addr.gfs-srv-ds-io-cache.allow 192.168.0.*,127.0.0.1
  subvolumes gfs-srv-ds-io-cache
end-volume
Client part:
volume gfs-cli-ds-client
  type protocol/client
  option transport-type tcp
# option remote-host gfs-vu-mailstore-c01.vunet.local
  option remote-host 127.0.0.1
  option remote-port 6998
  option frame-timeout 600
  option ping-timeout 10
  option remote-subvolume gfs-srv-ds-io-cache
end-volume

#volume gfs-cli-ds-write-back
#  type performance/write-behind
#  option cache-size 64MB
#  option flush-behind on
#  # option disable-for-first-nbytes 1
#  # option enable-O_SYNC false
#  subvolumes gfs-cli-ds-client
#end-volume

#volume gfs-cli-ds-io-cache
#  type performance/io-cache
#  option cache-size 32MB
#  option priority *:0
#  option cache-timeout 1
#  subvolumes gfs-cli-ds-write-back
#end-volume
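The volfiles above get used in the usual way; roughly like this (file names and the mount point here are just examples, adjust to your own layout):

# on each storage node
glusterfsd -f /etc/glusterfs/glusterfsd-mail.vol

# on each Dovecot node
glusterfs -f /etc/glusterfs/glusterfs-mail.vol /var/vmail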
Thanks!
Brandon
Dare I ask... (as it's not exactly clear from the Gluster docs):
If I take 5 storage servers to house my /mail, can my cluster of 5 front end dovecot servers all mount/read/write to /mail?
The reason I ask is that the docs seem to suggest I should be doing 5 servers with 5 partitions, one for each mail server?
Any clues?
Regards
John
-------- Original Message --------
Date: Thu, 18 Feb 2010 21:32:46 +0000
From: John Lyons <john@support.nsnoc.com>
To: Dovecot Mailing List <dovecot@dovecot.org>
Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
Dare I ask...(as it's not exactly clear from the Gluster docs)
If I take 5 storage servers to house my /mail, can my cluster of 5 front end dovecot servers all mount/read/write to /mail?
Yes. That's the beauty of GlusterFS.
The reason I ask is that the docs seem to suggest I should be doing 5 servers with 5 partitions, one for each mail server?
You can do that. But with GlusterFS and Dovecot you don't need to. You can mount the same GlusterFS share read/write on all the mail servers. Dovecot will usually add the hostname of the delivering system into the maildir file name. As long as delivery is collision-free in terms of file names, you can scale up to as many read/write nodes as you like.
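For example, a message delivered by Dovecot's deliver typically ends up with a file name along these lines (illustrative only; the exact fields depend on the Dovecot version and delivery agent):

1266447330.M41983P2137.mx1.example.com,S=4170,W=4261

The mx1.example.com part is the delivering host, so two nodes writing into the same maildir at the same instant still produce different file names.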
Any clues?
Regards
John
Steve
On 19.2.2010, at 0.37, Steve wrote:
You can do that. But with GlusterFS and Dovecot you don't need to. You can mount the same GlusterFS share read/write on all the mail servers. Dovecot will usually add the hostname of the delivering system into the maildir file name. As long as delivery is collision-free in terms of file names, you can scale up to as many read/write nodes as you like.
This has the same problems as with NFS (assuming the servers aren't only delivering mails, without updating index files). http://wiki.dovecot.org/NFS
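Roughly, the relevant settings from that page (paraphrased here, check the wiki for your version) are:

mmap_disable = yes
dotlock_use_excl = no    # only needed with NFSv2
mail_nfs_storage = yes
mail_nfs_index = yes
fsync_disable = no

Even with those, index files can still get corrupted if several servers update the same user's indexes at the same time, so you normally want each user directed to a single server at a time.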
-------- Original Message --------
Date: Fri, 19 Feb 2010 03:02:48 +0200
From: Timo Sirainen <tss@iki.fi>
To: Dovecot Mailing List <dovecot@dovecot.org>
Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
On 19.2.2010, at 0.37, Steve wrote:
You can do that. But with GlusterFS and Dovecot you don't need to. You can mount the same GlusterFS share read/write on all the mail servers. Dovecot will usually add the hostname of the delivering system into the maildir file name. As long as delivery is collision-free in terms of file names, you can scale up to as many read/write nodes as you like.
This has the same problems as with NFS (assuming the servers aren't only delivering mails, without updating index files). http://wiki.dovecot.org/NFS
Except that NFS is not as flexible as GlusterFS. In GlusterFS I can replicate, stripe, aggregate, etc... all things that I can't do with NFS.
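For example, aggregating two replicated pairs into a single namespace is just one more translator on the client side, something like this (volume names below are made up; each mail-pair-* would itself be a cluster/replicate over two protocol/client volumes):

volume mail-dht
  type cluster/distribute
  subvolumes mail-pair-a mail-pair-b
end-volume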
On Fri, 2010-02-19 at 03:12 +0100, Steve wrote:
This has the same problems as with NFS (assuming the servers aren't only delivering mails, without updating index files). http://wiki.dovecot.org/NFS
Except that NFS is not as flexible as GlusterFS. In GlusterFS I can replicate, stripe, aggregate, etc... all things that I can't do with NFS.
Sure .. but you can break the index files in exactly the same way as with NFS. :)
-------- Original Message --------
Date: Fri, 19 Feb 2010 04:37:04 +0200
From: Timo Sirainen <tss@iki.fi>
To: dovecot@dovecot.org
Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
On Fri, 2010-02-19 at 03:12 +0100, Steve wrote:
This has the same problems as with NFS (assuming the servers aren't only delivering mails, without updating index files). http://wiki.dovecot.org/NFS
Except that NFS is not as flexible as GlusterFS. In GlusterFS I can replicate, stripe, aggregate, etc... all things that I can't do with NFS.
Sure .. but you can break the index files in exactly the same way as with NFS. :)
That is right :)
Sure .. but you can break the index files in exactly the same way as with NFS. :)
That is right :)
For us, all the front end exim servers pass their mail to a single final delivery server. It was done so that we didn't have all the front end servers needing to mount the storage. It also means that if we need to stop local delivery for any reason we're only stopping one exim server.
The NFS issue is resolved (I think/hope) by having the front end load balancer use persistent connections to the dovecot servers.
All I can say is we've used dovecot since it was a little nipper and have never had any issues with indexes.
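For what it's worth, plain source-IP stickiness on the balancer is enough for that; with HAproxy, for example, it would look roughly like this (illustrative only, not our actual config; backend names and addresses are made up):

listen imap
    bind 0.0.0.0:143
    mode tcp
    balance source              # same client IP always hits the same backend
    option tcpka
    server dovecot1 192.168.0.11:143 check
    server dovecot2 192.168.0.12:143 check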
Regards
John www.netserve.co.uk
I use GlusterFS with Dovecot and it works without issues. The GlusterFS team has made huge progress since 2.0 and with the new 3.0 version they have again proved that GlusterFS can get better.
You have kindly shared some details of your config before - care to update us on what you are using now, how much storage, how many deliveries/hour, IOPS, etc? Lots of stuff was quite hard work for you back with GlusterFS v2; what kind of stuff did you need to work around with v3? (I can't believe it worked out of the box!) Any notes for users with small office sized setups (i.e. 2 servers or so)?
I presume you use Gentoo on your gluster machines? Do you run gluster only on the storage machines, or do you virtualise and use the spare CPU to run other services? (Given the price of electricity it seems a shame not to load servers up these days...)
Thanks
Ed W
Quoting Ed W <lists@wildgooses.com>:
Anyone had success using some other clustered/HA filestore with dovecot who can share their experience? (OCFS/GFS over DRBD, etc?)
GFS2 over DRBD in an active-active setup works fine IMHO. Not perfect, but it was cheap and works well... Lets me reboot machines with "no downtime", which was one of my main goals when implementing it...
My interest is more in bootstrapping a more highly available system from lower quality (commodity) components than very high end use
GFS+DRBD should fit the bill... You need several nics and cables, but they are dirt cheap... Just 2 machines with the same disk setup, and a handful of nics and cables, and you are off and running...
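If it helps anyone, the only DRBD bits that differ from an ordinary two-node mirror are dual-primary mode and the split-brain policies, plus creating the GFS2 filesystem with one journal per node. A rough sketch (resource, cluster and device names are placeholders, not my actual config):

resource mail {
  startup { become-primary-on both; }
  net {
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  # ... the usual per-node device/disk/address sections go here ...
}

# lock_dlm locking, cluster "mailcluster", 2 journals (one per node):
mkfs.gfs2 -p lock_dlm -t mailcluster:mail -j 2 /dev/drbd0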
Thanks
Ed W
-- Eric Rostetter The Department of Physics The University of Texas at Austin
Go Longhorns!
-------- Original Message --------
Date: Wed, 17 Feb 2010 21:25:46 -0600
From: Eric Rostetter <rostetter@mail.utexas.edu>
To: dovecot@dovecot.org
Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
Quoting Ed W <lists@wildgooses.com>:
Anyone had success using some other clustered/HA filestore with dovecot who can share their experience? (OCFS/GFS over DRBD, etc?)
GFS2 over DRBD in an active-active setup works fine IMHO. Not perfect, but it was cheap and works well... Lets me reboot machines with "no downtime", which was one of my main goals when implementing it...
My interest is more in bootstrapping a more highly available system from lower quality (commodity) components than very high end use
GFS+DRBD should fit the bill... You need several nics and cables, but they are dirt cheap... Just 2 machines with the same disk setup, and a handful of nics and cables, and you are off and running...
Can you easily scale that GFS2+DRBD setup to more than just 2 nodes? Is it possible to aggregate the speed when using many nodes? Can all the nodes be active at the same time, or is one node always the master and the other a hot spare that kicks in when the master is down?
Thanks
Ed W
-- Eric Rostetter The Department of Physics The University of Texas at Austin
Go Longhorns!
Quoting Steve <steeeeeveee@gmx.net>:
My interest is more in bootstrapping a more highly available system from lower quality (commodity) components than very high end use
GFS+DRBD should fit the bill... You need several nics and cables, but they are dirt cheap... Just 2 machines with the same disk setup, and a handful of nics and cables, and you are off and running...
Can you easily scale that GFS2+DRBD setup to more than just 2 nodes?
Not really, no. You can have those two nodes distribute it out via gnbd though... Red Hat claims it scales well, but I've not yet tested it...
Can all the nodes be active at the same time, or is one node always the master and the other a hot spare that kicks in when the master is down?
The free version of DRBD only supports max 2 nodes. They can be active-active or active-passive.
The non-free version is supposed to support 3 nodes, but I've heard conflicting reports on what the 3rd node can do... You'd have to investigate that yourself... I'm not interested in it, since I don't want to pay for it...
(Though I am willing to donate to the project)
My proposed solution to the more-than-two-nodes is gnbd...
If that doesn't meet your needs, then DRBD probably isn't the proper choice. You didn't mention anything about number of nodes in your original post, IIRC.
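For anyone who hasn't met gnbd before, the export/import side is fairly simple; roughly (device and names below are placeholders):

# on a storage node: export the clustered block device
gnbd_export -d /dev/drbd0 -e mailgfs

# on an additional node: import everything that server exports,
# then mount the GFS filesystem from /dev/gnbd/mailgfs as usual
gnbd_import -i storage1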
Thanks
Ed W
-- Eric Rostetter The Department of Physics The University of Texas at Austin
Go Longhorns!
-------- Original Message --------
Date: Thu, 18 Feb 2010 13:51:33 -0600
From: Eric Rostetter <rostetter@mail.utexas.edu>
To: dovecot@dovecot.org
Subject: Re: [Dovecot] GlusterFs - Any new progress reports?
Quoting Steve <steeeeeveee@gmx.net>:
My interest is more in bootstrapping a more highly available system from lower quality (commodity) components than very high end use
GFS+DRBD should fit the bill... You need several nics and cables, but they are dirt cheap... Just 2 machines with the same disk setup, and a handful of nics and cables, and you are off and running...
Can you easily scale that GFS2+DRBD setup to more than just 2 nodes?
Not really, no. You can have those two nodes distribute it out via gnbd though... Red Hat claims it scales well, but I've not yet tested it...
I have already installed GFS on a cluster in the past, but never on DRBD.
Can all the nodes be active at the same time, or is one node always the master and the other a hot spare that kicks in when the master is down?
The free version of DRBD only supports max 2 nodes. They can be active-active or active-passive.
The non-free version is supposed to support 3 nodes, but I've heard conflicting reports on what the 3rd node can do... You'd have to investigate that yourself... I'm not interested in it, since I don't want to pay for it... (Though I am willing to donate to the project)
Hmm... when I started with GlusterFS I thought that using more than two nodes is something that I would never need. But now that I have GlusterFS up and running and am using more than two nodes, I really see a benefit in being able to use more than two nodes. For me this is a big advantage of GlusterFS compared to DRBD.
My proposed solution to the more-than-two-nodes is gnbd...
Never heard of it before. I don't like the fact that I need to patch the kernel in order to get it working.
If that doesn't meet your needs, then DRBD probably isn't the proper choice. You didn't mention anything about number of nodes in your original post, IIRC.
I did not post the original post. I just responded to the original post saying that GlusterFS works for me.
Thanks
Ed W
-- Eric Rostetter The Department of Physics The University of Texas at Austin
Go Longhorns!
Quoting Steve <steeeeeveee@gmx.net>:
I have already installed GFS on a cluster in the past, but never on DRBD.
Me too (I did it on a real physical SAN before).
Hmm... when I started with GlusterFS I thought that using more than two nodes is something that I would never need.
GlusterFS is really designed to allow such things... So is GFS. But these are filesystems...
DRBD isn't really designed to scale this way. A SAN or NAS is.
But now that I have GlusterFS up and running and am using more than two nodes, I really see a benefit in being able to use more than two nodes. For me this is a big advantage of GlusterFS compared to DRBD.
You are comparing filesystems to storage/mirroring systems. Not a valid comparison...
My proposed solution to the more-than-two-nodes is gnbd...
Never heard of it before. I don't like the fact that I need to patch the kernel in order to get it working.
GNBD is a standard part of GFS. No more patching than GFS or DRBD in any case... Red Hat and clones all come with support for GFS and GNBD built in. DRBD is another issue...
GNBD should be known to anyone using GFS, since it is part of the standard reading (manual, etc.) for GFS.
If that doesn't meet your needs, then DRBD probably isn't the proper choice. You didn't mention anything about number of nodes in your original post, IIRC.
I did not post the original post. I just responded to the original post saying that GlusterFS works for me.
I didn't mean to single you out in my reply... Assume the "you" is a generic you, not specifically aimed at any one individual...
Sorry if I misattributed anything to you... Very busy, and trying to reply to these emails as fast as I can when I get a minute or two of time, so I may make some mistakes as to who said what...
I'm not trying to convert or convince anyone... I'm just replying and expressing my experiences and thoughts... If glusterfs works for you, then great. If not, there are alternatives... I happen to champion some, others champion others...
Personally, I like SAN storage, but the price has always kept me from using it (except once, when I was setting it up on someone else's SAN).
-- Eric Rostetter The Department of Physics The University of Texas at Austin
Go Longhorns!
On 2010-02-17, Ed W <lists@wildgooses.com> wrote:
Anyone had success using some other clustered/HA filestore with dovecot who can share their experience? (OCFS/GFS over DRBD, etc?)
We've been using IBM's GPFS filesystem on (currently) seven x-series servers running RHEL4 and RHEL5, all SAN-attached and all serving the same filesystem, for probably 4 years now. This system serves POP/IMAP/Webmail to ~700,000 mail accounts. Webmail is sticky, while POP/IMAP is distributed over all the servers by HAproxy.
It's been working very well. There have been some minor issues with dovecot's locking that forced us to be less parallel in the deliveries than we wanted, but that's probably our own fault for being quite back-level on dovecot.
The biggest pain is doing file backups of the maildirs...
-jf
participants (8)
- alex handle
- Brandon Lamb
- Ed W
- Eric Rostetter
- Jan-Frode Myklebust
- John Lyons
- Steve
- Timo Sirainen