Hello all,
Has anyone worked with DRBD (http://www.drbd.org) for HA of mail storage? If so, does it have stability issues? Comments and experiences are appreciated :)
Thanks, Rodolfo.
Rodolfo Gonzalez Gonzalez wrote:
Hello all,
Has anyone worked with DRBD (http://www.drbd.org) for HA of mail storage? If so, does it have stability issues? Comments and experiences are appreciated :)
Look for thread "Dovecot + DRBD/GFS mailstore" starting 6/5/2009 on this list.
~Seth
Quoting Rodolfo Gonzalez Gonzalez rgonzalez@gnt.cc:
Has anyone worked with DRBD (http://www.drbd.org) for HA of mail storage?
Yes.
If so, does it have stability issues?
None that I've run into.
Comments and experiences are appreciated :)
Works great for me (two machines, sharing via DRBD, using LVM+GFS, in active-active mode).
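Note that active-active needs dual-primary enabled on the DRBD side. A minimal fragment of what that looks like in drbd.conf (resource name made up, DRBD 8.x syntax; the split-brain policies shown are just one sensible choice):

    resource mail {
      net {
        allow-two-primaries;                   # required for active-active
        after-sb-0pri discard-zero-changes;    # split-brain recovery policies
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
      }
      startup {
        become-primary-on both;                # promote both nodes at startup
      }
    }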
Thanks, Rodolfo.
--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
On Mon, 2009-11-23 at 15:25 -0600, Rodolfo Gonzalez Gonzalez wrote:
[...]
Has anyone worked with DRBD (http://www.drbd.org) for HA of mail storage? If so, does it have stability issues? Comments and experiences are appreciated :)
(Now) 25K mailboxes (with ~92GB of data) on DRBD-6 (old by now - the thing was built in early 2006) with ext{3,4} on it. As long as heartbeat/pacemaker/openais/whatever assures that it is mounted on at most one host, no problems whatsoever with the filesystem as such. We don't run Dovecot though (see the year above) but it shouldn't make any difference. Nowadays, DRBD-7+GFS or OCFS2 (and Dovecot of course ;-) are very probably worth exploring for new clusters - see other mails.
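For reference, the "mounted on at most one host" rule expressed as a Pacemaker configuration looks roughly like this (resource, device, and mount point names are made up):

    primitive p_drbd_mail ocf:linbit:drbd \
        params drbd_resource="mail" \
        op monitor interval="29s" role="Master" \
        op monitor interval="31s" role="Slave"
    ms ms_drbd_mail p_drbd_mail \
        meta master-max="1" master-node-max="1" \
        clone-max="2" clone-node-max="1" notify="true"
    primitive p_fs_mail ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/var/mail" fstype="ext3"
    # mount only where DRBD is Master, and only after promotion
    colocation c_fs_on_master inf: p_fs_mail ms_drbd_mail:Master
    order o_drbd_before_fs inf: ms_drbd_mail:promote p_fs_mail:start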
Bernd
--
Firmix Software GmbH      http://www.firmix.at/
mobil: +43 664 4416156    fax: +43 1 7890849-55
Embedded Linux Development and Services
Quoting Bernd Petrovitsch bernd@firmix.at:
(Now) 25K mailboxes (with ~92GB of data) on DRBD-6 (old by now - the thing was built in early 2006) with ext{3,4} on it. As long as heartbeat/pacemaker/openais/whatever assures that it is mounted on at most one host, no problems whatsoever with the filesystem as such.
I'm doing a file server with a similar setup, but with DRBD 8.2... primary/secondary, Heartbeat, ext3 filesystems...
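With Heartbeat v1 that whole failover stack fits in one haresources line, roughly like this (node name, DRBD resource, paths, and IP are made up):

    # /etc/ha.d/haresources - resources start left to right on the preferred node
    node1 drbddisk::r0 Filesystem::/dev/drbd0::/srv/data::ext3 IPaddr::192.168.1.100/24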
Nowadays, DRBD-7+GFS or OCFS2 (and Dovecot of course ;-) are very probably worth exploring for new clusters - see other mails.
I'm running Dovecot off DRBD 8.3, RHCS, GFS, active-active... Why would you recommend DRBD 7 instead of 8?
Bernd
--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
On Tue, 2009-11-24 at 13:44 -0600, Eric Jon Rostetter wrote:
[...]
I'm running Dovecot off DRBD 8.3, RHCS, GFS, active-active... Why would you recommend DRBD 7 instead of 8?
Because I thought 7 was the newest stable version and didn't realize that it is (apparently) 8. Thanks for pointing that out!
Bernd
--
Firmix Software GmbH      http://www.firmix.at/
mobil: +43 664 4416156    fax: +43 1 7890849-55
Embedded Linux Development and Services
On Mon, Nov 23, 2009 at 10:25 PM, Rodolfo Gonzalez Gonzalez rgonzalez@gnt.cc wrote:
Hello all,
Has anyone worked with DRBD (http://www.drbd.org) for HA of mail storage? If so, does it have stability issues? Comments and experiences are appreciated :)
Thanks, Rodolfo.
We have deployed an active/passive DRBD mail server for a customer with ~5000 mailboxes. It runs nicely without any problems.
CentOS 5.3
Heartbeat 1 (!)
DRBD 8.3.x
LVM
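A sketch of the DRBD resource definition behind a setup like that (hostnames, devices, and IPs are made up; DRBD 8.3 syntax):

    resource mail {
      protocol C;                  # synchronous replication
      device    /dev/drbd0;
      disk      /dev/sda3;         # backing device on each node
      meta-disk internal;
      on node1 {
        address 10.0.0.1:7788;
      }
      on node2 {
        address 10.0.0.2:7788;
      }
    }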
If you use LVM/ext3 on top of DRBD and want consistent snapshots, use this setup:
http://www.drbd.org/users-guide-emb/s-lvm-drbd-as-pv.html
Storage layer:
disks/hwraid/softwareraid -> DRBD -> LVM -> fs
This way LVM can issue a freeze call to the filesystem before taking the block-level snapshot.
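Following that guide, the setup boils down to something like this (volume group, sizes, and device names are made up):

    # /etc/lvm/lvm.conf: scan the DRBD device, not its backing disk
    filter = [ "a|/dev/drbd.*|", "r|.*|" ]

    # on the primary: the DRBD device becomes the LVM physical volume
    pvcreate /dev/drbd0
    vgcreate vg_mail /dev/drbd0
    lvcreate -L 100G -n lv_mail vg_mail
    mkfs.ext3 /dev/vg_mail/lv_mail

    # consistent snapshot: LVM freezes the fs before the block-level copy
    lvcreate -s -L 10G -n lv_mail_snap /dev/vg_mail/lv_mail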
We are currently migrating our 10,000-mailbox cluster from NetApp storage to multiple storage servers with DRBD.
Our other project, a web cluster, runs about 1000 sites (Typo3/Joomla) on a CentOS NFS server backed by DRBD. The failover time is about 4 seconds and no pending write is lost.
participants (5)
- alex handle
- Bernd Petrovitsch
- Eric Jon Rostetter
- Rodolfo Gonzalez Gonzalez
- Seth Mattinen