[Dovecot] High Performance and Availability

Stan Hoeppner stan at hardwarefreak.com
Thu Feb 18 08:10:00 EET 2010


Ed W put forth on 2/17/2010 12:25 PM:
> I think Stan pretty much covered how to do this stuff *properly*,

At least for a VMware ESX + SAN environment, yes.

> however, for those following along in the bedroom, there are a couple of
> interesting projects that might get you some of the ESX features (surely
> at the expense of far more support and likely reliability, but needs
> always vary...)

Surely.  I hate the licensing cost of VMware ESX and its options.  Also, the
first time I was told about VMware ESX I was extremely skeptical.  Once I
started using it, and built out a SAN architecture under it, I was really,
really impressed by what it can do and by its management capabilities.  It will
be a long time before a FOSS equivalent even comes close to its performance,
reliability, capability, and ease of management.  It really is a great solution.
 HA, Consolidated Backup, and a couple of other technologies are what really
make this an enterprise solution, providing near-24x7x365 uptime and rapid
redeployment of an infrastructure after catastrophic loss of the datacenter.

> Note, I have no experience with any of these projects, they simply
> caught my eye for further research...
> 
> - Latest KVM+QEMU includes some of the desirable ESX features including
> hot migration
> - Apparently Redhat have a nice management utility for this

I'll have to look into this.
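
For anyone else curious, a minimal live-migration test with KVM under libvirt
is just one command (host and guest names below are made up; this assumes both
hosts can see the same shared storage backing the guest's disk):

    # From the source host, push the running guest to the destination:
    virsh migrate --live guest1 qemu+ssh://kvmhost2/system

    # Verify the guest is now running on the destination host:
    virsh --connect qemu+ssh://kvmhost2/system list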

> - Or try ProxMox: http://pve.proxmox.com/wiki/Main_Page
> 
> (cheap) High availability storage seems to come down to:
> - iSCSI

1GbE iSCSI is great for targeted applications on moderate-load SANs.  For any
kind of heavy lifting, you need either 10GbE iSCSI or Fibre Channel.  Both of
those are a bit more expensive, with 10GbE iSCSI usually costing quite a bit
more than FC because of the switch and HBA costs.  Either is suitable for an HA
SAN with live backup.  1GbE iSCSI is not--simply too little bandwidth and too
much latency.
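
For reference, attaching a host to an iSCSI LUN with the standard open-iscsi
initiator takes only a couple of commands (the portal IP and IQN below are
placeholders, not a real array):

    # Discover the targets exported by the array's portal
    iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260

    # Log in to the discovered target; the LUN then appears as a
    # normal block device (check dmesg or /dev/disk/by-path)
    iscsiadm -m node -T iqn.2010-02.com.example:mailstore \
        -p 10.0.0.50:3260 --login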

> - Add redundancy to the storage using DRBD (I believe a successful
> strategy with Dovecot is pairs of servers, replicated to each other -
> run each at 50% capacity and if one dies the other picks up the slack)

DRBD is alright for a couple of replicated hosts with moderate volume.  But if
you run two load-balanced hot hosts with DRBD and your load increases to the
point that you need a 3rd hot host, expanding with DRBD gets a bit messy.
With an iSCSI or FC SAN you merely plug in a 3rd host, install and configure the
cluster FS software, expose the shared LUN to the host, and you're basically up
and running in little time.  All 3 hosts share the exact same data on disk, so
you have no replication issues, no matter how many systems you stick into the
cluster.  The only limitation is the throughput of your SAN array.
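
To make that concrete, here's roughly what adding the 3rd host looks like with
GFS2 on a shared LUN (device and mount point names are placeholders; this
assumes the new host has already joined the cluster and logged into the iSCSI
or FC target):

    # On an existing node: add one journal for the new node
    # (GFS2 needs one journal per host that mounts the filesystem)
    gfs2_jadd -j 1 /mnt/mail

    # On the new 3rd host: mount the same shared LUN
    # (/dev/sdb here is purely a placeholder device name)
    mount -t gfs2 /dev/sdb /mnt/mail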

> - Interesting developing ideas are: PVFS, GlusterFS (they have an
> interesting "appliance" which might get reliability to production
> levels?), CEPH (reviews suggest it's very early days)

GlusterFS isn't designed as a primary storage system for servers or server
clusters.  A good description of it would be "cloud storage".  It is designed to
mask, or make irrelevant, the location of data storage devices and the distance
to them.  Server and datacenter architects need to know the latency
characteristics and bandwidth of storage devices backing the servers.  GlusterFS
is the antithesis of this.

> None of these solutions gets you an "enterprise" or proper high end
> solution as described by Stan, but may give some others some things to
> investigate

"Enterprise" capability, performance, and reliability don't necessarily have to
come with an "Enterprise" price tag. ;)

Eric Rostetter is already using GFS2 over DRBD with two hot nodes.  IIRC he
didn't elaborate much on the performance or his hardware config.  He seemed to
think the performance was more than satisfactory.
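
For anyone wanting to experiment with that kind of setup, the core of it is
DRBD in dual-primary mode with GFS2 on top.  A minimal sketch of the DRBD
resource, assuming DRBD 8.3-era syntax (hostnames, devices, and addresses are
all placeholders):

    resource mailstore {
      protocol C;                  # synchronous replication
      net {
        allow-two-primaries;       # both nodes active at once
      }
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

Then mkfs.gfs2 with -p lock_dlm and -j 2 on /dev/drbd0 once, and mount it on
both nodes; the DLM (and hence a working cluster stack) is what keeps the two
hot nodes from stepping on each other.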

Eric, can you tell us more about your setup, in detail?  I promise I'll sit
quiet and just listen.  Everyone else may appreciate your information.

-- 
Stan

