[Dovecot] High Performance and Availability
wayne.thursby at pgllc.com
Tue Feb 16 17:42:22 EET 2010
Stan Hoeppner spake:
> Wayne Thursby put forth on 2/15/2010 11:42 PM:
>> I have been looking at the Dell EqualLogic stuff and it seems to provide
>> what we need. I can get most of the information I need from the rep, but
>> I wonder if anyone has any experience with high performance requirements
>> on these kinds of storage.
> EqualLogic has nice iSCSI SAN storage arrays with internal multiple snapshot
> ability and what not, but IMHO they're way over priced for what you get.
I was planning on using EqualLogic because the devices seem competent,
and we already have an account with Dell. Also, being on VMware's HCL is
important as we have a support contract with them.
>> I'd like to continue running my current hardware as the primary mail
>> server, but provide some kind of failover using the SAN. The primary
>> usage of the SAN will be to make our 2TB document store highly
>> available. I'm wondering what kind of options I might have in the way of
>> piggybacking some email failover on this kind of hardware without
>> sacrificing the performance I'm currently enjoying.
> Give me the specs on your current SAN setup and I'll give you some good options.
> 1. What/how many FC switches do you have, Brocade, Qlogic, etc?
> 2. What make/model is your current SAN array controller(s), what disk config?
Here's where I think you misunderstood me. I have no SAN at the moment.
I'm running a monolithic Postfix/Dovecot virtual machine on an ESXi host:
a Dell 2950 directly attached via SAS to a Dell MD-1000 disk array. We
have no Fiber Channel anything, so going that route would require
purchasing a full complement of cards and switches.
[ skipping dead end questions ]
>> Is it as expensive as running my primary mailserver mounted from the SAN
>> via Fiber Channel? Will that get me under 30ms latency?
> I'm not sure what you mean by "expensive" in this context.
Simply that purchasing FC cards and switches adds to the cost, whereas we
already have GbE for iSCSI.
> I ran an entire 500 user environment, all systems, all applications, on two
> relatively low end FC SAN boxen, and you're concerned about the performance of a
> single mail SMTP/IMAP server over a SAN? I don't think you need to worry about
> performance, as long as all is setup correctly. ;)
I hope that is correct; thank you for sharing your experiences. I
inherited a mail system that had capable hardware but was crippled by
bad sysadmin-ing, so I'm trying to make sure I'm going down the right path.
My main concern is that when Dovecot runs a body search on an inbox with
14,000 emails in it, the rest of the users shouldn't experience any
performance degradation. This works beautifully in my current setup;
however, the MD-1000 is not supported by VMware, doesn't do vMotion, etc.
It sounds like I have nothing to worry about if I go with Fiber Channel;
any idea about iSCSI?
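As an aside, one thing that should shrink the cost of those body searches
regardless of the storage backend is Dovecot's full-text-search indexing.
A minimal sketch for the 1.x-era squat backend (plugin names and settings
are my assumption; check what your build actually ships):

```conf
# dovecot.conf sketch -- assumes Dovecot 1.2+ built with the fts/fts_squat
# plugins; verify availability before enabling on a production store
mail_plugins = fts fts_squat

plugin {
  fts = squat
  # index 4-character partial words and full words up to 10 characters
  fts_squat = partial=4 full=10
}
```

With an index in place, a SEARCH BODY hits the index files instead of
re-reading every message in the maildir, which is exactly the I/O storm
I'm worried about.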
> enables vmotion and HA. (I sincerely hope you don't currently have the VM files
> and data store for your current IMAP store all in a single VMFS volume. That's
> horrible ESX implementation and will make this migration a bear due to all the
> data shuffling you'll have to do between partitions/filesystems, and the fact
> you'll probably have to shut down the server during the file moving).
My current disk layout is as follows:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.5G 4.2G 4.8G 47% /
/dev/sdb1 199G 134G 55G 71% /var/vmail
/dev/sdc1 20G 13G 6.8G 65% /var/sdc1
/dev/sdd1 1012M 20M 941M 3% /var/spool/postfix
/dev/sda1 is a regular VMware disk. The other three are independent
persistent disks so that I can snapshot/restore the VM without
destroying the queue or stored email.
/var/vmail = maildirs on RAID-10
/var/sdc1 = virusmails from Amavis on RAID-5
/var/spool/postfix = Postfix's spool on RAID-10
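For what it's worth, here's the sort of quick-and-dirty probe I've been
using to sanity-check per-volume read latency (a hypothetical helper I
wrote, nothing from this thread; fio or watching iostat -x during a real
body search gives far more honest numbers):

```shell
# Hypothetical smoke test: write 1 MiB to a mount point, fsync it, then
# time reading it back. Buffered reads only, so treat the result as a
# floor-check, not a benchmark.
probe() {
  mnt="$1"
  f="$mnt/.latency_probe.$$"
  # write 256 x 4k blocks and fsync so the data actually hits the array
  dd if=/dev/zero of="$f" bs=4k count=256 conv=fsync 2>/dev/null
  start=$(date +%s%N)
  dd if="$f" of=/dev/null bs=4k 2>/dev/null
  end=$(date +%s%N)
  rm -f "$f"
  echo "$mnt: 1MiB read in $(( (end - start) / 1000000 )) ms"
}

probe /tmp
```

Pointing it at /var/vmail versus /var/spool/postfix at least confirms the
two RAID-10 volumes behave the way the layout above suggests.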
You certainly clarified a number of things for me by detailing your past
setup. I suppose I should clarify exactly what the current plan is.
We are migrating a number of other services to some kind of an HA setup
using VMWare and vMotion, that much has been decided. My primary
decision centers around choosing either iSCSI or Fiber Channel. We have
*no* Fiber Channel infrastructure at the moment, so this would add
significantly to the price of our setup (at least 2 cards + switch).
The other applications we are virtualizing are nowhere near as disk-I/O
intensive as our email server, so I feel confident that an iSCSI SAN
would meet all performance requirements for everything *except* the
mail server.
I'm really looking for a way to get some kind of redundancy/failover for
Postfix/Dovecot using just iSCSI without killing the performance I'm
getting from direct-attached storage, but it sounds like you're saying I
need FC.
> This is probably way too much less than optimally written/organized information
> for the list, and probably a shade OT. I'd be more than glad to continue this
> off list with anyone interested in FC SAN stuff. I've got some overly
> aggressive spam filters, so if I block a direct email, hit postmaster@ my domain
> and I'll see it.
Well, I've got the rest of my virtual infrastructure/SAN already figured
out, so my questions are centering around providing redundancy for
Dovecot/maildirs. I think you've answered all of my hardware questions
(ya' freak). It really seems like Fiber Channel is the way to go if I
want to have HA maildirs.
I just don't know if I can justify the extra cost of an FC infrastructure
when only a single service would benefit, especially if a hybrid solution
is possible or if iSCSI alone is sufficient; hence my questions for the
list.
Has anyone attempted to run sizable maildirs over a GbE iSCSI SAN?
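To make any answers comparable, a fio job along these lines would put
numbers behind "sizable" (the parameters are just my guess at a
maildir-ish small-random-read pattern, not measured from our workload):

```ini
; hypothetical fio job approximating small-message maildir reads
; run as: fio maildir-read.fio, with directory= on the iSCSI LUN
[global]
directory=/var/vmail
direct=1
runtime=30
time_based
group_reporting

[randread-4k]
rw=randread
bs=4k
size=1g
iodepth=16
numjobs=4
```

If anyone has run something like this over GbE iSCSI, the completion
latency percentiles would answer my 30ms question directly.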
Physicians Group, LLC