[Dovecot] Best Cluster Storage

Jonathan Tripathy jonnyt at abpni.co.uk
Thu Jan 13 15:49:10 EET 2011


On 13/01/11 10:57, Stan Hoeppner wrote:
> Jonathan Tripathy put forth on 1/13/2011 2:24 AM:
>
>> Ok so this is interesting. As long as I use Postfix native delivery,
>> along with Dovecot director, NFS should work ok?
> One has nothing to do with the other.  Director doesn't touch smtp
> (afaik), only imap.  The reason for having Postfix use its native
> local(8) delivery agent for writing into the maildir, instead of
> Dovecot deliver, is to avoid Dovecot index locking/corruption issues
> with a back end NFS mail store.  So if you want to do sorting you'll
> have to use something other than sieve, such as maildrop or procmail.
> These don't touch Dovecot's index files, while deliver (the LDA) does
> write to them during message delivery into the maildir.
Yes, I thought it had something to do with that.
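So on the Postfix side I'm assuming something like this in main.cf
(untested sketch; the maildrop path is just a guess for my distro):

    # main.cf - keep delivery in Postfix local(8), away from Dovecot's
    # index files on the NFS mail store
    home_mailbox = Maildir/

    # if sorting is needed, hand delivery to maildrop instead of sieve
    # (mailbox_command takes precedence over home_mailbox when set)
    mailbox_command = /usr/bin/maildrop -d ${USER}
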
>>> For any meaningful use of virtualized clusters with Xen, ESX, etc, a
>>> prerequisite is shared storage.  If you don't have it, get it.  The
>>> hypervisor is what gives you fault tolerance.  This requires shared
>>> storage.  If you do not intend to install shared storage, and intend
>>> to use things like drbd between guests to get your storage
>>> redundancy, then you really need to simply throw out your hypervisor,
>>> in this case Xen, and do direct bare metal host clustering with drbd,
>>> gfs2, NFS, etc.
>> Why is this the case?  Apart from the fact that virtualisation
>> becomes "more useful" with shared storage (which I agree with), is
>> there anything wrong with doing DRBD between guests?  We don't have
>> shared storage set up yet for the location this email system is
>> going to.  We will get it in time though.
> I argue that datacenter virtualization is useless without shared
> storage.  This is easy to say for those of us who have done it both
> ways.  You haven't yet.  Your eyes will be opened after you do Xen or
> ESX atop a SAN.  If you're going to do drbd replication between two
> guests on two physical Xen hosts then you may as well not use Xen at
> all.  It's pointless.
Where did I say I haven't done that yet? I have indeed worked with VM
infrastructures using SAN storage, and yes, it's fantastic. It's just
that this particular location doesn't have a SAN box installed. We will
have to agree to disagree, as I personally do see the benefit of using
VMs with local storage.
> What you need to do right now is build the justification case for
> installing the SAN storage as part of the initial build out, and set
> up your virtual architecture around shared SAN storage.  Don't waste
> your time on this other nonsense of replication from one guest to
> another, with an isolated storage pool attached to each physical Xen
> server.  That's just nonsense.  Do it right or don't do it at all.
>
> Don't take my word for it.  Hit Novell's website and VMware's and
> pull up the recommended architecture and best practices docs.
You don't need to tell me :) I already know how great it is.
> One last thing.  I thought I read something quite some time ago about
> Xen working on adding storage layer abstraction which would allow any
> Xen server to access directly connected storage on another Xen server,
> creating a sort of quasi shared SAN storage over ethernet without the
> cost of the FC SAN.  Did anything ever come of that?
>
I haven't really been following how the 4.x branch is going, as it
wasn't stable enough for our needs; random lockups would always occur.
The 3.x branch is rock solid. There have been no crashes (yet!).

Would DRBD + GFS2 work better than NFS? While NFS is simple, I don't
mind experimenting with DRBD and GFS2 if it means fewer problems.
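
If I did experiment with it, I imagine the DRBD side would be a
dual-primary resource along these lines (just a sketch; hostnames,
devices and addresses are made up, and it assumes a working cluster
manager with fencing underneath):

    # /etc/drbd.d/mailstore.res - dual-primary so both nodes can
    # mount the GFS2 filesystem at the same time
    resource mailstore {
        protocol C;                   # synchronous replication
        net {
            allow-two-primaries;      # needed for GFS2 on both nodes
        }
        on node1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.10.1:7788;
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.10.2:7788;
            meta-disk internal;
        }
    }

    # then make the filesystem with one journal per node, e.g.:
    # mkfs.gfs2 -p lock_dlm -t mailcluster:mailstore -j 2 /dev/drbd0

From what I've read, though, dual-primary DRBD under GFS2 is only as
safe as the fencing setup, so perhaps NFS really is the simpler option.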

