[Dovecot] High Performance and Availability
Ed W
lists at wildgooses.com
Mon Feb 22 15:03:12 EET 2010
Hi
> HA, Consolidated Backup, and a couple of other technologies are what really
> make this an enterprise solution, providing near 24x7x365 uptime and rapid
> redeployment of an infrastructure after catastrophic loss of the datacenter.
>
Can you tell me exactly what "Consolidated Backup" means with respect to
ESX please? From the brief description on the website I'm not quite
sure how it differs from, say, backing up the raw storage using some kind of
snapshot method?
> GlusterFS isn't designed as a primary storage system for servers or server
> clusters. A good description of it would be "cloud storage". It is designed to
> mask, or make irrelevant, the location of data storage devices and the distance
> to them. Server and datacenter architects need to know the latency
> characteristics and bandwidth of storage devices backing the servers. GlusterFS
> is the antithesis of this.
>
I can't disagree in terms of achieved performance, because I haven't
tested it, but in terms of theoretical design it is supposed to differ
from what you describe?
GlusterFS has a growing number of translators and is eventually likely
to have native NFS & CIFS support straight into the cluster. So *in
theory* (difference between theory and practice? In theory nothing, in
practice everything.) you are getting parallel NFS performance as you
add nodes, with the option of also adding redundancy and HA for free...
I get the impression the current implementation deviates somewhat from
theory, but long term that's the goal...
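Just to make that scaling intuition concrete, here's a back-of-envelope
sketch (Python, with entirely made-up per-brick numbers and replica count
- it ignores all the real-world overheads, which is exactly where theory
and practice part company):

    # Toy model of aggregate throughput for a distributed/replicated volume.
    # Per-brick bandwidth and replica count are illustrative assumptions only.
    def aggregate_throughput(bricks, per_brick_mb_s=100, replicas=2):
        # Reads can be spread across all bricks; writes must hit every replica.
        reads = bricks * per_brick_mb_s
        writes = bricks * per_brick_mb_s / replicas
        return reads, writes

    for n in (2, 4, 8):
        r, w = aggregate_throughput(n)
        print("%d bricks: ~%d MB/s reads, ~%d MB/s writes" % (n, r, w))

The point is simply that reads scale with node count while writes pay the
replication tax - that's the "redundancy and HA for free" trade-off.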
I was giving this some thought - essentially the whole problem comes
down to either some kind of filesharing system which offers up
individual files, or some kind of block level sharing, where you then
have to run your own filesystem over the block device.
Now, if latency were zero and the fileserver had infinite CPU/bandwidth,
then it would seem like the filesharing system wins, because it
centralises the locking and all the other problems and leaves the clients
relatively thin.
On the flip side, since latency/bandwidth deviate very much from perfect,
block level storage initially seems more attractive to me, because the
client can be given "intelligence" about the constraints and make
appropriate choices about fetching blocks, ordering, caching, flushing,
etc. However, if we assume active/active clusters are required, then we
need GFS or similar, and we have just added a whole heap of latency and
locking management. This, plus the latency of translating a disk based
protocol (SCSI/ATA) into network packets, suddenly makes the block level
option look a lot less attractive...
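To put some numbers on that (all of them invented - only the shape of the
comparison matters), a quick sketch counting round trips per read:

    # Back-of-envelope latency model: central fileserver vs clustered block device.
    # RTT/disk figures and the block count are made-up assumptions for illustration.
    RTT_MS = 0.2     # one network round trip
    DISK_MS = 5.0    # one disk access on the server/SAN

    def file_level_read():
        # open + read of a small file against a server doing the locking centrally
        return 2 * RTT_MS + DISK_MS

    def clustered_block_read(blocks=4):
        # cluster lock grant, each block fetched over iSCSI, then lock release
        return (2 + blocks) * RTT_MS + DISK_MS

    print("file level      : %.1f ms" % file_level_read())
    print("clustered block : %.1f ms" % clustered_block_read())

The extra round trips per operation are where the "whole heap of latency"
comes from, and that's before any lock contention or cache revocation
traffic on the cluster side.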
So the final conclusion seems to be that it's a hard problem, and the "best"
solution is going to come down to an engineering decision - ie where
theory and practice deviate and which one actually gets the job done
fastest in practice?
At least in theory it seems like Gluster should be able to rival the
speed of a high end iSCSI SAN - whether the practical engineering
problems are ever solved is a different matter... (Random quote -
http://www.voicesofit.com/blogs/blog1.php/2009/12/29/gluster-the-red-hat-of-storage
- Gluster claim 131,000 IOPS on some random benchmark using 8 servers
and 18TB of storage...)
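For scale, that works out to roughly 16,000 IOPS per server:

    # 131,000 IOPS across 8 servers, per the blog post above
    print(131000 / 8.0)   # => 16375.0 IOPS per server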
Interesting seeing how this stuff is maturing though! Sounds like the
SAN is still the king for people who just want something fast, reliable
and off the shelf today...
Ed W