Re: [Dovecot] Dovecot and SATA Backend - filesystems
The "generally don't see any bugs/issues" is the part I'm worried about ("generally" isn't comforting).
I was covering my **** on that one. Sod's law dictates that as soon as I say 'never' we'll discover an issue.
If you use a more traditional filesystem like ext2/ext3/ufs/etc., then yes. But you can use a cluster filesystem to get around this and run active-active.
I've spent a week looking at the likes of PVFS, GFS, Lustre and a whole host of different systems, including pNFS (NFS 4.1).
At the risk of diverting the thread away from the SATA backend, is there any recommendation for a fault-tolerant file service?
I'm really looking for 3 or 4 boxes to store data/metadata to support 10 Apache and Dovecot servers.
The thing I don't like is having a single metadata server be a single point of failure.
Regards
John
Quoting John Lyons <john@support.nsnoc.com>:
> I've spent a week looking at the likes of PVFS, GFS, Lustre and a whole host of different systems, including pNFS (NFS 4.1).
> At the risk of diverting the thread away from the SATA backend, is there any recommendation for a fault-tolerant file service?
Most people seem to recommend either GFS or OCFS; I use GFS myself. They are not fault-tolerant per se, just cluster-enabled filesystems... That is, they are not distributed filesystems, but shared filesystems.
> I'm really looking for 3 or 4 boxes to store data/metadata to support 10 Apache and Dovecot servers.
If you need to share the filesystem between 3-4 boxes, you either need:
- A SAN/NAS/etc.
- Something to act like a SAN/NAS (DRBD, etc.)
- Something that exports a filesystem to other hosts (GNBD, NFS, etc.)
- A distributed filesystem...
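As a rough illustration of the "export a filesystem to other hosts" option, an NFS setup for this kind of layout might look like the sketch below. The paths, hostname, and subnet are made-up examples, not anything from this thread:

```
# On the storage server -- /etc/exports
# (exporting a hypothetical /srv/mail spool to the frontend subnet)
/srv/mail  10.0.0.0/24(rw,sync,no_subtree_check)

# Reload the export table on the storage server:
#   exportfs -ra

# On each Apache/Dovecot frontend, mount it:
#   mount -t nfs storage1:/srv/mail /var/mail
```

Note this still leaves the storage server itself as a single point of failure unless it is paired with something like the DRBD setup mentioned below in the thread.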
I can't tell you which of the above would be best for you, since it depends on your needs and budget and skill level and risk tolerance and such.
> The thing I don't like is having a single metadata server be a single point of failure.
Yes, we certainly want to avoid that if possible... A replicated SAN would work, and I use a poor man's replicated SAN via DRBD myself, but it is only two nodes. (You could then export the storage from those two nodes via GNBD to additional nodes, to make it scale to almost any size, budget allowing.)
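For reference, a two-node DRBD resource of the kind described above might be sketched like this. The resource name, hostnames, addresses, and device names are all assumptions for illustration, not the actual configuration:

```
# /etc/drbd.d/mail.res -- hypothetical two-node resource
resource mail {
  protocol  C;             # synchronous replication between the pair
  device    /dev/drbd0;    # the replicated block device
  disk      /dev/sdb1;     # backing device on each node (example)
  meta-disk internal;

  on node1 {
    address 10.0.0.1:7789;
  }
  on node2 {
    address 10.0.0.2:7789;
  }
}
```

With protocol C, a write is not acknowledged until it reaches both nodes' disks, which is what makes the pair usable as a "poor man's replicated SAN".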
The only answer I can give is that this is a very complex issue that needs lots of careful consideration. ;)
Regards
John
--
Eric Rostetter
The Department of Physics
The University of Texas at Austin
This message is provided "AS IS" without warranty of any kind, either expressed or implied. Use this message at your own risk.