As you're thinking, you will have 2 servers with DRBD active/standby, so you could test both setups: exporting over NFS, or over iSCSI + GFS2.
Does GFS2 guarantee integrity without a fencing device?
Where I work, I guess we chose OCFS2 because of this little problem: we could not have a fencing device in Xen.
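(Under cman, fencing has to be declared in /etc/cluster/cluster.conf, and we had no agent that could reliably power-cycle our Xen guests. Just to illustrate what a fence device declaration looks like, with hostnames and addresses invented:

    <clusternode name="mail1" nodeid="1">
      <fence>
        <method name="1">
          <device name="ipmi-mail1"/>
        </method>
      </fence>
    </clusternode>
    ...
    <fencedevices>
      <fencedevice name="ipmi-mail1" agent="fence_ipmilan"
                   ipaddr="10.0.0.11" login="admin" passwd="secret"/>
    </fencedevices>
)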
In our tests, OCFS2 proved to be better than NFS. But we did not test as thoroughly as we would have liked, because it is already in production.
[]'s
f.rique
On Thu, Jan 13, 2011 at 8:17 PM, Jonathan Tripathy <jonnyt@abpni.co.uk> wrote:
On 13/01/11 21:34, Stan Hoeppner wrote:
Jonathan Tripathy put forth on 1/13/2011 7:11 AM:
Would DRBD + GFS2 work better than NFS? While NFS is simple, I don't mind
experimenting with DRBD and GFS2 if it means fewer problems?
Depends on your definition of "better". If you do two dovecot+drbd nodes, you have only two servers. If you do NFS, you have three, including the NFS server. Performance would be very similar between the two.
Now, when you move to 3 dovecot nodes or more, you're going to run into network scaling problems with the drbd traffic, because it increases logarithmically (or is it exponentially?) with node count. If using GFS2 atop drbd across all nodes, each time a node writes to GFS2, the disk block gets encapsulated by the drbd driver and transmitted to all other drbd nodes. With each new mail that's written by each server, or each time a flag is updated, it gets written 4 times: once locally, and 3 times via drbd.
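To put some rough, made up numbers on it: at 100 messages/sec averaging 50 KB each, a single copy is about 5 MB/s of writes. A 4-node drbd mirror turns that into roughly 15 MB/s of replication traffic on the wire (3 remote copies), before counting index and flag updates. With NFS the wire carries that 5 MB/s only once.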
With NFS, each of these writes occurs over the network only once. With drbd it's always a good idea to dedicate a small high performance GbE switch to the cluster nodes just for drbd traffic. This may not be necessary in a low volume environment, but it's absolutely necessary in high traffic setups. Beyond a certain number of nodes, even in a moderately busy mail network, drbd mirroring just doesn't work: the bandwidth requirements become too high, and nodes bog down from processing all of the drbd packets. Without actually using it myself, and just using some logical reasoning based on the technology, I'd say the ROI of drbd mirroring starts decreasing rapidly between 2 and 4 nodes, and beyond four nodes...
You'd be much better off with an NFS server, or GFS2 directly on a SAN LUN. CXFS would be far better, but it's not free. In fact it's rather expensive, and it requires one or more dedicated metadata servers, which is one of the reasons it's so #@! damn fast compared to most clustered filesystems.
Another option is a hybrid setup, with dual NFS servers each running GFS2 accessing the shared SAN LUN(s). This eliminates the single NFS server as a potential single point of failure, but it also increases costs significantly, as you have to spend about $15K USD minimum for a low end SAN array, plus another NFS server box, although the latter need not be expensive.
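Roughly, each NFS head would mount the shared GFS2 LUN and export it, something like this (device names and subnet invented; note that GFS2 reportedly needs the localflocks mount option when re-exported over NFS):

    # /etc/fstab on each NFS head
    /dev/mapper/san-mail  /srv/mail  gfs2  noatime,localflocks  0 0

    # /etc/exports -- same fsid on both heads so clients survive a failover
    /srv/mail  192.168.1.0/24(rw,sync,no_root_squash,fsid=25)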
Hi Stan,
The problem is that we do not have the budget at the minute to buy a SAN box, which is why I'm looking to set up a Linux environment to substitute for now.
Regarding the servers, I was thinking of having a 2-node DRBD cluster (active/standby), which would export a single iSCSI LUN. Then I would have a 2-node dovecot+postfix cluster (active-active), where each node would mount the same LUN (with GFS2 on top). That's 4 servers in total (well, 4 VMs running on 4 physically separate servers).
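On the DRBD pair I'd then run something like IET to export the LUN from whichever node is currently primary; as I understand it, the target definition would be roughly this (target name and path just examples):

    # /etc/ietd.conf on the active DRBD node
    Target iqn.2011-01.uk.co.abpni:mail.store
        Lun 0 Path=/dev/drbd0,Type=blockio

with heartbeat/pacemaker moving the target, the service IP and the DRBD primary role together on failover.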
I'm hearing different things about whether dovecot works well with GFS2. Of course, I could simply replace the iSCSI LUN above with an NFS server running on each DRBD node, if you feel NFS would work better than GFS2. Either way, I would probably use a crossover cable for the DRBD link. I could maybe even bond 2 cables together if I'm feeling adventurous!
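If I do bond them, I gather balance-rr is the usual mode for a dedicated point-to-point DRBD link, e.g. (illustrative only):

    # /etc/modprobe.d/bonding.conf
    alias bond0 bonding
    options bond0 mode=balance-rr miimon=100

with the DRBD resource's address lines pointed at the bond0 IPs.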
The way I see it, there are 2 issues to deal with:
- Which "Shared Disk" technology is best (GFS2 over LUN or a simple NFS server) and
- What is the best method of HA for the storage system
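For what it's worth, whichever shared-disk route I take, these are the dovecot settings I've seen recommended for NFS/cluster filesystems (I'd double-check them against the dovecot wiki before relying on them):

    # dovecot.conf, as I understand it
    mmap_disable = yes
    lock_method = fcntl
    # NFS only:
    mail_nfs_storage = yes
    mail_nfs_index = yes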
Any advice is appreciated.