Stan Hoeppner put forth on 1/14/2011 1:00 PM:
You have a consolidated Xen cluster of two 24-core AMD Magny-Cours servers, each with 128GB RAM and an LSI MegaRAID SAS controller with dual SFF-8087 ports backed by 32 SAS drives in external JBOD enclosures, set up as a single hardware RAID 10. You spread your entire load of 97 virtual machine guests across this two-node farm. Within this set of 97 guests, 12 are clustered network applications, and two of those 12 are your Dovecot/Postfix guests.
I forgot the DRBD interfaces in my example. This setup would also include two PCIe x8 dual-port 10 GbE RJ45 copper adapters (one per node), connected back to back via crossover cables and link aggregated, yielding roughly 2 GB/s of full-duplex bandwidth.
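Just to sketch what that might look like (every name, IP, and device path below is made up for illustration, not from a real box), on a Debian-style system the bonded crossover link and the DRBD resource riding on it could be something along these lines, using balance-rr so a single replication stream can use both ports:

    # /etc/network/interfaces fragment on node xen1 (hypothetical names/IPs)
    auto bond0
    iface bond0 inet static
        address     10.99.0.1
        netmask     255.255.255.0
        bond-slaves eth2 eth3      # the two 10 GbE crossover ports
        bond-mode   balance-rr     # round-robin so one stream can exceed 10 Gb/s
        bond-miimon 100

    # /etc/drbd.d/r0.res (DRBD 8.3-style syntax), replicating over the bond
    resource r0 {
        protocol C;                       # synchronous replication
        net {
            allow-two-primaries;          # needed if both nodes run it active/active under gfs2
        }
        on xen1 {
            device    /dev/drbd0;
            disk      /dev/vg0/mailstore; # hypothetical backing LV on the RAID 10
            address   10.99.0.1:7788;
            meta-disk internal;
        }
        on xen2 {
            device    /dev/drbd0;
            disk      /dev/vg0/mailstore;
            address   10.99.0.2:7788;
            meta-disk internal;
        }
    }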
Also, since the filesystem runs at the guest level, you'd still need gfs2 running in each clustered guest OS to handle file-level locking. The underlying disk device would be a Xen virtual disk, which sits atop the DRBD driver.
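Roughly, and again with made-up names, the stack could be wired up like this: the dom0 hands the DRBD device to the guest as a virtual disk, and the guest (already a member of the cluster, with cman/dlm running) puts gfs2 on it:

    # Xen domU config fragment on each dom0: pass the DRBD device into the guest
    disk = [ 'phy:/dev/drbd0,xvdb,w' ]

    # inside one of the clustered guests: create the filesystem
    # (2 journals for the 2 mail guests, hypothetical cluster/fs names), then mount it
    mkfs.gfs2 -p lock_dlm -t mailcluster:mailfs -j 2 /dev/xvdb
    mount -t gfs2 /dev/xvdb /srv/mail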
Although, to be quite honest, this isn't the best example: with that much server and disk hardware involved, the ROI of FC SAN storage would have already kicked in, and you'd be running gfs2 directly on SAN LUNs instead of on DRBD.
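In that case the DRBD layer simply drops out of the picture: the dom0 hands each clustered guest a multipathed FC LUN instead, and the gfs2 side is identical to the sketch above (the LUN name here is, again, purely hypothetical):

    # dom0 passes the multipathed FC LUN to each clustered guest
    disk = [ 'phy:/dev/mapper/mailstore_lun,xvdb,w' ]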
The technical point is properly illustrated nonetheless.
-- Stan