On 4/9/2012 2:15 PM, Emmanuel Noobadmin wrote:
> Unfortunately, with the usual kind of customers we have here, spending that kind of budget isn't justifiable. The only reason we're providing email services is that customers wanted freebies and felt there was no reason we couldn't give them email on our servers; they're all "servers" after all.
> So I have to make do with OTS commodity parts and free software for the most part.
OTS meaning you build your own systems from components? Too few in the business realm do so today. :(
It sounds like budget overrides redundancy then. You can do an NFS cluster with SAN and GFS2, or two servers with their own storage and DRBD mirroring. Here's how to do the latter: http://www.howtoforge.com/high_availability_nfs_drbd_heartbeat
The total cost works out about the same either way, since an iSCSI SAN array holding X drives costs roughly as much as two JBOD disk arrays holding X*2 drives between them. Redundancy in this case is expensive no matter the method. Given how infrequent host failures are, and the fact that your storage is redundant, it may make more sense to simply keep spare components on hand and swap whatever fails: PSU, mobo, etc.
Interestingly, I designed a COTS server back in January to handle at least 5k concurrent IMAP users, using best of breed components. If you or someone there has the necessary hardware skills, you could assemble this system and simply use it for NFS instead of Dovecot. The parts list: secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=17069985
In case the link doesn't work, the core components are:
SuperMicro H8SGL G34 mobo w/dual Intel GbE, 2GHz 8-core Opteron
32GB Kingston REG ECC DDR3, LSI 9280-4i4e, Intel 24 port SAS expander
20 x 1TB WD RE4 Enterprise 7.2K SATA2 drives
NORCO RPC-4220 4U 20 Hot-Swap Bays, SuperMicro 865W PSU

All other required parts are in the Wish List. I've not written assembly instructions. I figure anyone who would build this knows what s/he is doing.
Price today: $5,376.62
Configuring all 20 drives as a RAID10 LUN in the MegaRAID HBA would give you a 10TB net Linux device and 10 stripe spindles of IOPS and bandwidth. Using RAID6 would yield 18TB net and 18 spindles of read throughput; however, parallel write throughput will be at least 3-6x slower than RAID10, which is why nobody uses RAID6 for transactional workloads.
If you need more transactional throughput, you could use 20 WD6000HLHX 600GB 10K RPM WD Raptor drives instead. You'd get 40% more throughput and 6TB net space with RAID10. They'll cost you $1200 more, or $6,576.62 total. Well worth the $1200 for 40% more throughput, if 6TB is enough.
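If it helps, here's the capacity and spindle arithmetic behind those figures as a quick Python sketch. It assumes nothing beyond the drive counts, sizes, and prices already listed; the 3-6x RAID6 write penalty is a rule of thumb, not something the math below derives.

    # Capacity/spindle math for the 20-bay build (figures match the text above).

    def raid10(drives, size_tb):
        """RAID10: half the drives mirror the other half; stripe width = drives/2."""
        net_tb = drives // 2 * size_tb
        stripe_spindles = drives // 2
        return net_tb, stripe_spindles

    def raid6(drives, size_tb):
        """RAID6: two drives' worth of capacity goes to parity."""
        net_tb = (drives - 2) * size_tb
        read_spindles = drives - 2
        return net_tb, read_spindles

    print(raid10(20, 1.0))   # 10TB net, 10 stripe spindles (RE4 build)
    print(raid6(20, 1.0))    # 18TB net, ~18 spindles of read throughput
    print(raid10(20, 0.6))   # 6TB net with the 600GB Raptors, same 10 stripe spindles
    print(f"Raptor option total: ${5376.62 + 1200:.2f}")   # $6576.62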
Both of the drives I've mentioned here are enterprise-class, feature TLER, and are on the LSI MegaRAID SAS hardware compatibility list. The price of the 600GB Raptor has come down considerably since I designed this system; had it been this low in January, I'd have spec'd the Raptors instead.
Anyway, lots of options out there. But $6,500 is pretty damn cheap for a quality box with 32GB RAM, an enterprise RAID card, and 20 x 10K RPM 600GB drives.
The MegaRAID 9280-4i4e has an external SFF-8088 port. For an additional $6,410 you could add an external Norco SAS expander JBOD chassis and 24 more 600GB 10K RPM Raptors, for 13.2TB of total net RAID10 space and 22 10K spindles of IOPS performance from 44 total drives. That's $13K for a 5K random IOPS, 13TB, 44-drive NFS RAID COTS server solution: $1000/TB, $2.60/IOPS. Significantly cheaper than an HP, Dell, or IBM solution of similar specs, each of which will set you back at least 20 large.
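For reference, the per-unit figures come straight from that arithmetic; a short Python sketch of it is below. The 5K IOPS number is the ballpark estimate quoted above, not a measurement.

    # Totals for the expanded build: 20 internal + 24 external 600GB Raptors,
    # all in RAID10 (all figures taken from the text above).

    base_system = 6576.62     # 20-drive Raptor build priced above
    expansion   = 6410.00     # external JBOD chassis + 24 more Raptors

    drives = 20 + 24                    # 44 spindles total
    net_tb = drives // 2 * 0.6          # RAID10: 22 mirrored pairs of 600GB
    total  = base_system + expansion
    iops   = 5000                       # ballpark random IOPS from the text

    print(f"net capacity : {net_tb:.1f} TB")         # 13.2 TB
    print(f"total cost   : ${total:,.2f}")           # $12,986.62, call it $13K
    print(f"cost per TB  : ${total / net_tb:,.0f}")  # ~$984/TB, i.e. the ~$1000/TB above
    print(f"cost per IOPS: ${total / iops:.2f}")     # $2.60/IOPS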
Note that the chassis I've spec'd have single PSUs, not the dual or triple redundant supplies you'll see on branded hardware. With a relatively stable, climate-controlled environment and a good UPS with filtering, quality single supplies are fine. In fact, in the 4U form factor single supplies are usually more reliable, due to superior IC packaging and airflow through the heatsinks, not to mention much quieter.
-- Stan