On 4/10/12, Stan Hoeppner <stan@hardwarefreak.com> wrote:
So I have to make do with OTS commodity parts and free software for the most part.
OTS meaning you build your own systems from components? Too few in the business realm do so today. :(
For the in-house stuff and budget customers, yes; in fact both email servers are running on seconded hardware that started life as something else. I spec HP servers for the app servers of customers willing to pay for their own colocated or on-site hardware, but there are still customers who balk at the cost and so go OTS or virtualized.
SuperMicro H8SGL G34 mobo w/dual Intel GbE, 2GHz 8-core Opteron
32GB Kingston REG ECC DDR3, LSI 9280-4i4e, Intel 24 port SAS expander
20 x 1TB WD RE4 Enterprise 7.2K SATA2 drives
NORCO RPC-4220 4U 20 Hot-Swap Bays, SuperMicro 865W PSU

All other required parts are in the Wish List. I've not written assembly instructions; I figure anyone who would build this knows what s/he is doing.
Price today: $5,376.62
This price looks like something I might be able to push through, although I'll probably have to go SATA instead of SAS due to the cost of keeping spares.
Configuring all 20 drives as a RAID10 LUN in the MegaRAID HBA would give you a 10TB net Linux device and 10 stripe spindles of IOPS and bandwidth. Using RAID6 would yield 18TB net and 18 spindles of read throughput; however, parallel write throughput will be at least 3-6x slower than RAID10, which is why nobody uses RAID6 for transactional workloads.
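A rough back-of-the-envelope sketch of that arithmetic in Python, just to make the capacity and spindle counts explicit (idealized, ignores formatting overhead; the 3-6x write penalty is a rule of thumb, not computed here):

# Capacity/spindle math for 20 x 1TB drives (idealized)
drives, size_tb = 20, 1.0

raid10_net_tb = drives // 2 * size_tb     # mirrored pairs striped -> 10 TB
raid10_spindles = drives // 2             # 10 stripe spindles of IOPS/bandwidth

raid6_net_tb = (drives - 2) * size_tb     # two drives' worth of parity -> 18 TB
raid6_read_spindles = drives - 2          # ~18 spindles of read throughput

print(f"RAID10: {raid10_net_tb:.0f} TB net, {raid10_spindles} stripe spindles")
print(f"RAID6:  {raid6_net_tb:.0f} TB net, {raid6_read_spindles} read spindles")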
Not likely to go with RAID 5 or 6 due to concerns about the uncorrectable read error risk on rebuild with large arrays. Is the MegaRAID being used as the actual RAID controller or just as an HBA?
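(To put a number on that rebuild worry, a quick Python sketch under the usual assumptions: every surviving drive has to be read end to end, and the drive's URE spec is the oft-quoted 1 in 10^14 bits. Enterprise drives like the RE4 are typically rated an order of magnitude better, so treat this as illustrative rather than the RE4's actual odds.)

import math

# Back-of-envelope URE exposure while rebuilding a 20-drive single-parity array.
# Assumptions (illustrative only): 19 surviving 1TB drives read in full,
# URE spec of 1 error per 1e14 bits read (consumer-class figure; enterprise
# drives such as the RE4 are usually rated 1 per 1e15).
surviving_drives = 19
bits_per_drive = 1e12 * 8            # 1 TB expressed in bits
ure_per_bit = 1e-14

bits_read = surviving_drives * bits_per_drive
expected_ures = bits_read * ure_per_bit
p_clean = math.exp(-expected_ures)   # Poisson approximation of a URE-free rebuild

print(f"Expected UREs during rebuild: {expected_ures:.2f}")
print(f"Chance of a clean rebuild:    {p_clean:.0%}")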
I have been avoiding hardware RAID because of a really bad experience with RAID 5 on an obsolete controller that eventually died without a replacement and couldn't be recovered. Since then it's always been RAID 1, and after I discovered mdraid, using controllers purely as HBAs with mdraid on top, for the flexibility of being able to just pull the drives into a new system if necessary without having to worry about the controller.
Both of the drives I've mentioned here are enterprise class drives, feature TLER, and are on the LSI MegaRAID SAS hardware compatibility list. The price of the 600GB Raptor has come down considerably since I designed this system, or I'd have used them instead.
Anyway, lots of options out there. But $6,500 is pretty damn cheap for a quality box with 32GB RAM, an enterprise RAID card, and 20 x 600GB 10K RPM drives.
The MegaRAID 9280-4i4e has an external SFF8088 port. For an additional $6,410 you could add an external Norco SAS expander JBOD chassis and 24 more 600GB 10K RPM Raptors, for 13.2TB of total net RAID10 space and 22 10K spindles of IOPS performance from 44 total drives. That's $13K for a 5K random IOPS, 13TB, 44-drive NFS RAID COTS server solution: $1,000/TB, $2.60/IOPS. Significantly cheaper than an HP, Dell, or IBM solution of similar specs, each of which will set you back at least 20 large.
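The $/TB and $/IOPS figures fall straight out of the totals; a trivial Python sketch (the ~5K IOPS number is the estimate above, not a measurement, and the thread's $1,000/TB and $2.60/IOPS are the rounded versions):

# Cost-per-TB / cost-per-IOPS for the expanded 44-drive build (figures from above)
base_box = 6500          # ~$6,500 for the 20 x 600GB Raptor box
expansion = 6410         # JBOD chassis + 24 more Raptors
drives, drive_tb = 44, 0.6
est_random_iops = 5000   # rough estimate, not a benchmark

total = base_box + expansion              # ~$12,910, call it $13K
net_tb = drives * drive_tb / 2            # RAID10 -> 13.2 TB
print(f"${total / net_tb:,.0f}/TB, ${total / est_random_iops:.2f}/IOPS")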
Would this setup also work well for serving up VM images? I've been trying to find a solution for the virtualized app servers' images as well, but the distributed FSes currently all seem to be bad with random reads/writes. XFS seems to be good with large files like DB and VM images with random internal reads/writes, so given my time constraints it would be nice to have a single configuration that works generally well for all the needs I have to oversee.
Note the chassis I've spec'd have single PSUs, not the dual or triple redundant supplies you'll see on branded hardware. With a relatively stable climate controlled environment and a good UPS with filtering, quality single supplies are fine. In fact, in the 4U form factor single supplies are usually more reliable due to superior IC packaging and airflow through the heatsinks, not to mention much quieter.
Same reason I do my best to avoid 1U servers; the space/heat issues worry me. Yes, I'm guilty of worrying too much, but that has saved me on several occasions.