This discussion has been in the context of _storing_ user email. The assumption is that an OP is smart/talented enough to have his spam filters/appliances kill 99% of spam before it reaches intermediate storage or mailboxes. Thus, in the context of this discussion, the average size of a spam message is irrelevant, because we're only talking about what goes into the mail store.
The fact is, we all live in different realities, so we're all comparing apples and oranges. If you're managing a SOHO, a small company, a large company, a university, or, in our case, an ISP, the requirements are all different. We have about a million mailboxes, roughly 20K of them active at any given time, and people pay for the service.
Take, for example, Stan's spam quote above. In the real world of an ISP, killing 99% of all spam before it hits the storage is unthinkable. We only block spam that is guaranteed to be unwanted, mostly based on technical facts that can never occur in legitimate email. But email that our scanning system flags as probable spam is just that: probable spam. We cannot simply throw that away, because in the real world there are always, and I mean always, false positives, and it is unthinkable to throw false positives away. So we have to put these emails in a spam folder in case the user wants to look at them. We block about 40% of all spam on technical grounds, and our total spam percentage is 90%, so still about 80% of all customer email reaching the storage is spam.
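For what it's worth, the arithmetic behind that last figure is easy to check. A back-of-the-envelope sketch (the round 100 incoming messages is just a normalization I picked, not a real traffic number):

    # Rough check of the percentages above: 90% of incoming mail is spam,
    # and ~40% of that spam is rejected outright on technical grounds.
    incoming = 100                # hypothetical round number of messages
    spam = incoming * 0.90        # 90 spam messages
    blocked = spam * 0.40         # 36 rejected before storage
    stored = incoming - blocked   # 64 messages reach the mail store
    spam_stored = spam - blocked  # 54 of those are spam
    print(f"{spam_stored / stored:.0%}")  # -> 84%

Which is where the roughly 80% figure comes from.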
But in other environments throwing away all probable spam may be perfectly fine. For my SOHO I'd have no problem throwing probable spam away. I never look in my spam folder anyway, so I can't be missing much.
The same goes for SSD. We use SSD drives extensively in our company, currently mostly in database servers, but our experiences have been good enough that we're slowly starting to add them to more systems, even as boot drives. But we're not using them yet in email storage. Like Brad, we're using Netapp filers, because as far as I know they're one of the few vendors of commercially available HA filers. We've looked at EMC and Sun as well, but haven't found a reason to move away from Netapp. In 12 years of Netapp we've had only one major outage, which lasted half a day (and made the front page of the national newspapers). So, understand that bit: major outages make the national newspapers for us. HA, failover, etc. are kind of important to us.
So why not build something ourselves and use SSD? I suppose we could, but it's not as easy as it sounds for us (your mileage may vary). It would take a significant amount of engineering time, testing, migration, and so on. And the benefits are uncertain: we don't know whether an open source HA alternative can give us another 12 years of virtually faultless operation. It may. It may not. Email is not something to start gambling with; people get kind of upset when their email disappears. We know what we've got with Netapp.
I did dabble in using SSD for indexes for a while, and it looked very promising. Indexes are certainly a prime target for SSD drives. But when the director matured, we went back to using the director and the Netapp for indexes. I may still build my own NFS server with SSD drives just for the indexes, simply to offload IOPS from the Netapp. Indexes are a little less scary to experiment with.
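For anyone who wants to experiment with the same split, here's a minimal sketch of what it looks like in Dovecot config, assuming maildir storage; the /ssd/indexes path is a hypothetical mount point for the SSD-backed filesystem:

    # Mail data stays on the regular (Netapp-backed) storage, while
    # Dovecot's index files go to a separate, SSD-backed filesystem.
    # %u expands to the username.
    mail_location = maildir:~/Maildir:INDEX=/ssd/indexes/%u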
So, if you're in a position to try out SSD drives for indexes or even for storage, go for it. I'm sure it will perform much better than spinning drives.
Cor