On 8 Jul 2012, at 08:36, Steve Litt wrote:
Can one even argue on one side or the other without knowing the speed of the network, and how much contention is on that network?
My experience is that with a 100 Mb/s network, local is faster, although I've never had a SAN, so to speak, on the other end.
The SATA rev 3 specification is 6 Gb/s, which is a heck of a lot faster than the 1 Gb/s spec of a gigabit network. Both have a lot of things slowing them down from their spec, but I'd need to see some proof of an assertion that anything coming in over a 1 Gb/s wire can beat a SATA rev 3 local disk.
This isn't really a fair comparison because I don't think rev 3 is commodity yet. So rev 2, which IS commodity, is 3 Gb/s, which is still considerably faster than the wire on a gigabit network.
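For a rough back-of-envelope comparison of usable bandwidth (the encoding-overhead figures below are my assumptions, not measurements):

    # Back-of-envelope usable bandwidth, in MB/s. Assumptions: SATA uses
    # 8b/10b encoding (80% of raw bits carry data); gigabit Ethernet is
    # taken at raw line rate, before TCP/IP and framing overhead.
    links_gbps = {"SATA rev 3": (6, 0.8), "SATA rev 2": (3, 0.8), "GigE wire": (1, 1.0)}
    for name, (gbps, efficiency) in links_gbps.items():
        mb_per_s = gbps * 1e9 * efficiency / 8 / 1e6
        print("%s: ~%d MB/s" % (name, mb_per_s))

So on paper the SATA buses still have roughly 2-5x the gigabit wire, before you ever touch a platter.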
I think there are optimal situations where any configuration looks good . . How often can a real-world disk actually deliver 6 Gb/s when only a minority of disk reads are long sequential runs on the platters?
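To put that in numbers (the ~120 MB/s sustained platter rate below is just an assumed figure for a 7200 RPM drive of that vintage, not a spec):

    # How long to move 1 GB, limited by each link in the chain.
    # The 120 MB/s sustained platter rate is an assumption, not a spec.
    file_mb = 1000
    for name, mb_per_s in [("platters (sustained)", 120),
                           ("SATA rev 3 bus", 600),
                           ("gigabit Ethernet wire", 125)]:
        print("%s: %.1f s" % (name, file_mb / mb_per_s))

In other words, outside of cache hits and long sequential streams, the drive itself rather than the 6 Gb/s bus tends to be the ceiling, and that ceiling is in the same ballpark as the gigabit wire.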
That's why I take the broader view . . over the course of, say, 2-3 years, with a range of applications and demands, systems that are large and complex by virtue of demand rather than through any fault of design may be better served, in terms of resources, reliability, management and end-user experience, by a high-performance storage solution connected over extremely high-speed dedicated channels (I don't think anyone ever suggested that a SAN over a 100 Mb/s network was a high-performance scenario) . . and then look at how things work overall, including hardware maintenance, for example.
So, the laboratory / theoretical throughput of an internal 6 Gb/s bus is only partly a factor, imho...