On 2/27/2012 10:54 AM, Charles Marcus wrote:
> These two locations will be connected via a private Gb ethernet connection, and each location will have its own internet connection (I think - still waiting on some numbers to present to the owner to see what he wants to do in that regard, but that will be my recommendation), so bandwidth for replication won't be an issue.
Say you're a boutique mail services provider or some such. In your own datacenter you have a Dovecot server with 64 processors, 512GB RAM, and 4 dual-port 8Gb Fibre Channel cards. It's connected via 8 redundant Fibre Channel links to 4 SAN array units, each housing 120 x 15k SAS drives, 480 drives total, ~140,000 random IOPS. This gear eats 36U of a 40U rack, and about $400,000 USD out of your wallet. In the remaining 4U at the top of the rack you have a router with two GbE links connected to the server, and an OC-12 SONET fiber link (~$15k-20k USD/month) to a national ISP backbone. Not many years ago OC-12s comprised the backbone links of the net; OC-48s handle that today. Today OC-12s are most often used to link midsized ISPs to national ISPs, act as the internal backbone of midsized ISPs, and link large ISPs' remote facilities to the backbone.
Q: How many concurrent IMAP clients could you serve with this setup before hitting a bottleneck at any point in the architecture? What is the first bottleneck you'd run into?
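One way to start reasoning about it is to compare the aggregate throughput of each tier in the stack. The sketch below uses the link counts and rates from the description above; the per-client IMAP bandwidth figure is purely a hypothetical assumption for illustration, not something stated in this thread.

```python
# Back-of-envelope throughput comparison for the hypothetical rack above.
# Aggregate capacity of each tier, in Mbit/s.
tiers = {
    "SAN fabric (8 x 8Gb FC)": 8 * 8_000,   # 64 Gb/s of Fibre Channel
    "Server NICs (2 x GbE)":   2 * 1_000,   # 2 Gb/s of Ethernet
    "OC-12 SONET uplink":      622,         # ~622 Mb/s line rate
}

# The narrowest tier is the first bandwidth bottleneck.
bottleneck = min(tiers, key=tiers.get)
print(f"Bottleneck tier: {bottleneck} at {tiers[bottleneck]} Mbit/s")

# Hypothetical figure: assume an average active IMAP client moves ~64 kbit/s.
per_client_kbit = 64
clients = tiers[bottleneck] * 1_000 // per_client_kbit
print(f"~{clients:,} concurrent clients would saturate that tier")
```

Whatever per-client number you plug in, the ordering is the same: the WAN link is two orders of magnitude narrower than the storage fabric the $400k bought.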
The correct answer to this question, and the discussion that will surely follow, may open your eyes a bit, and prompt you to rethink some of the assumptions behind the architectural decisions you've presented here.
-- Stan