On 3/1/2012 5:43 AM, Charles Marcus wrote:
> Obviously, I don't have the experience or expertise to answer these questions myself (never analyzed IMAP traffic to have an idea of the bandwidth each user uses, and probably wouldn't trust my efforts if I made the attempt). Hopefully, there are some people here who have a rough idea, which is why I brought this question up here.
Expanding on my previous statements, and hopefully answering some of these questions, or at least getting in the ballpark, let's see what a single GbE link is capable of.
Let's assume the average transfer size of an SMTP/IMAP email, including headers, is roughly 4096 bytes, or 32,768 bits.
TCP over GbE after all framing and protocol overhead:
= 992,697,000 bits/sec maximum bandwidth with jumbo frames
= 941,482,000 bits/sec maximum bandwidth without jumbo frames
We'll go without jumbo frames in our example. Every GbE interface on a given network segment must support jumbo frames or you can't enable them. If you enable them anyway, interfaces that don't support jumbo will have bad to horrible performance, or may not work at all. Many workstation NICs don't support jumbo frames, and neither do many commercial routers.
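If you're curious where the 941,482,000 figure comes from, here's a quick Python sketch of the arithmetic. I'm assuming the standard 1500 byte MTU with the TCP timestamp option enabled (the default on most modern stacks); the jumbo figure depends on which jumbo MTU your gear negotiates, so I've left it out:

# Effective TCP payload bandwidth over GbE with a 1500 byte MTU (no jumbo frames).
GBE_LINE_RATE = 1_000_000_000    # raw GbE signaling rate, bits/sec

MTU        = 1500                # standard Ethernet MTU, bytes
IP_HDR     = 20                  # IPv4 header
TCP_HDR    = 20                  # TCP header
TCP_TS_OPT = 12                  # TCP timestamp option
WIRE_OVERHEAD = 8 + 14 + 4 + 12  # preamble + Ethernet header + FCS + interframe gap

payload = MTU - IP_HDR - TCP_HDR - TCP_TS_OPT   # 1448 bytes of TCP payload per frame
on_wire = MTU + WIRE_OVERHEAD                   # 1538 bytes consumed on the wire

print(f"{GBE_LINE_RATE * payload / on_wire:,.0f} bits/sec")  # 941,482,445, i.e. ~941,482,000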
Typical IMAP command payloads are absolutely tiny, so we'll concentrate on response traffic. Theoretical steady-state IMAP server-to-client transfer rates for 4KB messages:
= 28,731 msgs/sec
= 1,723,905 msgs/minute
= 103,434,301 msgs/hour
= 2,482,423,242 msgs/day
General file transfer bandwidth, 5MB JPG:
= 22 files/sec
= 1,346 files/minute
= 80,808 files/hour
= 1,939,393 files/day
General file transfer bandwidth, 100MB TIFF:
= 1 file/sec
= 67 files/minute
= 4,040 files/hour
= 96,969 files/day
General file transfer bandwidth, 500MB video file:
= 1 file in 4.5 seconds
= 10 files in 44.6 seconds
= 100 files in 7.4 minutes
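If anyone wants to rerun these tables with different sizes or link speeds, the arithmetic is trivial. A minimal Python sketch, flooring the way the figures above are floored:

import math

TCP_BW = 941_482_000   # bits/sec effective TCP bandwidth, GbE without jumbo frames

def rates(size_bytes):
    """Transfers per second, minute, hour, and day at full link utilization."""
    per_sec = TCP_BW / (size_bytes * 8)
    return [math.floor(per_sec * t) for t in (1, 60, 3600, 86400)]

MB = 1024 * 1024
for label, size in [("4KB message", 4096), ("5MB JPG", 5 * MB), ("100MB TIFF", 100 * MB)]:
    s, m, h, d = rates(size)
    print(f"{label}: {s:,}/sec {m:,}/min {h:,}/hour {d:,}/day")

# 500MB video is easier to express as time per file:
print(f"500MB video: {500 * MB * 8 / TCP_BW:.1f} seconds/file")   # ~4.5 seconds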
As you can see, a single GbE interface has serious capacity and will probably carry your inter-site traffic easily, without needing duplicate servers at the second site. You mentioned putting multiple GbE interfaces on your servers. Very, very few servers *need* 900+ Mb/s of bandwidth, but having two links is good for redundancy. So I wouldn't worry about aggregation performance, only about proper, seamless failover.
I obviously haven't seen your workflows, Charles, but I recall you do a lot of media work. By 'you' I mean Media Brokers. So obviously your users will be hitting the network harder than average office workers. I'm taking that into account.
My gut instinct, based on experience and the math, is that a single GbE inter-site MAN link will be plenty, without the need to duplicate server infrastructure. Again, have a qualified network architect sniff your current network traffic patterns and discuss the anticipated user traffic at the 2nd site with you to determine your average and peak inter-site bandwidth needs (a toy illustration of the average-vs-peak math follows the two options below). The average will absolutely be much less than 1Gb/s, but the peak may be well above 1Gb/s. You can still avoid the myriad problems/costs of server duplication without incurring significant additional link costs. There are a couple of options that should be available to you:
1. A second fiber pair and GbE link. You might negotiate a burst contract: you pay a flat monthly rate for a base bit rate of X and pay extra for bursts. Burst contract availability will depend on the provider's network topology. If at any point they're aggregating multiple customers' traffic on a single trunk fiber pair, a burst contract should be available. Burst contracts allow them to oversubscribe their trunks, just as ISPs and broadband providers do. Your network architect should be able to help you figure out the base and peak bit rates you'd want for such a contract. Why pay for 1000Mb/s from 8pm to 6am if you're only using 20Kb/s?
2. Add a second GbE link on a different transceiver wavelength, using a prism on each end to carry both links over one fiber pair. This is typically cheaper when the provider has limited fiber runs in a given area or to a given building. You may or may not be able to save money with a burst contract in this scenario. Talk to your provider and find out what your options are, but wait until your architect has finished your network analysis before speaking to them.
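On that average-vs-peak point, here's a toy Python sketch of the kind of summary your architect would distill from a packet capture. The per-second byte counts are made up purely for illustration:

# Hypothetical per-second byte counts from a capture of the whole LAN (made up).
bytes_per_sec = [120_000, 60_000, 2_500_000, 95_000_000, 140_000_000, 80_000]

mbps = [b * 8 / 1_000_000 for b in bytes_per_sec]   # bytes/sec -> Mb/s
print(f"average: {sum(mbps) / len(mbps):,.1f} Mb/s, peak: {max(mbps):,.1f} Mb/s")

# A burst contract's base rate gets sized near the average; you pay for
# excursions toward the peak only when they actually happen.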
Treat this link as a traditional WAN link. Do NOT treat it as simply another switch segment. Put an IP router on each side of the GbE MAN link and create a separate IP subnet for hosts and devices in the new office. By doing this you keep broadcast traffic from traversing the link. This includes things like ARP discovery, DHCP, NTP broadcasts, and most importantly, broadcast traffic from disk imaging software. If you don't make this an IP routed link, network disk imaging traffic will traverse the MAN link just as it traverses your entire switched LAN. That could be anywhere from 25-80MB/s (200-640Mb/s) of broadcast traffic, and you obviously don't want it clogging the link. You *might* be able to eliminate broadcast traffic using special VLAN configurations on sufficiently advanced layer 2-7 "switch routers", but it's cheaper and foolproof when done with standard IP routers. Again, chat with your architect.
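To make the routed-subnet idea concrete, here's a small Python sketch using the standard ipaddress module. The 10.10.0.0/16 block and the /24 split are hypothetical; use whatever private address space you actually have:

# Carve separate subnets for the two offices out of a hypothetical 10.10.0.0/16.
# Broadcasts (ARP, DHCP, disk imaging) stop at each subnet's router interface,
# so they never cross the MAN link.
import ipaddress

block = ipaddress.ip_network("10.10.0.0/16")
office_a, office_b = list(block.subnets(new_prefix=24))[:2]

print(f"office A: {office_a}, broadcasts land on {office_a.broadcast_address}")
print(f"office B: {office_b}, broadcasts land on {office_b.broadcast_address}")

# A broadcast in office A targets 10.10.0.255; routers don't forward it, so
# office B (10.10.1.0/24) never sees a byte of it.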
With this being a routed connection and broadcast traffic being eliminated, any services that rely on broadcasts will need to be duplicated or tweaked accordingly. You will need a DHCP server in the new office; the router should be able to serve DHCP, unless you're currently serving some custom scope it can't handle. If you rely on broadcasts for WINS, or have any other Microsoft services that rely on broadcasts, you will need to address those. If you currently use NTP broadcasts for time updates you'll need another NTP server in the new office; again, the router should be able to broadcast NTP updates. The solutions to these things have been around forever, so I'm not going to go into all of them, but you need to be aware of them and discuss them with your network architect or a qualified Microsoft consultant. If you run no MS servers and don't use broadcasts, then there's no need to worry about it. And hooray for you, no MS! :)
This may be of interest given the topic. At a previous $dayjob a few years back, we ran the traffic of about 580 desktops/wireless laptops through a single GbE uplink into an 11-blade server farm backed by a small Fibre Channel SAN. Blade-to-blade IP traffic went through a dedicated GbE switch module (14 internal ports, 6 external), so things like vmotion, backups, etc. worked at full boogie. But the uplink from the switch module in the BladeCenter to the Cisco 5000 core switch was a single copper GbE uplink. All user traffic flowed over this link. We never had performance issues. We'd configured QoS to keep the IP phones happy, but that's about it for traffic shaping. Before I left I jacked in a 2nd GbE uplink for redundancy and configured Cisco's link aggregation protocol. We didn't notice a performance difference. I could have aggregated 6 GbE uplinks. One did the job, two gave resiliency, and more would have just wasted ports on the core switch.
Hope you find this educational/informative/useful, Charles, and maybe others will too.
-- Stan