[Dovecot] Multiple locations, 2 servers - planning questions...

Stan Hoeppner stan at hardwarefreak.com
Sun Mar 4 02:51:39 EET 2012


On 3/3/2012 2:20 PM, Charles Marcus wrote:
> Thanks very much for taking the time for your detailed reply, Stan, but
> I'll need more time to study it...
> 
> On 2012-03-02 4:17 AM, Stan Hoeppner <stan at hardwarefreak.com> wrote:
> <snip>
>> My gut instinct, based on experience and the math, is that a single GbE
>> inter-site MAN link will be plenty, without the need to duplicate server
>> infrastructure.
> 
> I just wanted to point out one thing - I have two primary goals - yes,
> one is to maximize performance, but the other is to accomplish a level
> of *redundancy*...

What type of redundancy are you looking for?  I.e., is one reason for
duplicating servers at site #2 to avoid disruption in the event the MAN
link fails?  Do you currently have redundant GbE links to each closet
switch stack in site #1, and also redundant switches in the datacenter?
I.e., do you skip a beat if a core or closet switch fails?

If you do not currently have, nor plan to create such network redundancy
internally at site #1, then why build application redundancy with the
single goal of mitigating failure of a single network link?  Do you have
reason to believe there is a higher probability of failure of the MAN
link than any other single link in the current network?

> Also - I already have the servers (I have three PowerEdge 2970s
> available to me, only one of which is currently being used)...
> 
> So, the only extra expenses involved will be relatively minor hardware
> expenses (multi-port Gb NICs), and some consulting services for making
> sure I implement the VM environment (including the routing) correctly.

Again, you don't need multi-port GbE NICs or bonding for performance--a
single GbE link is all each server needs.  Your switches should be able
to demonstrate that, without even needing a sniffer, assuming they're
decent managed units.  If you're after link redundancy, use two single
port NICs per server, or one mobo-mounted port and one single port NIC.
Most dual port NICs duplicate the PHYs but not the Ethernet chip nor
power circuits, etc.  Thus, when a dual port NIC fails you usually lose
both ports.
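If you go the two-single-port-NIC route on a Linux box, active-backup
bonding is one common way to get transparent failover between them.  A
minimal sketch using iproute2 follows; the interface names (eth0/eth1)
and the address are assumptions for illustration, not anything from your
setup:

```shell
# Sketch: active-backup bonding across two single-port NICs (eth0, eth1).
# mode=active-backup keeps one link as a hot standby; miimon=100 checks
# link state every 100 ms so failover happens quickly on a dead port.
modprobe bonding
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up
# Example address -- substitute your own.
ip addr add 192.168.1.10/24 dev bond0
```

Active-backup needs no switch-side configuration, unlike 802.3ad LACP,
which is exactly what you want when the two ports go to two different
switches for redundancy.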

> So, honestly, we'd be incurring most of these expenses anyway, even if
> we didn't set up redundant servers, so I figure why not get redundancy
> too (now is the time to get the boss to pay for it)...

Don't forget power backup at site #2.  Probably not a huge cost in the
overall scheme of things, but it's still another $5000 or so.

In summary, my advice is:

1.  One 1000Mb MAN link is plenty of bandwidth for all users at site #2
    including running internet traffic through site #1, saving the cost
    of an internet pipe at site #2

2.  If truly concerned about link failure, get a backup 100Mb/s link,
    or get two GbE links with a burst contract, depending on price

3.  Keep your servers in one place.  If you actually desire application
    level redundancy (IMAP, SMB/CIFS, etc) unrelated to a network link
    failure, then do your clustering etc "within the rack".  It will be
    much easier to manage and troubleshoot this than two datacenters w/
    all kinds of replication etc between them

4.  If site #1 is not already link redundant, it makes little sense to
    make a big redundancy push to cover a possible single network link
    failure, regardless of which link

5.  Building a 2nd datacenter and using the MAN link for data
    replication gives you no performance advantage, and may actually
    increase overall utilization, vs using the link as a regular trunk

6.  *Set up QoS appropriately to maintain low latency for IMAP and
    other priority data, giving a back seat to SMB/CIFS/FTP/HTTP and
    other bulk transfer protocols.*  With proper QoS the single GbE
    MAN link will simply scream for everyone, regardless of saturation
    level
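On Linux routers, item 6 can be done with the tc prio qdisc on the
MAN-facing interface.  A minimal sketch; the interface name (eth0) is an
assumption, and you'd extend the port list to whatever you consider
priority traffic:

```shell
# Sketch: 3-band priority queueing on the MAN-facing interface (eth0
# assumed).  Traffic matched into band 0 (flowid 1:1) is always
# dequeued before the lower bands, so IMAP stays low-latency even when
# bulk protocols saturate the link.
tc qdisc add dev eth0 root handle 1: prio bands 3

# IMAP (143) and IMAPS (993) -> band 0, the highest-priority band.
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip dport 143 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip dport 993 0xffff flowid 1:1

# Unmatched traffic (SMB/CIFS, FTP, HTTP, etc.) falls into the lower
# bands via the default priomap, i.e. it only gets the leftovers.
```

If the MAN provider polices you below the physical GbE rate, you'd wrap
this in an HTB or similar shaper set just under the contracted rate, so
queueing happens on your box where the prio bands can act, not in the
provider's policer.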

-- 
Stan


