I set up load-balanced clusters for clients with HAProxy + KeepAlived + Dovecot Director running on the frontend servers, so it's sad that we now have to find an alternative to replace Director in that kind of setup.
It's not about "small/medium" servers, but about the demand for an imap/pop3/lmtp proxy service, especially in a load-balanced cluster.
Curious, trying to understand..
Why would a true load balancer not be an attractive option for those who need to load balance services across multiple front ends?
It is the model we use with most of our ISPs, and it scales very well.
The choice of load balancer is important, but with HA load balancers you are assured that you don't have a single point of failure, and you can spread load more granularly, e.g. across POP, IMAP and other services.
Not to mention, you can use the same load balancer for many other traffic-shaping purposes.
The problem that prevents most load balancers from handling the backend imap/pop traffic is that the load balancer needs to be aware of the context of each connection, and that all boils down to the index files: only one dovecot server can safely access a given user's index files at a time, otherwise the indexes get corrupted.
In the more usual HTTP case, you'd probably use some sort of cookie-based session affinity to keep requests from a particular user going to the same backend http server.
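For comparison, cookie-based affinity in HAProxy looks roughly like this (server names and addresses are just placeholders, not anything from a real setup):

    backend webmail
        mode http
        balance roundrobin
        # HAProxy inserts a SRV cookie so each browser keeps hitting the same web node
        cookie SRV insert indirect nocache
        server web1 10.0.0.21:80 check cookie web1
        server web2 10.0.0.22:80 check cookie web2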
But in the IMAP/POP case, most load balancers don't really know anything about the connection and just blindly forward it to the backend nodes. Director (or a custom nginx LB setup) handles part of the IMAP/POP transaction and so gets a bit of context (which user the connection is for), and can then decide which backend imap node to send the connection through to, preventing the index corruption problem.
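As a rough sketch of the nginx variant (the auth endpoint and ports are just examples), the mail proxy asks an auth_http script which backend a given user belongs to, then proxies the connection there:

    mail {
        # the auth_http script receives the login and replies with
        # Auth-Status/Auth-Server/Auth-Port headers naming the backend to use
        auth_http 127.0.0.1:9000/auth;

        server {
            listen   143;
            protocol imap;
        }
        server {
            listen   110;
            protocol pop3;
        }
    }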
You could use IP-based affinity for pop/imap connections on a context-unaware load balancer, but if you have a lot of users behind NAT, the connections will end up unbalanced across the backend servers, and connections from something like a webmail server will all land on the same backend server (since they'd all come from the same IP address).
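If you went that route anyway, the HAProxy version is just source-IP hashing, something along these lines (addresses are placeholders):

    frontend imap_in
        mode tcp
        bind :143
        default_backend imap_nodes

    backend imap_nodes
        mode tcp
        # hash the client IP, so a given source address always lands on the same backend
        balance source
        server imap1 10.0.0.11:143 check
        server imap2 10.0.0.12:143 check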
You could also just have a dumb load balancer sitting in front, randomly sending connections to any backend imap server, but then each backend imap server would have to maintain its own copy of the indexes. Workable, but not particularly efficient, especially if you have a large number of backend imap servers (though with a small setup of only 2 or 3 backend imap servers, used for redundancy rather than performance, it's probably acceptable).
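In that "dumb LB" setup each backend would typically keep its index copies on local disk while the mailboxes sit on shared storage; a Dovecot mail_location along these lines (paths are just examples) does that:

    # mailboxes on shared storage, per-server index copies on local disk
    mail_location = maildir:~/Maildir:INDEX=/var/lib/dovecot/indexes/%u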
You'd still want some sort of load-balanced director or nginx pool as well, in order to handle redundancy at that level, but that's a much easier task, since you don't have to worry about session context at that point. (We have hardware load balancers in front of the director nodes.)
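For that front layer, a plain TCP round-robin over the director/proxy nodes is enough, since any of them can take any connection; in HAProxy terms it could be as simple as (addresses again placeholders):

    backend director_pool
        mode tcp
        balance roundrobin
        server dir1 10.0.1.11:143 check
        server dir2 10.0.1.12:143 check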