On 2.2.2013, at 12.59, Jan-Frode Myklebust <janfrode@tanso.net> wrote:
Or actually... it could simply be that in v2.0.15 the service lmtp { client_limit } default was changed to 1 (from default_client_limit = 1000). This matters on the backends, because writing to the message store can be slow, but proxying should be able to handle more than one client per process, even with the new temporary file writing. So you could see if it helps to set lmtp { client_limit = 100 } or something.
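On the directors that would look roughly like this (a sketch only; 100 is the illustrative value from above, not a tuned recommendation):

  service lmtp {
    client_limit = 100
  }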
My backend lmtp services are configured with client_limit = 1 and process_limit = 25, and there are 6 backends, i.e. max 150 backend LMTP processes if LMTP is spread evenly between the backends, which it won't be, since the backends are weighted differently (2x 50, 2x 75 and 2x 100).
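So each backend currently runs with the equivalent of (just restating the limits above as config):

  service lmtp {
    client_limit = 1
    process_limit = 25
  }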
I assume each director will proxy at most process_limit*client_limit connections to my backends. Will it be OK to have a much higher process_limit*client_limit on the directors than on the backends? Is it a problem if the directors are configured to seemingly handle a lot more simultaneous connections than the backends can?
Best to keep the bottleneck closest to the MTA. If the director can handle more connections than the backend, the MTA is uselessly waiting on the extra LMTP connections to time out. So I'd keep the director's process_limit*client_limit somewhat close to what the backends can handle (somewhat more is probably OK too). Anyway, if a backend reaches the limit it logs a warning about it and then just doesn't accept new connections until one of the existing ones finishes.
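As a sketch of that advice (the numbers are illustrative, chosen only so that process_limit * client_limit stays near the ~150 LMTP sessions your backends can take in total; tune to your own load):

  # on each director
  service lmtp {
    client_limit = 100
    process_limit = 2    # 2 * 100 = 200, somewhat above the backends' ~150
  }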