On Tue, 2010-03-16 at 13:30 -0400, Frank Cusack wrote:
Well, you just mentioned the benefits :) Less memory usage, fewer context switches (of any kind). (You aren't assuming I'd do something like one thread per connection, right? That's not going to happen.)
But as I stated, dovecot is likely to be I/O limited, not CPU or memory limited. Who cares if there is 10% less memory usage when the limit is still I/O? Feel free to trout-slap me (to borrow from Stan) if you expect dovecot to be memory limited.
It is I/O limited, but lower memory usage can still help. Some setups use e.g. NFS storage with multiple Dovecot servers accessing it; they could reduce the number of Dovecot servers if a single server could handle more connections.
Also, all setups benefit from lower memory usage, because it leaves more memory for the disk cache -> less disk I/O.
That's kind of the point. You could have just a few IMAP processes, each handling hundreds of connections. Currently that's not a very good idea, because a single connection waiting on disk I/O blocks all the other connections in the same process from doing anything.
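To make that concrete, here's a minimal sketch of that failure mode (not Dovecot code; handle_command and the poll() loop shape are assumptions) where one cache-missing mailbox read stalls every connection in the process:

#include <poll.h>
#include <unistd.h>

/* Hypothetical per-command handler; in a real server this would parse
 * and execute one IMAP command, possibly reading mailbox files. */
void handle_command(int fd, const char *buf, size_t len);

void serve_clients(struct pollfd *fds, int nfds)
{
    char buf[4096];

    for (;;) {
        /* Wait until any client socket has data. */
        if (poll(fds, nfds, -1) < 0)
            return;

        for (int i = 0; i < nfds; i++) {
            if ((fds[i].revents & POLLIN) == 0)
                continue;

            ssize_t n = read(fds[i].fd, buf, sizeof(buf));
            if (n <= 0)
                continue;

            /* If this command needs a mailbox read that misses the
             * page cache, the process sleeps inside the disk read.
             * While it sleeps, none of the other connections in this
             * same loop are serviced, even if their data is already
             * in memory. */
            handle_command(fds[i].fd, buf, (size_t)n);
        }
    }
}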
Or you could have a whole bunch of IMAP processes, one per connection. All you're doing is multiplexing the I/O at a different part of your system. No connection shares any state or other data with other "related" connections.
Actually, if multiple connections are accessing the same mailbox in the same process, they can (and already do!) share index data.
The kernel can only parallelize requests from different processes.
Right, but again so what? Since each connection is a different process, all I/O is parallelized in dovecot's case. It works well because connections aren't related to each other.
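For contrast, here's a sketch of that process-per-connection model (again hypothetical, not Dovecot's actual code; serve_one_client is assumed). The kernel schedules each child independently, so one child sleeping in a disk read doesn't delay the others:

#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical: serves exactly one IMAP connection, using ordinary
 * blocking reads and writes. */
void serve_one_client(int client_fd);

void accept_loop(int listen_fd)
{
    for (;;) {
        int client_fd = accept(listen_fd, NULL, NULL);
        if (client_fd < 0)
            continue;

        if (fork() == 0) {
            /* Child: a disk read that blocks here only puts this
             * process to sleep; the kernel keeps running the other
             * children, so their I/O proceeds in parallel. */
            close(listen_fd);
            serve_one_client(client_fd);
            _exit(0);
        }
        /* Parent: hand the socket off to the child and keep listening. */
        close(client_fd);
    }
}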
I think I answered this in the other mail: access to high-latency storage (which can also reduce command reply latency somewhat, even for low-latency storage).