On Thu, Dec 9, 2010 at 11:13 AM, Ralf Hildebrandt <Ralf.Hildebrandt@charite.de> wrote:
* Mark Moseley <moseleymark@gmail.com>:
We're on 2.6.32 and the load only goes up when I change dovecot (not when I change the kernel, which I haven't done so far)
If you at some point upgrade to >2.6.35, I'd be interested to hear if the load skyrockets on you.
You mean even more? I'm still hoping it would decrease at some point :) I updated to 2.6.32-27-generic-pae today. I wonder what happens.
I also get the impression that the load average calculation in these recent kernels is 'touchier' than in pre-2.6.35. Even with similar CPU and I/O utilization, the load average on a >2.6.35 box is much higher than on a pre-2.6.35 one, and it also seems to react more quickly; more jitter, I guess. That's based on nothing scientific, though.
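If I ever get around to testing it properly, even something as dumb as logging /proc/loadavg on both kernels under comparable traffic would put numbers on it. A rough sketch (untested; the interval and sample count are arbitrary picks):

    #!/usr/bin/env python
    # Log the 1-minute load average so two kernels can be compared
    # under similar traffic.
    import time

    INTERVAL = 5      # seconds between samples (arbitrary)
    SAMPLES = 720     # about an hour at 5-second intervals (arbitrary)

    with open("loadavg.log", "w") as log:
        for _ in range(SAMPLES):
            with open("/proc/loadavg") as f:
                one_min = f.read().split()[0]
            log.write("%d %s\n" % (time.time(), one_min))
            log.flush()
            time.sleep(INTERVAL)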
Interesting.
Interesting, and a major PITA. If I had more time, I'd go back and iterate through each kernel leading up to 2.6.35 to see where things start to go downhill. You'd like to think each kernel version would make things a little bit faster (or at least no worse). Though if it really is just more jittery and is showing higher loads without actually working any harder than an older kernel, then it's just my perception.
Upping the client_limit actually results in fewer processes, since a single process can service up to #client_limit connections. When I bumped up the client_limit for imap, my context switches plummeted.
Which setting are you using now?
At the moment, I'm using client_limit=5 for imap, but I keep playing with it. I have a feeling that's too high, though. If I had faster CPUs and more memory on these boxes, it wouldn't be so painful to put it back to 1.
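In config terms, that's the Dovecot 2.0 service-block syntax, i.e. something like:

    service imap {
      client_limit = 5
    }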
Though as Timo pointed out on another thread the other day when I was asking about this, when that proc blocks on I/O, it's blocking all the connections that the process is servicing. (Timo, correct me if I'm wildly off here -- I didn't even know this existed until a week or two ago.) So you can end up creating a bottleneck, which is why I've been playing with finding a sweet spot for imap.
Blocking on /proc? Never heard that before.
I was just being lazy. I meant 'process' :) So, if I'm understanding it correctly, assume you've got client_limit=2 and you've got connection A and connection B serviced by a single process. If A does a file operation that blocks, then B is effectively blocked too. So I imagine if you get enough of an I/O backlog, you can create a downward spiral where you can't service connections faster than they're coming in, and you top out at #process_limit. Which, btw, I set to 300 for imap, with client_limit=5 (so a hard cap of 300 * 5 = 1500 imap connections).
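To make the blocking behavior concrete, here's a toy sketch; plain Python, nothing to do with Dovecot's actual code. One process multiplexes several connections with select(), and the sleep() stands in for a blocking file operation; while it runs, every other client on that process just sits there:

    #!/usr/bin/env python
    # Toy: one process, many clients, select()-based loop. A blocking
    # call while serving one client stalls all the others too.
    import select
    import socket
    import time

    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 1430))   # arbitrary test port
    srv.listen(5)

    clients = []
    while True:
        readable, _, _ = select.select([srv] + clients, [], [])
        for sock in readable:
            if sock is srv:
                conn, _ = sock.accept()
                clients.append(conn)
                continue
            data = sock.recv(4096)
            if not data:
                clients.remove(sock)
                sock.close()
                continue
            time.sleep(5)   # stand-in for a blocked disk read; every
                            # connection on this process waits it out
            sock.sendall(data)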
I figure that enough of a process's imap connections must be sitting in IDLE at any given moment, so setting client_limit to 4 or 5 isn't too bad. Though it's not impossible that by putting multiple connections on a single process, I'm actually throttling the system, resulting in fewer context switches (though I'd imagine bottlenecked procs would be blocked on I/O and rack up a lot of voluntary context switches).
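FWIW, the kernel exposes those counts per process in /proc/<pid>/status (the voluntary_ctxt_switches and nonvoluntary_ctxt_switches lines), so it's easy to check whether the imap workers really are blocking. Quick sketch; save it as anything and point it at the imap PIDs, e.g. python ctxt.py $(pgrep imap):

    #!/usr/bin/env python
    # Print voluntary vs. nonvoluntary context switches per PID.
    # A proc that keeps blocking on I/O gives up the CPU willingly,
    # which shows up as a high voluntary_ctxt_switches count.
    import sys

    for pid in sys.argv[1:]:
        with open("/proc/%s/status" % pid) as f:
            for line in f:
                if line.startswith(("voluntary_ctxt_switches",
                                    "nonvoluntary_ctxt_switches")):
                    print("%s: %s" % (pid, line.strip()))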