On Tue, Nov 9, 2010 at 3:55 PM, Timo Sirainen <tss@iki.fi> wrote:
> On 9.11.2010, at 23.49, Mark Moseley wrote:
>
>> On Tue, Nov 9, 2010 at 3:05 PM, Timo Sirainen <tss@iki.fi> wrote:
>>> On 9.11.2010, at 22.14, Mark Moseley wrote:
>>>
>>> service imap { service_count = 0 }
>>
>> Would the risks involved be identical to your warnings about using "service_count=0" with pop3-login/imap-login, namely that if the daemon gets hacked, it'd be able to access other mailboxes (presumably ones that the imap/pop3 process already had open)?
>
> If all your users share the same UID and there was a security hole in imap/pop3, you could access everyone's mails regardless of this setting. (Except if you also enabled chroots, but then you couldn't use this setting.)
>> Nice, it does indeed seem to burn a lot less CPU. I've also set "process_min_avail=#" for 'service pop3', which appears to spread out incoming POP3 connections over all # pop3 procs. Any gotchas there? I've always got at least several hundred POP3 connections on a box, so having them not all hitting one proc is desirable. And having, say, 100 pop3 procs hanging around, possibly idle, is fine. This is pretty exciting stuff.
> Should be fine.
>> Anybody running this way in production on a large scale, i.e. using "service_count=0" in a "service imap" or "service pop3"?
> Only potential problem is memory leaks that keep increasing the memory usage. Of course there should be no memory leaks. :) You could anyway set something like service_count=1000 to get it to restart after handling 1000 connections.
I'll keep that one in mind. Doesn't seem like it could hurt to have something like =1000 just to be safe.
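
For anyone who wants to try this, here's roughly what the combined settings from this thread would look like in dovecot.conf (the numbers are just illustrations, not recommendations; tune them to your own connection counts):

service pop3 {
  # Reuse each pop3 process for many connections instead of forking
  # a new process per connection. 0 would mean "reuse forever"; a
  # finite count restarts the process after that many connections,
  # as a safety net against slow memory leaks.
  service_count = 1000

  # Keep at least this many pop3 processes alive so incoming
  # connections get spread across them rather than piling onto one.
  process_min_avail = 100
}

service imap {
  service_count = 1000
}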
For anybody else reading this thread, here's vmstat output from two boxes with an almost identical number of IMAP and POP3 connections; the first box has "service_count=0" for "service pop3", the second doesn't. Check out the context switches (the "cs" column, 5th from the right):
With "service_count=0":

 r  b swpd   free   buff   cache  si so  bi  bo   in   cs us sy id wa
 0  0    0  54056 161536 1116012  0  0 132   0 1326 7868  3  8 85  4
 0  0    0  53568 161608 1116192  0  0 116   0 1572 5111  2  6 80 13
 1  0    0  52320 161608 1116960  0  0  16 140 1584 6126  3  6 86  5
 0  0    0  51940 161656 1117292  0  0  44 128 1430 5508  3  7 86  5
 0  0    0  53316 161628 1115500  0  0  12   0 1459 5363  4  7 86  4
 0  0    0  52176 161664 1115468  0  0  76  76 1221 6344  6  5 84  6
 0  1    0  52300 161688 1115636  0  0  32   0 1250 3631  3  2 90  6
 0  0    0  53580 161648 1113116  0  0  76 172 1778 6671  3  9 83  6
Without "service_count=0":

 r  b swpd   free   buff  cache  si so  bi   bo   in    cs us sy id wa
 0  2 2476  73776 147860 958728  0  0   0    0  912 17442  6 11  0 84
 0  2 2476  61300 147860 958756  0  0   0    0  992 22130  5 19  0 76
21  4 2476  49484 148008 957608  0  0 140  748 2472 15922  9 24  0 68
30  4 2476  97928 146604 945232  0  0 404    0 7116 20700 43 58  0  0
 4  1 2476 169280 146700 946972  0  0 172    0 7014 43727 33 67  1  0
 7  1 2476 168684 146800 947304  0  0 168    0 5720 55485 37 62  2  1
 9  0 2476 165956 146856 947504  0  0  80    0 5074 56371 35 65  0  0
 6  6 2476 163032 146948 948380  0  0  92 2432 5879 63418 27 67  5  2
It's nice to look at 'perf top' on the first box and not see "finish_task_switch" chewing up 15+% of kernel CPU time :)
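
In case anyone wants to compare their own boxes the same way, the numbers above come from stock tools; something like this should do it:

# Print stats every 5 seconds; "cs" is context switches per second.
vmstat 5

# Live view of which kernel/userspace functions are eating CPU
# (typically needs root and the kernel perf tools installed).
perf top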