[Dovecot] Can't get over 1024 processes on FreeBSD - possible bug?
Hello,
I still cannot get Dovecot running with more than 1000 processes, even though the hard limit is 8192 processes per user on this box. I have tried everything, including modifying Dovecot's startup script to set ulimit -u 8192. Could it be a Dovecot bug, or a Dovecot<>FreeBSD interaction bug? I also tried setting client_limit=2 on the imap service so that one process serves more imap clients, but I still cannot get past roughly 1000 processes, and the kernel logs:
maxproc limit exceeded by uid 89
Could anybody help? Many thanks Tomas
My system has the following settings:
FreeBSD 9.0 / AMD64
Dovecot 2.1.8
kern.maxproc: 12288
kern.maxfilesperproc: 36864
kern.maxprocperuid: 8192
no limit for uid 89:
# limit
cputime      unlimited
filesize     unlimited
datasize     33554432 kbytes
stacksize    524288 kbytes
coredumpsize unlimited
memoryuse    unlimited
vmemoryuse   unlimited
descriptors  36864
memorylocked unlimited
maxproc      8192
sbsize       unlimited
swapsize     unlimited
My dovecot.conf:
# 2.1.8: /usr/local/etc/dovecot/dovecot.conf
# OS: FreeBSD 9.0-STABLE amd64
auth_mechanisms = plain login digest-md5 cram-md5
default_client_limit = 2048
default_process_limit = 2048
disable_plaintext_auth = no
first_valid_gid = 89
first_valid_uid = 89
info_log_path = /data/logfiles/dovecot/dovecot-info.log
last_valid_gid = 89
last_valid_uid = 89
listen = *
log_timestamp = "%Y-%m-%d %H:%M:%S "
login_greeting = Mail Toaster (Dovecot) ready.
mail_location = maildir:~/Maildir
mail_plugins = " quota"
mail_privileged_group = mail
maildir_broken_filename_sizes = yes
passdb {
  driver = vpopmail
}
plugin {
  quota = maildir
  quota_rule = Trash:ignore
  sieve = ~/.sieve/dovecot.sieve
  sieve_dir = ~/.sieve
}
protocols = imap pop3 sieve
service anvil {
  client_limit = 6147
}
service auth {
  client_limit = 8192
  unix_listener auth-client {
    mode = 0660
  }
  unix_listener auth-master {
    mode = 0600
  }
}
service imap-login {
  process_limit = 2048
  service_count = 1
}
service imap {
  client_limit = 1
  process_limit = 2048
}
service managesieve {
  process_limit = 2048
}
service pop3-login {
  process_limit = 2048
  service_count = 1
}
service pop3 {
  client_limit = 1
  process_limit = 2048
}
shutdown_clients = no
ssl_cert =
}
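A quick way to double-check which limits the running Dovecot actually picked up is to dump the non-default settings again (doveconf -n is the standard dump in 2.x; the grep pattern is just an example):

doveconf -n | grep -E '(process|client)_limit'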
On 21.09.2012 11:32, Tomáš Randa wrote:
Hello,
I still cannot get Dovecot running with more than 1000 processes, even though the hard limit is 8192 processes per user on this box. I have tried everything, including modifying Dovecot's startup script to set ulimit -u 8192. Could it be a Dovecot bug, or a Dovecot<>FreeBSD interaction bug? I also tried setting client_limit=2 on the imap service so that one process serves more imap clients, but I still cannot get past roughly 1000 processes, and the kernel logs:
No idea about BSD, but your config allows a total of up to 10,240 processes (five services, each with process_limit = 2048).
One process holds much more than one file handle; I have ONE imap-login process with 572 file handles.
So your configuration can eat up to around 5 million file handles; maybe you are running out of OS resources.
1000 processes can mean up to 500,000 file handles for a single service.
[root@mail:~]$ ps aux | grep imap-login | wc -l
2

[root@mail:~]$ lsof | grep imap-l | wc -l
572
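If the box really is running short of kernel file handles rather than processes, that is easy to check from the shell too (a quick sketch; the sysctl names are standard FreeBSD, and the figure in the comment is only the rough worst case implied by the config above):

# how close the system is to its global and per-process file limits
sysctl kern.maxfiles kern.openfiles kern.maxfilesperproc
# worst case here: ~2048 processes x ~500 handles each, per service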
For imap-login / pop3-login, for example, you do not need a process per connection:
service_count = 0
process_min_avail = 1
process_limit = 10
client_limit = 200
This can handle up to 2000 connections (10 processes x 200 clients each) with at most 10 processes.
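Written out as complete service blocks (a sketch only, using the numbers suggested above; process_limit and client_limit should be tuned to the connection count you actually expect):

service imap-login {
  service_count = 0
  process_min_avail = 1
  process_limit = 10
  client_limit = 200
}
service pop3-login {
  service_count = 0
  process_min_avail = 1
  process_limit = 10
  client_limit = 200
}

With service_count = 0 each login process is reused for many connections instead of exiting after one, which is what keeps the process count down.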
service imap-login {
  process_limit = 2048
  service_count = 1
}
service imap {
  client_limit = 1
  process_limit = 2048
}
service managesieve {
  process_limit = 2048
}
service pop3-login {
  process_limit = 2048
  service_count = 1
}
service pop3 {
  client_limit = 1
  process_limit = 2048
}
On 9/21/2012 4:32 AM, Tomáš Randa wrote:
Hello,
I still cannot get Dovecot running with more than 1000 processes, even though the hard limit is 8192 processes per user on this box. I have tried everything, including modifying Dovecot's startup script to set ulimit -u 8192.
What is your value for kern.maxusers? Did you try increasing it? Note, in the second paragraph quoted below, the relationship between kern.maxusers and the process limit. From what you describe, it would seem you have a process limit of 1044, and thus a kern.maxusers value of 64. Considering that your manual setting of 8192 processes is apparently being ignored, the kern.maxusers value may be overriding it. From:
http://www.pl.freebsd.org/doc/handbook/configtuning-kernel-limits.html
As of FreeBSD 4.5, kern.maxusers is automatically sized at boot based on the amount of memory available in the system, and may be determined at run-time by inspecting the value of the read-only kern.maxusers sysctl. Some sites will require larger or smaller values of kern.maxusers and may set it as a loader tunable; values of 64, 128, and 256 are not uncommon. We do not recommend going above 256 unless you need a huge number of file descriptors; many of the tunable values set to their defaults by kern.maxusers may be individually overridden at boot-time or run-time in /boot/loader.conf (see the loader.conf(5) man page or the /boot/defaults/loader.conf file for some hints) or as described elsewhere in this document. Systems older than FreeBSD 4.4 must set this value via the kernel config(8) option maxusers instead.
In older releases, the system will auto-tune maxusers for you if you explicitly set it to 0[1]. When setting this option, you will want to set maxusers to at least 4, especially if you are using the X Window System or compiling software. The reason is that the most important table set by maxusers is the maximum number of processes, which is set to 20 + 16 * maxusers, so if you set maxusers to 1, then you can only have 36 simultaneous processes, including the 18 or so that the system starts up at boot time and the 15 or so you will probably create when you start the X Window System. Even a simple task like reading a manual page will start up nine processes to filter, decompress, and view it. Setting maxusers to 64 will allow you to have up to 1044 simultaneous processes, which should be enough for nearly all uses. If, however, you see the dreaded proc table full error when trying to start another program, or are running a server with a large number of simultaneous users (like ftp.FreeBSD.org), you can always increase the number and rebuild.
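Putting that arithmetic together, a sketch of how one would check the auto-tuned value and, if needed, override it (the concrete numbers are only examples):

# current auto-tuned values
sysctl kern.maxusers kern.maxproc kern.maxprocperuid
# maxproc is derived as 20 + 16 * maxusers unless overridden:
#   maxusers = 64   ->  20 + 16*64  = 1044 processes
#   maxusers = 512  ->  20 + 16*512 = 8212 processes
# to raise it, set a loader tunable in /boot/loader.conf and reboot, e.g.
#   kern.maxusers=512
# or override the derived limit directly with kern.maxproc=12288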
-- Stan
On 21/09/12 11:32, Tomáš Randa wrote:
Hello,
I still cannot get Dovecot running with more than 1000 processes, even though the hard limit is 8192 processes per user on this box. I have tried everything, including modifying Dovecot's startup script to set ulimit -u 8192. Could it be a Dovecot bug, or a Dovecot<>FreeBSD interaction bug? I also tried setting client_limit=2 on the imap service so that one process serves more imap clients, but I still cannot get past roughly 1000 processes, and the kernel logs:
maxproc limit exceeded by uid 89
Could anybody help? Many thanks Tomas
Hi,
I don't know BSD, but we had a similar problem with Linux: when we reached 1024 processes, no more processes were created and we got errors like "imap-login: Panic: epoll_ctl(add, 6) failed: Invalid argument".
If this is the same situation as yours, you can find more information at
http://www.dovecot.org/list/dovecot/2012-July/067014.html
--
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información y las Comunicaciones Aplicadas (ATICA)
http://www.um.es/atica
Tel: 868887590  Fax: 868888337
On 21/09/12 11:32, Tomáš Randa wrote:
Hello,
I still cannot get Dovecot running with more than 1000 processes, even though the hard limit is 8192 processes per user on this box. I have tried everything, including modifying Dovecot's startup script to set ulimit -u 8192. Could it be a Dovecot bug, or a Dovecot<>FreeBSD interaction bug? I also tried setting client_limit=2 on the imap service so that one process serves more imap clients, but I still cannot get past roughly 1000 processes, and the kernel logs:
maxproc limit exceeded by uid 89
You may be running into the kern.maxprocperuid sysctl setting. This is initialized to 9/10ths of kern.maxproc, but can be changed independently. If you do that, you may want to consider setting a default maxproc rlimit in login.conf for the other users on the box. (You may, of course, already have a maxproc limit in login.conf, and that could be what is causing the problem, though the default install doesn't include one.)
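A minimal sketch of both knobs (the values are examples only, and the entry shown is the stock "default" login class):

# raise the per-uid cap now, and persist it across reboots
sysctl kern.maxprocperuid=8192
echo 'kern.maxprocperuid=8192' >> /etc/sysctl.conf
# cap ordinary users again in /etc/login.conf, e.g.
#   default:\
#           :maxproc=4096:\
#           ...
# then rebuild the capability database so the change takes effect
cap_mkdb /etc/login.conf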
If you have procfs mounted you can check the maxproc rlimit of a running process by looking in /proc/pid/rlimit. In principle it's possible to also get this information with libkvm, but it's not very easy and I don't think any of the standard utilities expose it.
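For example (assuming procfs is mounted on /proc and that this install keeps the master pid file at the default /var/run/dovecot/master.pid):

# mount procfs if it is not already mounted
mount -t procfs proc /proc
# resource limits of the running dovecot master process
cat /proc/`cat /var/run/dovecot/master.pid`/rlimit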
Ben
participants (5):
- Angel L. Mateo
- Ben Morrow
- Reindl Harald
- Stan Hoeppner
- Tomáš Randa