[Dovecot] Very High Load on Dovecot 2 and Errors in mail.err.
Hi,
I have a Dell PE R610 (32 GB RAM, 2x six-core CPUs and about 1.4 TB RAID 10) serving about 20,000 mail accounts behind two Dovecot IMAP/POP3 proxies, running on Debian Lenny.
The server ran for about a year without any problems. The 15-minute load average was between 0.5 and at most 8, with no high iowait and about 98% CPU idle time.
But since yesterday morning the system load on the server has climbed to over 500, which I think is very high. The strange thing: there is no iowait and the CPU idle time stays at about 98%.
The total number of IMAP sessions is about 300 - 600.
Current vmstat and iostat:
# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd     free   buff   cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 27040576 635460 3495600    0    0    14    15    2   31  0  0 99  0
 0  0      0 27040320 635468 3496064    0    0   804   455 1383 1281  0  0 98  1
 0  0      0 27047016 634964 3489312    0    0   216   156 1841 1292  1  0 98  1
 0  0      0 27047140 635028 3489012    0    0   240   619 1629 1658  0  0 96  3
 0  0      0 27047264 635120 3489172    0    0    92     0 1069  881  0  0 100 0
 0  0      0 27047388 635120 3489256    0    0     0    46 1404 1265  0  0 100 0
 0  0      0 27047512 635136 3489312    0    0   128   471 1539 1354  0  0 99  1
 0  0      0 27047388 635156 3489384    0    0    12   360 1108  952  0  0 99  0
 0  0      0 27047516 635160 3489408    0    0   104    12  893  677  0  0 99  0
^C
# iostat -k
Linux 3.0.28-vs2.3.2.3-rol-em64t (mailstore4)   16.05.2012   _x86_64_   (24 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0,08    0,00    0,09    0,34    0,00   99,49

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda              67,35       636,28      1080,48   31337690   53215361
dm-0             73,53       591,55       893,93   29134837   44027496
dm-1             15,06        42,40       181,79    2088493    8953661
drbd1            51,21       316,06       277,94   15566325   13689004
drbd0             9,22        23,50        80,38    1157641    3958796
Current load:

 08:55:44 up 14:04,  3 users,  load average: 19,64, 14,47, 10,49

Load is increasing.
The only strange thing I can see is this:

# ps -ostat,pid,time,wchan='WCHAN-xxxxxxxxxxxxxxxxxxxx',cmd ax |grep D
STAT   PID     TIME WCHAN-xxxxxxxxxxxxxxxxxxxx CMD
D    18713 00:00:00 synchronize_srcu           dovecot/imap
D    18736 00:00:00 synchronize_srcu           dovecot/imap
D    18775 00:00:05 synchronize_srcu           dovecot/imap
D    20330 00:00:00 synchronize_srcu           dovecot/imap
D    20357 00:00:00 synchronize_srcu           dovecot/imap
D    20422 00:00:00 synchronize_srcu           dovecot/imap
D    20687 00:00:00 synchronize_srcu           dovecot/imap
S+   20913 00:00:00 pipe_wait                  grep D
There are many imap processes in uninterruptible sleep (D state), and their number is increasing. They seem to be blocked waiting for some event, but I have no idea which event they are waiting for.
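For now I'm watching how fast they pile up with a small loop like this (just a rough sketch; the pattern matches the ps output above):

# count dovecot/imap processes stuck in uninterruptible sleep (D state) every 5 seconds
while true; do
    echo -n "$(date '+%H:%M:%S') D-state imap processes: "
    ps -eo stat,cmd | awk '$1 ~ /^D/ && /dovecot\/imap/' | wc -l
    sleep 5
done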
And many entries like these in the "mail.err" log:

May 16 08:52:11 dcot-rolmail-1.rolmail.net dovecot: master: Error: service(imap): Process 17468 is ignoring idle SIGINT
May 16 08:52:24 dcot-rolmail-1.rolmail.net dovecot: master: Error: service(imap): Process 20307 is ignoring idle SIGINT
May 16 08:52:25 dcot-rolmail-1.rolmail.net dovecot: master: Error: service(imap): Process 20318 is ignoring idle SIGINT
May 16 08:52:26 dcot-rolmail-1.rolmail.net dovecot: master: Error: service(imap): Process 18964 is ignoring idle SIGINT
May 16 08:52:28 dcot-rolmail-1.rolmail.net dovecot: master: Error: service(imap): Process 19244 is ignoring idle SIGINT
May 16 08:54:22 dcot-rolmail-1.rolmail.net dovecot: master: Error: service(imap): Process 21177 is ignoring idle SIGINT
May 16 08:54:41 dcot-rolmail-1.rolmail.net dovecot: master: Error: service(imap): Process 20647 is ignoring idle SIGINT
May 16 08:55:10 dcot-rolmail-1.rolmail.net dovecot: master: Error: service(imap): Process 18836 is ignoring idle SIGINT
May 16 08:55:17 dcot-rolmail-1.rolmail.net dovecot: master: Error: service(imap): Process 18857 is ignoring idle SIGINT
May 16 08:55:19 dcot-rolmail-1.rolmail.net dovecot: master: Error: service(imap): Process 21176 is ignoring idle SIGINT
May 16 08:55:24 dcot-rolmail-1.rolmail.net dovecot: master: Error: service(imap): Process 20688 is ignoring idle SIGINT
May 16 08:55:25 dcot-rolmail-1.rolmail.net dovecot: master: Error: service(imap): Process 20973 is ignoring idle SIGINT
May 16 08:56:44 dcot-rolmail-1.rolmail.net dovecot: master: Error: service(imap): Process 20326 is ignoring idle SIGINT
Do you have any idea how I can troubleshoot this problem?
Some more technical information about the system:

# uname -a
Linux mailstore4 3.0.28-vs2.3.2.3-rol-em64t #1 SMP Thu May 3 09:31:08 CEST 2012 x86_64 GNU/Linux
Dovecot packages installed on the backend:

# dpkg -l |grep dovecot
ii  debian-dovecot-auto-keyring  2010.01.30          GnuPG archive keys of the Automatic Dovecot Debian repository
ii  dovecot-common               2:2.0.13-0~auto+54  secure mail server that supports mbox, maildir, dbox and mdbox mailboxes
ii  dovecot-imapd                2:2.0.13-0~auto+54  secure IMAP server that supports mbox, maildir, dbox and mdbox mailboxes
ii  dovecot-lmtpd                2:2.0.13-0~auto+54  secure LMTP server for Dovecot
ii  dovecot-managesieved         2:2.0.13-0~auto+54  secure ManageSieve server for Dovecot
ii  dovecot-mysql                2:2.0.13-0~auto+54  MySQL support for Dovecot
ii  dovecot-pop3d                2:2.0.13-0~auto+54  secure POP3 server that supports mbox, maildir, dbox and mdbox mailboxes
ii  dovecot-sieve                2:2.0.13-0~auto+54  sieve filters support for Dovecot
Dovecot packages installed on both proxies:

ii  debian-dovecot-auto-keyring  2010.01.30         GnuPG archive keys of the Automatic Dovecot
ii  dovecot-common               2:2.0.17-0~auto+4  Transitional package for dovecot
ii  dovecot-core                 2:2.0.17-0~auto+4  secure mail server that supports mbox, maild
ii  dovecot-gssapi               2:2.0.17-0~auto+4  GSSAPI authentication support for Dovecot
ii  dovecot-imapd                2:2.0.17-0~auto+4  secure IMAP server that supports mbox, maild
ii  dovecot-managesieved         2:2.0.17-0~auto+4  secure ManageSieve server for Dovecot
ii  dovecot-mysql                2:2.0.17-0~auto+4  MySQL support for Dovecot
ii  dovecot-pgsql                2:2.0.17-0~auto+4  PostgreSQL support for Dovecot
ii  dovecot-pop3d                2:2.0.17-0~auto+4  secure POP3 server that supports mbox, maild
ii  dovecot-sieve                2:2.0.17-0~auto+4  sieve filters support for Dovecot
ii  dovecot-sqlite               2:2.0.17-0~auto+4  SQLite support for Dovecot
# doveconf -n
# 2.0.13 (02d97fb66047): /etc/dovecot/dovecot.conf
# OS: Linux 3.0.28-vs2.3.2.3-rol-em64t x86_64 Debian 6.0.2 ext4
auth_cache_negative_ttl = 0
auth_cache_size = 40 M
auth_cache_ttl = 12 hours
auth_mechanisms = plain login
auth_username_format = %Lu
auth_verbose = yes
deliver_log_format = msgid=%m: %$ %p %w
disable_plaintext_auth = no
login_trusted_networks = 195.254.252.207 195.254.252.208 2a02:2e8:0:2:0:143:0:1 2a02:2e8:0:2:0:143:0:2
mail_gid = mailstore
mail_location = mdbox:/home/vmail/%d/%n:INDEX=/home/dovecotindex/%d/%n
mail_plugins = " quota zlib"
mail_uid = mailstore
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave imapflags notify
mdbox_rotate_size = 5 M
passdb {
  args = /etc/dovecot/dovecot-sql-account.conf
  driver = sql
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size from
  mail_log_group_events = no
  quota = dict:Storage used::file:%h/dovecot-quota
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  sieve_extensions = +notify +imapflags
  sieve_max_redirects = 10
  zlib_save = gz
  zlib_save_level = 5
}
protocols = imap pop3 lmtp sieve
service auth-worker {
  process_min_avail = 12
  service_count = 0
  vsz_limit = 512 M
}
service imap-login {
  inet_listener imap {
    port = 143
  }
  process_min_avail = 12
  service_count = 0
  vsz_limit = 256 M
}
service imap {
  process_min_avail = 12
  service_count = 0
  vsz_limit = 512 M
}
service lmtp {
  inet_listener lmtp {
    address = *
    port = 24
  }
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0666
    user = postfix
  }
  vsz_limit = 512 M
}
service pop3-login {
  inet_listener pop3 {
    port = 110
  }
  process_min_avail = 12
  service_count = 0
  vsz_limit = 256 M
}
service pop3 {
  service_count = 0
  vsz_limit = 256 M
}
ssl = no
ssl_cert =
Thanks and regards Urban Loesch
On Wed, 2012-05-16 at 08:59 +0200, Urban Loesch wrote:
The server ran for about a year without any problems. The 15-minute load average was between 0.5 and at most 8, with no high iowait and about 98% CPU idle time.
..
# iostat -k
Linux 3.0.28-vs2.3.2.3-rol-em64t (mailstore4)   16.05.2012   _x86_64_   (24 CPU)
Did you change the kernel just before it broke? I'd try another version.
On 19.05.2012 21:05, Timo Sirainen wrote:
On Wed, 2012-05-16 at 08:59 +0200, Urban Loesch wrote:
The server ran for about a year without any problems. The 15-minute load average was between 0.5 and at most 8, with no high iowait and about 98% CPU idle time.
..
# iostat -k
Linux 3.0.28-vs2.3.2.3-rol-em64t (mailstore4)   16.05.2012   _x86_64_   (24 CPU)
Did you change the kernel just before it broke? I'd try another version.
The first time it broke was with kernel 2.6.38.8-vs2.3.0.37-rc17. Then I tried 3.0.28 and it broke again. On Friday evening I disabled the cgroup feature completely, and so far it seems to be working normally. But that could just be because it is the weekend and there are not many active connections, so I have to wait until Monday. If it happens again I will try version 3.2.17.
On the other hand, it could be that the server is overloaded, because this problem only happens when more than 1000 tasks are active. That sounds strange to me, because the server had been working without problems for a year and we made no changes. There were also almost always more than 1000 tasks active over the last year and we had no problems.
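For reference, I'm counting the active tasks roughly like this (nothing fancy):

# rough count of all tasks/processes on the box
ps -e --no-headers | wc -l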
thanks Urban
I know that you are NOT running RHEL/CentOS, but this problem with 1000+ child processes bit us hard. Read this Red Hat kernel Bugzilla entry (Timo has comments in it):
https://bugzilla.redhat.com/show_bug.cgi?id=681578
Maybe you are hitting the same limit?
Regards
Javier
On 20/05/2012 11:59, Urban Loesch wrote:
On 19.05.2012 21:05, Timo Sirainen wrote:
On Wed, 2012-05-16 at 08:59 +0200, Urban Loesch wrote:
The server ran for about a year without any problems. The 15-minute load average was between 0.5 and at most 8, with no high iowait and about 98% CPU idle time.
..
# iostat -k
Linux 3.0.28-vs2.3.2.3-rol-em64t (mailstore4)   16.05.2012   _x86_64_   (24 CPU)

Did you change the kernel just before it broke? I'd try another version.
The first time it broke was with kernel 2.6.38.8-vs2.3.0.37-rc17. Then I tried 3.0.28 and it broke again. On Friday evening I disabled the cgroup feature completely, and so far it seems to be working normally. But that could just be because it is the weekend and there are not many active connections, so I have to wait until Monday. If it happens again I will try version 3.2.17.
On the other hand, it could be that the server is overloaded, because this problem only happens when more than 1000 tasks are active. That sounds strange to me, because the server had been working without problems for a year and we made no changes. There were also almost always more than 1000 tasks active over the last year and we had no problems.
thanks Urban
Hi Javier,
Thanks for your help.
On 20.05.2012 13:58, Javier Miguel Rodríguez wrote:
I know that you are NOT running RHEL/CentOS, but this problem with 1000+ child processes bit us hard. Read this Red Hat kernel Bugzilla entry (Timo has comments in it):
https://bugzilla.redhat.com/show_bug.cgi?id=681578
Maybe you are hitting the same limit?
Yes, maybe. The only strange thing is that I don't see any errors in my Dovecot logs, no errors like "Panic: epoll_ctl" or anything similar.
I checked my kernel, and the patch mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=681578 (comment 31) is not applied. It was included in versions 3.0.30 and 3.2.17.
I will see what happens tomorrow under more load. If the problem occurs again, I will give 3.2.17 a chance.
thanks Urban
Regards
Javier
On 20/05/2012 11:59, Urban Loesch wrote:
On 19.05.2012 21:05, Timo Sirainen wrote:
On Wed, 2012-05-16 at 08:59 +0200, Urban Loesch wrote:
The server ran for about a year without any problems. The 15-minute load average was between 0.5 and at most 8, with no high iowait and about 98% CPU idle time.
..
# iostat -k
Linux 3.0.28-vs2.3.2.3-rol-em64t (mailstore4)   16.05.2012   _x86_64_   (24 CPU)

Did you change the kernel just before it broke? I'd try another version.
The first time it broke was with kernel 2.6.38.8-vs2.3.0.37-rc17. Then I tried 3.0.28 and it broke again. On Friday evening I disabled the cgroup feature completely, and so far it seems to be working normally. But that could just be because it is the weekend and there are not many active connections, so I have to wait until Monday. If it happens again I will try version 3.2.17.
On the other hand, it could be that the server is overloaded, because this problem only happens when more than 1000 tasks are active. That sounds strange to me, because the server had been working without problems for a year and we made no changes. There were also almost always more than 1000 tasks active over the last year and we had no problems.
thanks Urban
On May 20 16:29, Urban Loesch wrote:
I checked my kernel, and the patch mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=681578 (comment 31) is not applied. It was included in versions 3.0.30 and 3.2.17.
I will see what happens tomorrow under more load. If the problem occurs again, I will give 3.2.17 a chance.
We've seen similar behavior on a similar system with a similar workload.
We've tried 3.0.31 - after the epoll patch was applied upstream - without seeing a difference. Right now we're running 3.3.7 with vs2.3.3.4, which has reduced the problem quite a bit but has not eliminated it completely.
Stracing the processes in D state from before they hang has just revealed something interesting, however, pointing to an issue with inotify rather than epoll.
[snip]
[...]
15414 23:27:36 inotify_init() = 12 <0.000024>
[...]
15414 23:27:36 close(12
15414 23:28:51 <... close resumed> ) = 0 <74.593917>
15414 23:28:51 close(9
15414 23:28:51 <... close resumed> ) = 0 <0.000080>
15414 23:28:51 exit_group(0) = ?
[/snip]
In short, as far as we can tell, all the processes in D state appear to be waiting to close the file handle they got from their inotify_init(), and eventually all these close()s go through almost simultaneously.
Right now we're trawling for locking issues related to inotify, with our focus mainly on the VServer patch set. I would very much appreciate updates on your - or anyone else's - findings and progress.
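For reference, the trace excerpts above were collected with an invocation roughly along these lines (a sketch of the approach, not the exact command we used):

# attach to a dovecot/imap process before it exits, following forks (-f),
# with wall-clock timestamps (-t) and per-syscall durations (-T)
strace -f -t -T -o /tmp/imap.strace -p <pid of a dovecot/imap process>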
Yours,
Jesper Nyerup.
On 11.6.2012, at 11.09, Jesper Dahl Nyerup wrote:
Stracing the processes in D state from before they hang has just revealed something interesting, however, pointing to an issue with inotify rather than epoll.
[snip]
[...]
15414 23:27:36 inotify_init() = 12 <0.000024>
[...]
15414 23:27:36 close(12
15414 23:28:51 <... close resumed> ) = 0 <74.593917>
15414 23:28:51 close(9
15414 23:28:51 <... close resumed> ) = 0 <0.000080>
15414 23:28:51 exit_group(0) = ?
[/snip]

In short, as far as we can tell, all the processes in D state appear to be waiting to close the file handle they got from their inotify_init(), and eventually all these close()s go through almost simultaneously.
Yeah. Looks like a kernel bug. You could try if it goes away by disabling inotify in Dovecot. Either recompile with "configure --with-notify=none" or maybe you can disable inotify globally with:
echo 0 > /proc/sys/fs/inotify/max_user_watches
echo 0 > /proc/sys/fs/inotify/max_user_instances
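If that helps and you want it to survive a reboot, something like this in /etc/sysctl.conf should work (assuming the sysctl names match the /proc paths above, which they normally do):

# disable inotify globally
fs.inotify.max_user_watches = 0
fs.inotify.max_user_instances = 0

and then load it with "sysctl -p".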
On Jun 11 14:51, Timo Sirainen wrote:
On 11.6.2012, at 11.09, Jesper Dahl Nyerup wrote:
In short, as far as we can tell, all the processes in D state appear to be waiting to close the file handle they got from their inotify_init(), and eventually all these close()s go through almost simultaneously.
Yeah. Looks like a kernel bug. You could try if it goes away by disabling inotify in Dovecot. Either recompile with "configure --with-notify=none" or maybe you can disable inotify globally with:
echo 0 > /proc/sys/fs/inotify/max_user_watches
echo 0 > /proc/sys/fs/inotify/max_user_instances
I can confirm that this removes the symptoms, and that it doesn't affect the service. Obviously IDLEing users are now only notified upon polling of the file system, but the I/O overhead of doing this seems minimal.
It may be important to note that even though the load on our servers surpasses 2000, both Dovecot and the server as a whole remain responsive and continue servicing requests, right up until Dovecot reaches its configured maximum number of child processes.
We're still chasing the root cause in the kernel or the VServer patch set. We'll of course make sure to post our findings here, and I'd very much appreciate hearing about other people's progress.
Jesper.
On 12.6.2012, at 0.37, Jesper Dahl Nyerup wrote:
Yeah. Looks like a kernel bug. You could try if it goes away by disabling inotify in Dovecot. Either recompile with "configure --with-notify=none" or maybe you can disable inotify globally with:
echo 0 > /proc/sys/fs/inotify/max_user_watches
echo 0 > /proc/sys/fs/inotify/max_user_instances
I can confirm that this removes the symptoms, and that it doesn't affect the service. Obviously IDLEing users are now only notified upon polling of the file system, but the I/O overhead of doing this seems minimal.
It actually doesn't increase I/O overhead at all. Dovecot always does polling, even with inotify, since inotify doesn't necessarily work with shared filesystems (e.g. NFS). The main difference is that users don't get immediate notifications of new mails now, but have to wait for mailbox_idle_check_interval.
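If the notification delay becomes an issue, you could lower it in dovecot.conf, trading a bit more polling for faster notifications, for example:

# interval between new-mail checks for IDLEing clients (default: 30 secs)
mailbox_idle_check_interval = 10 secs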
On Jun 11 23:37, Jesper Dahl Nyerup wrote:
We're still chasing the root cause in the kernel or the VServer patch set. We'll of course make sure to post our findings here, and I'd very much appreciate hearing about other people's progress.
We still haven't found a solution, but here's what we've got thus far:
The issue is not VServer specific. We're able to reproduce it on recent vanilla kernels.
The issue has a strong correlation with the number of processor cores in the machine. The behavior is impossible to provoke on a dual core workstation, but is very widespread on 16 or 24 core machines.
One of my colleagues has written a snippet of code that reproduces and exposes the problem, and we've sent this to the Inotify maintainers and the kernel mailing list, hoping that someone more familiar with the code will be quicker to figure out what is broken.
If anyone's interested - either in following the issue or the code snippet that reproduces it - here's the post: http://thread.gmane.org/gmane.linux.kernel/1315430
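For anyone who just wants to hammer the same inotify create/close pattern without building the reproducer, a crude shell approximation could look like this (hypothetical sketch, not the posted test case; needs inotify-tools installed):

# spawn many short-lived processes that each set up and tear down an inotify
# instance, similar to what an exiting imap process does
for i in $(seq 1 2000); do
    ( inotifywait -q -t 1 /tmp >/dev/null 2>&1 ) &
done
wait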
As this is clearly a kernel issue, we're going to try to keep the discussion there, and I'll probably not follow up here, until the issue has been resolved.
Jesper.
Hi,
Yesterday I disabled inotify as mentioned in the previous post, and it works for me as well. Thanks to all for the hint.
On 20.06.2012 08:35, Jesper Dahl Nyerup wrote:
On Jun 11 23:37, Jesper Dahl Nyerup wrote:
We're still chasing the root cause in the kernel or the VServer patch set. We'll of course make sure to post our findings here, and I'd very much appreciate hearing about other people's progress.
We still haven't found a solution, but here's what we've got thus far:
The issue is not VServer specific. We're able to reproduce it on recent vanilla kernels.
The issue has a strong correlation with the number of processor cores in the machine. The behavior is impossible to provoke on a dual core workstation, but is very widespread on 16 or 24 core machines.
For the record: I have the problem on two different machines with different CPUs:
- PE2950 with 2x Intel Xeon X5450 3.00GHz (8 cores); the problem happens less often here than on the PE R610
- PE R610 with 2x Intel Xeon X5650 2.67GHz (24 cores)
One of my colleagues has written a snippet of code that reproduces and exposes the problem, and we've sent this to the Inotify maintainers and the kernel mailing list, hoping that someone more familiar with the code will be quicker to figure out what is broken.
If anyone's interested - either in following the issue or the code snippet that reproduces it - here's the post: http://thread.gmane.org/gmane.linux.kernel/1315430
I can confirm what you described on the kernel mailing list: the higher the number of CPUs, the worse it gets.
As this is clearly a kernel issue, we're going to try to keep the discussion there, and I'll probably not follow up here, until the issue has been resolved.
Jesper.
Thanks Urban
participants (4)
- Javier Miguel Rodríguez
- Jesper Dahl Nyerup
- Timo Sirainen
- Urban Loesch