[Dovecot] severe performance problem (mail cache related?)

Dan Price dp at eng.sun.com
Fri Feb 16 09:06:06 UTC 2007


On Fri 16 Feb 2007 at 12:20AM, Dan Price wrote:
> So here we see dovecot allocate a whopping 12MB of anonymous memory,
> then munmap it all away in 1MB chunks.
> 
> Timo, do you know why this is happening?  It is ruining performance for me.

More data: watching another user suffer from this, and looking at just the
mmap calls, I see:

                   vvvvvvvv
mmap64(0x00000000, 16973824, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON...
mmap64(0x00000000, 16982016, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON,...
mmap64(0x00000000, 16990208, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON,...
mmap64(0x00000000, 16998400, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON,...
....
mmap64(0x00000000, 20602880, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON,...
mmap64(0x00000000, 20611072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON,...
mmap64(0x00000000, 20619264, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON,...
...
mmap64(0x00000000, 25067520, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON...
...

So this sure looks serious-- each call is exactly 8192 bytes larger than
the last, and it's almost like a memory leak.

Using DTrace I was able to pinpoint the stack trace of the mmap
system call:

$ dtrace -n 'syscall::mmap64:entry/pid==164794/{ustack()}'
 12  52690                     mmap64:entry 
              libc.so.1`mmap64+0xc
              imap`mremap_anon+0xb8
              imap`file_cache_set_size+0xe0
              imap`file_cache_read+0xa4
              imap`mail_cache_map+0xec
              imap`mail_cache_get_record+0x20
              imap`mail_cache_foreach_rec+0x10
              imap`mail_cache_foreach+0xb4
              imap`mail_cache_field_exists+0x84
              imap`mail_cache_lookup_field+0xc
              imap`index_mail_get_fixed_field+0x4c
              imap`index_mail_set_seq+0xbc
              imap`mail_set_seq+0xc
              imap`index_storage_search_next+0x70
              imap`mailbox_search_next+0x14
              imap`cmd_search+0xf4
              imap`client_handle_input+0x138
              imap`client_handle_input+0xb0
              imap`_client_input+0x68
              imap`io_loop_handler_run+0x140

I also have a complete trace of the flow of control between the return from one
call to mmap and the start of the next, but it's 165K, so I have not attached
it.

The user reported to me that pine was "doing a selection" or "doing a search"
when this happened.

        -dp

-- 
Daniel Price - Solaris Kernel Engineering - dp at eng.sun.com - blogs.sun.com/dp
