Thanks very much, Urban. This was very helpful.
I have around 12,500 users spread over 3 independent servers, each with around 4,000+ users. I am using QmailToaster, vpopmail, SpamAssassin and Dovecot.
In the future I am planning to consolidate everything into an HA cluster.
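For context, what I have in mind is roughly a Dovecot director frontend in front of the mail-store backends. This is only a rough sketch of the standard 2.2 director setup; the IPs are placeholders and it is untested:

director_servers = 10.1.0.2 10.1.0.3
director_mail_servers = 10.2.0.10 10.2.0.11 10.2.0.12
service director {
  fifo_listener login/proxy-notify {
    mode = 0666
  }
  inet_listener {
    port = 9090
  }
  unix_listener director-userdb {
    mode = 0600
  }
  unix_listener login/director {
    mode = 0666
  }
}
service imap-login {
  executable = imap-login director
}
service pop3-login {
  executable = pop3-login director
}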
If it is OK with you, could you kindly share some information about your email server configuration? If you do not wish to post it on the list, you can email me directly.
- Is your email volume high?
- What server hardware supports the 28,000 users?
- Mail server software: Exim or Postfix?
- Any antispam software, such as SpamAssassin?
Also, have you faced any random email re-download issues with Dovecot for POP3 users who keep their emails on the server?
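From my own reading, unstable UIDLs are a common cause of POP3 re-downloads. I am not sure that applies here, but the first setting I would check is the UIDL format, e.g. pinning it explicitly (the value below is just the documented 2.2 default):

pop3_uidl_format = %08Xu%08Xv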
Thanks,
Rajesh
----- Original Message -----
From: Urban Loesch [mailto:bind@enas.net]
To: dovecot@dovecot.org
Sent: Sun, 13 Sep 2015 09:33:14 +0200
Subject: Re: concerning dovecot settings for high volume server
Hi,
I am running Dovecot with about 28k users. Here is my relevant config for POP3 and IMAP from "doveconf -n". No problems so far.
-- snip --
default_client_limit = 2000
...
service imap-login {
  inet_listener imap {
    port = 143
  }
  process_limit = 256
  process_min_avail = 50
  service_count = 1
}
service imap {
  process_limit = 2048
  process_min_avail = 50
  service_count = 1
  vsz_limit = 512 M
}
...
service pop3-login {
  inet_listener pop3 {
    port = 110
  }
  process_limit = 256
  process_min_avail = 25
  service_count = 1
}
service pop3 {
  process_limit = 256
  process_min_avail = 25
  service_count = 1
}
...
protocol imap {
  imap_client_workarounds = tb-extra-mailbox-sep
  imap_id_log = *
  imap_logout_format = bytes=%i/%o session=<%{session}>
  mail_max_userip_connections = 40
  mail_plugins = " quota mail_log notify zlib imap_quota imap_zlib"
}
...
protocol pop3 {
  mail_plugins = " quota mail_log notify zlib"
  pop3_logout_format = bytes_sent=%o top=%t/%p, retr=%r/%b, del=%d/%m, size=%s uidl_hash=%u session=<%{session}>
}
-- snip --
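Note that with service_count = 1 each imap/pop3 process serves exactly one connection, so process_limit is the effective ceiling on concurrent sessions:

# service imap { process_limit = 2048 } -> up to ~2048 concurrent IMAP sessions
# service pop3 { process_limit = 256 }  -> up to ~256 concurrent POP3 sessions
# process_min_avail keeps that many processes pre-forked to absorb login bursts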
Regards,
Urban
On 12.09.2015 at 20:53, Rajesh M wrote:
Hi,
CentOS 6, 64-bit.
Hex-core processor with hyperthreading (i.e. 12 cores shown), 16 GB RAM, 600 GB 15,000 RPM drive.
We have around 4,000 users on a server.
I wish to allow 1,500 POP3 and 1,500 IMAP connections simultaneously.
I need help with the settings to handle the above:
the imap-login, pop3-login, imap and pop3 service settings.
I recently got this error:
imap-login: Error: read(imap) failed: Remote closed connection (process_limit reached?)
My current Dovecot config file:
# 2.2.7: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-431.23.3.el6.x86_64 x86_64 CentOS release 6.5 (Final)
auth_cache_negative_ttl = 0
auth_cache_ttl = 0
auth_mechanisms = plain login digest-md5 cram-md5
default_login_user = vpopmail
disable_plaintext_auth = no
first_valid_gid = 89
first_valid_uid = 89
log_path = /var/log/dovecot.log
login_greeting = ready.
mail_max_userip_connections = 50
mail_plugins = " quota"
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
namespace {
  inbox = yes
  location =
  prefix =
  separator = .
  type = private
}
passdb {
  args = cache_key=%u webmail=127.0.0.1
  driver = vpopmail
}
plugin {
  quota = maildir:ignore=Trash
  quota_rule = ?:storage=0
}
protocols = imap pop3
service imap-login {
  client_limit = 256
  process_limit = 400
  process_min_avail = 4
  service_count = 0
  vsz_limit = 512 M
}
service pop3-login {
  client_limit = 1000
  process_limit = 400
  process_min_avail = 12
  service_count = 0
  vsz_limit = 512 M
}
ssl_cert = </var/qmail/control/servercert.pem
ssl_dh_parameters_length = 2048
ssl_key = </var/qmail/control/servercert.pem
userdb {
  args = cache_key=%u quota_template=quota_rule=*:backend=%q
  driver = vpopmail
}
protocol imap {
  imap_client_workarounds = delay-newmail
  mail_plugins = " quota imap_quota"
}
protocol pop3 {
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
  pop3_fast_size_lookups = yes
  pop3_lock_session = no
  pop3_no_flag_updates = yes
}
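From the above I notice that I have no explicit "service imap" or "service pop3" blocks, so I assume the post-login services run with the default process_limit of 1024, which would explain the "process_limit reached?" error once sessions exceed that. Would something along these lines be the right direction? A sketch only; the numbers are my guesses sized for the 1,500 + 1,500 target:

service imap {
  process_limit = 2048  # headroom above the 1,500 concurrent IMAP target
}
service pop3 {
  process_limit = 2048  # headroom above the 1,500 concurrent POP3 target
}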
Thanks very much,
Rajesh