From stan at hardwarefreak.com Sun Jan 1 00:12:00 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Sat, 31 Dec 2011 16:12:00 -0600 Subject: [Dovecot] imap process limits problem In-Reply-To: <4EFE6367.5000408@hardwarefreak.com> References: <4EFE6367.5000408@hardwarefreak.com> Message-ID: <4EFF88B0.6090908@hardwarefreak.com> On 12/30/2011 7:20 PM, Stan Hoeppner wrote: > Just out of curiosity, have you tried the non > one-login-process-per-connection setup? > > login_process_size = 64 > login_process_per_connection = yes Correction. This should be 'no' ^^^ > login_processes_count = 3 > login_max_processes_count = 128 > login_max_connections = 256 > > Season values to taste. -- Stan From tlx at leuxner.net Sun Jan 1 11:31:22 2012 From: tlx at leuxner.net (Thomas Leuxner) Date: Sun, 1 Jan 2012 10:31:22 +0100 Subject: [Dovecot] Some Doveadm Tools lack proper exit codes Message-ID: <172CBBB1-DEFD-42E6-937E-B625FB9028EF@leuxner.net> Happy New Year Everyone, and *yes* it's that time of the year to archive old stuff again. Please implement proper error codes to support this (scripting) endeavor. => Good $ doveadm user foo userdb lookup: user foo doesn't exist $ echo $? 2 => Bad $ doveadm acl get -u tlx at leuxner.net FOO doveadm(tlx at leuxner.net): Error: Can't open mailbox FOO: Mailbox doesn't exist: FOO ID Global Rights $ echo $? 0 $ doveconf -n | head # 2.1.rc1 (056934abd2ef): /etc/dovecot/dovecot.conf # OS: Linux 2.6.32-5-amd64 x86_64 Debian 6.0.3 Thanks Thomas From gedalya at gedalya.net Sun Jan 1 18:48:06 2012 From: gedalya at gedalya.net (Gedalya) Date: Sun, 01 Jan 2012 11:48:06 -0500 Subject: [Dovecot] Trouble with proxy_maybe and auth_default_realm In-Reply-To: <4EFCA20F.10107@gedalya.net> References: <4EFCA20F.10107@gedalya.net> Message-ID: <4F008E46.6040000@gedalya.net> On 12/29/2011 12:23 PM, Gedalya wrote: > Hello, > > I'm using proxy_maybe and auth_default_realm. It seems that when a > user logs in without the domain name, relying on auth_default_realm, > and the "host" field points to the local server, I get the Proxying > loops to itself error. It does work as expected - log on to the local > server without proxying, if the user does include the domain name in > the login. > > (IP's and domain name masked below) > > No domain: > > Dec 29 11:49:07 imap01 dovecot: pop3-login: Error: Proxying loops to > itself: user=, method=PLAIN, rip=00.00.52.18, > lip=00.00.241.140 > Dec 29 11:49:27 imap01 dovecot: pop3-login: Disconnected (auth failed, > 1 attempts): user=, method=PLAIN, rip=00.00.52.18, > lip=00.00.241.140 > > With domain: > > Dec 29 11:52:13 imap01 dovecot: pop3-login: Login: > user=, method=PLAIN, rip=00.00.52.18, lip=00.00.241.140, > mpid=19969 > Dec 29 11:52:18 imap01 dovecot: pop3(jedi at ---.com): Disconnected: > Logged out top=0/0, retr=0/0, del=0/1, size=731 > > Otherwise, e.g. when the proxy host is indeed another host, > auth_default_domain works fine, including or not including the domain > seems to make no difference, and everything works.
> > I'm using mysql, and I'm able to get around this problem by including the > following in the password query: > IF(host='' or host='00.00.241.140', NULL, 'Y') as proxy_maybe > > # dovecot --version > 2.0.15 > > # dovecot -n > # 2.0.15: /etc/dovecot/dovecot.conf > # OS: Linux 2.6.32-5-amd64 x86_64 Debian 6.0.3 > auth_default_realm = ----.com > auth_mechanisms = plain login cram-md5 ntlm > auth_username_format = %Lu > auth_verbose = yes > auth_verbose_passwords = plain > dict { > quota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext > } > disable_plaintext_auth = no > login_greeting = How can I help you? > mail_gid = vmail > mail_uid = vmail > passdb { > args = /etc/dovecot/dovecot-sql.conf.ext > driver = sql > } > protocols = imap pop3 lmtp > service lmtp { > inet_listener lmtp { > address = 0.0.0.0 > port = 7025 > } > } > ssl_cert = ssl_key = userdb { > driver = prefetch > } > userdb { > args = /etc/dovecot/dovecot-sql.conf.ext > driver = sql > } > verbose_proctitle = yes > > ----- dovecot-sql.conf.ext ---- > driver = mysql > connect = host=localhost dbname=email user=email > default_pass_scheme = PLAIN > password_query = SELECT password, \ > IF('%s' = 'pop3', host_pop3, host) as host, \ > IF(host='' or host='00.00.241.140', NULL, 'Y') as proxy_maybe, \ > concat(userid, '@', domain) as destuser, \ > password as pass, \ > '/stor/mail/domains/%d/%n' AS userdb_home, \ > 'maildir:/stor/mail/domains/%d/%n/Maildir' as userdb_mail, \ > concat('*:storage=', quota_mb, 'M') as userdb_quota_rule, \ > 'vmail' AS userdb_uid, 'vmail' AS userdb_gid \ > FROM email WHERE userid = '%n' AND domain = '%d' > user_query = SELECT '/stor/mail/domains/%d/%n' AS home, \ > 'maildir:/stor/mail/domains/%d/%n/Maildir' as mail, \ > concat('*:storage=', quota_mb, 'M') as quota_rule, \ > 'vmail' AS uid, 'vmail' AS gid \ > FROM email WHERE userid = '%n' AND domain = '%d' > > OK, it turns out the problem went away when I removed the destuser field from the password query - it turned out to be unnecessary anyhow. My requirements are to allow users to log in using non-plaintext mechanisms such as CRAM-MD5, while my IMAP backends are non-dovecot and do not have a master user feature. Passwords are stored in the database in plaintext, and presumably what I need to do is fetch the plaintext password from the database and simply use the user's own username and password when logging in to the backend. The wiki page on this subject only discusses a master-user setup, and my misunderstanding of that page led me to think I needed the destuser field. This turns out to be a simple setup - the only proxy-specific field involved is the "pass" field - and it should probably be documented on the wiki. Either way, proxy_maybe doesn't work with auth_default_realm and destuser, even if destuser ends up containing the exact same username that would be constructed by auth_default_realm.
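For anyone wanting to replicate the setup Gedalya describes, a minimal pass-only password_query might look like the sketch below. It is adapted from the query quoted above (same table, columns, and masked IP as in his post; none of these names are canonical): the stored plaintext password is returned both as password (used to verify the client's CRAM-MD5 response) and as pass (the credential forwarded to the backend when proxying), and destuser is omitted entirely.

password_query = SELECT password, \
  IF('%s' = 'pop3', host_pop3, host) AS host, \
  IF(host = '' OR host = '00.00.241.140', NULL, 'Y') AS proxy_maybe, \
  password AS pass \
  FROM email WHERE userid = '%n' AND domain = '%d'

Returning NULL for proxy_maybe disables proxying for that row, which is the workaround described above for the "Proxying loops to itself" error.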
From janfrode at tanso.net Sun Jan 1 21:59:07 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Sun, 1 Jan 2012 20:59:07 +0100 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name Message-ID: <20120101195907.GA21500@dibs.tanso.net> I'm in the process of running our first dsync backup of all users (from maildir to mdbox on a remote server), and one problem I'm hitting is that dsync works fine on the first run for some users, and then reliably fails whenever I try a new run: $ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net $ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net dsync-remote(janfrode at example.net): Error: Can't delete mailbox directory INBOX/a: Mailbox has children, delete them first The problem here seems to be that this user has a maildir named ".a.b". On the backup side I see this as "a/b/". So dsync doesn't quite seem to agree with itself on how to handle folders with a dot in the name. -jf From rnabioullin at gmail.com Mon Jan 2 03:32:38 2012 From: rnabioullin at gmail.com (Ruslan Nabioullin) Date: Sun, 01 Jan 2012 20:32:38 -0500 Subject: [Dovecot] Multiple Maildirs per Virtual User Message-ID: <4F010936.7080107@gmail.com> How would it be possible to configure dovecot (2.0.16) in such a way that it would serve several maildirs (e.g., INBOX, INBOX.Drafts, INBOX.Sent, forum_email, [Gmail].Trash, etc.) per virtual user? I am only able to specify a single maildir, but I want all maildirs in /home/my-username/mail/account1/ to be served. e.g., /etc/dovecot/passwd: my-username_account1:{PLAIN}password:my-username:my-group::::userdb_mail=maildir:/home/my-username/mail/account1/INBOX Thanks in advance, Ruslan -- Ruslan Nabioullin rnabioullin at gmail.com From Juergen.Obermann at hrz.uni-giessen.de Mon Jan 2 16:33:07 2012 From: Juergen.Obermann at hrz.uni-giessen.de (Jürgen Obermann) Date: Mon, 02 Jan 2012 15:33:07 +0100 Subject: [Dovecot] error bad file number with compressed mbox files Message-ID: <77e69f67dbffe67a6205ed1de7d2d0df@imapproxy.hrz> Hello, can dsync convert from compressed mbox to compressed mdbox format? When I use compressed mbox files, either with gzip or with bzip2, I can read the mails as usual, but I find the following errors in Dovecot's log file: imap(userxy): Error: nfs_flush_fcntl: fcntl(/home/hrz/userxy/Mail/mymbox.gz, F_RDLCK) failed: Bad file number imap(userxy): Error: nfs_flush_fcntl: fcntl(/home/hrz/userxy/Mail/mymbox.bz2, F_RDLCK) failed: Bad file number These errors also appear when I use dsync to convert the compressed mbox to mdbox format on a second dovecot server: /opt/local/bin/dsync -v -u userxy backup mdbox:/sanpool/mail/home/hrz/userxy/mdbox dsync(userxy): Error: nfs_flush_fcntl: fcntl(/home/hrz/userxy/Mail/mymbox.gz, F_RDLCK) failed: Bad file number But now dovecot does not find the mails in the folder mymbox.gz on the second dovecot server in mdbox format!
The relevant part of the dovecot configuration is: # 2.0.16: /opt/local/etc/dovecot/dovecot.conf # OS: SunOS 5.10 sun4v mail_fsync = always mail_location = mbox:~/Mail:INBOX=/var/mail/%u mail_nfs_index = yes mail_nfs_storage = yes mail_plugins = mail_log notify zlib mmap_disable = yes Thank you, -- Jürgen Obermann Hochschulrechenzentrum der Justus-Liebig-Universität Gießen Heinrich-Buff-Ring 44 Tel. 0641-9913054 From CMarcus at Media-Brokers.com Mon Jan 2 16:51:00 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Mon, 02 Jan 2012 09:51:00 -0500 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <20120101195907.GA21500@dibs.tanso.net> References: <20120101195907.GA21500@dibs.tanso.net> Message-ID: <4F01C454.8030701@Media-Brokers.com> On 2012-01-01 2:59 PM, Jan-Frode Myklebust wrote: > I'm in the process of running our first dsync backup of all users > (from maildir to mdbox on a remote server), and one problem I'm hitting > is that dsync works fine on the first run for some users, and then > reliably fails whenever I try a new run: > > $ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net > $ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net > dsync-remote(janfrode at example.net): Error: Can't delete mailbox directory INBOX/a: Mailbox has children, delete them first > > The problem here seems to be that this user has a maildir named > ".a.b". On the backup side I see this as "a/b/". > > So dsync doesn't quite seem to agree with itself on how to handle > folders with a dot in the name. dovecot -n output? What are you using for the namespace hierarchy separator? http://wiki2.dovecot.org/Namespaces -- Best regards, Charles From janfrode at tanso.net Mon Jan 2 17:11:00 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 2 Jan 2012 16:11:00 +0100 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <4F01C454.8030701@Media-Brokers.com> References: <20120101195907.GA21500@dibs.tanso.net> <4F01C454.8030701@Media-Brokers.com> Message-ID: <20120102151059.GA10419@dibs.tanso.net> On Mon, Jan 02, 2012 at 09:51:00AM -0500, Charles Marcus wrote: > > dovecot -n output? What are you using for the namespace hierarchy separator? I have the folder format's default separator (maildir "."), but still dovecot creates directories named ".a.b".
On the receiving dsync server: ===================================================================== $ dovecot -n # 2.0.14: /etc/dovecot/dovecot.conf mail_location = mdbox:~/mdbox mail_plugins = zlib mdbox_rotate_size = 5 M passdb { driver = static } plugin { zlib_save = gz zlib_save_level = 9 } protocols = service auth-worker { user = $default_internal_user } service auth { unix_listener auth-userdb { mode = 0600 user = mailbackup } } ssl = no userdb { args = home=/srv/mailbackup/%256Hu/%d/%n driver = static } On the POP/IMAP server: ===================================================================== $ doveconf -n # 2.0.14: /etc/dovecot/dovecot.conf auth_cache_size = 100 M auth_verbose = yes auth_verbose_passwords = sha1 disable_plaintext_auth = no login_trusted_networks = 192.168.0.0/16 mail_gid = 3000 mail_location = maildir:~/:INDEX=/indexes/%1u/%1.1u/%u mail_plugins = quota zlib mail_uid = 3000 maildir_stat_dirs = yes maildir_very_dirty_syncs = yes managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date mmap_disable = yes namespace { inbox = yes location = prefix = INBOX. type = private } passdb { args = /etc/dovecot/dovecot-ldap.conf.ext driver = ldap } plugin { quota = maildir:UserQuota sieve = /sieve/%1u/%1.1u/%u/.dovecot.sieve sieve_dir = /sieve/%1u/%1.1u/%u sieve_max_script_size = 1M zlib_save = gz zlib_save_level = 6 } postmaster_address = postmaster at example.net protocols = imap pop3 lmtp sieve service auth-worker { user = $default_internal_user } service auth { client_limit = 4521 unix_listener auth-userdb { group = mode = 0600 user = atmail } } service imap-login { inet_listener imap { address = * port = 143 } process_min_avail = 4 service_count = 0 vsz_limit = 1 G } service imap-postlogin { executable = script-login /usr/local/sbin/imap-postlogin.sh } service imap { executable = imap imap-postlogin process_limit = 2048 } service lmtp { client_limit = 1 inet_listener lmtp { address = * port = 24 } process_limit = 25 } service managesieve-login { inet_listener sieve { address = * port = 4190 } service_count = 1 } service pop3-login { inet_listener pop3 { address = * port = 110 } process_min_avail = 4 service_count = 0 vsz_limit = 1 G } service pop3-postlogin { executable = script-login /usr/local/sbin/pop3-postlogin.sh } service pop3 { executable = pop3 pop3-postlogin process_limit = 2048 } ssl = no userdb { args = /etc/dovecot/dovecot-ldap.conf.ext driver = ldap } protocol lmtp { mail_plugins = quota zlib sieve } protocol imap { imap_client_workarounds = delay-newmail mail_plugins = quota zlib imap_quota } protocol pop3 { mail_plugins = quota zlib pop3_client_workarounds = outlook-no-nuls oe-ns-eoh pop3_uidl_format = UID%u-%v } protocol sieve { managesieve_logout_format = bytes=%i/%o } -jf From preacher_net at gmx.net Mon Jan 2 18:17:10 2012 From: preacher_net at gmx.net (Preacher) Date: Mon, 02 Jan 2012 17:17:10 +0100 Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration Message-ID: <4F01D886.6070905@gmx.net> I have a mail server running Debian 6.0 with Courier IMAP to store project-related mail. Currently the maildir of the archive (one user) contains about 37GB of data. Our staff accesses the archive via Outlook 2007, dragging messages from their Exchange inbox or sent items into it.
The problem with courier is that it sometimes mixes up headers with message bodies, so I wanted to migrate to dovecot. I tried this on my proxy running Debian 7.0 with some test data and this worked fine (OK, spent some hours to get the config files done - Dovecot without authentication). The Dovecot version here is 2.0.15. I tried it with our production system today, but it has Dovecot 1.2.15 installed on Debian 6.0. The config files and parameters I took from my test system were not compatible and I didn't get it to work. So I was forced to install the Debian 7.0 packages with 2.0.15 and finally got the server running; I also restarted the whole machine to empty caches. But the problem I got was that in the huge folder hierarchy the downloaded headers in the individual folders disappeared, some folders showed a few very old messages, some none. Also some subfolders disappeared. I checked this with Outlook and Thunderbird. The difference was that Thunderbird shows more messages (but not all) than Outlook in some folders, but also none in some others. Outlook brought up a message in some cases that the connection timed out, although I set the timeout to 60s. After being frustrated I uninstalled dovecot, went back to Courier, and folder contents are displayed correctly again. Anyone a clue what's wrong here? Finally some config information: proxy-server:~# dovecot -n # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid auth_debug_passwords = yes auth_mechanisms = plain login disable_plaintext_auth = no namespace { inbox = yes location = prefix = INBOX. separator = . type = private } passdb { driver = pam } plugin { sieve = ~/.dovecot.sieve sieve_dir = ~/sieve } protocols = imap ssl = no ssl_cert = From stan at hardwarefreak.com (Stan Hoeppner) Subject: Re: [Dovecot] Problem with huge IMAP Archive after Courier migration In-Reply-To: <4F01D886.6070905@gmx.net> References: <4F01D886.6070905@gmx.net> Message-ID: <4F020328.7090303@hardwarefreak.com> On 1/2/2012 10:17 AM, Preacher wrote: ... > So I was forced to install the Debian 7.0 packages with 2.0.15 and finally > got the server running; I also restarted the whole machine to empty caches. > But the problem I got was that in the huge folder hierarchy the > downloaded headers in the individual folders disappeared, some folders > showed a few very old messages, some none. Also some subfolders > disappeared. > I checked this with Outlook and Thunderbird. The difference was that > Thunderbird shows more messages (but not all) than Outlook in some > folders, but also none in some others. Outlook brought up a message in > some cases that the connection timed out, although I set the timeout to > 60s. ... > Anyone a clue what's wrong here? Absolutely. What's wrong is a lack of planning, self education, and patience on the part of the admin. Dovecot gets its speed from its indexes. How long do you think it takes Dovecot to index 37GB of maildir messages, many thousands per directory, hundreds of directories, millions of files total? Until those indexes are built you will not see a complete folder tree and all kinds of stuff will be missing. For your education: Dovecot indexes every message and these indexes are the key to its speed. Normally indexing occurs during delivery when using deliver or lmtp, so the index updates are small and incremental, keeping performance high. You tried to do this and expected Dovecot to instantly process it all: http://www.youtube.com/watch?v=THVz5aweqYU If you don't know, that's a coal train car being dumped. 100 tons of coal in a few seconds. Visuals are always good teaching tools. I think this drives the point home rather well.
-- Stan From mpapet at yahoo.com Tue Jan 3 08:48:15 2012 From: mpapet at yahoo.com (Michael Papet) Date: Mon, 2 Jan 2012 22:48:15 -0800 (PST) Subject: [Dovecot] Newbie: LDA Isn't Logging Message-ID: <1325573295.74202.YahooMailClassic@web125405.mail.ne1.yahoo.com> Hi, I'm a newbie having some trouble getting deliver to log anything. Related to this, there are no return values unless the -d is missing. I'm using LDAP to store virtual domain and user account information. Test #1: /usr/lib/dovecot/deliver -e -f mpapet at yahoo.com -d zed at mailswansong.dom < bad.mail Expected result: supposed to fail, since there's no zed account via ldap lookup, and supposed to return an exit code per the wiki at http://wiki2.dovecot.org/LDA. Supposed to log too. Actual result: nothing gets delivered, no return code, nothing is logged. Test #2: /usr/lib/dovecot/deliver -f mpapet at yahoo.com -d dude at mailswansong.dom < good.mail Expected result: deliver to dude and return 0. Actual result: delivers, but no return code. Nothing logged. The wiki is vague about the difficulties of getting deliver LDA to log, but I thought I had it covered in my config. I even opened permissions up wide (777) on my log files specified below. Nothing gets logged. The ONLY thing changed in 15-lda.conf is as follows. protocol lda { # Space separated list of plugins to load (default is global mail_plugins). #mail_plugins = $mail_plugins log_path = /var/log/dovecot/lda.log info_log_path = /var/log/dovecot/lda-info.log service auth { unix_listener auth-client { mode = 0600 user = vmail } } I'm running plain Debian Testing and used dovecot from Debian's repository. The end goal is to write a qpsmtpd queue plugin, but I need to figure out what's the matter first. Thanks in advance. mpapet From janfrode at tanso.net Tue Jan 3 10:14:49 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Tue, 3 Jan 2012 09:14:49 +0100 Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs) In-Reply-To: <4EFEBFB8.1070301@hardwarefreak.com> References: <20111224152050.GA3958@dibs.tanso.net> <20111229084916.GA5895@dibs.tanso.net> <4EFC6453.8020304@hardwarefreak.com> <20111230144124.GA3936@dibs.tanso.net> <4EFE5984.9080905@hardwarefreak.com> <20111231065649.GA19046@dibs.tanso.net> <4EFEBFB8.1070301@hardwarefreak.com> Message-ID: <20120103081449.GA26269@dibs.tanso.net> On Sat, Dec 31, 2011 at 01:54:32AM -0600, Stan Hoeppner wrote: > Nice setup. I've mentioned GPFS for cluster use on this list before, > but I think you're the only operator to confirm using it. I'm sure > others would be interested in hearing of your first hand experience: > pros, cons, performance, etc. And a ballpark figure on the licensing > costs, whether one can only use GPFS on IBM storage or if storage from > other vendors is allowed in the GPFS pool. I used to work for IBM, so I've been a bit uneasy about pushing GPFS too hard publicly, for fear of being accused of being biased. But I changed jobs in November, so now I'm only a satisfied customer :-) Pros: Extremely simple to configure and manage.
Assuming root on all nodes can ssh freely, and port 1191/tcp is open between the nodes, these are the commands to create the cluster, create an NSD (network shared disk), and create a filesystem: # echo hostname1:manager-quorum > NodeFile # "manager" means this node can be selected as filesystem manager # echo hostname2:manager-quorum >> NodeFile # "quorum" means this node has a vote in the quorum selection # echo hostname3:manager-quorum >> NodeFile # all my nodes are usually the same, so they all have the same roles. # mmcrcluster -n NodeFile -p $(hostname) -A ### sdb1 is either a local disk on hostname1 (in which case the other nodes will access it over tcp to ### hostname1), or a SAN-disk that they can access directly over FC/iSCSI. # echo sdb1:hostname1::dataAndMetadata:: > DescFile # This disk can be used for both data and metadata # mmcrnsd -F DescFile # mmstartup -A # starts GPFS services on all nodes # mmcrfs /gpfs1 gpfs1 -F DescFile # mount /gpfs1 You can add and remove disks from the filesystem, and change most settings without downtime. You can scale out your workload by adding more nodes (SAN attached or not), and scale out your disk performance by adding more disks on the fly. (IBM uses GPFS to create scale-out NAS solutions http://www-03.ibm.com/systems/storage/network/sonas/ , which highlights a few of the features available with GPFS) There's no problem running GPFS on other vendors' disk systems. I've used Nexsan SATAboy earlier, for an HPC cluster. One can easily move from one disk system to another without downtime. Cons: It has its own page cache, statically configured. So you don't get the "all available memory used for page caching" behaviour as you normally do on linux. There is a kernel module that needs to be rebuilt on every upgrade. It's a simple process, but it needs to be done and means we can't just run "yum update ; reboot" to upgrade. % export SHARKCLONEROOT=/usr/lpp/mmfs/src % cp /usr/lpp/mmfs/src/config/site.mcr.proto /usr/lpp/mmfs/src/config/site.mcr % vi /usr/lpp/mmfs/src/config/site.mcr # correct GPFS_ARCH, LINUX_DISTRIBUTION and LINUX_KERNEL_VERSION % cd /usr/lpp/mmfs/src/ ; make clean ; make World % su - root # export SHARKCLONEROOT=/usr/lpp/mmfs/src # cd /usr/lpp/mmfs/src/ ; make InstallImages > > To this point IIRC everyone here doing clusters is using NFS, GFS, or > OCFS. Each has its downsides, mostly because everyone is using maildir. > NFS has locking issues with shared dovecot index files. GFS and OCFS > have filesystem metadata performance issues. How does GPFS perform with > your maildir workload? Maildir is likely a worst-case workload for filesystems. Millions of tiny-tiny files, making all IO random, and getting minimal controller read cache utilized (unless you can cache all active files). So I've concluded that our performance issues are mostly design errors (and the fact that there were no better mail storage formats than maildir at the time these servers were implemented). I expect moving to mdbox will fix all our performance issues. I *think* GPFS is as good as it gets for maildir storage on clusterfs, but have no numbers to back that up ... Would be very interesting if we could somehow compare numbers for a few clusterfs'. I believe our main limitation in this setup is the iops we can get from the backend storage system. It's hard to balance the IO over enough RAID arrays (the fs is spread over 11 RAID5 arrays of 5 disks each), and we're always having hotspots. Right now two arrays are doing <100 iops, while others are doing 400-500 iops.
Would very much like to replace it by something smarter where we can utilize SSDs for active data and something slower for stale data. GPFS can manage this by itself through its ILM interface, but we don't have the very fast storage to put in as tier-1. -jf From tss at iki.fi Tue Jan 3 10:49:27 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 3 Jan 2012 10:49:27 +0200 Subject: [Dovecot] Compressing existing maildirs In-Reply-To: <4EFEBFB8.1070301@hardwarefreak.com> References: <20111224152050.GA3958@dibs.tanso.net> <20111229084916.GA5895@dibs.tanso.net> <4EFC6453.8020304@hardwarefreak.com> <20111230144124.GA3936@dibs.tanso.net> <4EFE5984.9080905@hardwarefreak.com> <20111231065649.GA19046@dibs.tanso.net> <4EFEBFB8.1070301@hardwarefreak.com> Message-ID: On 31.12.2011, at 9.54, Stan Hoeppner wrote: > Timo, is there any technical or sanity based upper bound on mdbox size? > Anything wrong with using 64MB, 128MB, or even larger for > mdbox_rotate_size? Should be fine. The only issue is the extra disk I/O required to recreate the files during doveadm purge. From ludek.finstrle at pzkagis.cz Mon Jan 2 20:20:15 2012 From: ludek.finstrle at pzkagis.cz (Ludek Finstrle) Date: Mon, 2 Jan 2012 19:20:15 +0100 Subject: [Dovecot] Small LOGIN_MAX_INBUF_SIZE for GSSAPI with samba4 (AD) Message-ID: <20120102182014.GA20872@pzkagis.cz> Hello, I ran into a problem with samba (AD) + mutt (gssapi) + dovecot (imap). From the dovecot log: Jan 2 17:58:42 server dovecot: imap-login: Disconnected: Input buffer full (no auth attempts): rip=192.167.14.16, lip=192.167.14.16, secured My situation: CentOS 6.2 IMAP: dovecot --version: 2.0.9 (CentOS 6.2) MUA: mutt 1.5.20 (CentOS 6.2) Kerberos: samba4 4.0.0alpha17 as AD PDC $ klist -e Ticket cache: FILE:/tmp/krb5cc_1002_Mmg2Rc Default principal: luf at TEST Valid starting Expires Service principal 01/02/12 15:56:16 01/03/12 01:56:16 krbtgt/TEST at TEST renew until 01/03/12 01:56:16, Etype (skey, tkt): arcfour-hmac, arcfour-hmac 01/02/12 16:33:19 01/03/12 01:56:16 imap/server.test at TEST Etype (skey, tkt): arcfour-hmac, arcfour-hmac I fixed this problem by enlarging LOGIN_MAX_INBUF_SIZE. I also read about wrong lower/uppercase, but that's definitely not my problem (I tried all possibilities of lower/uppercase in the login). I sniffed the plain communication and the "a0000 AUTHENTICATE GSSAPI" line has around 1873 chars. When I enlarged LOGIN_MAX_INBUF_SIZE to 2048 the problem disappeared and I'm now able to log in to dovecot using gssapi in the mutt client. I also use thunderbird (on windows with sspi) and it works ok with LOGIN_MAX_INBUF_SIZE = 1024. Does anybody have any idea why the request is so large, or how to fix this another way? It's terrible to patch each version of the dovecot rpm package. Or is there any possibility to change the constant? I have no idea how much this would affect memory usage. The simple patch I have to use is attached. Please cc: me (luf at pzkagis dot cz) as I'm not a member of this list. Best regards, Ludek Finstrle -------------- next part -------------- diff -cr dovecot-2.0.9.orig/src/login-common/client-common.h dovecot-2.0.9/src/login-common/client-common.h *** dovecot-2.0.9.orig/src/login-common/client-common.h 2012-01-02 18:09:53.371909782 +0100 --- dovecot-2.0.9/src/login-common/client-common.h 2012-01-02 18:30:58.057787619 +0100 *************** *** 10,16 **** IMAP: Max. length of a single parameter POP3: Max. length of a command line (spec says 512 would be enough) */ ! #define LOGIN_MAX_INBUF_SIZE 1024 /* max. size of output buffer.
if it gets full, the client is disconnected. SASL authentication gives the largest output. */ #define LOGIN_MAX_OUTBUF_SIZE 4096 --- 10,16 ---- IMAP: Max. length of a single parameter POP3: Max. length of a command line (spec says 512 would be enough) */ ! #define LOGIN_MAX_INBUF_SIZE 2048 /* max. size of output buffer. if it gets full, the client is disconnected. SASL authentication gives the largest output. */ #define LOGIN_MAX_OUTBUF_SIZE 4096 From tss at iki.fi Tue Jan 3 13:16:29 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 13:16:29 +0200 Subject: [Dovecot] Small LOGIN_MAX_INBUF_SIZE for GSSAPI with samba4 (AD) In-Reply-To: <20120102182014.GA20872@pzkagis.cz> References: <20120102182014.GA20872@pzkagis.cz> Message-ID: <1325589389.6987.55.camel@innu> On Mon, 2012-01-02 at 19:20 +0100, Ludek Finstrle wrote: > Jan 2 17:58:42 server dovecot: imap-login: Disconnected: Input buffer full (no auth attempts): rip=192.167.14.16, lip=192.167.14.16, secured .. > I fixed this problem by enlarging LOGIN_MAX_INBUF_SIZE. I also read about wrong lower/uppercase, > but that's definitely not my problem (I tried all possibilities of lower/uppercase in the login). > > I sniffed the plain communication and the "a0000 AUTHENTICATE GSSAPI" line has around 1873 chars. > When I enlarged LOGIN_MAX_INBUF_SIZE to 2048 the problem disappeared and I'm now able to log in > to dovecot using gssapi in the mutt client. There was already code that allowed 16kB SASL messages, but that didn't work for the initial SASL response with the IMAP SASL-IR extension. > I also use thunderbird (on windows with sspi) and it works ok with LOGIN_MAX_INBUF_SIZE = 1024. TB probably doesn't support SASL-IR. > Does anybody have any idea why the request is so large, or how to fix this another way? It's terrible to > patch each version of the dovecot rpm package. Or is there any possibility to change the constant? > I have no idea how much this would affect memory usage. > > The simple patch I have to use is attached. I increased it to 4 kB: http://hg.dovecot.org/dovecot-2.0/rev/d06061408f6d From tss at iki.fi Tue Jan 3 13:29:36 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 13:29:36 +0200 Subject: [Dovecot] error bad file number with compressed mbox files In-Reply-To: <77e69f67dbffe67a6205ed1de7d2d0df@imapproxy.hrz> References: <77e69f67dbffe67a6205ed1de7d2d0df@imapproxy.hrz> Message-ID: <1325590176.6987.57.camel@innu> On Mon, 2012-01-02 at 15:33 +0100, Jürgen Obermann wrote: > can dsync convert from compressed mbox to compressed mdbox format? > > When I use compressed mbox files, either with gzip or with bzip2, I can > read the mails as usual, but I find the following errors in Dovecot's log > file: > > imap(userxy): Error: nfs_flush_fcntl: > fcntl(/home/hrz/userxy/Mail/mymbox.gz, F_RDLCK) failed: Bad file number > imap(userxy): Error: nfs_flush_fcntl: > fcntl(/home/hrz/userxy/Mail/mymbox.bz2, F_RDLCK) failed: Bad file number This happens because of mail_nfs_* settings. You can either ignore those errors, or disable the settings. Those settings are useful only if you attempt to access the same mailbox from multiple servers at the same time, which is randomly going to fail even with those settings, so they aren't hugely useful.
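To make that suggestion concrete for the single-server case: both NFS coherency settings can simply be switched off in dovecot.conf, which stops the failing nfs_flush_fcntl calls on the compressed mbox files. A minimal sketch, matching the two settings shown in the configuration earlier in this thread:

mail_nfs_storage = no
mail_nfs_index = no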
From tss at iki.fi Tue Jan 3 13:42:13 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 13:42:13 +0200 Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration In-Reply-To: <4F01D886.6070905@gmx.net> References: <4F01D886.6070905@gmx.net> Message-ID: <1325590933.6987.59.camel@innu> On Mon, 2012-01-02 at 17:17 +0100, Preacher wrote: > So I was forced to install the Debian 7.0 packages with 2.0.15 and finally > got the server running; I also restarted the whole machine to empty caches. > But the problem I got was that in the huge folder hierarchy the > downloaded headers in the individual folders disappeared, some folders > showed a few very old messages, some none. Also some subfolders disappeared. > I checked this with Outlook and Thunderbird. The difference was that > Thunderbird shows more messages (but not all) than Outlook in some > folders, but also none in some others. Outlook brought up a message in > some cases that the connection timed out, although I set the timeout to > 60s. Did you run the Courier migration script? http://wiki2.dovecot.org/Migration/Courier Also explicitly setting mail_location would be a good idea. From tss at iki.fi Tue Jan 3 13:52:12 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 13:52:12 +0200 Subject: [Dovecot] Multiple Maildirs per Virtual User In-Reply-To: <4F010936.7080107@gmail.com> References: <4F010936.7080107@gmail.com> Message-ID: <1325591532.6987.60.camel@innu> On Sun, 2012-01-01 at 20:32 -0500, Ruslan Nabioullin wrote: > How would it be possible to configure dovecot (2.0.16) in such a way > that it would serve several maildirs (e.g., INBOX, INBOX.Drafts, > INBOX.Sent, forum_email, [Gmail].Trash, etc.) per virtual user? > > I am only able to specify a single maildir, but I want all maildirs in > /home/my-username/mail/account1/ to be served. Sounds like you want LAYOUT=fs rather than the default LAYOUT=maildir++. http://wiki2.dovecot.org/MailboxFormat/Maildir#Directory_Structure From tss at iki.fi Tue Jan 3 13:55:01 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 13:55:01 +0200 Subject: [Dovecot] dsync / separator / namespace config-problem In-Reply-To: <20111229200345.GA17871@dibs.tanso.net> References: <20111229111455.GA9344@dibs.tanso.net> <3F4112A3-FF46-4ABA-9EC5-E04651D50E87@iki.fi> <20111229134234.GB11809@dibs.tanso.net> <20111229200345.GA17871@dibs.tanso.net> Message-ID: <1325591701.6987.62.camel@innu> On Thu, 2011-12-29 at 21:03 +0100, Jan-Frode Myklebust wrote: > On Thu, Dec 29, 2011 at 03:49:57PM +0200, Timo Sirainen wrote: > > >> > > >> With mdbox the internal separator is '/', but it's not valid to have "INBOX." prefix then (it should be "INBOX/"). > > > > > > But how should this be handled in the migration phase from maildir to > > > mdbox then? Can we have different namespaces for users with maildirs vs. > > > mdboxes? (..or am I misunderstanding something?) > > > > You'll most likely want to keep the '.' separator with mdbox, at > least initially. Some clients don't like it if the separator changes. > Perhaps in the future, if you want to allow users to use the '.' character in > mailbox names, you could change it, or possibly make it a per-user > setting. > > > > Sorry for being so dense, but I don't quite get it still. Do you suggest > dropping the trailing dot from prefix=INBOX. ? I.e. > > namespace { > inbox = yes > location = > prefix = INBOX > type = private > separator = . > } > > when we do the migration to mdbox?
And this should work without issues > for both current maildir users, and mdbox users ? With that setup you can't even start up Dovecot. The prefix must end with the separator. So initially just do it like above, but with "prefix=INBOX." > Ideally I don't want to use the . as a separator, since it's causing > problems for our users who expect to be able to use them in folder > names. But I don't understand if I can change them without causing > problems to existing users.. or how these problems will appear to the > users. It's going to be problematic to change the separator for existing users. Clients can become confused. From tss at iki.fi Tue Jan 3 14:00:08 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 14:00:08 +0200 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <20120101195907.GA21500@dibs.tanso.net> References: <20120101195907.GA21500@dibs.tanso.net> Message-ID: <1325592008.6987.63.camel@innu> On Sun, 2012-01-01 at 20:59 +0100, Jan-Frode Myklebust wrote: > I'm in the process of running our first dsync backup of all users > (from maildir to mdbox on a remote server), and one problem I'm hitting > is that dsync works fine on the first run for some users, and then > reliably fails whenever I try a new run: > > $ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net > $ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net > dsync-remote(janfrode at example.net): Error: Can't delete mailbox directory INBOX/a: Mailbox has children, delete them first > > The problem here seems to be that this user has a maildir named > ".a.b". On the backup side I see this as "a/b/". > > So dsync doesn't quite seem to agree with itself on how to handle > folders with a dot in the name. So here on the source you have namespace separator '.' and on the destination you have separator '/'? Maybe that's the problem? Try with both having the '.' separator. From janfrode at tanso.net Tue Jan 3 14:12:22 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Tue, 3 Jan 2012 13:12:22 +0100 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <1325592008.6987.63.camel@innu> References: <20120101195907.GA21500@dibs.tanso.net> <1325592008.6987.63.camel@innu> Message-ID: <20120103121222.GA30793@dibs.tanso.net> On Tue, Jan 03, 2012 at 02:00:08PM +0200, Timo Sirainen wrote: > > So here on the source you have namespace separator '.' and on the destination > you have separator '/'? Maybe that's the problem? Try with both having > the '.' separator. I added this namespace on the destination: namespace { inbox = yes location = prefix = INBOX. separator = . type = private } and am getting the same error: dsync-remote(janfrode at tanso.net): Error: Can't delete mailbox directory INBOX.a: Mailbox has children, delete them first This was with a freshly created .a.b folder on the source.
With no messages in .a.b and also no plain .a folder on source: $ find /usr/local/atmail/users/j/a/janfrode at tanso.net/.a* /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/maildirfolder /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/cur /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/new /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/tmp /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/dovecot-uidlist -jf From tss at iki.fi Tue Jan 3 14:15:45 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 14:15:45 +0200 Subject: [Dovecot] Newbie: LDA Isn't Logging In-Reply-To: <1325573295.74202.YahooMailClassic@web125405.mail.ne1.yahoo.com> References: <1325573295.74202.YahooMailClassic@web125405.mail.ne1.yahoo.com> Message-ID: <1325592945.6987.70.camel@innu> On Mon, 2012-01-02 at 22:48 -0800, Michael Papet wrote: > Hi, > > I'm a newbie having some trouble getting deliver to log anything. Related to this, there are no return values unless the -d is missing. I'm using LDAP to store virtual domain and user account information. > > Test #1: /usr/lib/dovecot/deliver -e -f mpapet at yahoo.com -d zed at mailswansong.dom < bad.mail > Expected result: supposed to fail, there's no zed account via ldap lookup and supposed to get a return code per the wiki at http://wiki2.dovecot.org/LDA. Supposed to log too. > Actual result: nothing gets delivered, no return code, nothing is logged. As in return code is 0? Something's definitely wrong there then. First check that deliver at least reads the config file. Add something broken in there, such as: "foo=bar" at the beginning of dovecot.conf. Does deliver fail now? Also running deliver via strace could show something useful. From tss at iki.fi Tue Jan 3 14:34:59 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 14:34:59 +0200 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <20120103121222.GA30793@dibs.tanso.net> References: <20120101195907.GA21500@dibs.tanso.net> <1325592008.6987.63.camel@innu> <20120103121222.GA30793@dibs.tanso.net> Message-ID: <1325594099.6987.71.camel@innu> On Tue, 2012-01-03 at 13:12 +0100, Jan-Frode Myklebust wrote: > dsync-remote(janfrode at tanso.net): Error: Can't delete mailbox directory INBOX.a: Mailbox has children, delete them first Oh, this happens only with dsync backup, and only with Maildir++ -> FS layout change. You can simply ignore this error, or patch with http://hg.dovecot.org/dovecot-2.0/rev/69c6d7436f7f that hides it. From tss at iki.fi Tue Jan 3 14:36:52 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 14:36:52 +0200 Subject: [Dovecot] lmtp-postlogin ? In-Reply-To: <20111230130804.GA2107@dibs.tanso.net> References: <20111230090053.GA30820@dibs.tanso.net> <16B30E6C-AE5E-44CB-8F48-66274FEAB357@iki.fi> <20111230130804.GA2107@dibs.tanso.net> Message-ID: <1325594212.6987.73.camel@innu> On Fri, 2011-12-30 at 14:08 +0100, Jan-Frode Myklebust wrote: > > Maybe create a new plugin for this using notify plugin. > > Is there any documentation for this plugin? I've tried searching both > this list, and the wiki's. Nope. You could look at mail-log and http://dovecot.org/patches/2.0/touch-plugin.c and write something based on them. 
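For anyone following that last pointer: out-of-tree Dovecot plugins like touch-plugin.c can typically be compiled against the installed headers with a couple of commands along the lines below. This is a sketch, not official build instructions; the header path, module directory, and output file name (the libNN_ prefix only controls load order) are assumptions that vary by distribution.

# assumes the Dovecot development headers are installed (dovecot-dev on Debian)
gcc -fPIC -shared -Wall -DHAVE_CONFIG_H -I/usr/include/dovecot \
    touch-plugin.c -o lib99_touch_plugin.so
cp lib99_touch_plugin.so /usr/lib/dovecot/modules/

The plugin is then enabled by adding its name to mail_plugins, the same way mail_log is loaded in configurations earlier in this digest.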
From janfrode at tanso.net Tue Jan 3 14:54:10 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Tue, 3 Jan 2012 13:54:10 +0100 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <1325594099.6987.71.camel@innu> References: <20120101195907.GA21500@dibs.tanso.net> <1325592008.6987.63.camel@innu> <20120103121222.GA30793@dibs.tanso.net> <1325594099.6987.71.camel@innu> Message-ID: <20120103125410.GA2966@dibs.tanso.net> On Tue, Jan 03, 2012 at 02:34:59PM +0200, Timo Sirainen wrote: > On Tue, 2012-01-03 at 13:12 +0100, Jan-Frode Myklebust wrote: > > dsync-remote(janfrode at tanso.net): Error: Can't delete mailbox directory INBOX.a: Mailbox has children, delete them first > > Oh, this happens only with dsync backup, and only with Maildir++ -> FS > layout change. You can simply ignore this error, or patch with > http://hg.dovecot.org/dovecot-2.0/rev/69c6d7436f7f that hides it. Oh, it was so quick to fail that I didn't realize it had successfully updated the remote mailboxes :-) Thanks! But isn't it a bug that users are allowed to create folders named .a.b, or that dovecot creates this as a folder named .a.b instead of .a/.b when the separator is "." ? -jf From mikko at woima.fi Tue Jan 3 16:54:11 2012 From: mikko at woima.fi (Mikko Lampikoski) Date: Tue, 3 Jan 2012 16:54:11 +0200 Subject: [Dovecot] What is normal CPU usage of dovecot imap? Message-ID: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi> I got a Dual Core Intel Xeon CPU 3.00GHz, over 1000 mailboxes and almost 1 dovecot login / second (peak time). Server stats say that the load is continually over 2 and CPU usage is 60%. top says that imap is causing this load. Virtual users are in a mysql database and mysqld is running on another server (this server is ok). Do I need a better CPU or is there something going on that I do not understand?
# dovecot -n # 1.1.11: /etc/dovecot/dovecot.conf # OS: Linux 2.6.32-4-pve i686 Ubuntu 9.10 nfs log_timestamp: %Y-%m-%d %H:%M:%S protocols: imap imaps pop3 pop3s listen: *, [::] ssl_ca_file: /etc/ssl/**********.crt ssl_cert_file: /etc/ssl/**********.crt ssl_key_file: /etc/ssl/**********.key ssl_key_password: ********** disable_plaintext_auth: no verbose_ssl: yes shutdown_clients: no login_dir: /var/run/dovecot/login login_executable(default): /usr/lib/dovecot/imap-login login_executable(imap): /usr/lib/dovecot/imap-login login_executable(pop3): /usr/lib/dovecot/pop3-login login_greeting_capability(default): yes login_greeting_capability(imap): yes login_greeting_capability(pop3): no login_process_size: 128 login_processes_count: 10 login_max_processes_count: 2048 mail_max_userip_connections(default): 10 mail_max_userip_connections(imap): 10 mail_max_userip_connections(pop3): 3 first_valid_uid: 99 last_valid_uid: 100 mail_privileged_group: mail mail_location: maildir:/var/vmail/%d/%n:INDEX=/var/indexes/%d/%n fsync_disable: yes mail_nfs_storage: yes mbox_write_locks: fcntl mbox_min_index_size: 4 mail_executable(default): /usr/lib/dovecot/imap mail_executable(imap): /usr/lib/dovecot/imap mail_executable(pop3): /usr/lib/dovecot/pop3 mail_process_size: 2048 mail_plugin_dir(default): /usr/lib/dovecot/modules/imap mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3 imap_client_workarounds(default): outlook-idle imap_client_workarounds(imap): outlook-idle imap_client_workarounds(pop3): pop3_client_workarounds(default): pop3_client_workarounds(imap): pop3_client_workarounds(pop3): outlook-no-nuls auth default: mechanisms: plain login cram-md5 cache_size: 1024 passdb: driver: sql args: /etc/dovecot/dovecot-sql.conf userdb: driver: static args: uid=99 gid=99 home=/var/vmail/%d/%n allow_all_users=yes socket: type: listen client: path: /var/spool/postfix/private/auth-client mode: 432 user: postfix group: postfix master: path: /var/run/dovecot/auth-master mode: 384 user: vmail From tss at iki.fi Tue Jan 3 17:08:14 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 3 Jan 2012 17:08:14 +0200 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <20120103125410.GA2966@dibs.tanso.net> References: <20120101195907.GA21500@dibs.tanso.net> <1325592008.6987.63.camel@innu> <20120103121222.GA30793@dibs.tanso.net> <1325594099.6987.71.camel@innu> <20120103125410.GA2966@dibs.tanso.net> Message-ID: <9D352B5F-77C3-4473-92E1-9ED2AFB5FFFB@iki.fi> On 3.1.2012, at 14.54, Jan-Frode Myklebust wrote: > But isn't it a bug that users are allowed to create folders named .a.b, The folder name is "a.b", it just exists in the filesystem with Maildir++ as ".a.b". > or that dovecot creates this as a folder named .a.b instead of .a/.b > when the separator is "." ? The separator is the IMAP separator, not the filesystem separator. From tss at iki.fi Tue Jan 3 17:12:34 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 3 Jan 2012 17:12:34 +0200 Subject: [Dovecot] What is normal CPU usage of dovecot imap? In-Reply-To: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi> References: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi> Message-ID: On 3.1.2012, at 16.54, Mikko Lampikoski wrote: > I got a Dual Core Intel Xeon CPU 3.00GHz, over 1000 mailboxes and almost 1 dovecot login / second (peak time). > Server stats say that the load is continually over 2 and CPU usage is 60%. top says that imap is causing this load. You mean an actual "imap" process?
Or more than one imap process? Or something else, e.g. an "imap-login" process? If there's one long-running IMAP process eating CPU, it might simply have gone into an infinite loop, and upgrading could help. > Virtual users are in a mysql database and mysqld is running on another server (this server is ok). > > Do I need a better CPU or is there something going on that I do not understand? Your CPU usage should probably be closer to 0%. > login_process_size: 128 > login_processes_count: 10 > login_max_processes_count: 2048 Switching to http://wiki2.dovecot.org/LoginProcess#High-performance_mode may be helpful. > mail_nfs_storage: yes Do you have more than one Dovecot server? This setting doesn't work reliably anyway. If you've only one server accessing mails, you can set this to "no". From stan at hardwarefreak.com Tue Jan 3 17:20:28 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Tue, 03 Jan 2012 09:20:28 -0600 Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs) In-Reply-To: <20120103081449.GA26269@dibs.tanso.net> References: <20111224152050.GA3958@dibs.tanso.net> <20111229084916.GA5895@dibs.tanso.net> <4EFC6453.8020304@hardwarefreak.com> <20111230144124.GA3936@dibs.tanso.net> <4EFE5984.9080905@hardwarefreak.com> <20111231065649.GA19046@dibs.tanso.net> <4EFEBFB8.1070301@hardwarefreak.com> <20120103081449.GA26269@dibs.tanso.net> Message-ID: <4F031CBC.60302@hardwarefreak.com> On 1/3/2012 2:14 AM, Jan-Frode Myklebust wrote: > On Sat, Dec 31, 2011 at 01:54:32AM -0600, Stan Hoeppner wrote: >> Nice setup. I've mentioned GPFS for cluster use on this list before, >> but I think you're the only operator to confirm using it. I'm sure >> others would be interested in hearing of your first hand experience: >> pros, cons, performance, etc. And a ballpark figure on the licensing >> costs, whether one can only use GPFS on IBM storage or if storage from >> other vendors is allowed in the GPFS pool. > > I used to work for IBM, so I've been a bit uneasy about pushing GPFS too > hard publicly, for fear of being accused of being biased. But I changed jobs in > November, so now I'm only a satisfied customer :-) Fascinating. And good timing. :) > Pros: > Extremely simple to configure and manage. Assuming root on all > nodes can ssh freely, and port 1191/tcp is open between the > nodes, these are the commands to create the cluster, create an > NSD (network shared disk), and create a filesystem: > > # echo hostname1:manager-quorum > NodeFile # "manager" means this node can be selected as filesystem manager > # echo hostname2:manager-quorum >> NodeFile # "quorum" means this node has a vote in the quorum selection > # echo hostname3:manager-quorum >> NodeFile # all my nodes are usually the same, so they all have the same roles. > # mmcrcluster -n NodeFile -p $(hostname) -A > > ### sdb1 is either a local disk on hostname1 (in which case the other nodes will access it over tcp to > ### hostname1), or a SAN-disk that they can access directly over FC/iSCSI. > # echo sdb1:hostname1::dataAndMetadata:: > DescFile # This disk can be used for both data and metadata > # mmcrnsd -F DescFile > > # mmstartup -A # starts GPFS services on all nodes > # mmcrfs /gpfs1 gpfs1 -F DescFile > # mount /gpfs1 > > You can add and remove disks from the filesystem, and change most > settings without downtime. You can scale out your workload by adding > more nodes (SAN attached or not), and scale out your disk performance > by adding more disks on the fly.
(IBM uses GPFS to create > scale-out NAS solutions http://www-03.ibm.com/systems/storage/network/sonas/ , > which highlights a few of the features available with GPFS) > > There's no problem running GPFS on other vendors' disk systems. I've used Nexsan > SATAboy earlier, for an HPC cluster. One can easily move from one disk system to > another without downtime. That's good to know. The only FC SAN arrays I've installed/used are the IBM FAStT600 and Nexsan SataBlade/Boy. I much prefer the web management interface on the Nexsan units, much more intuitive, more flexible. The FAStT is obviously much more suitable to random IOPS workloads with its 15k FC disks vs 7.2K SATA disks in the Nexsan units (although Nexsan has offered 15K SAS disks and SSDs for a while now). > Cons: > It has its own page cache, statically configured. So you don't get the "all > available memory used for page caching" behaviour as you normally do on linux. Yep, that's ugly. > There is a kernel module that needs to be rebuilt on every > upgrade. It's a simple process, but it needs to be done and means we > can't just run "yum update ; reboot" to upgrade. > > % export SHARKCLONEROOT=/usr/lpp/mmfs/src > % cp /usr/lpp/mmfs/src/config/site.mcr.proto /usr/lpp/mmfs/src/config/site.mcr > % vi /usr/lpp/mmfs/src/config/site.mcr # correct GPFS_ARCH, LINUX_DISTRIBUTION and LINUX_KERNEL_VERSION > % cd /usr/lpp/mmfs/src/ ; make clean ; make World > % su - root > # export SHARKCLONEROOT=/usr/lpp/mmfs/src > # cd /usr/lpp/mmfs/src/ ; make InstallImages So is this, but it's totally expected since this is proprietary code and not in mainline. >> To this point IIRC everyone here doing clusters is using NFS, GFS, or >> OCFS. Each has its downsides, mostly because everyone is using maildir. >> NFS has locking issues with shared dovecot index files. GFS and OCFS >> have filesystem metadata performance issues. How does GPFS perform with >> your maildir workload? > > Maildir is likely a worst-case workload for filesystems. Millions > of tiny-tiny files, making all IO random, and getting minimal controller > read cache utilized (unless you can cache all active files). So I've Yep. Which is the reason I've stuck with mbox everywhere I can over the years, minor warts and all, and will be moving to mdbox at some point. IMHO maildir solved one set of problems but created a bigger problem. Many sites hailed maildir as a savior in many ways, then decried it as their user base and IO demands exceeded their storage, scrambling for budget money to fix an "unforeseen" problem that was absolutely clear from day one. At least for anyone with more than a cursory knowledge of filesystem design and hardware performance. > concluded that our performance issues are mostly design errors (and the > fact that there were no better mail storage formats than maildir at the > time these servers were implemented). I expect moving to mdbox will > fix all our performance issues. Yeah, it should decrease FS IOPS by a couple of orders of magnitude, especially if you go with large mdbox files. The larger the better. > I *think* GPFS is as good as it gets for maildir storage on clusterfs, > but have no numbers to back that up ... Would be very interesting if we > could somehow compare numbers for a few clusterfs'. Apparently no one (vendor) with the resources to do so has the desire to do so. > I believe our main limitation in this setup is the iops we can get from > the backend storage system.
It's hard to balance the IO over enough > RAID arrays (the fs is spread over 11 RAID5 arrays of 5 disks each), > and we're always having hotspots. Right now two arrays are doing <100 iops, > while others are doing 400-500 iops. Would very much like to replace > it by something smarter where we can utilize SSDs for active data and > something slower for stale data. GPFS can manage this by itself through > its ILM interface, but we don't have the very fast storage to put in as > tier-1. Obviously not news to you: balancing mail workload IO across large filesystems and wide disk farms will always be a problem, because of which users happen to be logged in at a given moment, and the fact you can't stripe all users' small mail files across all disks. And this is true of all mailbox formats to one degree or another, obviously worst with maildir. A properly engineered XFS can get far closer to linear IO distribution across arrays than most filesystems due to its allocation group design, but it still won't be perfect. Simply getting away from maildir, with its extraneous metadata IOs, is a huge win for decreasing clusterFS and SAN IOPs. I'm anxious to see your report on your SAN IOPs after you've converted to mdbox, especially if you go with 16/32MB or larger mdbox files. -- Stan From mikko at woima.fi Tue Jan 3 17:38:48 2012 From: mikko at woima.fi (Mikko Lampikoski) Date: Tue, 3 Jan 2012 17:38:48 +0200 Subject: [Dovecot] What is normal CPU usage of dovecot imap? In-Reply-To: References: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi> Message-ID: <3B6D056C-1D1E-46F4-AB56-FDD5B98BC669@woima.fi> On 3.1.2012, at 17.12, Timo Sirainen wrote: > On 3.1.2012, at 16.54, Mikko Lampikoski wrote: > >> I got a Dual Core Intel Xeon CPU 3.00GHz, over 1000 mailboxes and almost 1 dovecot login / second (peak time). >> Server stats say that the load is continually over 2 and CPU usage is 60%. top says that imap is causing this load. > > You mean an actual "imap" process? Or more than one imap process? Or something else, e.g. an "imap-login" process? If there's one long-running IMAP process eating CPU, it might simply have gone into an infinite loop, and upgrading could help. It is an "imap" process; a process takes CPU for 10-30 seconds and then the PID changes to another imap process (each process also takes 10% of memory = 150MB). Restarting dovecot does not help. >> Virtual users are in a mysql database and mysqld is running on another server (this server is ok). >> Do I need a better CPU or is there something going on that I do not understand? > > Your CPU usage should probably be closer to 0%. I think so too, but I ran out of good ideas. If someone has lots of mails in a mailbox, can it have an effect like this? >> login_process_size: 128 >> login_processes_count: 10 >> login_max_processes_count: 2048 > > Switching to http://wiki2.dovecot.org/LoginProcess#High-performance_mode may be helpful. This loses much of the security benefits, no thanks. >> mail_nfs_storage: yes > > Do you have more than one Dovecot server? This setting doesn't work reliably anyway. If you've only one server accessing mails, you can set this to "no". Trying this too, but I think it's not going to help... From tss at iki.fi Tue Jan 3 17:44:21 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 3 Jan 2012 17:44:21 +0200 Subject: [Dovecot] What is normal CPU usage of dovecot imap?
From mikko at woima.fi Tue Jan 3 17:38:48 2012
From: mikko at woima.fi (Mikko Lampikoski)
Date: Tue, 3 Jan 2012 17:38:48 +0200
Subject: [Dovecot] What is normal CPU usage of dovecot imap?
In-Reply-To:
References: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi>
Message-ID: <3B6D056C-1D1E-46F4-AB56-FDD5B98BC669@woima.fi>

On 3.1.2012, at 17.12, Timo Sirainen wrote:

> On 3.1.2012, at 16.54, Mikko Lampikoski wrote:
>
>> I got a Dual Core Intel Xeon CPU at 3.00GHz, over 1000 mailboxes and almost 1 dovecot login / second (peak time).
>> Server stats say that the load is continually over 2 and CPU usage is 60%. top says that imap is creating this load.
>
> You mean an actual "imap" process? Or more than one imap process? Or something else, e.g. an "imap-login" process? If there's one long-running IMAP process eating CPU, it might have simply gone into an infinite loop, and upgrading could help.

It is an "imap" process. A process takes CPU for 10-30 seconds and then the PID changes to another imap process (the process also takes 10% of memory = 150MB).
Restarting dovecot does not help.

>> Virtual users are in a mysql database and mysqld is running on another server (this server is ok).
>> Do I need a better CPU or is there something going on that I do not understand?
>
> Your CPU usage should probably be closer to 0%.

I think so too, but I ran out of good ideas. If someone has lots of mails in their mailbox, can it have an effect like this?

>> login_process_size: 128
>> login_processes_count: 10
>> login_max_processes_count: 2048
>
> Switching to http://wiki2.dovecot.org/LoginProcess#High-performance_mode may be helpful.

This loses much of the security benefits, no thanks.

>> mail_nfs_storage: yes
>
> Do you have more than one Dovecot server? This setting doesn't work reliably anyway. If you've only one server accessing the mails, you can set this to "no".

Trying this too, but I think it's not going to help...

From tss at iki.fi Tue Jan 3 17:44:21 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 3 Jan 2012 17:44:21 +0200
Subject: [Dovecot] What is normal CPU usage of dovecot imap?
In-Reply-To: <3B6D056C-1D1E-46F4-AB56-FDD5B98BC669@woima.fi>
References: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi> <3B6D056C-1D1E-46F4-AB56-FDD5B98BC669@woima.fi>
Message-ID: <8188AE59-6646-4686-9320-F11D25A42F5D@iki.fi>

On 3.1.2012, at 17.38, Mikko Lampikoski wrote:

> It is an "imap" process. A process takes CPU for 10-30 seconds and then the PID changes to another imap process (the process also takes 10% of memory = 150MB).
> Restarting dovecot does not help.

Is the IMAP process always for the same user (or the same few users)? verbose_proctitle=yes shows the username in ps output.

> If someone has lots of mails in their mailbox, can it have an effect like this?

Possibly. maildir_very_dirty_syncs=yes is helpful with huge maildirs (I don't remember if it exists in v1.1 yet).
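For reference, the two settings Timo points at look like this in
dovecot.conf (v2.x syntax; as he says, whether the maildir one exists in
v1.1 is unclear):

# show the username being served in ps output for imap/pop3 processes
verbose_proctitle = yes
# trust the index files instead of re-scanning huge maildir directories on sync
maildir_very_dirty_syncs = yes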
From preacher_net at gmx.net Tue Jan 3 18:47:23 2012
From: preacher_net at gmx.net (Preacher)
Date: Tue, 03 Jan 2012 17:47:23 +0100
Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration
In-Reply-To: <4F020328.7090303@hardwarefreak.com>
References: <4F01D886.6070905@gmx.net> <4F020328.7090303@hardwarefreak.com>
Message-ID: <4F03311B.8030109@gmx.net>

Actually I took a look inside the folders right after starting up and waited for two hours to let Dovecot work. Saving the whole Maildir into a tar on the same partition also took only 2 hours before. But nothing changed, and when looking at activity with top, the server was idle, dovecot not indexing. I also wasn't able to drag new messages into the folder hierarchy.
With Courier it takes no more than 5 seconds to download the headers in a folder containing more than 3,000 messages.

Stan Hoeppner schrieb:
> On 1/2/2012 10:17 AM, Preacher wrote:
> ...
>> So I forced the installation of the Debian 7.0 packages with 2.0.15 and finally
>> got the server running; I also restarted the whole machine to empty caches.
>> But the problem I got was that in the huge folder hierarchy the
>> downloaded headers in the individual folders disappeared, some folders
>> showed a few very old messages, some none. Also some subfolders disappeared.
>> I checked this with Outlook and Thunderbird. The difference was that
>> Thunderbird shows more messages (but not all) than Outlook in some
>> folders, but also none in some others. Outlook brought up a message in
>> some cases that the connection timed out, although I set the timeout to 60s.
> ...
>> Anyone a clue what's wrong here?
>
> Absolutely. What's wrong is a lack of planning, self-education, and
> patience on the part of the admin.
>
> Dovecot gets its speed from its indexes. How long do you think it takes
> Dovecot to index 37GB of maildir messages, many thousands per directory,
> hundreds of directories, millions of files total? Until those indexes
> are built you will not see a complete folder tree and all kinds of stuff
> will be missing.
>
> For your education: Dovecot indexes every message and these indexes are
> the key to its speed. Normally indexing occurs during delivery when
> using deliver or lmtp, so the index updates are small and incremental,
> keeping performance high. You tried to do this and expected Dovecot to
> instantly process it all:
>
> http://www.youtube.com/watch?v=THVz5aweqYU
>
> If you don't know, that's a coal train car being dumped. 100 tons of
> coal in a few seconds. Visuals are always good teaching tools. I think
> this drives the point home rather well.

From preacher_net at gmx.net Tue Jan 3 18:50:51 2012
From: preacher_net at gmx.net (Preacher)
Date: Tue, 03 Jan 2012 17:50:51 +0100
Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration
In-Reply-To: <1325590933.6987.59.camel@innu>
References: <4F01D886.6070905@gmx.net> <1325590933.6987.59.camel@innu>
Message-ID: <4F0331EB.2000006@gmx.net>

Yes I did, I followed the guide you mentioned; it said that it found the 3 mailboxes I have set up in total, and conversion took only a few moments. I guess the mail location was automatically set correctly, as the folder hierarchy was displayed correctly.

Timo Sirainen schrieb:
> On Mon, 2012-01-02 at 17:17 +0100, Preacher wrote:
>> So I forced the installation of the Debian 7.0 packages with 2.0.15 and finally
>> got the server running; I also restarted the whole machine to empty caches.
>> But the problem I got was that in the huge folder hierarchy the
>> downloaded headers in the individual folders disappeared, some folders
>> showed a few very old messages, some none. Also some subfolders disappeared.
>> I checked this with Outlook and Thunderbird. The difference was that
>> Thunderbird shows more messages (but not all) than Outlook in some
>> folders, but also none in some others. Outlook brought up a message in
>> some cases that the connection timed out, although I set the timeout to 60s.
>
> Did you run the Courier migration script?
> http://wiki2.dovecot.org/Migration/Courier
>
> Also explicitly setting mail_location would be a good idea.

From rnabioullin at gmail.com Tue Jan 3 19:33:31 2012
From: rnabioullin at gmail.com (Ruslan Nabioullin)
Date: Tue, 03 Jan 2012 12:33:31 -0500
Subject: [Dovecot] Multiple Maildirs per Virtual User
In-Reply-To: <1325591532.6987.60.camel@innu>
References: <4F010936.7080107@gmail.com> <1325591532.6987.60.camel@innu>
Message-ID: <4F033BEB.4070103@gmail.com>

On 01/03/2012 06:52 AM, Timo Sirainen wrote:
> On Sun, 2012-01-01 at 20:32 -0500, Ruslan Nabioullin wrote:
>> How would it be possible to configure dovecot (2.0.16) in such a way
>> that it would serve several maildirs (e.g., INBOX, INBOX.Drafts,
>> INBOX.Sent, forum_email, [Gmail].Trash, etc.) per virtual user?
>>
>> I am only able to specify a single maildir, but I want all maildirs in
>> /home/my-username/mail/account1/ to be served.
>
> Sounds like you want LAYOUT=fs rather than the default LAYOUT=maildir++.
> http://wiki2.dovecot.org/MailboxFormat/Maildir#Directory_Structure

I changed /etc/dovecot/passwd to:
my-username_account1:{PLAIN}password:my-username:my-group::::userdb_mail=maildir:/home/my-username/mail/account1:LAYOUT=fs

Dovecot creates {tmp,new,cur} dirs within account1 (the root), apparently not recognizing the maildirs within the root (e.g., /home/my-username/mail/account1/INBOX/{tmp,new,cur}).

-- 
Ruslan Nabioullin
rnabioullin at gmail.com

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 900 bytes
Desc: OpenPGP digital signature
URL:

From tss at iki.fi Tue Jan 3 19:47:52 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 3 Jan 2012 19:47:52 +0200
Subject: [Dovecot] Multiple Maildirs per Virtual User
In-Reply-To: <4F033BEB.4070103@gmail.com>
References: <4F010936.7080107@gmail.com> <1325591532.6987.60.camel@innu> <4F033BEB.4070103@gmail.com>
Message-ID: <7543064C-1F66-49D1-8694-4793958CCFD8@iki.fi>

On 3.1.2012, at 19.33, Ruslan Nabioullin wrote:

> I changed /etc/dovecot/passwd to:
> my-username_account1:{PLAIN}password:my-username:my-group::::userdb_mail=maildir:/home/my-username/mail/account1:LAYOUT=fs
>
> Dovecot creates {tmp,new,cur} dirs within account1 (the root),
> apparently not recognizing the maildirs within the root (e.g.,
> /home/my-username/mail/account1/INBOX/{tmp,new,cur}).

Your client probably only shows subscribed folders, and none are subscribed.
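If that is what's happening, the folders can also be subscribed from the
server side; a sketch using v2.0's doveadm mailbox commands (the username
and folder names are simply this thread's examples):

doveadm mailbox list -u my-username_account1 '*'    # see what Dovecot actually finds
doveadm mailbox subscribe -u my-username_account1 INBOX forum_email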
From stan at hardwarefreak.com Tue Jan 3 19:55:27 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Tue, 03 Jan 2012 11:55:27 -0600
Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration
In-Reply-To: <4F03311B.8030109@gmx.net>
References: <4F01D886.6070905@gmx.net> <4F020328.7090303@hardwarefreak.com> <4F03311B.8030109@gmx.net>
Message-ID: <4F03410F.6080404@hardwarefreak.com>

On 1/3/2012 10:47 AM, Preacher wrote:
> Actually I took a look inside the folders right after starting up and
> waited for two hours to let Dovecot work.

So two hours after clicking on an IMAP folder the contents of that folder were still not displayed correctly?

> Saving the whole Maildir into a tar on the same partition also took only
> 2 hours before.

This doesn't have any relevance.

> But nothing changed, and when looking at activity with top, the
> server was idle, dovecot not indexing.
> I also wasn't able to drag new messages into the folder hierarchy.

Then something is seriously wrong. The fact that you "forced" the Wheezy Dovecot package into a Squeeze system may have something, if not everything, to do with your problem (somehow I missed this fact in your previous email). Debian testing/sid packages are compiled against newer system libraries. If you check various logs you'll likely see problems related to this. This is also why the Backports project exists--TESTING packages are compiled against the STABLE libraries so newer application revs can be used on the STABLE distribution. Currently there is no Dovecot 2.x backport for Squeeze.

I would suggest you thoroughly remove the Wheezy 2.0.15 package and install the 1.2.15-7 STABLE package. Read the documentation for 1.2.x and configure it properly. Then things will likely work as they should.

-- 
Stan

From Ralf.Hildebrandt at charite.de Tue Jan 3 20:09:29 2012
From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt)
Date: Tue, 3 Jan 2012 19:09:29 +0100
Subject: [Dovecot] Deliver all addresses to the same mdbox:?
Message-ID: <20120103180929.GA21651@charite.de>

For archiving purposes I'm delivering all addresses to the same mdbox: like this:

passdb {
  driver = passwd-file
  args = username_format=%u /etc/dovecot/passwd
}

userdb {
  driver = static
  args = uid=1000 gid=1000 home=/home/copymail allow_all_users=yes
}

Yet I'm getting this:

Jan 3 19:03:27 mail postfix/lmtp[29378]: 3THjg02wfWzFvmL: to=, relay=mail.charite.de[private/dovecot-lmtp], conn_use=20, delay=323, delays=323/0/0/0, dsn=4.1.1, status=SOFTBOUNCE (host mail.charite.de[private/dovecot-lmtp] said: 550 5.1.1 <"firstname.lastname at charite.de"@backup.invalid> User doesn't exist: "firstname.lastname at charite.de"@backup.invalid (in reply to RCPT TO command))

(using soft_bounce = yes here in Postfix)

In short: backup.invalid is delivered to dovecot by means of LMTP (local socket). I thought my static mapping in userdb would enable the lmtp listener to accept ALL recipients and map their $home to /home/copymail - why is that not working?

-- 
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebrandt at charite.de | http://www.charite.de

From tss at iki.fi Tue Jan 3 20:34:11 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 3 Jan 2012 20:34:11 +0200
Subject: [Dovecot] Deliver all addresses to the same mdbox:?
In-Reply-To: <20120103180929.GA21651@charite.de>
References: <20120103180929.GA21651@charite.de>
Message-ID:

On 3.1.2012, at 20.09, Ralf Hildebrandt wrote:

> For archiving purposes I'm delivering all addresses to the same mdbox: like this:
>
> passdb {
>   driver = passwd-file
>   args = username_format=%u /etc/dovecot/passwd
> }
>
> userdb {
>   driver = static
>   args = uid=1000 gid=1000 home=/home/copymail allow_all_users=yes
> }

allow_all_users=yes is used only when the passdb is incapable of telling if the user exists or not.

> Yet I'm getting this:
>
> Jan 3 19:03:27 mail postfix/lmtp[29378]: 3THjg02wfWzFvmL: to=, relay=mail.charite.de[private/dovecot-lmtp], conn_use=20, delay=323, delays=323/0/0/0, dsn=4.1.1, status=SOFTBOUNCE (host mail.charite.de[private/dovecot-lmtp] said: 550 5.1.1 <"firstname.lastname at charite.de"@backup.invalid> User doesn't exist: "firstname.lastname at charite.de"@backup.invalid (in reply to RCPT TO command))

Fails because the user doesn't exist in the passwd-file, I guess. Maybe use passdb static? If you also need authentication to work, put passdb static in protocol lmtp {} and passdb passwd-file in protocol !lmtp {}
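A sketch of that split - the nopassword=y argument is an assumption for a
passdb that never needs to verify a password, not something specified in
Timo's mail:

protocol lmtp {
  passdb {
    driver = static
    args = nopassword=y
  }
}
protocol !lmtp {
  passdb {
    driver = passwd-file
    args = username_format=%u /etc/dovecot/passwd
  }
}

With the static passdb, LMTP considers every recipient to exist, and the
static userdb then maps them all to /home/copymail.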
From Ralf.Hildebrandt at charite.de Tue Jan 3 20:43:38 2012
From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt)
Date: Tue, 3 Jan 2012 19:43:38 +0100
Subject: [Dovecot] Deliver all addresses to the same mdbox:?
In-Reply-To:
References: <20120103180929.GA21651@charite.de>
Message-ID: <20120103184338.GC21651@charite.de>

* Timo Sirainen :
> allow_all_users=yes is used only when the passdb is incapable of telling if the user exists or not.

Ah, damn :|

> Fails because the user doesn't exist in the passwd-file, I guess.

Indeed.

> Maybe use passdb static?

Right now I simply solved it by using + addressing like this:

Jan 3 19:42:49 mail postfix/lmtp[2728]: 3THkfd20f1zFvlF: to=, relay=mail.charite.de[private/dovecot-lmtp], delay=0.01, delays=0.01/0/0/0, dsn=2.0.0, status=sent (250 2.0.0 IHdDM9VLA0/aCwAAY73zkw Saved)

Call me lazy :)

> If you also need authentication to work, put passdb static in protocol
> lmtp {} and passdb passwd-file in protocol !lmtp {}

Ah yes, good idea.

-- 
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebrandt at charite.de | http://www.charite.de

From djonas at vitalwerks.com Tue Jan 3 21:14:28 2012
From: djonas at vitalwerks.com (David Jonas)
Date: Tue, 03 Jan 2012 11:14:28 -0800
Subject: [Dovecot] Maildir migration and uids
In-Reply-To: <81E45F76-34A4-4666-9F10-7566B7BD496C@iki.fi>
References: <4EF28D7B.8050601@vitalwerks.com> <81E45F76-34A4-4666-9F10-7566B7BD496C@iki.fi>
Message-ID: <4F035394.8090701@vitalwerks.com>

On 12/29/11 5:35 AM, Timo Sirainen wrote:
> On 22.12.2011, at 3.52, David Jonas wrote:
>
>> I'm in the process of migrating a large number of maildirs to a 3rd
>> party dovecot server (from a dovecot server). Tests have shown that
>> using imap to sync the accounts doesn't preserve the uidl for pop3 access.
>>
>> My current attempt is to convert the maildir to mbox and add an X-UIDL
>> header in the process. Run a second dovecot that serves the converted
>> mbox. But dovecot's docs say, "None of these headers are sent to
>> IMAP/POP3 clients when they read the mail".
>
> That's rather complex.

Thanks, Timo. Unfortunately I don't have shell access at the new dovecot servers. They have a migration tool that doesn't keep the uids intact when I sync via imap. Looks like I'm going to have to sync twice, once with POP3 (which maintains uids) and once with imap, skipping the inbox. Ugh.

>> Is there any way to sync these maildirs to the new server and maintain
>> the uids?
>
> What Dovecot versions? dsync could do this easily. You could simply install the dsync binary even if you're using Dovecot v1.x.

Good idea with dsync though, I had forgotten about that. Perhaps they'll do something custom for me.

> You could also log in with POP3 and get the UIDL list and write a script to add them to dovecot-uidlist.
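For the record, the UIDL list Timo mentions can be captured with a plain
POP3 session; a sketch (hostname and credentials are placeholders):

$ telnet old-server 110
USER jdoe
PASS secret
UIDL
+OK
1 000047bc0000a3e1
2 000047bd0000a3e1
...
QUIT

Each line pairs a message number with its unique ID, which is what would
need to be carried over into the new server's dovecot-uidlist.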
From CMarcus at Media-Brokers.com Tue Jan 3 22:40:02 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Tue, 03 Jan 2012 15:40:02 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
Message-ID: <4F0367A2.1000807@Media-Brokers.com>

Hi everyone,

Was just perusing this article about how trivial it is to decrypt passwords that are stored using most (standard) encryption methods (like MD5), and was wondering - is it possible to use bcrypt with dovecot+postfix+mysql (or postgres)?

-- 
Best regards, Charles

From CMarcus at Media-Brokers.com Tue Jan 3 22:59:39 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Tue, 03 Jan 2012 15:59:39 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F0367A2.1000807@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com>
Message-ID: <4F036C3B.5080904@Media-Brokers.com>

On 2012-01-03 3:40 PM, Charles Marcus wrote:
> Hi everyone,
>
> Was just perusing this article about how trivial it is to decrypt
> passwords that are stored using most (standard) encryption methods (like
> MD5), and was wondering - is it possible to use bcrypt with
> dovecot+postfix+mysql (or postgres)?

Oops... forgot the link:

http://codahale.com/how-to-safely-store-a-password/

But after perusing the wiki:

http://wiki2.dovecot.org/Authentication/PasswordSchemes

it appears not? Timo - any chance of adding support for it? Or is that web page incorrect?

-- 
Best regards, Charles

From david at blue-labs.org Tue Jan 3 23:03:34 2012
From: david at blue-labs.org (David Ford)
Date: Tue, 03 Jan 2012 16:03:34 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F036C3B.5080904@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com>
Message-ID: <4F036D26.9010409@blue-labs.org>

md5 is deprecated, *nix has used sha1 for a while now

From bill-dovecot at carpenter.org Wed Jan 4 00:10:13 2012
From: bill-dovecot at carpenter.org (WJCarpenter)
Date: Tue, 03 Jan 2012 14:10:13 -0800
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F036C3B.5080904@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com>
Message-ID: <4F037CC5.9030900@carpenter.org>

>> Was just perusing this article about how trivial it is to decrypt
>> passwords that are stored using most (standard) encryption methods (like
>> MD5), and was wondering - is it possible to use bcrypt with
>> dovecot+postfix+mysql (or postgres)?
>
> Oops... forgot the link:
>
> http://codahale.com/how-to-safely-store-a-password/

AFAIK, that web page is correct in a relative sense, but getting bcrypt support might not be the most urgent priority.

In his description, he uses the example of passwords which are "lowercase, alphanumeric, and 6 characters long" (and in another place the example is "lowercase, alphabetic passwords which are ≤7 characters", I guess to illustrate that things have gotten faster). If you are allowing your users to create such weak passwords, using bcrypt will not save you/them. Attackers will just be wasting more of your CPU time making attempts. If they get a copy of your hashed passwords, they'll likely be wasting their own CPU time, but they have plenty of that, too.

There are plenty of recommendations for what makes a good password / passphrase. If you are not already enforcing such rules (perhaps also with a lookaside to one or more of the leaked tables of passwords floating around), then IMHO that's much more urgent. (One of the best twists I read somewhere [sorry, I forget where] was to require at least one uppercase and one digit, but to not count them as fulfilling the requirement if they were used as the first or last character.)

Side note, but for the sake of precision ... attackers are not literally decrypting passwords. They are guessing passwords and then performing a one-way hash to see if they guessed correctly. As a practical matter, that means that you have to ask your users to update their passwords any time you change the password storage scheme. (I don't know enough about bcrypt to know if that would be required if you wanted to simply increase the work factor.)
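For anyone wanting to experiment in the meantime, Dovecot's own tool can
already generate salted hashes. A sketch - scheme availability depends on
the Dovecot version, and BLF-CRYPT (bcrypt) only works where the system
crypt() supports Blowfish, so treat both as assumptions to verify against
your install:

$ doveadm pw -s SSHA256 -p secret
{SSHA256}...            # base64 of hash plus random salt, different every run
$ doveadm pw -s BLF-CRYPT -p secret
{BLF-CRYPT}$2a$05$...   # only on platforms with $2a$ crypt support

With an SQL passdb the matching knob would be default_pass_scheme in
dovecot-sql.conf.ext.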
From CMarcus at Media-Brokers.com Wed Jan 4 00:27:16 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Tue, 03 Jan 2012 17:27:16 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F036D26.9010409@blue-labs.org>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F036D26.9010409@blue-labs.org>
Message-ID: <4F0380C4.8040205@Media-Brokers.com>

On 2012-01-03 4:03 PM, David Ford wrote:
> md5 is deprecated, *nix has used sha1 for a while now

That link lumps sha1 in with MD5 and others: "Why Not {MD5, SHA1, SHA256, SHA512, SHA-3, etc}?"

-- 
Best regards, Charles

From CMarcus at Media-Brokers.com Wed Jan 4 00:30:30 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Tue, 03 Jan 2012 17:30:30 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F037CC5.9030900@carpenter.org>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org>
Message-ID: <4F038186.3030505@Media-Brokers.com>

On 2012-01-03 5:10 PM, WJCarpenter wrote:
> In his description, he uses the example of passwords which are
> "lowercase, alphanumeric, and 6 characters long" (and in another place
> the example is "lowercase, alphabetic passwords which are ≤7
> characters", I guess to illustrate that things have gotten faster). If
> you are allowing your users to create such weak passwords, using bcrypt
> will not save you/them. Attackers will just be wasting more of your CPU
> time making attempts. If they get a copy of your hashed passwords,
> they'll likely be wasting their own CPU time, but they have plenty of
> that, too.

I require strong passwords of 15 characters in length. What's more, they are assigned (by me), and the user cannot change them. But he isn't talking about brute force attacks against the server.
He is talking about someone gaining access to the SQL database where the passwords are stored (as has happened countless times in the last few years), and then having the luxury of brute forcing an attack locally (on their own systems) against your password database.

-- 
Best regards, Charles

From david at blue-labs.org (David Ford)
Subject: [Dovecot] Storing passwords encrypted... bcrypt?

when it comes to brute force, passwords like "51k$jh#21hiaj2" are significantly weaker than "wePut85umbrellasIn2shoes". and they're considerably more difficult for humans, which makes people far more likely to write them on a sticky note and keep it handy.

From simon.brereton at buongiorno.com Wed Jan 4 00:38:36 2012
From: simon.brereton at buongiorno.com (Simon Brereton)
Date: Tue, 3 Jan 2012 17:38:36 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F038186.3030505@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com>
Message-ID:

On 3 January 2012 17:30, Charles Marcus wrote:
> On 2012-01-03 5:10 PM, WJCarpenter wrote:
>> In his description, he uses the example of passwords which are
>> "lowercase, alphanumeric, and 6 characters long" (and in another place
>> the example is "lowercase, alphabetic passwords which are ≤7
>> characters", I guess to illustrate that things have gotten faster). If
>> you are allowing your users to create such weak passwords, using bcrypt
>> will not save you/them. Attackers will just be wasting more of your CPU
>> time making attempts. If they get a copy of your hashed passwords,
>> they'll likely be wasting their own CPU time, but they have plenty of
>> that, too.
>
> I require strong passwords of 15 characters in length. What's more, they are
> assigned (by me), and the user cannot change them. But he isn't talking about
> brute force attacks against the server. He is talking about someone gaining
> access to the SQL database where the passwords are stored (as has happened
> countless times in the last few years), and then having the luxury of brute
> forcing an attack locally (on their own systems) against your password
> database.

24+ would be better... http://xkcd.com/936/

Simon

From dg at dguhl.org Wed Jan 4 00:48:05 2012
From: dg at dguhl.org (Dennis Guhl)
Date: Tue, 3 Jan 2012 23:48:05 +0100
Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration
In-Reply-To: <4F03410F.6080404@hardwarefreak.com>
References: <4F01D886.6070905@gmx.net> <4F020328.7090303@hardwarefreak.com> <4F03311B.8030109@gmx.net> <4F03410F.6080404@hardwarefreak.com>
Message-ID: <20120103224804.GA16434@laptop-dg.leere.eu>

On Tue, Jan 03, 2012 at 11:55:27AM -0600, Stan Hoeppner wrote:

[..]

> I would suggest you thoroughly remove the Wheezy 2.0.15 package and

Not to use the Wheezy package might be wise.

> install the 1.2.15-7 STABLE package. Read the documentation for 1.2.x

Alternatively you could use Stephan Bosch's repository:

deb http://xi.rename-it.nl/debian/ stable-auto/dovecot-2.0 main

Despite the warning at http://wiki2.dovecot.org/PrebuiltBinaries#Automatically_Built_Packages they work very stably.

> and configure it properly. Then things will likely work as they should.

Dennis

From bill-dovecot at carpenter.org Wed Jan 4 01:12:50 2012
From: bill-dovecot at carpenter.org (WJCarpenter)
Date: Tue, 03 Jan 2012 15:12:50 -0800
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To:
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com>
Message-ID: <4F038B72.1000003@carpenter.org>

On 1/3/2012 2:38 PM, Simon Brereton wrote:
> http://xkcd.com/936/

As the saying goes, entropy ain't what it used to be.

https://www.grc.com/haystack.htm

However, both links actually illustrate the same point: once you get past dictionary attacks, the length of the password is the dominant factor in the strength of the password against brute force attack.

From gedalya at gedalya.net Wed Jan 4 01:59:28 2012
From: gedalya at gedalya.net (Gedalya)
Date: Tue, 03 Jan 2012 18:59:28 -0500
Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration
In-Reply-To: <20120103224804.GA16434@laptop-dg.leere.eu>
References: <4F01D886.6070905@gmx.net> <4F020328.7090303@hardwarefreak.com> <4F03311B.8030109@gmx.net> <4F03410F.6080404@hardwarefreak.com> <20120103224804.GA16434@laptop-dg.leere.eu>
Message-ID: <4F039660.1010903@gedalya.net>

On 01/03/2012 05:48 PM, Dennis Guhl wrote:
> On Tue, Jan 03, 2012 at 11:55:27AM -0600, Stan Hoeppner wrote:
>
> [..]
>
>> I would suggest you thoroughly remove the Wheezy 2.0.15 package and
>
> Not to use the Wheezy package might be wise.
>
>> install the 1.2.15-7 STABLE package. Read the documentation for 1.2.x
>
> Alternatively you could use Stephan Bosch's repository:
>
> deb http://xi.rename-it.nl/debian/ stable-auto/dovecot-2.0 main
>
> Despite the warning at
> http://wiki2.dovecot.org/PrebuiltBinaries#Automatically_Built_Packages
> they work very stably.
>
>> and configure it properly. Then things will likely work as they should.

See http://www.prato.linux.it/~mnencia/debian/dovecot-squeeze/ and http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=592959

I have the packages from this repository running in production on a squeeze system, working fine.

From CMarcus at Media-Brokers.com Wed Jan 4 03:25:02 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Tue, 03 Jan 2012 20:25:02 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F038B72.1000003@carpenter.org>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org>
Message-ID: <4F03AA6E.30003@Media-Brokers.com>

On 2012-01-03 6:12 PM, WJCarpenter wrote:
> On 1/3/2012 2:38 PM, Simon Brereton wrote:
>> http://xkcd.com/936/
>
> As the saying goes, entropy ain't what it used to be.
>
> https://www.grc.com/haystack.htm
>
> However, both links actually illustrate the same point: once you get
> past dictionary attacks, the length of the password is the dominant factor
> in the strength of the password against brute force attack.

I think y'all are missing the point... not sure, because I'm still not completely sure that this is saying what I think it is saying (that's why I asked)...

I'm not worried about *active* brute force attacks against my server using the standard smtp or imap protocols - fail2ban takes care of those in a hurry.

What I'm worried about is the worst case scenario of someone getting ahold of the entire user database of *stored* passwords, where they can then take their time and brute force them at their leisure, on *their* *own* systems, without having to hammer my server over smtp/imap and without the automated limit of *my* fail2ban getting in their way.

As for people writing their passwords down...
our policy is that it is a potentially *firable* *offense* if they post these anywhere that is not under lock and key (I've never even encountered one case of anyone posting their password, and I'm on these systems off and on all the time). Also, I always set up their email clients for them (on their workstations and on their phones) - and of course tell the client to remember the password, so they basically never have to enter it.

-- 
Best regards, Charles

From david at blue-labs.org Wed Jan 4 03:37:21 2012
From: david at blue-labs.org (David Ford)
Date: Tue, 03 Jan 2012 20:37:21 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F03AA6E.30003@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com>
Message-ID: <4F03AD51.7080506@blue-labs.org>

On 01/03/2012 08:25 PM, Charles Marcus wrote:
>
> I think y'all are missing the point... not sure, because I'm still not
> completely sure that this is saying what I think it is saying (that's
> why I asked)...
>
> I'm not worried about *active* brute force attacks against my server
> using the standard smtp or imap protocols - fail2ban takes care of
> those in a hurry.
>
> What I'm worried about is the worst case scenario of someone getting
> ahold of the entire user database of *stored* passwords, where they
> can then take their time and brute force them at their leisure, on
> *their* *own* systems, without having to hammer my server over
> smtp/imap and without the automated limit of *my* fail2ban getting in
> their way.
>
> As for people writing their passwords down... our policy is that it is
> a potentially *firable* *offense* if they post these anywhere that is
> not under lock and key. Also, I always set up their email clients for
> them (on their workstations and on their phones) - and of course tell
> the client to remember the password, so they basically never have to
> enter it.

perhaps. part of my point, along with brute force resistance, is that when security becomes onerous to the typical user, such as requiring non-repeat passwords of "10 characters including punctuation and mixed case", even stalwart policy followers start tending toward avoiding it. if anyone has a stressful job, spends a lot of time working, misses sleep, and is thereby prone to memory lapses, it's almost a sure guarantee they *will* write it down or store it somewhere -- usually not in a password safe. or they'll export their saved passwords to make a backup plain text copy and leave it in their Desktop folder, coyly named and prefixed with a few random emails to grandma, so mr. sysadmin doesn't notice it.

on a tangent, you should worry about active brute force attacks. fail2ban and iptables heuristics become meaningless when the brute forcing is done by botnets, which is more and more common than single-host attacks these days. one IP per attempt in a 10-20 minute window will probably never trigger any of these methods.

From michael at orlitzky.com Wed Jan 4 03:58:51 2012
From: michael at orlitzky.com (Michael Orlitzky)
Date: Tue, 03 Jan 2012 20:58:51 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F03AA6E.30003@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com>
Message-ID: <4F03B25B.2020309@orlitzky.com>

On 01/03/2012 08:25 PM, Charles Marcus wrote:
>
> What I'm worried about is the worst case scenario of someone getting
> ahold of the entire user database of *stored* passwords, where they can
> then take their time and brute force them at their leisure, on *their*
> *own* systems, without having to hammer my server over smtp/imap and
> without the automated limit of *my* fail2ban getting in their way.

To prevent rainbow table attacks, salt your passwords. You can make them a little bit more difficult in plenty of ways, but salt is the /solution/.

> As for people writing their passwords down... our policy is that it is a
> potentially *firable* *offense* if they post these anywhere that is not
> under lock and key. Also, I always set up their email clients for them
> (on their workstations and on their phones) - and of course tell the
> client to remember the password, so they basically never have to enter it.

You realize they're just walking around with a $400 post-it note with the password written on it, right?

From bill-dovecot at carpenter.org Wed Jan 4 05:07:47 2012
From: bill-dovecot at carpenter.org (WJCarpenter)
Date: Tue, 03 Jan 2012 19:07:47 -0800
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F03AA6E.30003@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com>
Message-ID: <4F03C283.6070005@carpenter.org>

On 1/3/2012 5:25 PM, Charles Marcus wrote:
> I think y'all are missing the point... not sure, because I'm still not
> completely sure that this is saying what I think it is saying (that's
> why I asked)...

I'm sure I'm not missing the point. My comment was that password length and complexity are probably more important than bcrypt versus sha1, and you've already addressed those. Given that you use strong 15-character passwords, pretty much all hash functions are already out of reach for brute force. bcrypt is probably better in the same sense that it's harder to drive a car to Saturn than it is to drive to Mars.

From list at airstreamcomm.net Wed Jan 4 08:09:39 2012
From: list at airstreamcomm.net (list at airstreamcomm.net)
Date: Wed, 04 Jan 2012 00:09:39 -0600
Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs)
Message-ID: <20120104060924.0E7C727659@osmtp-1.airstreamcomm.net>

Great information, thank you. Could you remark on GPFS services hosting mail storage over a WAN between two geographically separated data centers?

----- Reply message -----
From: "Jan-Frode Myklebust"
To: "Stan Hoeppner"
Cc: "Timo Sirainen" ,
Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs)
Date: Tue, Jan 3, 2012 2:14 am

On Sat, Dec 31, 2011 at 01:54:32AM -0600, Stan Hoeppner wrote:
> Nice setup. I've mentioned GPFS for cluster use on this list before,
> but I think you're the only operator to confirm using it.
> I'm sure
> others would be interested in hearing of your first hand experience:
> pros, cons, performance, etc. And a ballpark figure on the licensing
> costs, whether one can only use GPFS on IBM storage or if storage from
> other vendors is allowed in the GPFS pool.

I used to work for IBM, so I've been a bit uneasy about pushing GPFS too hard publicly, for fear of being accused of being biased. But I changed jobs in November, so now I'm only a satisfied customer :-)

Pros:
Extremely simple to configure and manage. Assuming root on all nodes can ssh freely and port 1191/tcp is open between the nodes, these are the commands to create the cluster, create an NSD (network shared disk), and create a filesystem:

# echo hostname1:manager-quorum > NodeFile    # "manager" means this node can be selected as filesystem manager
# echo hostname2:manager-quorum >> NodeFile   # "quorum" means this node has a vote in the quorum selection
# echo hostname3:manager-quorum >> NodeFile   # all my nodes are usually the same, so they all have the same roles
# mmcrcluster -n NodeFile -p $(hostname) -A

### sdb1 is either a local disk on hostname1 (in which case the other nodes will access it over tcp to
### hostname1), or a SAN disk that they can access directly over FC/iSCSI.
# echo sdb1:hostname1::dataAndMetadata:: > DescFile   # this disk can be used for both data and metadata
# mmcrnsd -F DescFile

# mmstartup -A    # starts GPFS services on all nodes
# mmcrfs /gpfs1 gpfs1 -F DescFile
# mount /gpfs1

You can add and remove disks from the filesystem, and change most settings, without downtime. You can scale out your workload by adding more nodes (SAN attached or not), and scale out your disk performance by adding more disks on the fly. (IBM uses GPFS to create scale-out NAS solutions http://www-03.ibm.com/systems/storage/network/sonas/ , which highlights a few of the features available with GPFS.)

There's no problem running GPFS on other vendors' disk systems. I've used Nexsan SATABoy earlier, for an HPC cluster. One can easily move from one disk system to another without downtime.

Cons:
It has its own page cache, statically configured. So you don't get the "all available memory used for page caching" behaviour as you normally do on Linux.

There is a kernel module that needs to be rebuilt on every upgrade. It's a simple process, but it needs to be done and means we can't just run "yum update ; reboot" to upgrade.

% export SHARKCLONEROOT=/usr/lpp/mmfs/src
% cp /usr/lpp/mmfs/src/config/site.mcr.proto /usr/lpp/mmfs/src/config/site.mcr
% vi /usr/lpp/mmfs/src/config/site.mcr  # correct GPFS_ARCH, LINUX_DISTRIBUTION and LINUX_KERNEL_VERSION
% cd /usr/lpp/mmfs/src/ ; make clean ; make World
% su - root
# export SHARKCLONEROOT=/usr/lpp/mmfs/src
# cd /usr/lpp/mmfs/src/ ; make InstallImages

>
> To this point IIRC everyone here doing clusters is using NFS, GFS, or
> OCFS. Each has its downsides, mostly because everyone is using maildir.
> NFS has locking issues with shared dovecot index files. GFS and OCFS
> have filesystem metadata performance issues. How does GPFS perform with
> your maildir workload?

Maildir is likely a worst case type workload for filesystems. Millions of tiny-tiny files, making all IO random, and getting minimal controller read cache utilized (unless you can cache all active files). So I've concluded that our performance issues are mostly design errors (and the fact that there were no better mail storage formats than maildir at the time these servers were implemented).
I expect moving to mdbox will fix all our performance issues.

I *think* GPFS is as good as it gets for maildir storage on a clusterfs, but have no numbers to back that up ... Would be very interesting if we could somehow compare numbers for a few cluster filesystems.

I believe our main limitation in this setup is the iops we can get from the backend storage system. It's hard to balance the IO over enough RAID arrays (the fs is spread over 11 RAID5 arrays of 5 disks each), and we're always having hotspots. Right now two arrays are doing <100 iops, while others are doing 400-500 iops. Would very much like to replace it with something smarter where we can utilize SSDs for active data and something slower for stale data. GPFS can manage this by itself through its ILM interface, but we don't have the very fast storage to put in as tier-1.

-jf

From janfrode at tanso.net Wed Jan 4 09:33:55 2012
From: janfrode at tanso.net (Jan-Frode Myklebust)
Date: Wed, 4 Jan 2012 08:33:55 +0100
Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs)
In-Reply-To: <20120104060924.0E7C727659@osmtp-1.airstreamcomm.net>
References: <20120104060924.0E7C727659@osmtp-1.airstreamcomm.net>
Message-ID: <20120104073355.GA20482@dibs.tanso.net>

On Wed, Jan 04, 2012 at 12:09:39AM -0600, list at airstreamcomm.net wrote:
> Could you remark on GPFS services hosting mail storage over a WAN between two geographically separated data centers?

I haven't tried that, but know the theory quite well. There are 2 or 3 options:

1 - Shared SAN between the data centers. Should work the same as a single data center, but you'd want to use disk quorum or a quorum node on a 3rd site to avoid split brain.

2 - Different SANs on the two sites. Disks on SAN1 would belong to failure group 1 and disks on SAN2 would belong to failure group 2. GPFS will write every block to disks in different failure groups. Nodes at location 1 will use SAN1 directly, and write to SAN2 via tcp/ip to nodes at location 2 (and vice versa). It's configurable whether you want to return success when the first copy is written (asynchronous replication), or whether you need both replicas to be written. Ref. mmcrfs -K:

http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r4.gpfs300.doc%2Fbl1adm_mmcrfs.html

With asynchronous replication it will try to allocate both replicas, but if that fails you can re-establish the replication level later using "mmrestripefs". Reading will happen from a directly attached disk if possible, and over tcp/ip if there is no local replica of the needed block. Again you'll need a quorum node on a 3rd site to avoid split brain.

3 - GPFS multi-cluster. Separate GPFS clusters at the two locations. Let them mount each other's filesystems over IP, and access disks over either SAN or IP network. Each cluster is managed locally, but if one site goes down the other site also loses access to the fs. I don't have any experience with this kind of config, but believe it's quite popular for sharing filesystems between HPC sites.
http://www.ibm.com/developerworks/systems/library/es-multiclustergpfs/index.html
http://www.cisl.ucar.edu/hss/ssg/presentations/storage/NCAR-GPFS_Elahi.pdf

-jf
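A sketch of how option 2 could be expressed at filesystem-creation time.
The flag names come from the mmcrfs documentation linked above, but the
disk names and values here are illustrative assumptions, not from
Jan-Frode's setup:

# NSD descriptor file: same format as before, with the 5th field (failure group) set per site
sdb1:hostname1::dataAndMetadata:1:
sdc1:hostname2::dataAndMetadata:2:

# two replicas of data and metadata, replication enforced when possible
mmcrfs /gpfs1 gpfs1 -F DescFile -m 2 -M 2 -r 2 -R 2 -K whenpossible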
From Juergen.Obermann at hrz.uni-giessen.de Wed Jan 4 11:40:25 2012
From: Juergen.Obermann at hrz.uni-giessen.de (Jürgen Obermann)
Date: Wed, 04 Jan 2012 10:40:25 +0100
Subject: [Dovecot] error bad file number with compressed mbox files
In-Reply-To: <1325590176.6987.57.camel@innu>
References: <77e69f67dbffe67a6205ed1de7d2d0df@imapproxy.hrz> <1325590176.6987.57.camel@innu>
Message-ID: <20120104104025.13503j06dkxnqg08@webmail.hrz.uni-giessen.de>

> On Mon, 2012-01-02 at 15:33 +0100, Jürgen Obermann wrote:
>
>> can dsync convert from compressed mbox to compressed mdbox format?
>>
>> When I use compressed mbox files, either with gzip or with bzip2, I can
>> read the mails as usual, but I find the following errors in dovecot's log file:
>>
>> imap(userxy): Error: nfs_flush_fcntl:
>> fcntl(/home/hrz/userxy/Mail/mymbox.gz, F_RDLCK) failed: Bad file number
>> imap(userxy): Error: nfs_flush_fcntl:
>> fcntl(/home/hrz/userxy/Mail/mymbox.bz2, F_RDLCK) failed: Bad file number
>
> This happens because of the mail_nfs_* settings. You can either ignore those
> errors, or disable the settings. Those settings are useful only if you
> attempt to access the same mailbox from multiple servers at the same
> time, which is randomly going to fail even with those settings, so they
> aren't hugely useful.

After removing the mail_nfs_* settings this problem went away. Thank you, Timo.

Greetings,
Jürgen

From openbsd at e-solutions.re Wed Jan 4 15:08:35 2012
From: openbsd at e-solutions.re (Wesley M.)
Date: Wed, 04 Jan 2012 17:08:35 +0400
Subject: [Dovecot] migrate dovecot files 1.2.16 to 2.0.13 (OpenBSD 5.0)
Message-ID: <9183836c1dc45b710712d4985f04df81@localhost>

Hi,

I have a mail server (Postfix+MySQL) on OpenBSD 4.9 with Dovecot 1.2.16; everything works fine. Now I want to do the same on OpenBSD 5.0, but I'm running into problems with Dovecot 2.0.13.

Some tests (on the box):

telnet 127.0.0.1 110
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.

telnet 127.0.0.1 143
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.

It seems that POP3/IMAP don't work.

'netstat -anf inet'
tcp 0 0 *.993 *.* LISTEN
tcp 0 0 *.143 *.* LISTEN
tcp 0 0 *.995 *.* LISTEN
tcp 0 0 *.110 *.* LISTEN

Therefore, the ports are open. When I use Roundcube webmail, I get errors: "error imap connection".

If someone can help me with this, thank you very much.

Files to migrate (already tried to modify them): dovecot.conf / dovecot-sql.conf / and 'dovecot -n'

###############::::::::dovecot.conf:::::::::::#################################

base_dir = /var/dovecot/
protocols = imap pop3
ssl_cert = </etc/ssl/dovecotcert.pem
ssl_key = </etc/ssl/private/dovecot.pem
ssl_cipher_list = HIGH:MEDIUM:+TLSv1:!SSLv2:+SSLv3
disable_plaintext_auth = yes
default_login_user = _dovecot
default_internal_user = _dovecot
login_process_per_connection = no
login_process_size = 64
mail_location = maildir:/var/mailserv/mail/%d/%n
first_valid_uid = 1000
mmap_disable = yes
protocol imap {
  mail_plugins = quota imap_quota autocreate
  imap_client_workarounds = delay-newmail
}
protocol pop3 {
  pop3_uidl_format = %08Xv%08Xu
  mail_plugins = quota
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
}
protocol lda {
  mail_plugins = sieve quota
  postmaster_address = postmaster at mailr130.localdomain
  sendmail_path = /usr/sbin/sendmail
  auth_socket_path = /var/run/dovecot-auth-master
}
auth default {
  mechanisms = plain login digest-md5 cram-md5 apop
  passdb {
    driver = sql
    args = /etc/dovecot/dovecot-sql.conf
  }
  userdb {
    driver = sql
    args = /etc/dovecot/dovecot-sql.conf
  }
  user = root
  socket listen {
    client {
      path = /var/spool/postfix/private/auth
      mode = 0660
      user = _postfix
      group = _postfix
    }
    master {
      path = /var/run/dovecot-auth-master
      mode = 0600
      user = _dovecot   # user running Dovecot LDA
      group = _dovecot  # or alternatively mode 0660 + LDA user in this group
    }
  }
}
plugin {
  sieve = ~/.dovecot.sieve
  sieve_storage = ~/sieve
}
plugin {
  quota = maildir
  quota_rule = *:storage=5G
  quota_rule2 = Trash:storage=100M
  quota_warning = storage=95%% /usr/local/bin/quota-warning.sh 95
  quota_warning2 = storage=80%% /usr/local/bin/quota-warning.sh 80
}
plugin {
  autocreate = Trash
  autocreate2 = Spam
  autocreate3 = Sent
  autocreate4 = Drafts
  autosubscribe = Trash
  autosubscribe2 = Spam
  autosubscribe3 = Sent
  autosubscribe4 = Drafts
}
plugin {
  antispam_signature = X-Spam-Flag
  antispam_signature_missing = move  # move silently without training
  antispam_trash = trash;Trash;Deleted Items; Deleted Messages
  antispam_spam = SPAM;Spam;spam;Junk;junk
  antispam_mail_sendmail = /usr/local/bin/sa-learn
  antispam_mail_sendmail_args = --username=%u
  antispam_mail_spam = --spam
  antispam_mail_notspam = --ham
  antispam_mail_tmpdir = /tmp
}

###############::::::::dovecot-sql.conf:::::::##################################

driver = mysql
connect = host=localhost dbname=mail user=postfix password=postfix
default_pass_scheme = PLAIN
password_query = SELECT email as user, password FROM users WHERE email = '%u'
user_query = SELECT id as uid, id as gid, home, concat('*:storage=', quota, 'M') AS quota_rule FROM users WHERE email = '%u'

################### dovecot -n ########################################

# 2.0.13: /etc/dovecot/dovecot.conf
# OS: OpenBSD 5.0 i386 ffs
auth_mechanisms = plain login digest-md5 cram-md5 apop
base_dir = /var/dovecot/
default_internal_user = _dovecot
default_login_user = _dovecot
first_valid_uid = 1000
mail_location = maildir:/var/mailserv/mail/%d/%n
mmap_disable = yes
passdb {
  args = /etc/dovecot/dovecot-sql.conf
  driver = sql
}
plugin {
  antispam_mail_notspam = --ham
  antispam_mail_sendmail = /usr/local/bin/sa-learn
  antispam_mail_sendmail_args = --username=%u
  antispam_mail_spam = --spam
  antispam_mail_tmpdir = /tmp
  antispam_signature = X-Spam-Flag
  antispam_signature_missing = move
  antispam_spam = SPAM;Spam;spam;Junk;junk
  antispam_trash = trash;Trash;Deleted Items; Deleted Messages
  autocreate = Trash
  autocreate2 = Spam
  autocreate3 = Sent
  autocreate4 = Drafts
  autosubscribe = Trash
  autosubscribe2 = Spam
  autosubscribe3 = Sent
  autosubscribe4 = Drafts
  quota = maildir
  quota_rule = *:storage=5G
  quota_rule2 = Trash:storage=100M
  quota_warning = storage=95%% /usr/local/bin/quota-warning.sh 95
  quota_warning2 = storage=80%% /usr/local/bin/quota-warning.sh 80
  sieve = ~/.dovecot.sieve
  sieve_storage = ~/sieve
}
protocols = imap pop3
service auth {
  unix_listener /var/run/dovecot-auth-master {
    group = _dovecot
    mode = 0600
    user = _dovecot
  }
  unix_listener /var/spool/postfix/private/auth {
    group = _postfix
    mode = 0660
    user = _postfix
  }
  user = root
}
service imap-login {
  service_count = 0
  vsz_limit = 64 M
}
service pop3-login {
  service_count = 0
  vsz_limit = 64 M
}
ssl_cert = </etc/ssl/dovecotcert.pem
ssl_cipher_list = HIGH:MEDIUM:+TLSv1:!SSLv2:+SSLv3
ssl_key = </etc/ssl/private/dovecot.pem
userdb {
  args = /etc/dovecot/dovecot-sql.conf
  driver = sql
}
protocol imap {
  imap_client_workarounds = delay-newmail
  mail_plugins = quota imap_quota autocreate
}
protocol pop3 {
  mail_plugins = quota
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
  pop3_uidl_format = %08Xv%08Xu
}
protocol lda {
  auth_socket_path = /var/run/dovecot-auth-master
  mail_plugins = sieve quota
  postmaster_address = postmaster at mailr130.localdomain
  sendmail_path = /usr/sbin/sendmail
}

Cheers,

Wesley M.
www.mouedine.net

From Ralf.Hildebrandt at charite.de Wed Jan 4 16:06:40 2012
From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt)
Date: Wed, 4 Jan 2012 15:06:40 +0100
Subject: [Dovecot] doveadm move from one user's mailbox to another user's mailbox?
Message-ID: <20120104140640.GT5536@charite.de>

Is something along the lines of:

doveadm move -u sourceuser destinationuser:/inbox search_query

possible with 2.0.16? I want to move mails from a backup mailbox (which has no valid password assigned) to a "restore" mailbox (which *HAS* a password assigned to it).

-- 
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebrandt at charite.de | http://www.charite.de

From tss at iki.fi Wed Jan 4 16:11:26 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 04 Jan 2012 16:11:26 +0200
Subject: [Dovecot] doveadm move from one user's mailbox to another user's mailbox?
In-Reply-To: <20120104140640.GT5536@charite.de>
References: <20120104140640.GT5536@charite.de>
Message-ID: <1325686286.6987.82.camel@innu>

On Wed, 2012-01-04 at 15:06 +0100, Ralf Hildebrandt wrote:
> Is something along the lines of:
> doveadm move -u sourceuser destinationuser:/inbox search_query
> possible with 2.0.16?
>
> I want to move mails from a backup mailbox (which has no valid
> password assigned) to a "restore" mailbox (which *HAS* a password
> assigned to it).

I guess:

doveadm import -u dest maildir:/source/Maildir "" search_query

There's no direct command to move mails between users. Or you could create a shared namespace..
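Filled out with concrete names, that import might look like the sketch
below - the user, path, and search query are placeholders, not values
from Ralf's setup:

# copy mails from the backup user's maildir into restoreuser's account,
# restricted to messages matching the search query
doveadm import -u restoreuser maildir:/home/copymail/Maildir "" mailbox INBOX SINCE 2011-12-01

The "" argument is the destination parent mailbox; left empty, the
imported mails keep their original mailbox names under the destination
user's own namespace.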
From Ralf.Hildebrandt at charite.de Wed Jan 4 16:33:01 2012
From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt)
Date: Wed, 4 Jan 2012 15:33:01 +0100
Subject: [Dovecot] doveadm move from one user's mailbox to another user's mailbox?
In-Reply-To: <1325686286.6987.82.camel@innu>
References: <20120104140640.GT5536@charite.de> <1325686286.6987.82.camel@innu>
Message-ID: <20120104143301.GU5536@charite.de>

* Timo Sirainen :
> On Wed, 2012-01-04 at 15:06 +0100, Ralf Hildebrandt wrote:
>> Is something along the lines of:
>> doveadm move -u sourceuser destinationuser:/inbox search_query
>> possible with 2.0.16?
>>
>> I want to move mails from a backup mailbox (which has no valid
>> password assigned) to a "restore" mailbox (which *HAS* a password
>> assigned to it).
>
> I guess:
>
> doveadm import -u dest maildir:/source/Maildir "" search_query

Yes, just the other way round. It's even better, since the MOVE operation would have REMOVED the mails from the backup. IMPORT instead only copies what it needs.

-- 
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebrandt at charite.de | http://www.charite.de

From ludek.finstrle at pzkagis.cz Wed Jan 4 17:11:41 2012
From: ludek.finstrle at pzkagis.cz (Ludek Finstrle)
Date: Wed, 4 Jan 2012 16:11:41 +0100
Subject: [Dovecot] Small LOGIN_MAX_INBUF_SIZE for GSSAPI with samba4 (AD)
In-Reply-To: <1325589389.6987.55.camel@innu>
References: <20120102182014.GA20872@pzkagis.cz> <1325589389.6987.55.camel@innu>
Message-ID: <20120104151141.GA5755@pzkagis.cz>

Hi Timo,

Tue, Jan 03, 2012 at 01:16:29PM +0200, Timo Sirainen wrote:
> On Mon, 2012-01-02 at 19:20 +0100, Ludek Finstrle wrote:
>>> Jan 2 17:58:42 server dovecot: imap-login: Disconnected: Input buffer full (no auth attempts): rip=192.167.14.16, lip=192.167.14.16, secured
>> ..
>> I fixed this problem by enlarging LOGIN_MAX_INBUF_SIZE. I also read about wrong lower/uppercase,
>> but that's definitely not my problem (I tried all lower/uppercase possibilities in the login).
>>
>> I sniffed the plain communication and the "a0000 AUTHENTICATE GSSAPI" line has around 1873 chars.
>> When I enlarged LOGIN_MAX_INBUF_SIZE to 2048 the problem disappeared and I'm now able to log in
>> to dovecot using GSSAPI in the mutt client.
>
> There was already code that allowed 16kB SASL messages, but that didn't
> work for the initial SASL response with the IMAP SASL-IR extension.
>
>> The simple patch I have to use is attached.
>
> I increased it to 4 kB:
> http://hg.dovecot.org/dovecot-2.0/rev/d06061408f6d

Thank you very much for such a fast reaction and for such a good piece of software.

Luf

From bra at fsn.hu Wed Jan 4 17:19:33 2012
From: bra at fsn.hu (Attila Nagy)
Date: Wed, 04 Jan 2012 16:19:33 +0100
Subject: [Dovecot] assertion failed in mail-index.c
Message-ID: <4F046E05.8070000@fsn.hu>

Hi,

I have this:
Thanks, From tss at iki.fi Wed Jan 4 17:28:15 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 04 Jan 2012 17:28:15 +0200 Subject: [Dovecot] assertion failed in mail-index.c In-Reply-To: <4F046E05.8070000@fsn.hu> References: <4F046E05.8070000@fsn.hu> Message-ID: <1325690895.6987.88.camel@innu> On Wed, 2012-01-04 at 16:19 +0100, Attila Nagy wrote: > Hi, > > I have this: > Jan 04 15:55:21 pop3(jfm47): Panic: file mail-index.c: line 257 > (mail_index_keyword_lookup_or_create): assertion failed: (*keyword != '\0') > Jan 04 15:55:21 master: Error: service(pop3): child 3391 killed with > signal 6 (core not dumped - set service pop3 { drop_priv_before_exec=yes }) > I don't know why this happened, but wouldn't be the "self healing" (seen > in the wiki I think :) kick in here? > I mean it's even better to completely remove the index than dying and > make the mailbox inaccessible. See if http://hg.dovecot.org/dovecot-2.0/raw-rev/5ef791398c8c helps. If not, I'd need a gdb backtrace to find out what is causing it: http://dovecot.org/bugreport.html From sottilette at rfx.it Wed Jan 4 19:08:52 2012 From: sottilette at rfx.it (sottilette at rfx.it) Date: Wed, 4 Jan 2012 18:08:52 +0100 (CET) Subject: [Dovecot] POP3 problems Message-ID: Migrated a 1.0.2 server to 2.0.16 (same old box). IMAP seems working Ok. POP3 give problems with some clients (Outlook 2010 and Thunderbird reported). Seems authentication problem Below my doveconf -n (debug enbled, but no answer found to the problems) Any hints? Thanks, P. # doveconf -n # 2.0.16: /etc/dovecot/dovecot.conf doveconf: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:791: add auth_ prefix to all settings inside auth {} and remove the auth {} section completely doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:864: passdb {} has been replaced by passdb { driver= } doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:935: userdb passwd {} has been replaced by userdb { driver=passwd } doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:998: auth_user has been replaced by service auth { user } doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:1131: ssl_disable has been renamed to ssl # OS: Linux 2.6.9-42.0.10.ELsmp i686 CentOS release 4.9 (Final) auth_debug = yes auth_debug_passwords = yes auth_mechanisms = plain login auth_verbose = yes disable_plaintext_auth = no info_log_path = /var/log/mail/dovecot.info.log listen = * log_path = /var/log/mail/dovecot.log mail_full_filesystem_access = yes mail_location = mbox:~/:INBOX=/var/mail/%u mbox_read_locks = dotlock fcntl passdb { driver = pam } protocols = pop3 imap service auth { user = root } ssl = no ssl_cert = /etc/pki/dovecot/certs/dovecot.pem ssl_key = /etc/pki/dovecot/private/dovecot.pem userdb { driver = passwd } userdb { driver = passwd } protocol lda { postmaster_address = postmaster at example.com } From wgillespie+dovecot at es2eng.com Wed Jan 4 19:16:08 2012 From: wgillespie+dovecot at es2eng.com (Willie Gillespie) Date: Wed, 04 Jan 2012 10:16:08 -0700 Subject: [Dovecot] POP3 problems In-Reply-To: References: Message-ID: <4F048958.5070208@es2eng.com> On 01/04/2012 10:08 AM, sottilette at rfx.it wrote: > Migrated a 1.0.2 server to 2.0.16 (same old box). 
From sottilette at rfx.it Wed Jan 4 19:08:52 2012
From: sottilette at rfx.it (sottilette at rfx.it)
Date: Wed, 4 Jan 2012 18:08:52 +0100 (CET)
Subject: [Dovecot] POP3 problems
Message-ID: 

Migrated a 1.0.2 server to 2.0.16 (same old box).
IMAP seems to be working OK.
POP3 gives problems with some clients (Outlook 2010 and Thunderbird reported).
It seems to be an authentication problem.
Below is my doveconf -n (debug enabled, but no answer to the problems found in the logs).
Any hints?

Thanks,
P.

# doveconf -n
# 2.0.16: /etc/dovecot/dovecot.conf
doveconf: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf
doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:791: add auth_ prefix to all settings inside auth {} and remove the auth {} section completely
doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:864: passdb {} has been replaced by passdb { driver= }
doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:935: userdb passwd {} has been replaced by userdb { driver=passwd }
doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:998: auth_user has been replaced by service auth { user }
doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:1131: ssl_disable has been renamed to ssl
# OS: Linux 2.6.9-42.0.10.ELsmp i686 CentOS release 4.9 (Final)
auth_debug = yes
auth_debug_passwords = yes
auth_mechanisms = plain login
auth_verbose = yes
disable_plaintext_auth = no
info_log_path = /var/log/mail/dovecot.info.log
listen = *
log_path = /var/log/mail/dovecot.log
mail_full_filesystem_access = yes
mail_location = mbox:~/:INBOX=/var/mail/%u
mbox_read_locks = dotlock fcntl
passdb {
  driver = pam
}
protocols = pop3 imap
service auth {
  user = root
}
ssl = no
ssl_cert = /etc/pki/dovecot/certs/dovecot.pem
ssl_key = /etc/pki/dovecot/private/dovecot.pem
userdb {
  driver = passwd
}
userdb {
  driver = passwd
}
protocol lda {
  postmaster_address = postmaster at example.com
}

From wgillespie+dovecot at es2eng.com Wed Jan 4 19:16:08 2012
From: wgillespie+dovecot at es2eng.com (Willie Gillespie)
Date: Wed, 04 Jan 2012 10:16:08 -0700
Subject: [Dovecot] POP3 problems
In-Reply-To: 
References: 
Message-ID: <4F048958.5070208@es2eng.com>

On 01/04/2012 10:08 AM, sottilette at rfx.it wrote:
> Migrated a 1.0.2 server to 2.0.16 (same old box).

Some of the configuration settings changed between 1.x and 2.x:

> doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:791: add auth_ prefix to all settings inside auth {} and remove the auth {} section completely
> doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:864: passdb {} has been replaced by passdb { driver= }
> doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:935: userdb passwd {} has been replaced by userdb { driver=passwd }
> doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:998: auth_user has been replaced by service auth { user }
> doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:1131: ssl_disable has been renamed to ssl

You'll probably want to make sure everything is correct as per a 2.x config.

From tss at iki.fi Wed Jan 4 19:24:28 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 4 Jan 2012 19:24:28 +0200
Subject: [Dovecot] POP3 problems
In-Reply-To: 
References: 
Message-ID: <55076E79-BD81-455B-BD68-9ABCFE53ED22@iki.fi>

On 4.1.2012, at 19.08, sottilette at rfx.it wrote:

> Migrated a 1.0.2 server to 2.0.16 (same old box).
> IMAP seems to be working OK.
> POP3 gives problems with some clients (Outlook 2010 and Thunderbird reported).
> It seems to be an authentication problem.
> Below is my doveconf -n (debug enabled, but no answer to the problems found in the logs).

What do the logs say when a client logs in? The debug logs should tell everything.

> doveconf: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf

You should do this and replace your old dovecot.conf with the new generated one.

> userdb {
>   driver = passwd
> }
> userdb {
>   driver = passwd
> }

Also remove the duplicated userdb passwd.

From sottilette at rfx.it Wed Jan 4 20:11:47 2012
From: sottilette at rfx.it (sottilette at rfx.it)
Date: Wed, 4 Jan 2012 19:11:47 +0100 (CET)
Subject: [Dovecot] POP3 problems
In-Reply-To: <55076E79-BD81-455B-BD68-9ABCFE53ED22@iki.fi>
References: <55076E79-BD81-455B-BD68-9ABCFE53ED22@iki.fi>
Message-ID: 

On Wed, 4 Jan 2012, Timo Sirainen wrote:

>> Migrated a 1.0.2 server to 2.0.16 (same old box).
>> IMAP seems to be working OK.
>> POP3 gives problems with some clients (Outlook 2010 and Thunderbird reported).
>> It seems to be an authentication problem.
>> Below is my doveconf -n (debug enabled, but no answer to the problems found in the logs).
>
> What do the logs say when a client logs in? The debug logs should tell
> everything.

Yes, but my problem is that this is a production server with a log that grows really fast, so (with my limited dovecot skills) I have some difficulty selecting the interesting rows from it (I hoped this period would be less busy, but my customers don't have the same idea ... ;-) )
Thanks for any hints on selecting the interesting rows.

>> doveconf: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf
>
> You should do this and replace your old dovecot.conf with the new generated one.
>
>> userdb {
>>   driver = passwd
>> }
>> userdb {
>>   driver = passwd
>> }
>
> Also remove the duplicated userdb passwd.

This was an experimental config manually derived from the old 1.0.2 one (a mix of the old working settings and new ones).

If I replace it with a new config (below), authentication seems OK, but fetching mail from the client is very slow (compared with the old 1.0.2).

Thanks for your very fast support ;-)

P.
# doveconf -n
# 2.0.16: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.9-42.0.10.ELsmp i686 CentOS release 4.9 (Final)
auth_mechanisms = plain login
disable_plaintext_auth = no
info_log_path = /var/log/mail/dovecot.info.log
log_path = /var/log/mail/dovecot.log
mail_full_filesystem_access = yes
mail_location = mbox:~/:INBOX=/var/mail/%u
mbox_read_locks = dotlock fcntl
passdb {
  driver = pam
}
protocols = imap pop3
ssl_cert = 
References: <55076E79-BD81-455B-BD68-9ABCFE53ED22@iki.fi>
Message-ID: <4F04B6B2.1030903@enas.net>

On 04.01.2012 19:11, sottilette at rfx.it wrote:
> On Wed, 4 Jan 2012, Timo Sirainen wrote:
>
>>> Migrated a 1.0.2 server to 2.0.16 (same old box).
>>> IMAP seems to be working OK.
>>> POP3 gives problems with some clients (Outlook 2010 and Thunderbird
>>> reported).
>>> It seems to be an authentication problem.
>>> Below is my doveconf -n (debug enabled, but no answer to the
>>> problems found in the logs).
>>
>> What do the logs say when a client logs in? The debug logs should
>> tell everything.
>
> Yes, but my problem is that this is a production server with a log
> that grows really fast, so (with my limited dovecot skills) I have some
> difficulty selecting the interesting rows from it (I hoped this period
> would be less busy, but my customers don't have the same idea ... ;-) )
> Thanks for any hints on selecting the interesting rows.

Try to run "tail -f $MAILLOG | grep $USERNAME" until the user logs in and tries to fetch his emails.

$MAILLOG = logfile to which dovecot logs all info
$USERNAME = username of your client which has the problems

>>> doveconf: Warning: NOTE: You can get a new clean config file with:
>>> doveconf -n > dovecot-new.conf
>>
>> You should do this and replace your old dovecot.conf with the new
>> generated one.
>>
>>> userdb {
>>> driver = passwd
>>> }
>>> userdb {
>>> driver = passwd
>>> }
>>
>> Also remove the duplicated userdb passwd.
>
> This was an experimental config manually derived from the old 1.0.2
> one (a mix of the old working settings and new ones).
>
> If I replace it with a new config (below), authentication seems OK,
> but fetching mail from the client is very slow (compared with the old 1.0.2).
>
> Thanks for your very fast support ;-)
>
> P.
>
> # doveconf -n
> # 2.0.16: /etc/dovecot/dovecot.conf
> # OS: Linux 2.6.9-42.0.10.ELsmp i686 CentOS release 4.9 (Final)
> auth_mechanisms = plain login
> disable_plaintext_auth = no
> info_log_path = /var/log/mail/dovecot.info.log
> log_path = /var/log/mail/dovecot.log
> mail_full_filesystem_access = yes
> mail_location = mbox:~/:INBOX=/var/mail/%u
> mbox_read_locks = dotlock fcntl
> passdb {
> driver = pam
> }
> protocols = imap pop3
> ssl_cert = ssl_key = userdb {
> driver = passwd
> }
> protocol pop3 {
> pop3_uidl_format = %08Xu%08Xv
> }

From dmiller at amfes.com Thu Jan 5 02:24:40 2012
From: dmiller at amfes.com (Daniel L. Miller)
Date: Wed, 04 Jan 2012 16:24:40 -0800
Subject: [Dovecot] Possible mdbox corruption
Message-ID: 

I thought I had cleared out the corruption I had before - perhaps I was mistaken. What steps should I take to help locate these issues? Currently using 2.1rc1.
I see the following errors in my logs, including out of memory and message size issues (at 15:30):

Jan 4 05:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x3ed0a) [0x7f6e17cbfd0a] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3ed56) [0x7f6e17cbfd56] -> /usr/local/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7f6e17c98d08] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x4f310) [0x7f6e17cd0310] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3b965) [0x7f6e17cbc965] -> /usr/local/lib/dovecot/libdovecot.so.0(buffer_write+0x7c) [0x7f6e17cbd0ec] -> /usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3292) [0x7f6e164b7292] -> /usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3a97) [0x7f6e164b7a97] -> /usr/local/lib/dovecot/lib20_fts_plugin.so(fts_backend_update_set_build_key+0x2c) [0x7f6e166c4abc] -> /usr/local/lib/dovecot/lib20_fts_plugin.so(fts_build_mail+0x2d1) [0x7f6e166c5561] -> /usr/local/lib/dovecot/lib20_fts_plugin.so(+0xc630) [0x7f6e166ca630] -> dovecot/indexer-worker [user1 at domain.com Sent - 5500/7510]() [0x40245f] -> dovecot/indexer-worker [user1 at domain.com Sent - 5500/7510]() [0x4027dd] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7f6e17ccc0f6] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f6e17ccd17f] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f6e17ccc098] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f6e17cb9123] -> dovecot/indexer-worker [user1 at domain.com Sent - 5500/7510](main+0x109) [0x401f29] -> /lib/libc.so.6(__libc_start_main+0xfe) [0x7f6e1791cd8e] -> dovecot/indexer-worker [user1 at domain.com Sent - 5500/7510]() [0x401d19]
Jan 4 05:17:17 bubba dovecot: indexer: Error: Indexer worker disconnected, discarding 1 requests for user1 at domain.com
Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it))

[The identical failure then repeats every hour from 06:17 through 16:17: each time "indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory" with the same fts_solr/fts backtrace (only the timestamps, child PIDs, mailbox counters and library load addresses differ), followed by the indexer discarding the request and the master reporting error 83.]

Jan 4 15:30:48 bubba dovecot: imap(user2 at domain.com): Error: Cached message size smaller than expected (822 < 1493)
Jan 4 15:30:48 bubba dovecot: imap(user2 at domain.com): Error: Corrupted index cache file /var/mail/amfes.com/lmiller/mdbox/mailboxes/Sent/dbox-Mails/dovecot.index.cache: Broken physical size for mail UID 1786
Jan 4 15:30:48 bubba dovecot: imap(user2 at domain.com): Error: read(/var/mail/amfes.com/lmiller/mdbox/storage/m.208) failed: Input/output error (FETCH for mailbox Sent UID 1786)

Jan 4 16:17:20 bubba dovecot: indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory
Jan 4 16:17:20 bubba dovecot: master: Error: service(indexer-worker): child 22927 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it))
Jan 4 16:17:20 bubba dovecot: indexer: Error: Indexer worker disconnected, discarding 1 requests for user1 at domain.com

-- 
Daniel
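Since every one of the hourly kills above cites the same limit, the obvious first experiment is raising it for the indexer worker; a sketch of the 2.x-style setting, where the 1 GB figure is an arbitrary example rather than a tested recommendation:

  service indexer-worker {
    vsz_limit = 1G
  }

Whether fts-solr should be trying to buffer a 128 MB (134217728-byte) allocation per message in the first place is a separate question from the index cache corruption at 15:30.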
From user+dovecot at localhost.localdomain.org Thu Jan 5 03:19:37 2012
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Date: Thu, 05 Jan 2012 02:19:37 +0100
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F0367A2.1000807@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com>
Message-ID: <4F04FAA9.3020908@localhost.localdomain.org>

On 01/03/2012 09:40 PM Charles Marcus wrote:
> Hi everyone,
>
> Was just perusing this article about how trivial it is to decrypt
> passwords that are stored using most (standard) encryption methods (like
> MD5), and was wondering - is it possible to use bcrypt with
> dovecot+postfix+mysql (or postgres)?

Yes, it is possible to use bcrypt with dovecot. Currently you only have to write your own password scheme plugin. The bcrypt algorithm is described at http://en.wikipedia.org/wiki/Bcrypt.

If you are using Dovecot >= 2.0, 'doveadm pw' supports the schemes:
  *BSD: Blowfish-Crypt
  *Linux (since glibc 2.7): SHA-256-Crypt and SHA-512-Crypt
Some distributions have also added support for Blowfish-Crypt.
See also: doveadm-pw(1)

If you are using Dovecot < 2.0 you can also use any of the algorithms supported by your system's libc. But then you have to prefix the hashes with {CRYPT} - not {{BLF,SHA256,SHA512}-CRYPT}.

Regards,
Pascal
-- 
The trapper recommends today: deadbeef.1200501 at localdomain.org
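As a concrete illustration of the schemes Pascal lists, 'doveadm pw' generates such hashes directly; the outputs below are truncated placeholders, and BLF-CRYPT only works where the system's crypt() supports Blowfish:

  $ doveadm pw -s SHA512-CRYPT -p secret
  {SHA512-CRYPT}$6$...
  $ doveadm pw -s BLF-CRYPT -p secret
  {BLF-CRYPT}$2a$...

The {SCHEME} prefix is what tells Dovecot which verification algorithm to apply to each stored hash.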
From noel.butler at ausics.net Thu Jan 5 03:59:12 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 11:59:12 +1000
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F03B25B.2020309@orlitzky.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com>
Message-ID: <1325728752.9555.8.camel@tardis>

On Tue, 2012-01-03 at 20:58 -0500, Michael Orlitzky wrote:
> To prevent rainbow table attacks, salt your passwords. You can make them
> a little bit more difficult in plenty of ways, but salt is the /solution/.

Agreed...
We use Crypt::PasswdMD5 - unix_md5_crypt() for all general password storage including mail/ftp etc, except for web, where we need to use apache_md5_crypt().
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 490 bytes
Desc: This is a digitally signed message part
URL: 

From patrickdk at patrickdk.com Thu Jan 5 04:06:44 2012
From: patrickdk at patrickdk.com (Patrick Domack)
Date: Wed, 04 Jan 2012 21:06:44 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325728752.9555.8.camel@tardis>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <1325728752.9555.8.camel@tardis>
Message-ID: <20120104210644.Horde.YEJENpLnE6FPBQW0C1KEd8A@kishi.patrickdk.com>

Quoting Noel Butler :
> On Tue, 2012-01-03 at 20:58 -0500, Michael Orlitzky wrote:
>
>> To prevent rainbow table attacks, salt your passwords. You can make them
>> a little bit more difficult in plenty of ways, but salt is the /solution/.
>
> Agreed...
> We use Crypt::PasswdMD5 -
> unix_md5_crypt() for all general password storage including mail/ftp
> etc, except for web, where we need to use apache_md5_crypt().

But still, the results are all the same: if they get the hash, it can be broken, given time. Using more cpu expensive methods makes it take longer (like adding salt, or a more complex hash). But the end result is they will have it if they want it.

The only solution is to use two-factor authentication, or rotate your passwords quicker than they can get broken.

From user+dovecot at localhost.localdomain.org Thu Jan 5 04:26:59 2012
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Date: Thu, 05 Jan 2012 03:26:59 +0100
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325728752.9555.8.camel@tardis>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <1325728752.9555.8.camel@tardis>
Message-ID: <4F050A73.7090300@localhost.localdomain.org>

On 01/05/2012 02:59 AM Noel Butler wrote:
> We use Crypt::PasswdMD5 -
> unix_md5_crypt() for all general password storage including mail/ftp
> etc, except for web, where we need to use apache_md5_crypt().

Huh, why do you need to store passwords in Apache's md5 crypt() format?

,--[ Apache config ]--
| AuthType Basic
| AuthName "bla ?"
| AuthBasicProvider dbm
| AuthDBMUserFile /path/2/.htpasswd
| Require valid-user
| Order allow,deny
| Allow from 203.0.113.0/24 2001:db8::/32
| Satisfy any
`--

,--[ stdin/stdout ]--
| user at localhost ~ $ python
| Python 2.5.4 (r254:67916, Feb 17 2009, 20:16:45)
| [GCC 4.3.3] on linux2
| Type "help", "copyright", "credits" or "license" for more information.
| >>> import anydbm
| >>> dbm = anydbm.open('/path/2/.htpasswd')
| >>> dbm['user']
| '$6$Rn6L.3hT2x6dnX0t$d0/Tx.Ps3KSRxxm.ggFBYqum54/k8JmDzUcpoCXre88cBEXK8WB.Vdb1YzN.8fOvz3fJU4uLgW0/AlTiB9Ui2.::Real Name'
| >>>
`--

Regards,
Pascal
-- 
The trapper recommends today: deadbeef.1200503 at localdomain.org

From noel.butler at ausics.net Thu Jan 5 04:31:37 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 12:31:37 +1000
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <20120104210644.Horde.YEJENpLnE6FPBQW0C1KEd8A@kishi.patrickdk.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <1325728752.9555.8.camel@tardis> <20120104210644.Horde.YEJENpLnE6FPBQW0C1KEd8A@kishi.patrickdk.com>
Message-ID: <1325730697.9555.15.camel@tardis>

On Wed, 2012-01-04 at 21:06 -0500, Patrick Domack wrote:
> Quoting Noel Butler :
> > On Tue, 2012-01-03 at 20:58 -0500, Michael Orlitzky wrote:
> >
> >> To prevent rainbow table attacks, salt your passwords. You can make them
> >> a little bit more difficult in plenty of ways, but salt is the /solution/.
> >
> > Agreed...
> > We use Crypt::PasswdMD5 -
> > unix_md5_crypt() for all general password storage including mail/ftp
> > etc, except for web, where we need to use apache_md5_crypt().
>
> But still, the results are all the same: if they get the hash, it can
> be broken, given time.
> Using more cpu expensive methods makes it take
> longer (like adding salt, or a more complex hash). But the end result is
> they will have it if they want it.
>
> The only solution is to use two-factor authentication, or rotate your
> passwords quicker than they can get broken.

Yup, anything can be broken, given time and resources, no matter what, but using crypted MD5 is better than using normal md5 (like sadly way too many use) and having easy rainbow attacks succeed in mere seconds.
No matter how good your database security is, always assume the worst: too many think that a DB compromise just can't happen to them, and as Murphy's law shows, they're usually the ones it does happen to.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 490 bytes
Desc: This is a digitally signed message part
URL: 

From noel.butler at ausics.net Thu Jan 5 04:36:38 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 12:36:38 +1000
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F050A73.7090300@localhost.localdomain.org>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <1325728752.9555.8.camel@tardis> <4F050A73.7090300@localhost.localdomain.org>
Message-ID: <1325730998.9555.21.camel@tardis>

On Thu, 2012-01-05 at 03:26 +0100, Pascal Volk wrote:
> On 01/05/2012 02:59 AM Noel Butler wrote:
> > We use Crypt::PasswdMD5 -
> > unix_md5_crypt() for all general password storage including mail/ftp
> > etc, except for web, where we need to use apache_md5_crypt().
>
> Huh, why do you need to store passwords in Apache's md5 crypt() format?

Because with multiple servers, we store them all in (replicated)
mysql :) (the same with postfix/dovecot).
and as I'm sure you are aware, Apache does not understand standard crypted MD5, hence why there is the second option of apache_md5_crypt()

> ,--[ Apache config ]--
> | AuthType Basic
> | AuthName "bla ?"
> | AuthBasicProvider dbm
> | AuthDBMUserFile /path/2/.htpasswd
> | Require valid-user
> | Order allow,deny
> | Allow from 203.0.113.0/24 2001:db8::/32
> | Satisfy any
> `--
-------------- next part --------------
A non-text attachment was scrubbed...
Name: face-smile.png
Type: image/png
Size: 873 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 490 bytes
Desc: This is a digitally signed message part
URL: 

From user+dovecot at localhost.localdomain.org Thu Jan 5 05:05:53 2012
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Date: Thu, 05 Jan 2012 04:05:53 +0100
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325730998.9555.21.camel@tardis>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <1325728752.9555.8.camel@tardis> <4F050A73.7090300@localhost.localdomain.org> <1325730998.9555.21.camel@tardis>
Message-ID: <4F051391.404@localhost.localdomain.org>

On 01/05/2012 03:36 AM Noel Butler wrote:
>
> Because with multiple servers, we store them all in (replicated)
> mysql :) (the same with postfix/dovecot).
> and as I'm sure you are aware, Apache does not understand standard
> crypted MD5, hence why there is the second option of apache_md5_crypt()

Oh, let me guess: You are using Windows, Netware, TPF as OS for your web servers? ;-)

man htpasswd | grep -- '-d '
  -d  Use crypt() encryption for passwords. This is not supported by the httpd server on Windows and Netware and TPF.

As you may have seen in my previous mail, the password is generated using crypt(). HTTP Authentication works with that password hash, even with the httpd from the ASF.

Regards,
Pascal
-- 
The trapper recommends today: cafefeed.1200504 at localdomain.org

From david at blue-labs.org Thu Jan 5 05:16:15 2012
From: david at blue-labs.org (David Ford)
Date: Wed, 04 Jan 2012 22:16:15 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325730998.9555.21.camel@tardis>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <1325728752.9555.8.camel@tardis> <4F050A73.7090300@localhost.localdomain.org> <1325730998.9555.21.camel@tardis>
Message-ID: <4F0515FF.9050101@blue-labs.org>

> Because with multiple servers, we store them all in (replicated) mysql
> :) (the same with postfix/dovecot). and as I'm sure you are aware,
> Apache does not understand standard crypted MD5, hence why there is
> the second option of apache_md5_crypt()

with multiple servers, we use pam & nss, with a replicated ldap backend. this serves all auth requests for all services and no service cares if it is sha, md5, or a crypt method.

-d

From noel.butler at ausics.net Thu Jan 5 09:19:10 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 17:19:10 +1000
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F051391.404@localhost.localdomain.org>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <1325728752.9555.8.camel@tardis> <4F050A73.7090300@localhost.localdomain.org> <1325730998.9555.21.camel@tardis> <4F051391.404@localhost.localdomain.org>
Message-ID: <1325747950.5349.31.camel@tardis>

On Thu, 2012-01-05 at 04:05 +0100, Pascal Volk wrote:
> On 01/05/2012 03:36 AM Noel Butler wrote:
> >
> > Because with multiple servers, we store them all in (replicated)
> > mysql :) (the same with postfix/dovecot).
> > and as I'm sure you are aware, Apache does not understand standard
> > crypted MD5, hence why there is the second option of apache_md5_crypt()
>
> Oh, let me guess: You are using Windows, Netware, TPF as OS for your
> web servers? ;-)
>
> man htpasswd | grep -- '-d '
>   -d  Use crypt() encryption for passwords. This is not supported by the httpd server on Windows and Netware and TPF.
>
> As you may have seen in my previous mail, the password is generated
> using crypt(). HTTP Authentication works with that password hash, even
> with the httpd from the ASF.

I think you need to do some homework, and although I now have 3.25 days of holidays remaining, I don't intend to waste them educating anybody hehe.
Assuming you even know what I'm talking about, which I suspect you don't since you keep using console commands and things like htpasswd, which does not write to a mysql db, you don't seem to have comprehended that I do not work with flat files nor locally, so it is irrelevant. I use perl scripts for all systems management, so I hope you are not going to suggest that I should make a system call when I can do it natively in perl.

But please, by all means, create a mysql db using a system crypted md5 password, I'll even help ya: openssl passwd -1 foobartilly

$1$e3a.f3uW$SYRQiMlEhC5XlnSxtxiNC/

pop the entry into the db and go for your life trying to authenticate.

and when you've gone through half a bottle of bourbon trying to figure out why it's not working, try the apache crypted md5 version $apr1$yKxk.DrQ$ybcmM8mC1qD5t5FvoY9820

If you stop and think about what I've said, you just might wake up to what I've been saying.

Cheers

PS me use windaz? wash your bloody mouth out mister! ;) (Slackware all the way)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: face-wink.png
Type: image/png
Size: 876 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 490 bytes
Desc: This is a digitally signed message part
URL: 

From noel.butler at ausics.net Thu Jan 5 09:28:10 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 17:28:10 +1000
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F0515FF.9050101@blue-labs.org>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <1325728752.9555.8.camel@tardis> <4F050A73.7090300@localhost.localdomain.org> <1325730998.9555.21.camel@tardis> <4F0515FF.9050101@blue-labs.org>
Message-ID: <1325748490.5349.37.camel@tardis>

On Wed, 2012-01-04 at 22:16 -0500, David Ford wrote:
>
> with multiple servers, we use pam & nss, with a replicated ldap backend.

public accessible mode :P oh don't start me on that, but luckily I'm not subjected to its dangers... and telling Pascal bout Bourbon made me realise it's time to head out for the last couple of nights of freedom and have a few.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: face-raspberry.png
Type: image/png
Size: 865 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 490 bytes
Desc: This is a digitally signed message part
URL: 

From openbsd at e-solutions.re Thu Jan 5 09:45:06 2012
From: openbsd at e-solutions.re (Wesley M.)
Date: Thu, 05 Jan 2012 11:45:06 +0400
Subject: [Dovecot] dovecot-lda error
Message-ID: 

Hi,

I use Dovecot 2.0.13 on OpenBSD 5.0.
When I try to send emails I have the following error in /var/log/maillog:

Jan 5 11:23:49 mail50 postfix/pipe[29423]: D951842244C: to=, relay=dovecot, delay=0.02, delays=0.01/0/0/0.01, dsn=5.3.0, status=bounced (command line usage error.
Command output: deliver: unknown option -- n Usage: dovecot-lda [-c ] [-a ] [-d ] [-p ] [-f ] [-m ] [-e] [-k] )
Jan 5 11:23:49 mail50 postfix/qmgr[13787]: D951842244C: removed

In my /etc/postfix/master.cf:

# Dovecot LDA
dovecot unix - n n - - pipe
  flags=ODR user=_dovecot:_dovecot argv=/usr/local/libexec/dovecot/deliver -f ${sender} -d ${user}@${nexthop} -n -m ${extension}

How can I resolve that?
Thank you very much for your replies.

Cheers,

Wesley.

From e-frog at gmx.de Thu Jan 5 10:39:50 2012
From: e-frog at gmx.de (e-frog)
Date: Thu, 05 Jan 2012 09:39:50 +0100
Subject: [Dovecot] dovecot-lda error
In-Reply-To: 
References: 
Message-ID: <4F0561D6.4020300@gmx.de>

On 05.01.2012 08:45, Wesley M. wrote:
>
> Hi,

Hi,

> I use Dovecot 2.0.13 on OpenBSD 5.0.
> When I try to send emails I have the following error in /var/log/maillog:
>
> Jan 5 11:23:49 mail50 postfix/pipe[29423]: D951842244C: to=, relay=dovecot, delay=0.02,
> delays=0.01/0/0/0.01, dsn=5.3.0, status=bounced (command line usage error.
> Command output: deliver: unknown option -- n Usage: dovecot-lda [-c ] [-a ]

Look at the bottom of this page: http://wiki2.dovecot.org/Upgrading/2.0

> [-d ] [-p ] [-f ] [-m ] [-e] [-k] )
> Jan 5 11:23:49 mail50 postfix/qmgr[13787]: D951842244C: removed
>
> In my /etc/postfix/master.cf:
> # Dovecot LDA
> dovecot unix - n n - - pipe
>   flags=ODR user=_dovecot:_dovecot argv=/usr/local/libexec/dovecot/deliver -f ${sender} -d ${user}@${nexthop} -n -m ${extension}
>
> How can I resolve that?
> Thank you very much for your replies.
>
> Cheers,
>
> Wesley.
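The upgrade notes e-frog points to say deliver's -n option was removed in 2.0 (it became the lda_mailbox_autocreate setting), so a fixed pipe entry would look roughly like this sketch, untested here:

# Dovecot LDA
dovecot unix - n n - - pipe
  flags=ODR user=_dovecot:_dovecot argv=/usr/local/libexec/dovecot/deliver -f ${sender} -d ${user}@${nexthop} -m ${extension}

Simply dropping -n should keep the old behaviour, since not autocreating mailboxes (lda_mailbox_autocreate = no) is, as far as I know, already the 2.0 default.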
From CMarcus at Media-Brokers.com Thu Jan 5 13:24:26 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 05 Jan 2012 06:24:26 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F03AD51.7080506@blue-labs.org>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03AD51.7080506@blue-labs.org>
Message-ID: <4F05886A.4080907@Media-Brokers.com>

On 2012-01-03 8:37 PM, David Ford wrote:
> part of my point along that of brute force resistance, is that
> when security becomes onerous to the typical user such as requiring
> non-repeat passwords of "10 characters including punctuation and mixed
> case", even stalwart policy followers start tending toward avoiding it.

Our policy is that we also don't force password changes unless/until there is a reason (an account is hacked/abused). I've been managing this mail system for 11+ years now, and this has *never* happened (knock wood). I'm not saying we're immune, or it can never happen, I'm simply saying it has never happened, so our policy is working as far as I'm concerned.

> if anyone has a stressful job, spends a lot of time working, missing
> sleep, is thereby prone to memory lapse, it's almost a sure guarantee
> they *will* write it down/store it somewhere -- usually not in a
> password safe.

Again - there is no *need* for them to write it down. Once their workstation/home computer/phone is set up, it remembers the password for them.

> or, they'll export their saved passwords to make a backup plain text
> copy, and leave it on their Desktop folder but coyly named and
> prefixed with a few random emails to grandma, so mr. sysadmin doesn't
> notice it.

And if I don't notice it, no one else will either, most likely. There is *no* perfect way, but ours works and has been working for 11+ years.

> on a tangent, you should worry about active brute force attacks.
> fail2ban and iptables heuristics become meaningless when the brute
> forcing is done by bot nets which is more and more common than
> single-host attacks these days. one IP per attempt in a 10-20 minute
> window will probably never trigger any of these methods.

Nor will it ever be successful in brute forcing a strong password either, because a botnet has to try the same user+different passwords, and it is easy to monitor for an excessive number of failures (of the same user login attempts) and notify the sys admin (me) well in advance of the hack attempt being successful.

-- 
Best regards,
Charles

From CMarcus at Media-Brokers.com Thu Jan 5 13:26:17 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 05 Jan 2012 06:26:17 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F03B25B.2020309@orlitzky.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com>
Message-ID: <4F0588D9.1030709@Media-Brokers.com>

On 2012-01-03 8:58 PM, Michael Orlitzky wrote:
> On 01/03/2012 08:25 PM, Charles Marcus wrote:
>> What I'm worried about is the worst case scenario of someone getting
>> ahold of the entire user database of *stored* passwords, where they can
>> then take their time and brute force them at their leisure, on *their*
>> *own* systems, without having to hammer my server over smtp/imap and
>> without the automated limit of *my* fail2ban getting in their way.

> To prevent rainbow table attacks, salt your passwords. You can make them
> a little bit more difficult in plenty of ways, but salt is the /solution/.

Go read that link (you obviously didn't yet), because he claims that salting passwords is next to *useless*...

>> As for people writing their passwords down... our policy is that it is a
>> potentially *firable* *offense* (never even encountered one case of
>> anyone posting their password, and I'm on these systems off and on all
>> the time) if they do post these anywhere that is not under lock and key.
>> Also, I always set up their email clients for them (on their
>> workstations and on their phones - and of course tell it to remember the
>> password, so they basically never have to enter it.

> You realize they're just walking around with a $400 post-it note with
> the password written on it, right?

Nope, you are wrong - as I have patiently explained before. They do not *need* to write their password down.

-- 
Best regards,
Charles

From CMarcus at Media-Brokers.com Thu Jan 5 13:31:32 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 05 Jan 2012 06:31:32 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F04FAA9.3020908@localhost.localdomain.org>
References: <4F0367A2.1000807@Media-Brokers.com> <4F04FAA9.3020908@localhost.localdomain.org>
Message-ID: <4F058A14.2060303@Media-Brokers.com>

On 2012-01-04 8:19 PM, Pascal Volk wrote:
> On 01/03/2012 09:40 PM Charles Marcus wrote:
>> Hi everyone,
>>
>> Was just perusing this article about how trivial it is to decrypt
>> passwords that are stored using most (standard) encryption methods (like
>> MD5), and was wondering - is it possible to use bcrypt with
>> dovecot+postfix+mysql (or postgres)?
> Yes, it is possible to use bcrypt with dovecot. Currently you only have
> to write your own password scheme plugin. The bcrypt algorithm is described
> at http://en.wikipedia.org/wiki/Bcrypt.
>
> If you are using Dovecot >= 2.0, 'doveadm pw' supports the schemes:
>   *BSD: Blowfish-Crypt
>   *Linux (since glibc 2.7): SHA-256-Crypt and SHA-512-Crypt
> Some distributions have also added support for Blowfish-Crypt.
> See also: doveadm-pw(1)
>
> If you are using Dovecot < 2.0 you can also use any of the algorithms
> supported by your system's libc. But then you have to prefix the hashes
> with {CRYPT} - not {{BLF,SHA256,SHA512}-CRYPT}.

Hmmm... thanks very much Pascal, I think that gets me half-way to an answer (but since ianap, this is mostly greek to me and so is not quite a solution I can implement yet)...

You said above that 'yes, I can use it with dovecot' - but what about postfix and mysql... where/how do they fit into this mix?

My thought was that there are two issues here:

1. Storing them in bcrypted form, and
2. The clients must support *decrypting* them...

So, since I use postfixadmin, I'm guessing that for #1, it will have to support encrypting them in bcrypt form, and then I have to worry about dovecot - and since I'm planning on using postfix+dovecot-sasl, once dovecot supports it, postfix will too...

Is that about right?

Thanks again,

-- 
Best regards,
Charles
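On the MySQL side, the scheme mostly comes down to one line in dovecot-sql.conf.ext; a hypothetical sketch (the table and column names are examples only, not from anyone's posted config):

  driver = mysql
  default_pass_scheme = BLF-CRYPT
  password_query = SELECT userid AS user, password FROM users WHERE userid = '%n' AND domain = '%d'

default_pass_scheme only matters for hashes stored without a {SCHEME} prefix, and Postfix never touches the hash at all when it authenticates through Dovecot SASL, which is why Dovecot is the only place the scheme has to be known.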
From patrickdk at patrickdk.com Thu Jan 5 16:53:38 2012
From: patrickdk at patrickdk.com (Patrick Domack)
Date: Thu, 05 Jan 2012 09:53:38 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325747950.5349.31.camel@tardis>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <1325728752.9555.8.camel@tardis> <4F050A73.7090300@localhost.localdomain.org> <1325730998.9555.21.camel@tardis> <4F051391.404@localhost.localdomain.org> <1325747950.5349.31.camel@tardis>
Message-ID: <20120105095338.Horde.6Wa7KJLnE6FPBbly7kZFh-A@kishi.patrickdk.com>

Quoting Noel Butler :

> On Thu, 2012-01-05 at 04:05 +0100, Pascal Volk wrote:
>
>> On 01/05/2012 03:36 AM Noel Butler wrote:
>>
>>> Because with multiple servers, we store them all in (replicated)
>>> mysql :) (the same with postfix/dovecot).
>>> and as I'm sure you are aware, Apache does not understand standard
>>> crypted MD5, hence why there is the second option of apache_md5_crypt()
>>
>> Oh, let me guess: You are using Windows, Netware, TPF as OS for your
>> web servers? ;-)
>>
>> man htpasswd | grep -- '-d '
>> -d Use crypt() encryption for passwords. This is not
>> supported by the httpd server on Windows and Netware and TPF.
>>
>> As you may have seen in my previous mail, the password is generated
>> using crypt(). HTTP Authentication works with that password hash, even
>> with the httpd from the ASF.
>
> I think you need to do some homework, and although I now have 3.25 days
> of holidays remaining, I don't intend to waste them educating anybody,
> hehe.
>
> Assuming you even know what I'm talking about - which I suspect you
> don't, since you keep using console commands and things like htpasswd,
> which does not write to a mysql db - you don't seem to have comprehended
> that I do not work with flat files nor locally, so it is irrelevant. I use
> perl scripts for all systems management, so I hope you are not going to
> suggest that I should make a system call when I can do it natively in
> perl.
>
> But please, by all means, create a mysql db using a system crypted md5
> password. I'll even help ya: openssl passwd -1 foobartilly
>
> $1$e3a.f3uW$SYRQiMlEhC5XlnSxtxiNC/
>
> Pop the entry into the db and go for your life trying to authenticate.
>
> And when you've gone through half a bottle of bourbon trying to figure out
> why it's not working, try the apache crypted md5 version:
> $apr1$yKxk.DrQ$ybcmM8mC1qD5t5FvoY9820

MySQL supports crypt right in it, so you can just submit the password to the mysql crypt function. Perl supports it natively as well.

The first thing I did when I was hired was to convert the password database from md5 to $6$. After that, I secured the machines that could get access to the list, and sharply limited which of them could. About a month or two after this, we had about a thousand accounts compromised. So someone had obviously obtained the list as the old system stored it, since every compromised password contained only lowercase letters and was less than 8 characters long.

I won't say salted anything is bad, but keep the salt lengths up - start with at least 8 bytes. crypt()'s new option to support rounds also makes it a lot of fun; too bad I haven't seen consistent support for it yet, so I haven't been able to make use of that option.

From michael at orlitzky.com Thu Jan 5 17:28:26 2012
From: michael at orlitzky.com (Michael Orlitzky)
Date: Thu, 05 Jan 2012 10:28:26 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F0588D9.1030709@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com>
Message-ID: <4F05C19A.4030603@orlitzky.com>

On 01/05/12 06:26, Charles Marcus wrote:
>
>> To prevent rainbow table attacks, salt your passwords. You can make them
>> a little bit more difficult in plenty of ways, but salt is the
>> /solution/.
>
> Go read that link (you obviously didn't yet, because he claims that
> salting passwords is next to *useless*)...
>

He doesn't claim that, but he's a crackpot anyway.

Use a slow algorithm (others already mentioned bcrypt) to prevent brute-force search, and use salt to prevent pre-computed lookups. Anyone who tells you otherwise can probably be ignored. Extraordinary claims require extraordinary evidence.

>> You realize they're just walking around with a $400 post-it note with
>> the password written on it, right?
>
> Nope, you are wrong - as I have patiently explained before. They do not
> *need* to write their password down.
>

They have them written down on their phones. If someone gets a hold of the phone, he can just read the password off of it.
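Michael's division of labor is easy to see from a shell: the same password hashed with two different salts yields unrelated hashes, which is exactly what makes a precomputed table useless (the salts here are arbitrary examples; the hashes are truncated):

$ openssl passwd -1 -salt aaaaaaaa secret
$1$aaaaaaaa$...
$ openssl passwd -1 -salt bbbbbbbb secret
$1$bbbbbbbb$...

An attacker who has the salt can still brute-force each hash individually; only the cost of the hash function itself slows that down, which is where bcrypt comes in.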
From michael at orlitzky.com Thu Jan 5 17:32:32 2012
From: michael at orlitzky.com (Michael Orlitzky)
Date: Thu, 05 Jan 2012 10:32:32 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <20120104210644.Horde.YEJENpLnE6FPBQW0C1KEd8A@kishi.patrickdk.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <1325728752.9555.8.camel@tardis> <20120104210644.Horde.YEJENpLnE6FPBQW0C1KEd8A@kishi.patrickdk.com>
Message-ID: <4F05C290.5020308@orlitzky.com>

On 01/04/12 21:06, Patrick Domack wrote:
>
> But still, the results are all the same: if they get the hash, it can be
> broken, given time. Using more CPU-expensive methods makes it take longer
> (like adding salt, or a more complex hash). But the end result is they will
> have it if they want it.
>

Unless someone breaks either math or the hash algorithm, this is false. My password will be of little use to you in 10^20 years.

From michael at orlitzky.com Thu Jan 5 17:46:23 2012
From: michael at orlitzky.com (Michael Orlitzky)
Date: Thu, 05 Jan 2012 10:46:23 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F05C19A.4030603@orlitzky.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com>
Message-ID: <4F05C5CF.7010804@orlitzky.com>

On 01/05/12 10:28, Michael Orlitzky wrote:
>>
>> Nope, you are wrong - as I have patiently explained before. They do not
>> *need* to write their password down.
>>
>
> They have them written down on their phones. If someone gets a hold of
> the phone, he can just read the password off of it.

I should point out, I don't think this is a bad thing!

From CMarcus at Media-Brokers.com Thu Jan 5 18:14:20 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 05 Jan 2012 11:14:20 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F05C19A.4030603@orlitzky.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com>
Message-ID: <4F05CC5C.7020807@Media-Brokers.com>

On 2012-01-05 10:28 AM, Michael Orlitzky wrote:
> On 01/05/12 06:26, Charles Marcus wrote:
>>> To prevent rainbow table attacks, salt your passwords. You can make them
>>> a little bit more difficult in plenty of ways, but salt is the
>>> /solution/.

>> Go read that link (you obviously didn't yet, because he claims that
>> salting passwords is next to *useless*)...

> He doesn't claim that,

Ummm... yes, he does... from tfa:

"Salts Will Not Help You

It's important to note that salts are useless for preventing dictionary attacks or brute force attacks. You can use huge salts or many salts or hand-harvested, shade-grown, organic Himalayan pink salt. It doesn't affect how fast an attacker can try a candidate password, given the hash and the salt from your database.

Salt or no, if you're using a general-purpose hash function designed for speed you're well and truly effed."

> but he's a crackpot anyway.

Why? I asked because I'm genuinely unsure - I don't know enough about the innards of the different encryption methods.
Simply saying he's a crackpot means nothing. Also... > Use a slow algorithm (others already mentioned bcrypt)to prevent > brute-force search, Actually, that (bcrypt) is precisely what *the author of the article* (the one who you are saying is a crackpot) is suggesting to use - I guess you didn't even bother to read it or else you'd know that, so why bother commenting? > and use salt to prevent pre-computed lookups. Anyone who tells you > otherwise can probably be ignored. Extraordinary claims require > extraordinary evidence. I don't see it as an extraordinary claim, and anyone who goes around claiming someone else is a crackpot without evidence to support the claim is just yammering. >>> You realize they're just walking around with a $400 post-it note with >>> the password written on it, right? >> Nope, you are wrong - as I have patiently explained before. They do not >> *need* to write their password down. > They have them written down on their phones. If someone gets a hold of > the phone, he can just read the password off of it. No, they don't, your claim is baseless and without merit. Most people have never even known what their password *is*, much less written it down, because as I said (more than once), *I* set up their email clients (workstations, home computers and phones) *for them*. -- Best regards, Charles From wgillespie at es2eng.com Thu Jan 5 18:21:45 2012 From: wgillespie at es2eng.com (Willie Gillespie) Date: Thu, 05 Jan 2012 09:21:45 -0700 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F05CC5C.7020807@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> Message-ID: <4F05CE19.8030204@es2eng.com> On 1/5/2012 9:14 AM, Charles Marcus wrote: > On 2012-01-05 10:28 AM, Michael Orlitzky wrote: >> On 01/05/12 06:26, Charles Marcus wrote: >>>> You realize they're just walking around with a $400 post-it note with >>>> the password written on it, right? > >>> Nope, you are wrong - as I have patiently explained before. They do not >>> *need* to write their password down. > >> They have them written down on their phones. If someone gets a hold of >> the phone, he can just read the password off of it. > > No, they don't, your claim is baseless and without merit. > > Most people have never even known what their password *is*, much less > written it down, because as I said (more than once), *I* set up their > email clients (workstations, home computers and phones) *for them*. If the phone knows the password and I have the phone, then I have the password. Similarly, if I compromise the workstation that knows the password, then I also have the password. Even if the user doesn't know the password, the phone/workstation does. And it has to be stored in a retrievable way. That's what he's trying to say when he was talking about a "$400 post-it note." From michael at orlitzky.com Thu Jan 5 18:31:17 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Thu, 05 Jan 2012 11:31:17 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? 
In-Reply-To: <4F05CC5C.7020807@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com>
Message-ID: <4F05D055.7020305@orlitzky.com>

On 01/05/12 11:14, Charles Marcus wrote:
>
> Ummm... yes, he does... from tfa:
>
> "Salts Will Not Help You
>
> It's important to note that salts are useless for preventing dictionary
> attacks or brute force attacks. You can use huge salts or many salts or
> hand-harvested, shade-grown, organic Himalayan pink salt. It doesn't
> affect how fast an attacker can try a candidate password, given the hash
> and the salt from your database.
>
> Salt or no, if you're using a general-purpose hash function designed for
> speed you're well and truly effed."

Ugh, sorry. I went to the link that someone else quoted:

https://www.grc.com/haystack.htm

The article you posted is correct. Salt will not prevent brute-force search, but it isn't meant to. Salt is meant to prevent the attacker from using precomputed tables of hashed passwords, called rainbow tables. To prevent brute-force search, you use a better algorithm, like the author says.

>> but he's a crackpot anyway.

Gibson *is* a renowned crackpot.

> Why? I asked because I'm genuinely unsure - I don't know enough about the
> innards of the different encryption methods.
> Simply saying he's a crackpot means nothing.
>
> Also...
>
>> Use a slow algorithm (others already mentioned bcrypt) to prevent
>> brute-force search,
>
> Actually, that (bcrypt) is precisely what *the author of the article*
> (the one who you are saying is a crackpot) is suggesting to use - I
> guess you didn't even bother to read it or else you'd know that, so why
> bother commenting?

Again, sorry, I don't always know how to work my email client.

> I don't see it as an extraordinary claim, and anyone who goes around
> claiming someone else is a crackpot without evidence to support the
> claim is just yammering.

Your article is fine, but you should always be skeptical, because for every article like the one you posted, there are 100 like Gibson's.

> No, they don't, your claim is baseless and without merit.
>
> Most people have never even known what their password *is*, much less
> written it down, because as I said (more than once), *I* set up their
> email clients (workstations, home computers and phones) *for them*.

The password is on the phone, in plain text. If I have the phone, I can read it as easily as if it was written in Sharpie.

From yubao.liu at gmail.com Thu Jan 5 20:23:56 2012
From: yubao.liu at gmail.com (Yubao Liu)
Date: Fri, 06 Jan 2012 02:23:56 +0800
Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs
Message-ID: <4F05EABC.7070309@gmail.com>

Hi all,

I have no idea about that message, here is my configuration, what's wrong?
Debian testing, Dovecot 2.0.15 $ doveconf -n # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid auth_default_realm = corp.example.com auth_krb5_keytab = /etc/dovecot.keytab auth_master_user_separator = * auth_mechanisms = gssapi digest-md5 auth_realms = corp.example.com auth_username_format = %n first_valid_gid = 1000 first_valid_uid = 1000 mail_location = mdbox:/srv/mail/%u/Mail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave passdb { args = /etc/dovecot/master-users driver = passwd-file master = yes pass = yes } passdb { driver = pam } plugin { sieve = /srv/mail/%u/.dovecot.sieve sieve_dir = /srv/mail/%u/sieve } protocols = " imap lmtp sieve" service auth { unix_listener auth-client { group = Debian-exim mode = 0660 } } ssl_cert = References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> <4F05CE19.8030204@es2eng.com> Message-ID: <4F05ED9B.10901@Media-Brokers.com> On 2012-01-05 11:21 AM, Willie Gillespie wrote: > If the phone knows the password and I have the phone, then I have the > password. Similarly, if I compromise the workstation that knows the > password, then I also have the password. Interesting... I thought they were stored encrypted. I definitely use a (strong) Master Password in Thunderbird to protect the passwords, so it would take some doing on the workstations. > Even if the user doesn't know the password, the phone/workstation does. > And it has to be stored in a retrievable way. Yes, if an attacker has unfettered physical access to the workstation/phone, it can be compromised... > That's what he's trying to say when he was talking about a "$400 post-it > note." Got it... As I said, there is no perfect system... but ours has worked well in the 11+ years we've been doing it this way. -- Best regards, Charles From CMarcus at Media-Brokers.com Thu Jan 5 20:37:58 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Thu, 05 Jan 2012 13:37:58 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F05D055.7020305@orlitzky.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> <4F05D055.7020305@orlitzky.com> Message-ID: <4F05EE06.8070302@Media-Brokers.com> On 2012-01-05 11:31 AM, Michael Orlitzky wrote: > Ugh, sorry. I went to the link that someone else quoted: > > https://www.grc.com/haystack.htm > Gibson*is* a renowned crackpot. Don't know about that, but I do know from long experience Spinrite rocks! Maybe -- Best regards, Charles From david at blue-labs.org Thu Jan 5 20:47:58 2012 From: david at blue-labs.org (David Ford) Date: Thu, 05 Jan 2012 13:47:58 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? 
In-Reply-To: <4F05EE06.8070302@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> <4F05D055.7020305@orlitzky.com> <4F05EE06.8070302@Media-Brokers.com>
Message-ID: <4F05F05E.60104@blue-labs.org>

On 01/05/2012 01:37 PM, Charles Marcus wrote:
> On 2012-01-05 11:31 AM, Michael Orlitzky wrote:
>> Ugh, sorry. I went to the link that someone else quoted:
>>
>> https://www.grc.com/haystack.htm
>
>> Gibson *is* a renowned crackpot.
>
> Don't know about that, but I do know from long experience Spinrite rocks!
>
> Maybe

He often piggybacks on common sense but makes it into an elaborate, grandiose presentation. A lot of his topics tend to wander out to left field come half-time.

-d

From wgillespie+dovecot at es2eng.com Thu Jan 5 21:22:47 2012
From: wgillespie+dovecot at es2eng.com (Willie Gillespie)
Date: Thu, 05 Jan 2012 12:22:47 -0700
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F05ED9B.10901@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> <4F05CE19.8030204@es2eng.com> <4F05ED9B.10901@Media-Brokers.com>
Message-ID: <4F05F887.70204@es2eng.com>

On 01/05/2012 11:36 AM, Charles Marcus wrote:
> On 2012-01-05 11:21 AM, Willie Gillespie wrote:
>> If the phone knows the password and I have the phone, then I have the
>> password. Similarly, if I compromise the workstation that knows the
>> password, then I also have the password.
>
> Interesting... I thought they were stored encrypted. I definitely use a
> (strong) Master Password in Thunderbird to protect the passwords, so it
> would take some doing on the workstations.

True. If you are using a master password, they are encrypted.

From user+dovecot at localhost.localdomain.org Fri Jan 6 00:28:27 2012
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Date: Thu, 05 Jan 2012 23:28:27 +0100
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F058A14.2060303@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F04FAA9.3020908@localhost.localdomain.org> <4F058A14.2060303@Media-Brokers.com>
Message-ID: <4F06240B.2040101@localhost.localdomain.org>

On 01/05/2012 12:31 PM Charles Marcus wrote:
> ...
> You said above that 'yes, I can use it with dovecot' - but what about
> postfix and mysql... where/how do they fit into this mix? My thought was
> that there are two issues here:
>
> 1. Storing them in bcrypted form, and

For MySQL the bcrypted password is just a varchar.

> 2. The clients must support *decrypting* them...

Sorry, I don't think clients need to know anything about the password scheme in use. The password scheme is mostly relevant for Dovecot. Don't mix up the password scheme and the authentication scheme. (A concrete storage sketch follows below.)
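For instance, something along these lines is all the "support" the database needs (table and column names are illustrative, and the hash is a truncated placeholder):

UPDATE email SET password = '{BLF-CRYPT}$2a$05$...'
 WHERE userid = 'foo' AND domain = 'example.com';

The client still authenticates with a mechanism such as PLAIN or LOGIN (ideally over TLS); only Dovecot ever interprets the {BLF-CRYPT} prefix when verifying what the client sent.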
> So, since I use postfixadmin, I'm guessing that for #1, it will have to
> support encrypting them in bcrypt form, and then I have to worry about
> dovecot - and since I'm planning on using postfix+dovecot-sasl, once
> dovecot supports it, postfix will too...
>
> Is that about right?

I think that's correct. Postfix uses Dovecot for the authentication stuff. If I'm wrong, please let me know.

Regards,
Pascal
--
The trapper recommends today: c01dcafe.1200523 at localdomain.org

From Ralf.Hildebrandt at charite.de Fri Jan 6 12:09:53 2012
From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt)
Date: Fri, 6 Jan 2012 11:09:53 +0100
Subject: [Dovecot] Deduplication active - but how good does it perform?
Message-ID: <20120106100953.GV24134@charite.de>

I have deduplication active in my first mdbox: type mailbox, but how do I find out how well the deduplication works? Is there a way of finding out how much disk space I saved (if I saved some :) )?

--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebrandt at charite.de | http://www.charite.de

From nick+dovecot at bunbun.be Fri Jan 6 12:52:34 2012
From: nick+dovecot at bunbun.be (Nick Rosier)
Date: Fri, 06 Jan 2012 11:52:34 +0100
Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs
In-Reply-To: <4F05EABC.7070309@gmail.com>
References: <4F05EABC.7070309@gmail.com>
Message-ID: <4F06D272.5010200@bunbun.be>

Yubao Liu wrote:
> Hi all,
>
> I have no idea about that message, here is my configuration, what's wrong?

You have two passdb entries: one with a file and one with PAM. I'm pretty sure PAM doesn't support DIGEST-MD5 authentication. Could be the cause of the problem.

> Debian testing, Dovecot 2.0.15
>
> $ doveconf -n
> # 2.0.15: /etc/dovecot/dovecot.conf
> # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid
> auth_default_realm = corp.example.com
> auth_krb5_keytab = /etc/dovecot.keytab
> auth_master_user_separator = *
> auth_mechanisms = gssapi digest-md5
> auth_realms = corp.example.com
> auth_username_format = %n
> first_valid_gid = 1000
> first_valid_uid = 1000
> mail_location = mdbox:/srv/mail/%u/Mail
> managesieve_notify_capability = mailto
> managesieve_sieve_capability = fileinto reject envelope
> encoded-character vacation subaddress comparator-i;ascii-numeric
> relational regex imap4flags copy include variables body enotify
> environment mailbox date ihave
> passdb {
>   args = /etc/dovecot/master-users
>   driver = passwd-file
>   master = yes
>   pass = yes
> }
> passdb {
>   driver = pam
> }
> plugin {
>   sieve = /srv/mail/%u/.dovecot.sieve
>   sieve_dir = /srv/mail/%u/sieve
> }
> protocols = " imap lmtp sieve"
> service auth {
>   unix_listener auth-client {
>     group = Debian-exim
>     mode = 0660
>   }
> }
> ssl_cert =
> ssl_key =
> userdb {
>   args = home=/srv/mail/%u
>   driver = passwd
> }
> protocol lmtp {
>   mail_plugins = " sieve"
> }
> protocol lda {
>   mail_plugins = " sieve"
> }
>
> # cat /etc/dovecot/master-users
> xxx at corp.example.com:zzzzzzzz
>
> The zzzzz is obtained by "doveadm pw -s digest-md5 -u xxx at corp.example.com",
> I tried to add the prefix "{DIGEST-MD5}" before the generated hash and/or add
> "scheme=DIGEST-MD5" to the passwd-file passdb's "args" option; both
> don't help.
>
> The error message:
> dovecot: master: Dovecot v2.0.15 starting up (core dumps disabled)
> dovecot: auth: Fatal: DIGEST-MD5 mechanism can't be supported with given passdbs
> gold dovecot: master: Error: service(auth): command startup failed, throttling
>
> I opened the debug auth log; it showed Dovecot read /etc/dovecot/master-users
> and parsed one line, then the error occurred. Doesn't the passwd-file passdb
> support the digest-md5 password scheme? If it doesn't, how do I configure the
> digest-md5 auth mechanism with the digest-md5 password scheme for virtual users?
>
> Regards,
> Yubao Liu

Rgds,
N.

From tss at iki.fi Fri Jan 6 12:54:19 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 6 Jan 2012 12:54:19 +0200
Subject: [Dovecot] Deduplication active - but how good does it perform?
In-Reply-To: <20120106100953.GV24134@charite.de>
References: <20120106100953.GV24134@charite.de>
Message-ID:

On 6.1.2012, at 12.09, Ralf Hildebrandt wrote:

> I have deduplication active in my first mdbox: type mailbox, but how
> do I find out how well the deduplication works? Is there a way of
> finding out how much disk space I saved (if I saved some :) )?

You could look at the files in the attachments directory and see how many links they have. Each file has 2 initially; each additional link beyond that has saved you the file's size in bytes.
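Scripting that suggestion is straightforward; a rough sketch with GNU find and awk, where the attachment directory path is only an example (files with more than the initial 2 links contribute their size once per extra link):

$ find /srv/mail/attachments -type f -links +2 -printf '%n %s\n' \
    | awk '{ saved += ($1 - 2) * $2 } END { print saved " bytes saved" }'

Here -links +2 selects files with more than two hard links, and %n / %s print each file's link count and size in bytes.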
From tss at iki.fi Fri Jan 6 12:55:49 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 6 Jan 2012 12:55:49 +0200
Subject: [Dovecot] Possible mdbox corruption
In-Reply-To:
References:
Message-ID:

On 5.1.2012, at 2.24, Daniel L. Miller wrote:

> I thought I had cleared out the corruption I had before - perhaps I was mistaken. What steps should I take to help locate these issues? Currently using 2.1rc1. I see the following errors in my logs, including out of memory and message size issues (at 15:30): ..
> Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it))
> Jan 4 06:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory

The problem is clearly that the indexer-worker's vsz_limit is too low. Increase it (or default_vsz_limit).

From tss at iki.fi Fri Jan 6 12:57:43 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 6 Jan 2012 12:57:43 +0200
Subject: [Dovecot] Possible mdbox corruption
In-Reply-To:
References:
Message-ID: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi>

On 6.1.2012, at 12.55, Timo Sirainen wrote:

>> Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it))
>> Jan 4 06:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory
>
> The problem is clearly that the indexer-worker's vsz_limit is too low. Increase it (or default_vsz_limit).

Although the source of the out-of-memory

/usr/local/lib/dovecot/libdovecot.so.0(buffer_write+0x7c) [0x7f0ec1a550ec] -> /usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3292) [0x7f0ec024f292] ->

is something that shouldn't really be happening. I guess the Solr plugin wastes memory unnecessarily; I'll see what I can do about it. But for now just increase the vsz limit.

From nick+dovecot at bunbun.be Fri Jan 6 13:04:51 2012
From: nick+dovecot at bunbun.be (Nick Rosier)
Date: Fri, 06 Jan 2012 12:04:51 +0100
Subject: [Dovecot] Deduplication active - but how good does it perform?
In-Reply-To: <20120106100953.GV24134@charite.de>
References: <20120106100953.GV24134@charite.de>
Message-ID: <4F06D553.2010605@bunbun.be>

Ralf Hildebrandt wrote:
> I have deduplication active in my first mdbox: type mailbox, but how
> do I find out how well the deduplication works? Is there a way of
> finding out how much disk space I saved (if I saved some :) )?

You could check how much disk space all the mail uses (or the mail of one user) and compare it to the quota Dovecot reports. But I think you would need quotas activated for this. E.g. on my small server the used disk quota is 2GB, whereas doveadm quota reports that all users use 3.1GB.

From adrian.minta at gmail.com Fri Jan 6 13:07:05 2012
From: adrian.minta at gmail.com (Adrian Minta)
Date: Fri, 06 Jan 2012 13:07:05 +0200
Subject: [Dovecot] howto disable indexing on dovecot-lda ?
Message-ID: <4F06D5D9.20001@gmail.com>

Hello,
is it possible to disable indexing on dovecot-lda?

Right now postfix delivers the mail directly to the NFS server without any problems. If I switch to dovecot-lda the system crashes due to the high I/O and locking. Indexing on lda is not very useful because the number of IMAP logins is less than 5% of the number of incoming mails, so a user can wait 3 seconds for his mail index to be built, but a new mail can't.

Dovecot version 1.2.15
mail_nfs_storage = yes
mail_nfs_index = yes

Thank you!

From tss at iki.fi Fri Jan 6 13:27:41 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 06 Jan 2012 13:27:41 +0200
Subject: [Dovecot] Possible mdbox corruption
In-Reply-To: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi>
References: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi>
Message-ID: <1325849261.17774.0.camel@hurina>

On Fri, 2012-01-06 at 12:57 +0200, Timo Sirainen wrote:
> On 6.1.2012, at 12.55, Timo Sirainen wrote:
>
>>> Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it))
>>> Jan 4 06:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory
>>
>> The problem is clearly that the indexer-worker's vsz_limit is too low. Increase it (or default_vsz_limit).
>
> Although the source of the out-of-memory
>
> /usr/local/lib/dovecot/libdovecot.so.0(buffer_write+0x7c) [0x7f0ec1a550ec] -> /usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3292) [0x7f0ec024f292] ->
>
> is something that shouldn't really be happening. I guess the Solr plugin wastes memory unnecessarily, I'll see what I can do about it. But for now just increase vsz limit.

I don't see any obvious reason why it would be using a lot of memory, unless you have a message that has huge (MIME) headers. See if http://hg.dovecot.org/dovecot-2.1/rev/380b0667e0a5 helps / logs a warning about it.

From tss at iki.fi Fri Jan 6 13:39:44 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 06 Jan 2012 13:39:44 +0200
Subject: [Dovecot] howto disable indexing on dovecot-lda ?
In-Reply-To: <4F06D5D9.20001@gmail.com>
References: <4F06D5D9.20001@gmail.com>
Message-ID: <1325849985.17774.10.camel@hurina>

On Fri, 2012-01-06 at 13:07 +0200, Adrian Minta wrote:
> Hello,
> is it possible to disable indexing on dovecot-lda ?

protocol lda {
  mail_location = whatever-you-have-now:INDEX=MEMORY
}

> Right now postfix delivers the mail directly to the NFS server without
> any problems. If I switch to dovecot-lda the system crashes due to the
> high I/O and locking.

Disabling indexing won't disable writing to the dovecot-uidlist file.
So I don't know if disabling indexes actually helps. From alexis.lelion at gmail.com Fri Jan 6 13:36:15 2012 From: alexis.lelion at gmail.com (Alexis Lelion) Date: Fri, 6 Jan 2012 12:36:15 +0100 Subject: [Dovecot] ACL with IMAP proxying Message-ID: Hello, I'm trying to use ACLs to restrict subscription on public mailboxes, but I went into trouble. My setup is made of two servers, and users are shared between them via a proxy. User authentication is done with LDAP, and credentials aren't shared between the mailservers. Instead, the proxies are using master password. The thing is that when the ACLs are checked, it actually doesn't give the user login, but the master login, which is useless. Is there a way to use the first part of destuser as it is done when fetching info from the userdb? Any help is appreciated, Thansk! Alexis -------------------------------------------------- ACL bug logs : 104184 Jan 6 12:09:35 mail02 dovecot: imap(user at domain): Debug: acl: acl username = proxy 104185 Jan 6 12:09:35 mail02 dovecot: imap(user at domain): Debug: acl: owner = 0 104186 Jan 6 12:09:35 mail02 dovecot: imap(user at domain): Debug: acl vfile: Global ACL directory: (none) 104187 Jan 6 12:09:35 mail02 dovecot: imap(user at domain): Debug: Namespace : type=public, prefix=Shared., sep=., inbox=no, hidden=no, list=yes, subscriptions=no location=maildir:/var/vmail/domain/Shared -------------------------------------------------- Output of "dovecot -n" # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 2.6.32-5-amd64 x86_64 Debian 6.0.3 ext3 auth_debug = yes auth_master_user_separator = * auth_socket_path = /var/run/dovecot/auth-userdb auth_verbose = yes first_valid_uid = 150 lmtp_proxy = yes login_trusted_networks = mail01.ip mail_debug = yes mail_location = maildir:/var/vmail/%d/%n mail_nfs_storage = yes mail_plugins = acl mail_privileged_group = mail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave namespace { inbox = yes location = maildir:/var/vmail/%d/%n prefix = separator = . type = private } namespace { location = maildir:/var/vmail/domain/Shared prefix = Shared. separator = . subscriptions = no type = public } passdb { args = /etc/dovecot/master-users driver = passwd-file master = yes } passdb { args = /etc/dovecot/dovecot-ldap.conf driver = ldap } plugin { acl = vfile:/etc/dovecot/global-acls:cache_secs=300 recipient_delimiter = + sieve_after = /var/lib/dovecot/sieve/after.d/ sieve_before = /var/lib/dovecot/sieve/pre.d/ sieve_dir = /var/vmail/%d/%n/sieve sieve_global_path = /var/lib/dovecot/sieve/default.sieve } postmaster_address = user at domain protocols = " imap lmtp sieve" service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0666 user = postfix } unix_listener auth-userdb { group = mail mode = 0600 user = vmail } } service lmtp { inet_listener lmtp { address = mail02.ip port = 24 } unix_listener /var/spool/postfix/private/dovecot-lmtp { group = postfix mode = 0660 user = postfix } } ssl = required ssl_cert = References: Message-ID: <1325850528.17774.13.camel@hurina> On Fri, 2012-01-06 at 12:36 +0100, Alexis Lelion wrote: > The thing is that when the ACLs are checked, it actually doesn't give > the user login, but the master login, which is useless. Yes, this is intentional. 
> Is there a way to use the first part of destuser as it is done when > fetching info from the userdb? You should be able to work around this with modifying userdb's query: user_query = select '%n' AS master_user, ... From stan at hardwarefreak.com Fri Jan 6 13:50:13 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Fri, 06 Jan 2012 05:50:13 -0600 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <4F06D5D9.20001@gmail.com> References: <4F06D5D9.20001@gmail.com> Message-ID: <4F06DFF5.40707@hardwarefreak.com> On 1/6/2012 5:07 AM, Adrian Minta wrote: > Hello, > is it possible to disable indexing on dovecot-lda ? > > Right now postfix delivers the mail directly to the nfs server without > any problems. If I switch to dovecot-lda the system crashes do to the > high I/O and locking. > Indexing on lda is not very useful because the number of of imap logins > is less than 5% that of incoming mails, so an user could wait for 3 sec > to get his mail index, but a new mail can't. Then why bother with Dovecot LDA w/disabled indexing (the main reason for using it in the first place) instead of simply sticking with Postfix Local(8)? -- Stan From CMarcus at Media-Brokers.com Fri Jan 6 13:58:16 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Fri, 06 Jan 2012 06:58:16 -0500 Subject: [Dovecot] Deduplication active - but how good does it perform? In-Reply-To: References: <20120106100953.GV24134@charite.de> Message-ID: <4F06E1D8.7090507@Media-Brokers.com> On 2012-01-06 5:54 AM, Timo Sirainen wrote: > On 6.1.2012, at 12.09, Ralf Hildebrandt wrote: >> I have deduplication active in my first mdbox: type mailbox, but how >> do I find out how well the deduplication works? Is there a way of >> finding out how much disk space I saved (if I saved some :) )? > You could look at the files in the attachments directory, and see how > many links they have. Each file has 2 initially. Each additional link > has saved you bytes of space. Maybe there could be a doveadm command for this? That would be really useful for some kind of stats applications... especially for promoting its use in environments where large attachments are common... -- Best regards, Charles From CMarcus at Media-Brokers.com Fri Jan 6 14:09:05 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Fri, 06 Jan 2012 07:09:05 -0500 Subject: [Dovecot] Deduplication active - but how good does it perform? In-Reply-To: <4F06E1D8.7090507@Media-Brokers.com> References: <20120106100953.GV24134@charite.de> <4F06E1D8.7090507@Media-Brokers.com> Message-ID: <4F06E461.3010906@Media-Brokers.com> On 2012-01-06 6:58 AM, Charles Marcus wrote: > On 2012-01-06 5:54 AM, Timo Sirainen wrote: >> On 6.1.2012, at 12.09, Ralf Hildebrandt wrote: >>> I have deduplication active in my first mdbox: type mailbox, but how >>> do I find out how well the deduplication works? Is there a way of >>> finding out how much disk space I saved (if I saved some :) )? > >> You could look at the files in the attachments directory, and see how >> many links they have. Each file has 2 initially. Each additional link >> has saved you bytes of space. > > Maybe there could be a doveadm command for this? Incidentally, I use rsnapshot (which is simply a wrapper script for rsync) for my disk based backups. 
It uses hard links so that you can have hourly/daily/weekly/monthly (or whatever naming scheme you want) snapshots of your backups, but each snapshot simply contains hard links to the previous snapshots, so you can literally have hundreds of snapshots that only consume a little more space than one single whole snapshot.

Anyway, rsnapshot has to leverage the du command to determine the amount of disk space each snapshot uses (when considered as a separate/standalone snapshot), or how much *actual* space each snapshot consumes (i.e., only the files that are *not* hardlinked against a previous backup)...

Maybe this could be a starting point for how to do this...

http://rsnapshot.org/rsnapshot.html#usage

and scroll down to the rsnapshot du command...

--
Best regards,

Charles

From alexis.lelion at gmail.com Fri Jan 6 14:22:02 2012
From: alexis.lelion at gmail.com (Alexis Lelion)
Date: Fri, 6 Jan 2012 13:22:02 +0100
Subject: [Dovecot] ACL with IMAP proxying
In-Reply-To: <1325850528.17774.13.camel@hurina>
References: <1325850528.17774.13.camel@hurina>
Message-ID:

Hi Timo,

Thanks for your prompt answer, I wasn't expecting an answer that soon ;-)

I just tried your workaround, and actually master_user is properly set to the username, but it is then overridden with the proxy login again:

Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: mail=maildir:/var/vmail/domain/user
Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/quota=dirsize:storage=0
Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/master_user=user
Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/master_user=proxy

Is there any other flag I can set to avoid this? (Something like Y for the password?)

Alexis

On Fri, Jan 6, 2012 at 12:48 PM, Timo Sirainen wrote:
> On Fri, 2012-01-06 at 12:36 +0100, Alexis Lelion wrote:
> > The thing is that when the ACLs are checked, it actually doesn't give
> > the user login, but the master login, which is useless.
>
> Yes, this is intentional.
>
> > Is there a way to use the first part of destuser as it is done when
> > fetching info from the userdb?
>
> You should be able to work around this with modifying userdb's query:
>
> user_query = select '%n' AS master_user, ...

From tss at iki.fi Fri Jan 6 14:26:37 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 06 Jan 2012 14:26:37 +0200
Subject: [Dovecot] doveadm + dsync merging
In-Reply-To: <4EFC76F0.2050705@localhost.localdomain.org>
References: <20111229125326.GA2295@state-of-mind.de> <4EFC76F0.2050705@localhost.localdomain.org>
Message-ID: <1325852800.17774.17.camel@hurina>

On Thu, 2011-12-29 at 15:19 +0100, Pascal Volk wrote:
> >> b) Don't have the dsync prefix:
> >>
> >> dsync mirror -> doveadm mirror
> >> dsync backup -> doveadm backup
> >> dsync server -> doveadm dsync-server (could be hidden from the doveadm commands list)

I did this now, with mirror -> sync.

> I'd prefer doveadm commands with the dsync prefix. (a)) Because:
>
> * doveadm already has other 'command groups' like mailbox, director ...
> * that's the way to avoid command clashes (w/o hiding anything)

There are already many mail-related commands that don't have any prefix. For example I think "doveadm import" and "doveadm backup" are quite related. Also "dsync" is perhaps more about the internal implementation, so in the future it's possible that sync/backup will work some other way..
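Spelled out, the command mapping looks like this in practice (the user name and mailbox locations are only illustrative):

$ dsync -u jane mirror mdbox:~/mdbox-mirror     # old invocation, still works via the symlink
$ doveadm sync -u jane mdbox:~/mdbox-mirror     # new preferred form of "dsync mirror"
$ doveadm backup -u jane mdbox:/backup/jane     # one-way variant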
From tss at iki.fi Fri Jan 6 14:30:12 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 06 Jan 2012 14:30:12 +0200
Subject: [Dovecot] ACL with IMAP proxying
In-Reply-To:
References: <1325850528.17774.13.camel@hurina>
Message-ID: <1325853012.17774.19.camel@hurina>

On Fri, 2012-01-06 at 13:22 +0100, Alexis Lelion wrote:

> Thanks for your prompt answer, I wasn't expecting an answer that soon ;-)
> I just tried your workaround, and actually master_user is properly set to
> the username, but it is then overridden with the proxy login again:
>
> Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: mail=maildir:/var/vmail/domain/user
> Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/quota=dirsize:storage=0
> Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/master_user=user
> Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/master_user=proxy

I thought it would have been the other way around.. See if http://hg.dovecot.org/dovecot-2.0/raw-rev/684381041dc4 helps?

> Is there any other flag I can set to avoid this? (Something like Y for the
> password?)

Nope.

From alexis.lelion at gmail.com Fri Jan 6 14:55:03 2012
From: alexis.lelion at gmail.com (Alexis Lelion)
Date: Fri, 6 Jan 2012 13:55:03 +0100
Subject: [Dovecot] ACL with IMAP proxying
In-Reply-To: <1325853012.17774.19.camel@hurina>
References: <1325850528.17774.13.camel@hurina> <1325853012.17774.19.camel@hurina>
Message-ID:

Thanks Timo.
I'm actually using a packaged version of Dovecot 2.0 from Debian, so I can't apply the patch easily right now.
I'll try to build Dovecot this weekend and see if it solves the issue.

Cheers

Alexis

On Fri, Jan 6, 2012 at 1:30 PM, Timo Sirainen wrote:
> On Fri, 2012-01-06 at 13:22 +0100, Alexis Lelion wrote:
>
> > Thanks for your prompt answer, I wasn't expecting an answer that soon ;-)
> > I just tried your workaround, and actually master_user is properly set to
> > the username, but it is then overridden with the proxy login again:
> >
> > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: mail=maildir:/var/vmail/domain/user
> > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/quota=dirsize:storage=0
> > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/master_user=user
> > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/master_user=proxy
>
> I thought it would have been the other way around.. See if
> http://hg.dovecot.org/dovecot-2.0/raw-rev/684381041dc4 helps?
>
> > Is there any other flag I can set to avoid this? (Something like Y for the
> > password?)
>
> Nope.

From tss at iki.fi Fri Jan 6 14:57:24 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 06 Jan 2012 14:57:24 +0200
Subject: [Dovecot] ACL with IMAP proxying
In-Reply-To:
References: <1325850528.17774.13.camel@hurina> <1325853012.17774.19.camel@hurina>
Message-ID: <1325854644.17774.20.camel@hurina>

Another possibility: http://wiki2.dovecot.org/PostLoginScripting

and set the MASTER_USER environment.

On Fri, 2012-01-06 at 13:55 +0100, Alexis Lelion wrote:
> Thanks Timo.
> I'm actually using a packaged version of Dovecot 2.0 from Debian, so I
> can't apply the patch easily right now.
> I'll try to build Dovecot this weekend and see if it solves the issue.
>
> Cheers
>
> Alexis
>
> On Fri, Jan 6, 2012 at 1:30 PM, Timo Sirainen wrote:
> > On Fri, 2012-01-06 at 13:22 +0100, Alexis Lelion wrote:
> >
> > > Thanks for your prompt answer, I wasn't expecting an answer that soon ;-)
> > > I just tried your workaround, and actually master_user is properly set to
> > > the username, but it is then overridden with the proxy login again:
> > >
> > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: mail=maildir:/var/vmail/domain/user
> > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/quota=dirsize:storage=0
> > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/master_user=user
> > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/master_user=proxy
> >
> > I thought it would have been the other way around.. See if
> > http://hg.dovecot.org/dovecot-2.0/raw-rev/684381041dc4 helps?
> >
> > > Is there any other flag I can set to avoid this? (Something like Y for the
> > > password?)
> >
> > Nope.
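(This PostLoginScripting route is what ends up working below. A minimal sketch of such a script, assuming - as on the wiki page - that the script must exec its arguments and that the authenticated login name is available in $USER:

#!/bin/sh
# imap post-login script: expose the real login name to the ACL plugin
MASTER_USER="$USER"
export MASTER_USER
exec "$@"

Paths and variable handling should be checked against the wiki page for the exact Dovecot version in use.)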
From adrian.minta at gmail.com Fri Jan 6 15:01:52 2012
From: adrian.minta at gmail.com (Adrian Minta)
Date: Fri, 06 Jan 2012 15:01:52 +0200
Subject: [Dovecot] howto disable indexing on dovecot-lda ?
In-Reply-To: <1325849985.17774.10.camel@hurina>
References: <4F06D5D9.20001@gmail.com> <1325849985.17774.10.camel@hurina>
Message-ID: <4F06F0C0.30906@gmail.com>

On 01/06/12 13:39, Timo Sirainen wrote:
> On Fri, 2012-01-06 at 13:07 +0200, Adrian Minta wrote:
>> Hello,
>> is it possible to disable indexing on dovecot-lda ?
> protocol lda {
>   mail_location = whatever-you-have-now:INDEX=MEMORY
> }
>
>> Right now postfix delivers the mail directly to the NFS server without
>> any problems. If I switch to dovecot-lda the system crashes due to the
>> high I/O and locking.
> Disabling indexing won't disable writing to the dovecot-uidlist file.
> So I don't know if disabling indexes actually helps.
>
I don't have mail_location under "protocol lda":

protocol lda {
  # Address to use when sending rejection mails.
  postmaster_address = postmaster at xxx
  sendmail_path = /usr/lib/sendmail
  auth_socket_path = /var/run/dovecot/auth-master
  mail_plugins = quota
  syslog_facility = mail
}

mail_location is present only globally. What should I do then?

From adrian.minta at gmail.com Fri Jan 6 15:02:31 2012
From: adrian.minta at gmail.com (Adrian Minta)
Date: Fri, 06 Jan 2012 15:02:31 +0200
Subject: [Dovecot] howto disable indexing on dovecot-lda ?
In-Reply-To: <4F06DFF5.40707@hardwarefreak.com>
References: <4F06D5D9.20001@gmail.com> <4F06DFF5.40707@hardwarefreak.com>
Message-ID: <4F06F0E7.904@gmail.com>

On 01/06/12 13:50, Stan Hoeppner wrote:
> On 1/6/2012 5:07 AM, Adrian Minta wrote:
>> Hello,
>> is it possible to disable indexing on dovecot-lda ?
>>
>> Right now postfix delivers the mail directly to the NFS server without
>> any problems. If I switch to dovecot-lda the system crashes due to the
>> high I/O and locking.
>> Indexing on lda is not very useful because the number of IMAP logins
>> is less than 5% of the number of incoming mails, so a user can wait 3
>> seconds for his mail index, but a new mail can't.
> Then why bother with Dovecot LDA w/disabled indexing (the main reason
> for using it in the first place) instead of simply sticking with Postfix
> Local(8)?
>
Because of sieve and quota support. Another possible advantage will be the support for hashed mailbox directories.
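For context, the usual way to hand Postfix deliveries to dovecot-lda is a pipe transport roughly like this (binary path, user and transport name vary per install; see the Dovecot LDA wiki for the authoritative recipe):

# /etc/postfix/master.cf
dovecot   unix  -       n       n       -       -       pipe
  flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -f ${sender} -d ${recipient}

# /etc/postfix/main.cf
virtual_transport = dovecot
dovecot_destination_recipient_limit = 1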
From tss at iki.fi Fri Jan 6 15:08:26 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 15:08:26 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <4F06F0C0.30906@gmail.com> References: <4F06D5D9.20001@gmail.com> <1325849985.17774.10.camel@hurina> <4F06F0C0.30906@gmail.com> Message-ID: <1325855306.17774.21.camel@hurina> On Fri, 2012-01-06 at 15:01 +0200, Adrian Minta wrote: > > protocol lda { > > mail_location = whatever-you-have-now:INDEX=MEMORY > > } > > > I don't have mail_location under "protocol lda": Just add it there. From alexis.lelion at gmail.com Fri Jan 6 15:20:26 2012 From: alexis.lelion at gmail.com (Alexis Lelion) Date: Fri, 6 Jan 2012 14:20:26 +0100 Subject: [Dovecot] ACL with IMAP proxying In-Reply-To: <1325854644.17774.20.camel@hurina> References: <1325850528.17774.13.camel@hurina> <1325853012.17774.19.camel@hurina> <1325854644.17774.20.camel@hurina> Message-ID: It worked! Thanks a lot for your help and have a wonderful day! On Fri, Jan 6, 2012 at 1:57 PM, Timo Sirainen wrote: > Another possibility: http://wiki2.dovecot.org/PostLoginScripting > > and set MASTER_USER environment. > > On Fri, 2012-01-06 at 13:55 +0100, Alexis Lelion wrote: > > Thanks Timo. > > I'm actually using a packaged version of Dovecot 2.0 from Debian, so I > > can't apply the patch easily right now. > > I'll try do build dovecot this weekend and see if it solves the issue. > > > > Cheers > > > > Alexis > > > > On Fri, Jan 6, 2012 at 1:30 PM, Timo Sirainen wrote: > > > > > On Fri, 2012-01-06 at 13:22 +0100, Alexis Lelion wrote: > > > > > > > Thanks for your prompt answer, I wasn't expecting an answer that > soon ;-) > > > > I just tried your workaround, and actually, master_user is properly > set > > > to > > > > the username, but then is overriden with the proxy login again : > > > > > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > > mail=maildir:/var/vmail/domain/user > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > > plugin/quota=dirsize:storage=0 > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > > plugin/master_user=user > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > > plugin/master_user=proxy > > > > > > I thought it would have been the other way around.. See if > > > http://hg.dovecot.org/dovecot-2.0/raw-rev/684381041dc4 helps? > > > > > > > Is there any other flag I can set to avoid this? (Something like Y > for > > > the > > > > password)? > > > > > > Nope. > > > > > > > > > > > > From adrian.minta at gmail.com Fri Jan 6 15:25:11 2012 From: adrian.minta at gmail.com (Adrian Minta) Date: Fri, 06 Jan 2012 15:25:11 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <1325855306.17774.21.camel@hurina> References: <4F06D5D9.20001@gmail.com> <1325849985.17774.10.camel@hurina> <4F06F0C0.30906@gmail.com> <1325855306.17774.21.camel@hurina> Message-ID: <4F06F637.3070504@gmail.com> On 01/06/12 15:08, Timo Sirainen wrote: > On Fri, 2012-01-06 at 15:01 +0200, Adrian Minta wrote: >>> protocol lda { >>> mail_location = whatever-you-have-now:INDEX=MEMORY >>> } >>> >> I don't have mail_location under "protocol lda": > Just add it there. > Thank you ! 
Dovecot didn't complain after restart, and "dovecot -a" reports it correctly:

lda:
  postmaster_address: postmaster at xxx
  sendmail_path: /usr/lib/sendmail
  auth_socket_path: /var/run/dovecot/auth-master
  mail_plugins: quota
  syslog_facility: mail
  mail_location: maildir:/var/virtual/%d/%u:INDEX=MEMORY

I will do a test with this.

From yubao.liu at gmail.com Fri Jan 6 18:15:55 2012
From: yubao.liu at gmail.com (Yubao Liu)
Date: Sat, 07 Jan 2012 00:15:55 +0800
Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs
In-Reply-To: <4F06D272.5010200@bunbun.be>
References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be>
Message-ID: <4F071E3B.2060405@gmail.com>

On 01/06/2012 06:52 PM, Nick Rosier wrote:
> Yubao Liu wrote:
>> Hi all,
>>
>> I have no idea about that message, here is my configuration, what's wrong?
> You have two passdb entries: one with a file and one with PAM. I'm pretty
> sure PAM doesn't support DIGEST-MD5 authentication. Could be the cause of
> the problem.
>
Thanks, that is indeed the cause.

http://hg.dovecot.org/dovecot-2.0/file/684381041dc4/src/auth/auth.c

121 static bool auth_passdb_list_have_lookup_credentials(struct auth *auth)
122 {
123     struct auth_passdb *passdb;
124
125     for (passdb = auth->passdbs; passdb != NULL; passdb = passdb->next) {
126         if (passdb->passdb->iface.lookup_credentials != NULL)
127             return TRUE;
128     }
129     return FALSE;
130 }

I don't know why this function doesn't check auth->masterdbs. If I insert these lines after line 128, that error goes away, and Dovecot's imap-login process happily does DIGEST-MD5 authentication [1]. In my configuration, "masterdbs" contains "passdb passwd-file" and "passdbs" contains "passdb pam".

    for (passdb = auth->masterdbs; passdb != NULL; passdb = passdb->next) {
        if (passdb->passdb->iface.lookup_credentials != NULL)
            return TRUE;
    }

[1] But the authentication for "user*master" always fails. I realized master users can't log in as other users via the DIGEST-MD5 or CRAM-MD5 auth mechanisms, because these mechanisms use "user*master", not just "master", as the username when computing the digest.
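(A quick way to sanity-check such logins without a full IMAP session is doveadm-auth(1), available in 2.0; the user and password below match the test account set up later in this thread, and the exact output format may differ:

$ doveadm auth 'dieken*webmail at corp.example.com' 123456
passdb: dieken*webmail at corp.example.com auth succeeded

Note this only exercises the passdb; the digest mismatch described above shows up at the protocol level with challenge-response mechanisms.)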
Regards, Yubao Liu >> Debian testing, Dovecot 2.0.15 >> >> $ doveconf -n >> # 2.0.15: /etc/dovecot/dovecot.conf >> # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid >> auth_default_realm = corp.example.com >> auth_krb5_keytab = /etc/dovecot.keytab >> auth_master_user_separator = * >> auth_mechanisms = gssapi digest-md5 >> auth_realms = corp.example.com >> auth_username_format = %n >> first_valid_gid = 1000 >> first_valid_uid = 1000 >> mail_location = mdbox:/srv/mail/%u/Mail >> managesieve_notify_capability = mailto >> managesieve_sieve_capability = fileinto reject envelope >> encoded-character vacation subaddress comparator-i;ascii-numeric >> relational regex imap4flags copy include variables body enotify >> environment mailbox date ihave >> passdb { >> args = /etc/dovecot/master-users >> driver = passwd-file >> master = yes >> pass = yes >> } >> passdb { >> driver = pam >> } >> plugin { >> sieve = /srv/mail/%u/.dovecot.sieve >> sieve_dir = /srv/mail/%u/sieve >> } >> protocols = " imap lmtp sieve" >> service auth { >> unix_listener auth-client { >> group = Debian-exim >> mode = 0660 >> } >> } >> ssl_cert => ssl_key => userdb { >> args = home=/srv/mail/%u >> driver = passwd >> } >> protocol lmtp { >> mail_plugins = " sieve" >> } >> protocol lda { >> mail_plugins = " sieve" >> } >> >> # cat /etc/dovecot/master-users >> xxx at corp.example.com:zzzzzzzz >> >> The zzzzz is obtained by "doveadm pw -s digest-md5 -u >> xxx at corp.example.com", >> I tried to add prefix "{DIGEST-MD5}" before the generated hash and/or add >> "scheme=DIGEST-MD5" to the passwd-file passdb's "args" option, both >> don't help. >> >> The error message: >> dovecot: master: Dovecot v2.0.15 starting up (core dumps disabled) >> dovecot: auth: Fatal: DIGEST-MD5 mechanism can't be supported with given >> passdbs >> gold dovecot: master: Error: service(auth): command startup failed, >> throttling >> >> I opened debug auth log, it showed dovecot read /etc/dovecot/master-users >> and parsed one line, then the error occurred. Doesn't passwd-file >> passdb support >> digest-md5 password scheme? If it doesn't support, how do I configure >> digest-md5 auth >> mechanism with digest-md5 password scheme for virtual users? >> >> Regards, >> Yubao Liu >> > Rgds, > N. From tss at iki.fi Fri Jan 6 18:41:48 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 18:41:48 +0200 Subject: [Dovecot] v2.0.17 released Message-ID: <1325868113.17774.28.camel@hurina> http://dovecot.org/releases/2.0/dovecot-2.0.17.tar.gz http://dovecot.org/releases/2.0/dovecot-2.0.17.tar.gz.sig Among other changes: + Proxying now supports sending SSL client certificate to server with ssl_client_cert/key settings. + doveadm dump: Added support for dumping dbox headers/metadata. - Fixed memory leaks in login processes with SSL connections - vpopmail support was broken in v2.0.16 From tss at iki.fi Fri Jan 6 18:42:07 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 18:42:07 +0200 Subject: [Dovecot] v2.1.rc2 released Message-ID: <1325868127.17774.29.camel@hurina> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz.sig Lots of fixes since rc1. Some of the changes were larger than I wanted at RC stage, but they had to be done now.. Hopefully it's all over now, and we can have v2.1.0 soon. :) Some of the more important changes: * dsync was merged into doveadm. There is still "dsync" symlink pointing to "doveadm", which you can use the old way for now. 
The preferred ways to run dsync are "doveadm sync" (for the old "dsync mirror") and "doveadm backup".
+ IMAP SPECIAL-USE extension to describe mailboxes
+ Added mailbox {} sections, which deprecate the autocreate plugin
+ lib-fs: Added "mode" parameter to "posix" backend to specify mode for created files/dirs (for mail_attachment_dir).
+ inet_listener names are now used to figure out what type the socket is when useful. For example naming service auth { inet_listener } to auth-client vs. auth-userdb has different behavior.
+ Added pop3c (= POP3 client) storage backend.
- LMTP proxying code was simplified, hopefully fixing its problems.
- dsync: Don't remove user's subscriptions for subscriptions=no namespaces.

From tss at iki.fi Fri Jan 6 18:44:44 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 06 Jan 2012 18:44:44 +0200
Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs
In-Reply-To: <4F071E3B.2060405@gmail.com>
References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com>
Message-ID: <1325868288.17774.30.camel@hurina>

On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote:

> I don't know why this function doesn't check auth->masterdbs. If I
> insert these lines after line 128, that error goes away, and Dovecot's
> imap-login process happily does DIGEST-MD5 authentication [1].
> In my configuration, "masterdbs" contains "passdb passwd-file" and
> "passdbs" contains "passdb pam".

So .. you want DIGEST-MD5 authentication for the master users, but not for anyone else? I hadn't really thought anyone would want that..

From lists at luigirosa.com Fri Jan 6 19:13:20 2012
From: lists at luigirosa.com (Luigi Rosa)
Date: Fri, 06 Jan 2012 18:13:20 +0100
Subject: [Dovecot] v2.1.rc2 released
In-Reply-To: <1325868127.17774.29.camel@hurina>
References: <1325868127.17774.29.camel@hurina>
Message-ID: <4F072BB0.7040507@luigirosa.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Timo Sirainen said the following on 06/01/12 17:42:

> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz

Making all in doveadm
make[3]: Entering directory `/usr/src/dovecot-2.1.rc2/src/doveadm'
Making all in dsync
make[4]: Entering directory `/usr/src/dovecot-2.1.rc2/src/doveadm/dsync'
gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../../src/lib -I../../../src/lib-test -I../../../src/lib-settings -I../../../src/lib-master -I../../../src/lib-mail -I../../../src/lib-imap -I../../../src/lib-index -I../../../src/lib-storage -I../../../src/doveadm -std=gnu99 -g -O2 -Wall -W -Wmissing-prototypes -Wmissing-declarations -Wpointer-arith -Wchar-subscripts -Wformat=2 -Wbad-function-cast -Wstrict-aliasing=2 -I/usr/kerberos/include -MT doveadm-dsync.o -MD -MP -MF .deps/doveadm-dsync.Tpo -c -o doveadm-dsync.o doveadm-dsync.c
doveadm-dsync.c:17:27: error: doveadm-dsync.h: No such file or directory
doveadm-dsync.c:386: warning: no previous prototype for 'doveadm_dsync_main'
make[4]: *** [doveadm-dsync.o] Error 1
make[4]: Leaving directory `/usr/src/dovecot-2.1.rc2/src/doveadm/dsync'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory `/usr/src/dovecot-2.1.rc2/src/doveadm'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/usr/src/dovecot-2.1.rc2/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/src/dovecot-2.1.rc2'
make: *** [all] Error 2

In fact the file doveadm-dsync.h is not in the tarball

Ciao,
luigi

- --

/
+--[Luigi Rosa]--
\

Never try to outstubborn a cat. --Robert A.
From yubao.liu at gmail.com Fri Jan 6 19:29:14 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Sat, 07 Jan 2012 01:29:14 +0800 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <1325868288.17774.30.camel@hurina> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> Message-ID: <4F072F6A.8050801@gmail.com>

On 01/07/2012 12:44 AM, Timo Sirainen wrote:
> On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote:
>> I don't know why this function doesn't check auth->masterdbs, if I
>> insert these lines after line 128, that error goes away, and dovecot's
>> imap-login process happily does DIGEST-MD5 authentication [1].
>> In my configuration, "masterdbs" contains "passdb passwd-file",
>> "passdbs" contains " passdb pam".
> So .. you want DIGEST-MD5 authentication for the master users, but not
> for anyone else? I hadn't really thought anyone would want that..

I want users to use GSSAPI authentication from a native MUA, but RoundCube webmail doesn't support that, so I have to use DIGEST-MD5/CRAM-MD5/PLAIN/LOGIN for authentication between RoundCube and Dovecot, and let RoundCube log in as a master user on behalf of the normal user. I really don't like transferring passwords as plain text, so I prefer DIGEST-MD5 and CRAM-MD5 both as auth mechanisms and as password schemes.

My last email was partially wrong: DIGEST-MD5 can't be used for master users, because the IMAP client calculates its digest over 'real_user*master_user', which can never match the digest stored in the passdb, where only 'master_user' is used. But CRAM-MD5 doesn't include the user name in the digest; I just tried it successfully with my rough patch to src/auth/auth.c from my previous email :-)

# doveadm pw -s CRAM-MD5 -u webmail (use 123456 as passwd)
# cat > /etc/dovecot/master-users
webmail:{CRAM-MD5}dd59f669267e9bb13d42a1ba57c972c5b13a4b2ae457c9ada8035dc7d8bae41b
^D

$ gsasl --imap imap.corp.example.com --verbose -m CRAM-MD5 -a 'dieken*webmail at corp.example.com' -p 123456
Trying `gold.corp.example.com'...
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS LOGINDISABLED AUTH=GSSAPI AUTH=DIGEST-MD5 AUTH=CRAM-MD5] Dovecot ready.
. CAPABILITY
* CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS LOGINDISABLED AUTH=GSSAPI AUTH=DIGEST-MD5 AUTH=CRAM-MD5
. OK Pre-login capabilities listed, post-login capabilities have more.
. STARTTLS
. OK Begin TLS negotiation now.
. CAPABILITY
* CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE AUTH=GSSAPI AUTH=DIGEST-MD5 AUTH=CRAM-MD5
. OK Pre-login capabilities listed, post-login capabilities have more.
. AUTHENTICATE CRAM-MD5
+ PDM1OTIzODgxNjgyNzUxMjUuMTMyNTg3MDQwMkBnb2xkPg==
ZGlla2VuKndlYm1haWxAY29ycC5leGFtcGxlLmNvbSBkYjRlZWJlMTUwZGZjZjg5NTVkODZhNDBlMGJiZmQzNA==
* CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS
Client authentication finished (server trusted)...
Enter application data (EOF to finish): It's also OK to use "-a 'dieken*webmail'" instead of "-a 'dieken*webmail at corp.example.com'. # doveconf -n # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid auth_debug = yes auth_debug_passwords = yes auth_default_realm = corp.example.com auth_krb5_keytab = /etc/dovecot.keytab auth_master_user_separator = * auth_mechanisms = gssapi digest-md5 cram-md5 auth_realms = corp.example.com auth_username_format = %n auth_verbose = yes auth_verbose_passwords = plain first_valid_gid = 1000 first_valid_uid = 1000 mail_debug = yes mail_location = mdbox:/srv/mail/%u/Mail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave passdb { args = /etc/dovecot/master-users driver = passwd-file master = yes } passdb { driver = pam } plugin { sieve = /srv/mail/%u/.dovecot.sieve sieve_dir = /srv/mail/%u/sieve } protocols = " imap lmtp sieve" service auth { unix_listener auth-client { group = Debian-exim mode = 0660 } } ssl_cert = References: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> Message-ID: On 1/6/2012 2:57 AM, Timo Sirainen wrote: > On 6.1.2012, at 12.55, Timo Sirainen wrote: > >>> Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it)) >>> Jan 4 06:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory >> The problem is clearly that index-worker's vsz_limit is too low. Increase it (or default_vsz_limit). > Although the source of the out-of-memory > > /usr/local/lib/dovecot/libdovecot.so.0(buffer_write+0x7c) [0x7f0ec1a550ec] -> /usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3292) [0x7f0ec024f292] -> > > is something that shouldn't really be happening. I guess the Solr plugin wastes memory unnecessarily, I'll see what I can do about it. But for now just increase vsz limit. > I set default_vsz_limit = 1024M. Those errors appear gone - but I do have messages like: Jan 6 09:22:42 bubba dovecot: indexer-worker(user1 at domain.com): Error: fts_solr: Indexing failed: 400 Illegal character ((CTRL-CHAR, code 18)) at [row,col {unknown-source}]: [482765,16] Jan 6 09:22:42 bubba dovecot: indexer-worker: Error: Google seems to indicate that Solr cannot handle "invalid" characters - and that it is the responsibility of the calling program to strip out such. A quick search shows me a both an individual character comparison in Java and a regex used for the purpose. Is there any "illegal character protection" in the Dovecot Solr plugin? -- Daniel From dmiller at amfes.com Fri Jan 6 19:35:34 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Fri, 06 Jan 2012 09:35:34 -0800 Subject: [Dovecot] FTS-Solr plugin Message-ID: Solr plugin appears to break when mailbox names have an ampersand in the name. The messages appear to indicate '&' gets translated to '&--'. -- Daniel From tss at iki.fi Fri Jan 6 19:36:41 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 19:36:41 +0200 Subject: [Dovecot] Possible mdbox corruption In-Reply-To: References: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> Message-ID: <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> On 6.1.2012, at 19.30, Daniel L. 
Miller wrote: > Jan 6 09:22:42 bubba dovecot: indexer-worker(user1 at domain.com): Error: fts_solr: Indexing failed: 400 Illegal character ((CTRL-CHAR, code 18)) at [row,col {unknown-source}]: [482765,16] > Jan 6 09:22:42 bubba dovecot: indexer-worker: Error: > > Google seems to indicate that Solr cannot handle "invalid" characters - and that it is the responsibility of the calling program to strip out such. A quick search shows me a both an individual character comparison in Java and a regex used for the purpose. Is there any "illegal character protection" in the Dovecot Solr plugin? Yes, there is. So I'm not really sure what it's complaining about. Are you using the "solr" or "solr_old" backend? From yubao.liu at gmail.com Fri Jan 6 19:45:15 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Sat, 07 Jan 2012 01:45:15 +0800 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <1325868288.17774.30.camel@hurina> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> Message-ID: <4F07332B.70708@gmail.com> On 01/07/2012 12:44 AM, Timo Sirainen wrote: > On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote: > >> I don't know why this function doesn't check auth->masterdbs, if I >> insert these lines after line 128, that error goes away, and dovecot's >> imap-login process happily does DIGEST-MD5 authentication [1]. >> In my configuration, "masterdbs" contains "passdb passwd-file", >> "passdbs" contains " passdb pam". > So .. you want DIGEST-MD5 authentication for the master users, but not > for anyone else? I hadn't really thought anyone would want that.. > Is there any special reason that master passdb isn't taken into account in src/auth/auth.c:auth_passdb_list_have_lookup_credentials() ? I feel master passdb is also a kind of passdb. http://wiki2.dovecot.org/PasswordDatabase > You can use multiple databases, so if the password doesn't match > in the first database, Dovecot checks the next one. This can be useful > if you want to easily support having both virtual users and also local > system users (see Authentication/MultipleDatabases ). This is exactly my use case, I use Kerberos for system users, I'm curious why master passdb isn't used to check "have_lookup_credentials" ability. http://wiki2.dovecot.org/Authentication/MultipleDatabases > Currently the fallback works only with the PLAIN authentication mechanism. I hope this limitation can be relaxed. Regards, Yubao Liu From tss at iki.fi Fri Jan 6 19:51:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 19:51:49 +0200 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <4F07332B.70708@gmail.com> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> <4F07332B.70708@gmail.com> Message-ID: On 6.1.2012, at 19.45, Yubao Liu wrote: > On 01/07/2012 12:44 AM, Timo Sirainen wrote: >> On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote: >> >>> I don't know why this function doesn't check auth->masterdbs, if I >>> insert these lines after line 128, that error goes away, and dovecot's >>> imap-login process happily does DIGEST-MD5 authentication [1]. >>> In my configuration, "masterdbs" contains "passdb passwd-file", >>> "passdbs" contains " passdb pam". >> So .. you want DIGEST-MD5 authentication for the master users, but not >> for anyone else? 
I hadn't really thought anyone would want that..

> Is there any special reason that master passdb isn't taken into
> account in src/auth/auth.c:auth_passdb_list_have_lookup_credentials() ?
> I feel master passdb is also a kind of passdb.

I guess it could be changed. It wasn't done intentionally that way.

> This is exactly my use case, I use Kerberos for system users,
> I'm curious why master passdb isn't used to check "have_lookup_credentials" ability
> http://wiki2.dovecot.org/Authentication/MultipleDatabases
> > Currently the fallback works only with the PLAIN authentication mechanism.
> I hope this limitation can be relaxed.

It might already be .. I don't remember. In any case you have only PAM passdb, so it shouldn't matter. GSSAPI isn't a passdb.

From tss at iki.fi Fri Jan 6 21:40:44 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 21:40:44 +0200 Subject: [Dovecot] v2.1.rc3 released Message-ID: <1325878845.17774.38.camel@hurina>

http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc3.tar.gz
http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc3.tar.gz.sig

Whoops, rc2 was missing a file. I always run "make distcheck", which should catch these, but recently it has always failed due to clang static checking giving one "error" that I didn't really want to fix. Because of that the distcheck didn't finish and didn't check for the missing file. So, anyway, I've made clang happy again, and now that I see how bad an idea it is to just ignore the failed distcheck, I won't do that again in future. :)

From mailinglist at ngong.de Fri Jan 6 18:37:22 2012 From: mailinglist at ngong.de (mailinglist) Date: Fri, 06 Jan 2012 17:37:22 +0100 Subject: [Dovecot] change initial permissions on creation of mail folder Message-ID: <4F072342.1090901@ngong.de>

Installed Dovecot from a Debian .deb file, using "apt-get install dovecot-imapd dovecot-pop3d". Creating a new account for a system user sets user-only permissions. Where do I change the initial permissions used when the mail folder and other subdirectories are created?

Any time I create a new account in my mail client for a system user, Dovecot tries to create ~/mail/.imap/INBOX. The permissions for mail and .imap are set to 0700. With these permissions INBOX cannot be created, leading to an error message in the log files. When I manually change the permissions to 0770, INBOX is created.

From doctor at doctor.nl2k.ab.ca Fri Jan 6 22:12:56 2012 From: doctor at doctor.nl2k.ab.ca (The Doctor) Date: Fri, 6 Jan 2012 13:12:56 -0700 Subject: [Dovecot] v2.1.rc2 released In-Reply-To: <1325868127.17774.29.camel@hurina> References: <1325868127.17774.29.camel@hurina> Message-ID: <20120106201255.GA20598@doctor.nl2k.ab.ca>

On Fri, Jan 06, 2012 at 06:42:07PM +0200, Timo Sirainen wrote:
> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz
> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz.sig
>
> Lots of fixes since rc1. Some of the changes were larger than I wanted
> at RC stage, but they had to be done now.. Hopefully it's all over now,
> and we can have v2.1.0 soon. :)
>
> Some of the more important changes:
>
> * dsync was merged into doveadm. There is still "dsync" symlink
> pointing to "doveadm", which you can use the old way for now.
> The preferred ways to run dsync are "doveadm sync" (for old "dsync
> mirror") and "doveadm backup".
> > + IMAP SPECIAL-USE extension to describe mailboxes > + Added mailbox {} sections, which deprecate autocreate plugin > + lib-fs: Added "mode" parameter to "posix" backend to specify mode > for created files/dirs (for mail_attachment_dir). > + inet_listener names are now used to figure out what type the socket > is when useful. For example naming service auth { inet_listener } to > auth-client vs. auth-userdb has different behavior. > + Added pop3c (= POP3 client) storage backend. > - LMTP proxying code was simplified, hopefully fixing its problems. > - dsync: Don't remove user's subscriptions for subscriptions=no > namespaces. > Suggestion: Get rid of the --as-needed ld flag. This is a show stopper for me. Also, Making all in doveadm Making all in dsync gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../../src/lib -I../../../src/lib-test -I../../../src/lib-settings -I../../../src/lib-master -I../../../src/lib-mail -I../../../src/lib-imap -I../../../src/lib-index -I../../../src/lib-storage -I../../../src/doveadm -std=gnu99 -g -O2 -Wall -W -Wmissing-prototypes -Wmissing-declarations -Wpointer-arith -Wchar-subscripts -Wformat=2 -Wbad-function-cast -I/usr/contrib/include -MT doveadm-dsync.o -MD -MP -MF .deps/doveadm-dsync.Tpo -c -o doveadm-dsync.o doveadm-dsync.c doveadm-dsync.c:17:27: doveadm-dsync.h: No such file or directory doveadm-dsync.c:386: warning: no previous prototype for `doveadm_dsync_main' *** Error code 1 Stop. *** Error code 1 Stop. *** Error code 1 Stop. *** Error code 1 Stop. *** Error code 1 Looks like rc3 needed . -- Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca God, Queen and country! Never Satan President Republic! Beware AntiChrist rising! https://www.fullyfollow.me/rootnl2k Merry Christmas 2011 and Happy New Year 2012 ! From doctor at doctor.nl2k.ab.ca Fri Jan 6 22:19:14 2012 From: doctor at doctor.nl2k.ab.ca (The Doctor) Date: Fri, 6 Jan 2012 13:19:14 -0700 Subject: [Dovecot] v2.1.rc2 released In-Reply-To: <20120106201255.GA20598@doctor.nl2k.ab.ca> References: <1325868127.17774.29.camel@hurina> <20120106201255.GA20598@doctor.nl2k.ab.ca> Message-ID: <20120106201914.GC20598@doctor.nl2k.ab.ca> On Fri, Jan 06, 2012 at 01:12:56PM -0700, The Doctor wrote: > On Fri, Jan 06, 2012 at 06:42:07PM +0200, Timo Sirainen wrote: > > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz > > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz.sig > > > > Lots of fixes since rc1. Some of the changes were larger than I wanted > > at RC stage, but they had to be done now.. Hopefully it's all over now, > > and we can have v2.1.0 soon. :) > > > > Some of the more important changes: > > > > * dsync was merged into doveadm. There is still "dsync" symlink > > pointing to "doveadm", which you can use the old way for now. > > The preferred ways to run dsync are "doveadm sync" (for old "dsync > > mirror") and "doveadm backup". > > > > + IMAP SPECIAL-USE extension to describe mailboxes > > + Added mailbox {} sections, which deprecate autocreate plugin > > + lib-fs: Added "mode" parameter to "posix" backend to specify mode > > for created files/dirs (for mail_attachment_dir). > > + inet_listener names are now used to figure out what type the socket > > is when useful. For example naming service auth { inet_listener } to > > auth-client vs. auth-userdb has different behavior. > > + Added pop3c (= POP3 client) storage backend. > > - LMTP proxying code was simplified, hopefully fixing its problems. 
> > - dsync: Don't remove user's subscriptions for subscriptions=no > > namespaces. > > > > > Suggestion: > > Get rid of the --as-needed ld flag. This is a show stopper for me. > > Also, > > Making all in doveadm > Making all in dsync > gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../../src/lib -I../../../src/lib-test -I../../../src/lib-settings -I../../../src/lib-master -I../../../src/lib-mail -I../../../src/lib-imap -I../../../src/lib-index -I../../../src/lib-storage -I../../../src/doveadm -std=gnu99 -g -O2 -Wall -W -Wmissing-prototypes -Wmissing-declarations -Wpointer-arith -Wchar-subscripts -Wformat=2 -Wbad-function-cast -I/usr/contrib/include -MT doveadm-dsync.o -MD -MP -MF .deps/doveadm-dsync.Tpo -c -o doveadm-dsync.o doveadm-dsync.c > doveadm-dsync.c:17:27: doveadm-dsync.h: No such file or directory > doveadm-dsync.c:386: warning: no previous prototype for `doveadm_dsync_main' > *** Error code 1 > > Stop. > *** Error code 1 > > Stop. > *** Error code 1 > > Stop. > *** Error code 1 > > Stop. > *** Error code 1 > > > Looks like rc3 needed . > Just noted your rc3 notice. Can you get an rc4 going where the above 2 mentions are fixed? > -- > Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca > God, Queen and country! Never Satan President Republic! Beware AntiChrist rising! > https://www.fullyfollow.me/rootnl2k > Merry Christmas 2011 and Happy New Year 2012 ! -- Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca God, Queen and country! Never Satan President Republic! Beware AntiChrist rising! https://www.fullyfollow.me/rootnl2k Merry Christmas 2011 and Happy New Year 2012 ! From tss at iki.fi Fri Jan 6 22:24:45 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 22:24:45 +0200 Subject: [Dovecot] v2.1.rc2 released In-Reply-To: <20120106201914.GC20598@doctor.nl2k.ab.ca> References: <1325868127.17774.29.camel@hurina> <20120106201255.GA20598@doctor.nl2k.ab.ca> <20120106201914.GC20598@doctor.nl2k.ab.ca> Message-ID: <01600D7A-F1E9-4DD9-8182-B3A5CB9A2859@iki.fi> On 6.1.2012, at 22.19, The Doctor wrote: >> doveadm-dsync.c:17:27: doveadm-dsync.h: No such file or directory >> doveadm-dsync.c:386: warning: no previous prototype for `doveadm_dsync_main' >> *** Error code 1 >> Looks like rc3 needed . >> > > Just noted your rc3 notice. > > Can you get an rc4 going where the above 2 mentions are fixed? rc3 fixes these. From dmiller at amfes.com Fri Jan 6 22:32:54 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Fri, 06 Jan 2012 12:32:54 -0800 Subject: [Dovecot] Possible mdbox corruption In-Reply-To: <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> References: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> Message-ID: On 1/6/2012 9:36 AM, Timo Sirainen wrote: > On 6.1.2012, at 19.30, Daniel L. Miller wrote: > >> Jan 6 09:22:42 bubba dovecot: indexer-worker(user1 at domain.com): Error: fts_solr: Indexing failed: 400 Illegal character ((CTRL-CHAR, code 18)) at [row,col {unknown-source}]: [482765,16] >> Jan 6 09:22:42 bubba dovecot: indexer-worker: Error: >> >> Google seems to indicate that Solr cannot handle "invalid" characters - and that it is the responsibility of the calling program to strip out such. A quick search shows me a both an individual character comparison in Java and a regex used for the purpose. Is there any "illegal character protection" in the Dovecot Solr plugin? > Yes, there is. So I'm not really sure what it's complaining about. 
> Are you using the "solr" or "solr_old" backend?

"Solr".

plugin {
  fts = solr
  fts_solr = url=http://localhost:8983/solr/
}

-- Daniel

From david at paperclipsystems.com Fri Jan 6 22:44:51 2012 From: david at paperclipsystems.com (David Egbert) Date: Fri, 06 Jan 2012 13:44:51 -0700 Subject: [Dovecot] failed: Too many levels of symbolic links Message-ID: <4F075D43.8090706@paperclipsystems.com>

All, My dovecot install works great except for one error I keep seeing in my logs. The folder has 7138 messages in it. I informed the user that they needed to reduce the number of messages in the folder and believe this will fix the problem. My question is about where the problem lies. Is the problem related to an internal limit with Dovecot v2.0.15 or with my Debian (3.1.0-1-amd64)? Thanks

---
dovecot: imap(XXXXX at XXXXX.com): Error: readdir(/XXXX/XXXX/XXXXXXXXX/XXXXX/XXXXXXX/XXXXXXXXXXXXXXXXXXX/XXX) failed: Too many levels of symbolic links

David Egbert Paperclip Systems, LLC

--- This message, its contents, and attachments are confidential and are only authorized for the intended recipient. Disclosure, re-distribution, or use of said information is strictly prohibited, and may be excluded from disclosure by applicable law. If you are not the intended recipient, or their intermediary, please notify the sender and delete this message.

From tss at iki.fi Fri Jan 6 23:16:33 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 23:16:33 +0200 Subject: [Dovecot] failed: Too many levels of symbolic links In-Reply-To: <4F075D43.8090706@paperclipsystems.com> References: <4F075D43.8090706@paperclipsystems.com> Message-ID: <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi>

On 6.1.2012, at 22.44, David Egbert wrote:
> dovecot: imap(XXXXX at XXXXX.com): Error: readdir(/XXXX/XXXX/XXXXXXXXX/XXXXX/XXXXXXX/XXXXXXXXXXXXXXXXXXX/XXX) failed: Too many levels of symbolic links

You have a symlink loop. Either a symlink that points to itself or one of the parent directories.

From e-frog at gmx.de Fri Jan 6 23:25:49 2012 From: e-frog at gmx.de (e-frog) Date: Fri, 06 Jan 2012 22:25:49 +0100 Subject: [Dovecot] 2.1.rc1 (056934abd2ef): virtual plugin mailbox search pattern In-Reply-To: <4EF4BB6C.3050902@gmx.de> References: <4EF4BB6C.3050902@gmx.de> Message-ID: <4F0766DD.1060805@gmx.de>

On 23.12.2011 18:33, e-frog wrote:
> Hello Timo,
>
> With dovecot 2.1.rc1 (056934abd2ef) there seems to be something wrong
> with virtual plugin mailbox search patterns.
>
> I'm using a virtual mailbox 'unread' with the following dovecot-virtual
> file
>
> $ cat dovecot-virtual
> *
> unseen
>
> For testing purposes I created the following folders, each containing
> one unread message
>
> INBOX, INBOX/level1 and INBOX/level1/level2
>
> 2.1.rc1 (056934abd2ef)
>
> 1 LIST "" "*"
> * LIST (\HasChildren) "/" "INBOX"
> * LIST (\HasChildren) "/" "INBOX/level1"
> * LIST (\HasNoChildren) "/" "INBOX/level1/level2"
> * LIST (\HasChildren) "/" "virtual"
> * LIST (\HasNoChildren) "/" "virtual/unread"
> 1 OK List completed.
> 2 STATUS "INBOX" (UNSEEN)
> * STATUS "INBOX" (UNSEEN 1)
> 2 OK Status completed.
> 3 STATUS "INBOX/level1" (UNSEEN)
> * STATUS "INBOX/level1" (UNSEEN 1)
> 3 OK Status completed.
> 4 STATUS "INBOX/level1/level2" (UNSEEN)
> * STATUS "INBOX/level1/level2" (UNSEEN 1)
> 4 OK Status completed.
> 5 STATUS "virtual/unread" (UNSEEN)
> * STATUS "virtual/unread" (UNSEEN 1)
> 5 OK Status completed.
>
> Result: virtual/unread shows only 1 unseen message. Further tests showed
> it's the one from INBOX.
The mails from the deeper levels are not found. > > Downgrading to 2.0.16 restores the correct behavior: > > 1 LIST "" "*" > * LIST (\HasChildren) "/" "INBOX" > * LIST (\HasChildren) "/" "INBOX/level1" > * LIST (\HasNoChildren) "/" "INBOX/level1/level2" > * LIST (\HasChildren) "/" "virtual" > * LIST (\HasNoChildren) "/" "virtual/unread" > 1 OK List completed. > 2 STATUS "INBOX" (UNSEEN) > * STATUS "INBOX" (UNSEEN 1) > 2 OK Status completed. > 3 STATUS "INBOX/level1" (UNSEEN) > * STATUS "INBOX/level1" (UNSEEN 1) > 3 OK Status completed. > 4 STATUS "INBOX/level1/level2" (UNSEEN) > * STATUS "INBOX/level1/level2" (UNSEEN 1) > 4 OK Status completed. > 5 STATUS "virtual/unread" (UNSEEN) > * STATUS "virtual/unread" (UNSEEN 3) > 5 OK Status completed. > > Result: virtual/unread shows 3 unseen messages as it should > > The namespace configuration is as following > > namespace { > hidden = no > inbox = yes > list = yes > location = > prefix = > separator = / > subscriptions = yes > type = private > } > namespace { > location = virtual:~/virtual > prefix = virtual/ > separator = / > subscriptions = no > type = private > } > > I've also tried this with location = virtual:~/virtual:LAYOUT=maildir++ > leading to the same result. > > Thanks, > e-frog Just tested this on 2.1.rc3 and this still doesn't work like in v2.0. It seems like the search stops at the first hierarchy separator. Is there anything in addition I can do to help fix this issue? Thanks, e-frog From david at paperclipsystems.com Fri Jan 6 23:41:04 2012 From: david at paperclipsystems.com (David Egbert) Date: Fri, 06 Jan 2012 14:41:04 -0700 Subject: [Dovecot] failed: Too many levels of symbolic links In-Reply-To: <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> Message-ID: <4F076A70.3090905@paperclipsystems.com> On 1/6/2012 2:16 PM, Timo Sirainen wrote: > On 6.1.2012, at 22.44, David Egbert wrote: > >> dovecot: imap(XXXXX at XXXXX.com): Error: readdir(/XXXX/XXXX/XXXXXXXXX/XXXXX/XXXXXXX/XXXXXXXXXXXXXXXXXXX/XXX) failed: Too many levels of symbolic links > You have a symlink loop. Either a symlink that points to itself or one of the parent directories. > I thought that might have been the case, but I checked and there are no symlinks in that directory, or any of the directories above it in the path. All of the directories and files were created by dovecot. I didn't notice this in the logs until recently. The files are stored on an NFS Raid if that makes any difference. --- David Egbert From tss at iki.fi Fri Jan 6 23:51:41 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 23:51:41 +0200 Subject: [Dovecot] failed: Too many levels of symbolic links In-Reply-To: <4F076A70.3090905@paperclipsystems.com> References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> <4F076A70.3090905@paperclipsystems.com> Message-ID: On 6.1.2012, at 23.41, David Egbert wrote: > On 1/6/2012 2:16 PM, Timo Sirainen wrote: >> On 6.1.2012, at 22.44, David Egbert wrote: >> >>> dovecot: imap(XXXXX at XXXXX.com): Error: readdir(/XXXX/XXXX/XXXXXXXXX/XXXXX/XXXXXXX/XXXXXXXXXXXXXXXXXXX/XXX) failed: Too many levels of symbolic links >> You have a symlink loop. Either a symlink that points to itself or one of the parent directories. >> > I thought that might have been the case, but I checked and there are no symlinks in that directory, or any of the directories above it in the path. All of the directories and files were created by dovecot. 
I didn't notice this in the logs until recently. The files are stored on an NFS Raid if that makes any difference. Well, then.. You have a bit too many Xes in there for me to guess which readdir() is the one failing. I guess it's /new or /cur for a Maildir? Anyway, readdir() is failing with ELOOP. Does it always fail with "Too many levels of symbolic links" or is it sometimes different? This sounds like a bug in Linux NFS client code. You can reproduce this always with this one user's Maildir? Can you do "ls" in the directory? From david at paperclipsystems.com Sat Jan 7 00:10:32 2012 From: david at paperclipsystems.com (David Egbert) Date: Fri, 06 Jan 2012 15:10:32 -0700 Subject: [Dovecot] failed: Too many levels of symbolic links In-Reply-To: References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> <4F076A70.3090905@paperclipsystems.com> Message-ID: <4F077158.4000500@paperclipsystems.com> On 1/6/2012 2:51 PM, Timo Sirainen wrote: > On 6.1.2012, at 23.41, David Egbert wrote: > >> On 1/6/2012 2:16 PM, Timo Sirainen wrote: >>> On 6.1.2012, at 22.44, David Egbert wrote: >>> >>>> dovecot: imap(XXXXX at XXXXX.com): Error: readdir(/XXXX/XXXX/XXXXXXXXX/XXXXX/XXXXXXX/XXXXXXXXXXXXXXXXXXX/XXX) failed: Too many levels of symbolic links >>> You have a symlink loop. Either a symlink that points to itself or one of the parent directories. >>> >> I thought that might have been the case, but I checked and there are no symlinks in that directory, or any of the directories above it in the path. All of the directories and files were created by dovecot. I didn't notice this in the logs until recently. The files are stored on an NFS Raid if that makes any difference. > Well, then.. You have a bit too many Xes in there for me to guess which readdir() is the one failing. I guess it's /new or /cur for a Maildir? > > Anyway, readdir() is failing with ELOOP. Does it always fail with "Too many levels of symbolic links" or is it sometimes different? This sounds like a bug in Linux NFS client code. You can reproduce this always with this one user's Maildir? Can you do "ls" in the directory? > Sorry about the X's... it is a client directory. We support many domains and their privacy is paramount. You are correct it is in the /cur directory. I can LS all of directories without problems. This user has 10+Gb in his mail box spread across 352 subscribed folders. As for the logs it is always the directory, always the same error. David Egbert From tss at iki.fi Sat Jan 7 00:30:37 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 7 Jan 2012 00:30:37 +0200 Subject: [Dovecot] failed: Too many levels of symbolic links In-Reply-To: <4F077158.4000500@paperclipsystems.com> References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> <4F076A70.3090905@paperclipsystems.com> <4F077158.4000500@paperclipsystems.com> Message-ID: <4A0E9695-E78A-487F-AE53-888D27981EF1@iki.fi> On 7.1.2012, at 0.10, David Egbert wrote: >> Anyway, readdir() is failing with ELOOP. Does it always fail with "Too many levels of symbolic links" or is it sometimes different? This sounds like a bug in Linux NFS client code. You can reproduce this always with this one user's Maildir? Can you do "ls" in the directory? >> > Sorry about the X's... it is a client directory. We support many domains and their privacy is paramount. You are correct it is in the /cur directory. I can LS all of directories without problems. This user has 10+Gb in his mail box spread across 352 subscribed folders. 
As for the logs it is always the directory, always the same error.

David Egbert

From tss at iki.fi Sat Jan 7 00:30:37 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 7 Jan 2012 00:30:37 +0200 Subject: [Dovecot] failed: Too many levels of symbolic links In-Reply-To: <4F077158.4000500@paperclipsystems.com> References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> <4F076A70.3090905@paperclipsystems.com> <4F077158.4000500@paperclipsystems.com> Message-ID: <4A0E9695-E78A-487F-AE53-888D27981EF1@iki.fi>

On 7.1.2012, at 0.10, David Egbert wrote:
>> Anyway, readdir() is failing with ELOOP. Does it always fail with "Too many levels of symbolic links" or is it sometimes different? This sounds like a bug in Linux NFS client code. You can reproduce this always with this one user's Maildir? Can you do "ls" in the directory?
>>
> Sorry about the X's... it is a client directory. We support many domains and their privacy is paramount. You are correct it is in the /cur directory. I can LS all of directories without problems. This user has 10+Gb in his mail box spread across 352 subscribed folders. As for the logs it is always the directory, always the same error.

Try the attached test program. Run it as: ./readdir /path/to/Maildir/cur

Does it also give non-zero error?

-------------- next part --------------
A non-text attachment was scrubbed...
Name: readdir.c
Type: application/octet-stream
Size: 271 bytes
Desc: not available
URL:
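The readdir.c attachment itself was scrubbed by the list archiver, so the exact test program is lost. A minimal sketch of what Timo describes (walk a directory with readdir() and report whether it finishes with an error such as ELOOP) could look like this; it is a reconstruction for illustration, not the original 271-byte file:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <dirent.h>

int main(int argc, char *argv[])
{
	DIR *dir;

	if (argc != 2) {
		fprintf(stderr, "Usage: readdir <dir>\n");
		return 1;
	}
	if ((dir = opendir(argv[1])) == NULL) {
		perror("opendir");
		return 1;
	}
	/* walk all entries; we only care whether readdir() itself fails */
	errno = 0;
	while (readdir(dir) != NULL)
		errno = 0;
	if (errno != 0)
		printf("readdir() failed: %s\n", strerror(errno));
	else
		printf("readdir() ok\n");
	closedir(dir);
	return 0;
}

Compile with "gcc -o readdir readdir.c" and run it against the problem directory, e.g. ./readdir /path/to/Maildir/cur.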
From yubao.liu at gmail.com Sat Jan 7 05:36:27 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Sat, 07 Jan 2012 11:36:27 +0800 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> <4F07332B.70708@gmail.com> Message-ID: <4F07BDBB.3060204@gmail.com>

On 01/07/2012 01:51 AM, Timo Sirainen wrote:
> On 6.1.2012, at 19.45, Yubao Liu wrote:
>> On 01/07/2012 12:44 AM, Timo Sirainen wrote:
>>> On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote:
>>>> I don't know why this function doesn't check auth->masterdbs, if I
>>>> insert these lines after line 128, that error goes away, and dovecot's
>>>> imap-login process happily does DIGEST-MD5 authentication [1].
>>>> In my configuration, "masterdbs" contains "passdb passwd-file",
>>>> "passdbs" contains " passdb pam".
>>> So .. you want DIGEST-MD5 authentication for the master users, but not
>>> for anyone else? I hadn't really thought anyone would want that..
>> Is there any special reason that master passdb isn't taken into
>> account in src/auth/auth.c:auth_passdb_list_have_lookup_credentials() ?
>> I feel master passdb is also a kind of passdb.
> I guess it could be changed. It wasn't done intentionally that way.

I guess this change broke the old way: http://hg.dovecot.org/dovecot-2.0/rev/b05793c609ac

In the old version, "auth->passdbs" contained all passdbs; this revision changed "auth->passdbs" to contain only non-master passdbs.

I'm not sure which fix is better, or even whether my proposal is correct or complete:
a) in src/auth/auth.c:auth_passdb_preinit(), insert the master passdb into auth->passdbs too, and remove the duplicate code for masterdbs in auth_init() and auth_deinit().
b) add similar code for masterdbs in auth_passdb_list_have_verify_plain(), auth_passdb_list_have_lookup_credentials(), auth_passdb_list_have_set_credentials().

Another related question is the "pass" option in the master passdb; if I set it to "yes", authentication fails:

Jan 7 11:26:00 gold dovecot: auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011secured#011lip=127.0.1.1#011rip=127.0.0.1#011lport=143#011rport=51771
Jan 7 11:26:00 gold dovecot: auth: Debug: client out: CONT#0111#011PDk4NjcwMDY1MTU3NzI3MjguMTMyNTkwNjc2MEBnb2xkPg==
Jan 7 11:26:00 gold dovecot: auth: Debug: client in: CONT#0111#011ZGlla2VuKndlYm1haWwgYmNkMzFiMWE1YjQ1OWQ0OGRkZWQ4ZmIzZDhmMjVhZTc=
Jan 7 11:26:00 gold dovecot: auth: Debug: auth(webmail,127.0.0.1,master): Master user lookup for login: dieken
Jan 7 11:26:00 gold dovecot: auth: Debug: passwd-file(webmail,127.0.0.1,master): lookup: user=webmail file=/etc/dovecot/master-users
Jan 7 11:26:00 gold dovecot: auth: passdb(webmail,127.0.0.1,master): Master user logging in as dieken
Jan 7 11:26:00 gold dovecot: auth: Error: passdb(dieken,127.0.0.1): No passdbs support skipping password verification - pass=yes can't be used in master passdb
Jan 7 11:26:00 gold dovecot: auth: Debug: password(dieken,127.0.0.1): passdb doesn't support credential lookups

My normal passdb is a PAM passdb; it doesn't support credential lookups, and that's reasonable, but I feel the comment for the "pass" option is confusing:

$ less /etc/dovecot/conf.d/auth-master.conf.ext
....
# Example master user passdb using passwd-file. You can use any passdb though.
passdb {
  driver = passwd-file
  master = yes
  args = /etc/dovecot/master-users

  # Unless you're using PAM, you probably still want the destination user to
  # be looked up from passdb that it really exists. pass=yes does that.
  pass = yes
}

According to the comment, it's there to check whether the real user exists, but why not check the userdb instead of another passdb? Even if it must check against a passdb, in this case it's obviously not necessary to look up credentials; it's enough to look up the user name only.

Regards, Yubao Liu

-------------- next part --------------
A non-text attachment was scrubbed...
Name: schemeA-count-master-passdb-as-passdb-too.patch
Type: text/x-patch
Size: 1357 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: schemeB-also-check-against-master-passdbs.patch
Type: text/x-patch
Size: 1187 bytes
Desc: not available
URL:
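For reference, the configuration shape this thread converges on (a passwd-file master passdb holding a CRAM-MD5 entry in front of the normal PAM passdb) looks roughly like this. It is a sketch assembled from the messages above, and with stock Dovecot 2.0 this combination still trips the "mechanism can't be supported with given passdbs" check, which is exactly what Yubao's patches address:

auth_mechanisms = gssapi cram-md5
auth_master_user_separator = *

passdb {
  driver = passwd-file
  master = yes
  args = /etc/dovecot/master-users
}
passdb {
  driver = pam
}

# /etc/dovecot/master-users, hash generated with: doveadm pw -s CRAM-MD5 -u webmail
webmail:{CRAM-MD5}<hash>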
From phil at kernick.org Sat Jan 7 02:21:53 2012 From: phil at kernick.org (Phil Kernick) Date: Sat, 07 Jan 2012 10:51:53 +1030 Subject: [Dovecot] Attribute Cache flush errors on FreeBSD 8.2 Message-ID: <4F079021.4090001@kernick.org>

I'm running dovecot 2.0.16 on FreeBSD 8.2 with the mail spool and indexes on an NFS server. Lines like the following keep appearing in syslog for access to each mailbox:

Error: nfs_flush_attr_cache_fd_locked: fchown(/home/philk/Mail/Deleted) failed: Bad file descriptor

This is coming from nfs-workarounds.c line 210, which, tracing back, seems to come from the call to mbox_lock in lib-storage/index/mbox/mbox-lock.c line 774. I have /home mounted with options acregmin=0,acregmax=0,acdirmin=0,acdirmax=0 (as FreeBSD doesn't have a noac option), but it throws the same error either way. The output of dovecot -n is below.

Phil.

# 2.0.16: /usr/local/etc/dovecot/dovecot.conf
# OS: FreeBSD 8.2-RELEASE-p3 i386
auth_mechanisms = plain login
auth_username_format = %Lu
disable_plaintext_auth = no
first_valid_gid = 1000
first_valid_uid = 1000
listen = *, [::]
mail_fsync = always
mail_location = mbox:~/Mail/:INBOX=/var/mail/%u
mail_nfs_index = yes
mail_nfs_storage = yes
mail_privileged_group = mail
mmap_disable = yes
passdb {
  args = session=yes dovecot
  driver = pam
}
protocols = imap pop3
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0660
    user = postfix
  }
  user = root
}
ssl_cert =

From sven at svenhartge.de (Sven Hartge) Subject: [Dovecot] Providing shared folders with multiple backend servers Message-ID: <68fd4hi9kbv8@mids.svenhartge.de>

Hi *, I am currently in the planning stage for a "new and improved" mail system at my university.

Right now, everything is on one big backend server but this is causing me increasing amounts of pain, beginning with the time a full backup takes. So naturally, I want to split this big server into smaller ones.

To keep things simple, I want to pin a user to a server so I can avoid things like NFS or cluster-aware filesystems. The mapping for each account is then inserted into the LDAP object for each user and the frontend proxy (perdition at the moment) then uses this information to route each access to the correct backend storage server running Dovecot. So far this has been working nicely with my test setup.

But: I also have to provide shared folders for users. Thankfully users don't have the right to share their own folders, which makes things easier (I hope). Right now, the setup works like this, using Courier:

- complete virtual mail setup
- global shared folders configured in /etc/courier/shared/index
- inside /home/shared-folder-name/Maildir/courierimapacl specific users get access to a folder
- each folder a user has access to is mapped to the namespace #shared like #shared.shared-folder-name

Now, if I split my backend storage server into multiple ones and user-A is on server-1 and user-B is on server-2, but both need to access the same shared folder, I have a problem. I could of course move all users needing access to a shared folder to the same server, but in the end, this will be a nightmare for me, because I foresee having to move users around on a daily basis.

Right now, I am pondering using an additional server with just the shared folders on it and using NFS (or a cluster FS) to mount the shared folder filesystem to each backend storage server, so each user has potential access to a shared folder's data.

Ideas? Suggestions? Nudges in the right direction?

Regards, Sven.

-- Sigmentation fault. Core dumped.

From stan at hardwarefreak.com Sun Jan 8 02:35:37 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Sat, 07 Jan 2012 18:35:37 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <68fd4hi9kbv8@mids.svenhartge.de> References: <68fd4hi9kbv8@mids.svenhartge.de> Message-ID: <4F08E4D9.1090203@hardwarefreak.com>

On 1/7/2012 4:20 PM, Sven Hartge wrote:
> Hi *,
>
> I am currently in the planning stage for a "new and improved" mail
> system at my university.
>
> Right now, everything is on one big backend server but this is causing
> me increasing amounts of pain, beginning with the time a full backup
> takes.
You also didn't mention how you're doing this full backup (tar, IMAP; D2D or tape), where the backup bottleneck is, what mailbox storage format you're using, total mailbox count and filesystem space occupied. What is your disk storage configuration? Direct attach? Hardware or software RAID? What RAID level? How many disks? SAS or SATA? It's highly likely your problems can be solved without the drastic architecture change, and new problems it will introduce, that you describe below. > So naturally, I want to split this big server into smaller ones. Naturally? Many OPs spend significant x/y/z resources trying to avoid the "shared nothing" storage backend setup below. > To keep things simple, I want to pin a user to a server so I can avoid > things like NFS or cluster aware filesystems. The mapping for each > account is then inserted into the LDAP object for each user and the > frontend proxy (perdition at the moment) then uses this information to > route each access to the correct backend storage server running dovecot. Splitting the IMAP workload like this isn't keeping things simple, but increases complexity, on many levels. And there's nothing wrong with NFS and cluster filesystems if they are used correctly. > So far this has been working nice with my test setup. > > But: I also have to provide shared folders for users. Thankfully users > don't have the right to share their own folders, which makes things > easier (I hope). > > Right now, the setup works like this, using Courier: > > - complete virtual mail setup > - global shared folders configured in /etc/courier/shared/index > - inside /home/shared-folder-name/Maildir/courierimapacl specific user > get access to a folder > - each folder a user has access is mapped to the namespace #shared > like #shared.shared-folder-name > > Now, if I split my backend storage server into multiple ones and user-A > is on server-1 and user-B is on server-2, but both need to access the > same shared folder, I have a problem. Yes, you do. > I could of course move all users needing access to a shared folder to > the same server, but in the end, this will be a nightmare for me, > because I forsee having to move users around on a daily basis. See my comments above. > Right now, I am pondering with using an additional server with just the > shared folders on it and using NFS (or a cluster FS) to mount the shared > folder filesystem to each backend storage server, so each user has > potential access to a shared folders data. So you're going to implement a special case of what you're desperately trying to avoid? This makes no sense. > Ideas? Suggestions? Nudges in the right direction? Yes. We need more real information. Please provide: 1. Mailbox count, total maildir file count and size 2. Average/peak concurrent user connections 3. CPU type/speed/total core count, total RAM, free RAM (incl buffers) 4. Storage configuration--total spindles, RAID level, hard or soft RAID 5. Filesystem type 6. Backup software/method 7. Operating system Instead of telling us what you think the solution to your unidentified bottleneck is and then asking "yeah or nay", tell us what the problem is and allow us to recommend solutions. This way you'll get some education and multiple solutions that may very well be a better fit, will perform better, and possibly cost less in capital outlay and administration time/effort. 
-- Stan From sven at svenhartge.de Sun Jan 8 03:55:28 2012 From: sven at svenhartge.de (Sven Hartge) Date: Sun, 8 Jan 2012 02:55:28 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> Message-ID: <78fdevu9kbv8@mids.svenhartge.de> Stan Hoeppner wrote: > It's highly likely your problems can be solved without the drastic > architecture change, and new problems it will introduce, that you > describe below. The main reason is I need to replace the hardware as its service contract ends this year and I am not able to extend it further. The box so far is fine, there are normally no problems during normal operations with speed or responsiveness towards the end-user. Sometimes, higher peak loads tend to strain the system a bit and this is starting to occur more often. First thought was to move this setup into our VMware cluster (yeah, I know, spare me the screams), since the hardware used there is way more powerfull than the hardware used now and I wouldn't have to buy new servers for my mail system (which is kind of painful to do in an universitary environment, especially in Germany, if you want to invest an amount of money above a certain amount). But then I thought about the problems with VMs this size and got to the idea with the distributed setup, splitting the one server into 4 or 6 backend servers. As I said: "idea". Other ideas making my life easier are more than welcome. >> Ideas? Suggestions? Nudges in the right direction? > Yes. We need more real information. Please provide: > 1. Mailbox count, total maildir file count and size about 10,000 Maildir++ boxes 900GB for 1300GB used, "df -i" says 11 million inodes used I know, this is very _tiny_ compared to the systems ISPs are using. > 2. Average/peak concurrent user connections IMAP: Average 800 concurrent user connections, peaking at about 1400. POP3: Average 300 concurrent user connections, peaking at about 600. > 3. CPU type/speed/total core count, total RAM, free RAM (incl buffers) Currently dual-core AMD Opteron 2210, 1.8GHz. Right now, in the middle of the night (2:30 AM here) on a Sunday, thus a low point in the usage pattern: total used free shared buffers cached Mem: 12335820 9720252 2615568 0 53112 680424 -/+ buffers/cache: 8986716 3349104 Swap: 5855676 10916 5844760 System reaches its 7 year this summer which is the end of its service contract. > 4. Storage configuration--total spindles, RAID level, hard or soft RAID RAID 6 with 12 SATA1.5 disks, external 4Gbit FC Back in 2005, a SAS enclosure was way to expensive for us to afford. > 5. Filesystem type XFS in a LVM to allow snapshots for backup I of course aligned the partions on the RAID correctly and of course created a filesystem with the correct parameters wrt. spindels, chunk size, etc. > 6. Backup software/method Full backup with Bacula, taking about 24 hours right now. Because of this, I switched to virtual full backups, only ever doing incremental and differental backups off of the real system and creating synthetic full backups inside Bacula. Works fine though, incremental taking 2 hours, differential about 4 hours. The main problem of the backup time is Maildir++. During a test, I copied the mail storage to a spare box, converted it to mdbox (50MB file size) and the backup was lightning fast compared to the Maildir++ format. 
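Sven's test, converting a copy of the mail store to 50MB mdbox files with compression enabled, corresponds roughly to the following sketch. The paths and user name are hypothetical, and the zlib settings are the stock ones from the 2.0 zlib plugin:

# one-time conversion, per user (v2.0 dsync):
dsync -u jdoe mirror mdbox:/srv/mail/jdoe/mdbox

# dovecot.conf, after the migration:
mail_location = mdbox:/srv/mail/%u/mdbox
mail_plugins = zlib
plugin {
  zlib_save = gz
  zlib_save_level = 6
}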
Additionally, compressing the mails inside the mdbox and not having Bacula compress them for me reduces the backup time further (and speeds up access through IMAP and POP3).

So this is the way to go, I think, regardless of which way I implement the backend mail server.

> 7. Operating system

Debian Linux Lenny, currently with kernel 2.6.39

> Instead of telling us what you think the solution to your unidentified
> bottleneck is and then asking "yeah or nay", tell us what the problem is
> and allow us to recommend solutions.

I am not asking for "yay or nay", I just pointed out my idea, but I am open to other suggestions. If the general idea is to buy a new big single storage system, I am more than happy to do just this, because this will prevent any problems I might have with a distributed one before they even can occur.

Maybe two HP DL180s (one for production and one as test/standby-system) with an SAS-attached enclosure for storage? Keeping in mind the new system has to work for some time (again 5 to 7 years), I have to be able to extend the storage space without too much hassle.

Regards, Sven.

-- Sigmentation fault. Core dumped.

From yubao.liu at gmail.com Sun Jan 8 04:56:33 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Sun, 08 Jan 2012 10:56:33 +0800 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <4F07BDBB.3060204@gmail.com> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> <4F07332B.70708@gmail.com> <4F07BDBB.3060204@gmail.com> Message-ID: <4F0905E1.9090603@gmail.com>

Hi Timo,

Did you review the patches in my previous email? I tested the two patches against my configuration (pasted in this thread too); they both work well. I prefer the first patch, but I'm not sure whether it breaks something else.

Regards, Yubao Liu

On 01/07/2012 11:36 AM, Yubao Liu wrote:
> On 01/07/2012 01:51 AM, Timo Sirainen wrote:
>> On 6.1.2012, at 19.45, Yubao Liu wrote:
>>> On 01/07/2012 12:44 AM, Timo Sirainen wrote:
>>>> On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote:
>>>>> I don't know why this function doesn't check auth->masterdbs, if I
>>>>> insert these lines after line 128, that error goes away, and
>>>>> dovecot's
>>>>> imap-login process happily does DIGEST-MD5 authentication [1].
>>>>> In my configuration, "masterdbs" contains "passdb passwd-file",
>>>>> "passdbs" contains " passdb pam".
>>>> So .. you want DIGEST-MD5 authentication for the master users, but not
>>>> for anyone else? I hadn't really thought anyone would want that..
>>> Is there any special reason that master passdb isn't taken into
>>> account in src/auth/auth.c:auth_passdb_list_have_lookup_credentials() ?
>>> I feel master passdb is also a kind of passdb.
>> I guess it could be changed. It wasn't done intentionally that way.
>>
> I guess this change broke old way:
> http://hg.dovecot.org/dovecot-2.0/rev/b05793c609ac
>
> In old version, "auth->passdbs" contains all passdbs, this revision
> changes "auth->passdbs" to only contain non-master passdbs.
>
> I'm not sure which fix is better or even my proposal is correct or fully:
> a) in src/auth/auth.c:auth_passdb_preinit(), insert master passdb to
> auth->passdbs too, and remove duplicate code for masterdbs
> in auth_init() and auth_deinit().
>
> b) add similar code for masterdbs in
> auth_passdb_list_have_verify_plain(),
> auth_passdb_list_have_lookup_credentials(),
> auth_passdb_list_have_set_credentials().
>>> This is exactly my use case, I use Kerberos for system users, >>> I'm curious why master passdb isn't used to check >>> "have_lookup_credentials" ability >>> http://wiki2.dovecot.org/Authentication/MultipleDatabases >>>> Currently the fallback works only with the PLAIN authentication >>>> mechanism. >>> I hope this limitation can be relaxed. >> It might already be .. I don't remember. In any case you have only >> PAM passdb, so it shouldn't matter. GSSAPI isn't a passdb. > If the fix above is added, then I can use CRAM-MD5 with master > passwd-file passdb > and normal pam passdb, else imap-login process can't startup due to > check in > auth_mech_list_verify_passdb(). > > Attached two patches against dovecot-2.0 branch for the two schemes, > the first is cleaner but may affect other logics in other source files. > > > Another related question is "pass" option in master passdb, if I set > it to "yes", > the authentication fails: > Jan 7 11:26:00 gold dovecot: auth: Debug: client in: > AUTH#0111#011CRAM-MD5#011service=imap#011secured#011lip=127.0.1.1#011rip=127.0.0.1#011lport=143#011rport=51771 > Jan 7 11:26:00 gold dovecot: auth: Debug: client out: > CONT#0111#011PDk4NjcwMDY1MTU3NzI3MjguMTMyNTkwNjc2MEBnb2xkPg== > Jan 7 11:26:00 gold dovecot: auth: Debug: client in: > CONT#0111#011ZGlla2VuKndlYm1haWwgYmNkMzFiMWE1YjQ1OWQ0OGRkZWQ4ZmIzZDhmMjVhZTc= > Jan 7 11:26:00 gold dovecot: auth: Debug: > auth(webmail,127.0.0.1,master): Master user lookup for login: dieken > Jan 7 11:26:00 gold dovecot: auth: Debug: > passwd-file(webmail,127.0.0.1,master): lookup: user=webmail > file=/etc/dovecot/master-users > Jan 7 11:26:00 gold dovecot: auth: passdb(webmail,127.0.0.1,master): > Master user logging in as dieken > Jan 7 11:26:00 gold dovecot: auth: Error: passdb(dieken,127.0.0.1): > No passdbs support skipping password verification - pass=yes can't be > used in master passdb > Jan 7 11:26:00 gold dovecot: auth: Debug: password(dieken,127.0.0.1): > passdb doesn't support credential lookups > > My normal passdb is a PAM passdb, it doesn't support credential > lookups, that's > reasonable, but I feel the comment for "pass" option is confusing: > > $ less /etc/dovecot/conf.d/auth-master.conf.ext > .... > # Example master user passdb using passwd-file. You can use any passdb > though. > passdb { > driver = passwd-file > master = yes > args = /etc/dovecot/master-users > > # Unless you're using PAM, you probably still want the destination > user to > # be looked up from passdb that it really exists. pass=yes does that. > pass = yes > } > > According the comment, it's to check whether the real user exists, why > not > to check userdb but another passdb? Even it must check against passdb, > in this case, it's obvious not necessary to lookup credentials, it's > enough to > to lookup user name only. > > Regards, > Yubao Liu > From stan at hardwarefreak.com Sun Jan 8 15:09:00 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Sun, 08 Jan 2012 07:09:00 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <78fdevu9kbv8@mids.svenhartge.de> References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> Message-ID: <4F09956C.1030109@hardwarefreak.com> On 1/7/2012 7:55 PM, Sven Hartge wrote: > Stan Hoeppner wrote: > >> It's highly likely your problems can be solved without the drastic >> architecture change, and new problems it will introduce, that you >> describe below. 
> > The main reason is I need to replace the hardware as its service > contract ends this year and I am not able to extend it further. > > The box so far is fine, there are normally no problems during normal > operations with speed or responsiveness towards the end-user. > > Sometimes, higher peak loads tend to strain the system a bit and this is > starting to occur more often. ... > First thought was to move this setup into our VMware cluster (yeah, I > know, spare me the screams), since the hardware used there is way more > powerfull than the hardware used now and I wouldn't have to buy new > servers for my mail system (which is kind of painful to do in an > universitary environment, especially in Germany, if you want to invest > an amount of money above a certain amount). What's wrong with moving it onto VMware? This actually seems like a smart move given your description of the node hardware. It also gives you much greater backup flexibility with VCB (or whatever they call it today). You can snapshot the LUN over the SAN during off peak hours to a backup server and do the actual backup to the library at your leisure. Forgive me if the software names have changed as I've not used VMware since ESX3 back in 07. > But then I thought about the problems with VMs this size and got to the > idea with the distributed setup, splitting the one server into 4 or 6 > backend servers. Not sure what you mean by "VMs this size". Do you mean memory requirements or filesystem size? If the nodes have enough RAM that's no issue. And surely you're not thinking of using a .vmdk for the mailbox storage. You'd use an RDM SAN LUN. In fact you should be able to map in the existing XFS storage LUN and use it as is. Assuming it's not going into retirement as well. If an individual VMware node don't have sufficient RAM you could build a VM based Dovecot cluster, run these two VMs on separate nodes, and thin out the other VMs allowed to run on these nodes. Since you can't directly share XFS, build a tiny Debian NFS server VM and map the XFS LUN to it, export the filesystem to the two Dovecot VMs. You could install the Dovecot director on this NFS server VM as well. Converting from maildir to mdbox should help eliminate the NFS locking problems. I would do the conversion before migrating to this VM setup with NFS. Also, run the NFS server VM on the same physical node as one of the Dovecot servers. The NFS traffic will be a memory-memory copy instead of going over the GbE wire, decreasing IO latency and increasing performance for that Dovecot server. If it's possible to have Dovecot director or your fav load balancer weight more connections to one Deovecot node, funnel 10-15% more connections to this one. (I'm no director guru, in fact haven't use it yet). Assuming the CPUs in the VMware cluster nodes are clocked a decent amount higher than 1.8GHz I wouldn't monkey with configuring virtual smp for these two VMs, as they'll be IO bound not CPU bound. > As I said: "idea". Other ideas making my life easier are more than > welcome. I hope my suggestions contribute to doing so. :) >>> Ideas? Suggestions? Nudges in the right direction? > >> Yes. We need more real information. Please provide: > >> 1. Mailbox count, total maildir file count and size > > about 10,000 Maildir++ boxes > > 900GB for 1300GB used, "df -i" says 11 million inodes used Converting to mdbox will take a large burden off your storage, as you've seen. 
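A minimal sketch of the layout Stan describes above: one NFS server VM exporting the mail filesystem to two Dovecot backends, with director keeping each user pinned to one backend. All hostnames, addresses, and paths are hypothetical, and the full login-proxy wiring is omitted:

# /etc/exports on the NFS server VM:
/srv/mail  dovecot1(rw,sync,no_subtree_check)  dovecot2(rw,sync,no_subtree_check)

# on both Dovecot backends (NFS-safety settings as in the FreeBSD
# example earlier in this digest):
mail_location = mdbox:/srv/mail/%u/mdbox
mmap_disable = yes
mail_fsync = always
mail_nfs_index = yes
mail_nfs_storage = yes

# on the director:
director_servers = 192.0.2.5
director_mail_servers = 192.0.2.11 192.0.2.12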
With maildir you likely didn't see heavy fragmentation due to small file
sizes.  With mdbox, especially at 50MB, you'll likely start seeing more
fragmentation.  Use this to periodically check the fragmentation level:

$ xfs_db -r -c frag [device]

e.g.

$ xfs_db -r -c frag /dev/sda7
actual 76109, ideal 75422, fragmentation factor 0.90%

I'd recommend running xfs_fsr when the frag factor exceeds ~20-30%.  The
XFS developers recommend against running xfs_fsr too often as it can
actually increase free space fragmentation while it decreases file
fragmentation, especially on filesystems that are relatively full.
Having heavily fragmented free space is worse than having fragmented
files, as newly created files will automatically be fragged.

> I know, this is very _tiny_ compared to the systems ISPs are using.

Not everyone is an ISP, including me. :)

>> 2.  Average/peak concurrent user connections
>
> IMAP: Average 800 concurrent user connections, peaking at about 1400.
> POP3: Average 300 concurrent user connections, peaking at about 600.
>
>> 3.  CPU type/speed/total core count, total RAM, free RAM (incl buffers)
>
> Currently dual-core AMD Opteron 2210, 1.8GHz.

Heheh, yeah, a bit long in the tooth, but not horribly underpowered for
1100 concurrent POP/IMAP users.  Though this may be the reason for the
sluggishness when you hit that 2000 concurrent user peak.  Any chance
you have some top output for the peak period?

> Right now, in the middle of the night (2:30 AM here) on a Sunday, thus a
> low point in the usage pattern:
>
>              total       used       free     shared    buffers     cached
> Mem:      12335820    9720252    2615568          0      53112     680424
> -/+ buffers/cache:    8986716    3349104
> Swap:      5855676      10916    5844760

Ugh... "-m" and "-g" options exist for a reason. :)  So this box has
12GB RAM, currently ~2.5GB free during off peak hours.  It would be
interesting to see free RAM and swap usage values during peak.  That
would tell us whether we're CPU or RAM starved.  If both turned up
clean then we'd need to look at iowait.  If you're not RAM starved then
moving to VMware nodes with 16/24/32GB RAM should work fine, as long as
you don't stack many other VMs on top.  Enabling memory dedup may help a
little.

> System reaches its 7th year this summer, which is the end of its service
> contract.

Enjoy your retirement old workhorse. :)

>> 4.  Storage configuration--total spindles, RAID level, hard or soft RAID
>
> RAID 6 with 12 SATA1.5 disks, external 4Gbit FC

I assume this means a LUN on a SAN array somewhere on the other end of
that multi-mode cable, yes?  Can you tell us what brand/model the box is?

> Back in 2005, a SAS enclosure was way too expensive for us to afford.

How one affords an FC SAN array but not a less expensive direct attach
SAS enclosure is a mystery... :)

>> 5.  Filesystem type
>
> XFS in an LVM to allow snapshots for backup

XFS is the only way to fly, IMNSHO.

> I of course aligned the partitions on the RAID correctly and of course
> created a filesystem with the correct parameters wrt. spindles, chunk
> size, etc.

Which is critical for mitigating the RMW penalty of parity RAID.
Speaking of which, why RAID6 for maildir?  Given that your array is 90%
vacant, why didn't you go with RAID10 for 3-5 times the random write
performance?

>> 6.  Backup software/method
>
> Full backup with Bacula, taking about 24 hours right now.
> Because of this, I switched to virtual full backups, only ever doing
> incremental and differential backups off of the real system and
> creating synthetic full backups inside Bacula.  Works fine though,
> incremental taking 2 hours, differential about 4 hours.

Move to VMware and use VCB.  You'll fall in love.

> The main problem of the backup time is Maildir++. During a test, I
> copied the mail storage to a spare box, converted it to mdbox (50MB
> file size) and the backup was lightning fast compared to the Maildir++
> format.

Well of course.  You were surprised by this?  How long has it been since
you used mbox?  mbox backs up even faster than mdbox.  Why?  Larger
files and fewer of them.  Which means the disks can actually do
streaming reads, and don't have to beat their heads to death jumping all
over the platters to read maildir files, which are scattered all over
the place when created.  Which is why maildir is described as a "random"
IO workload.

> Additionally, compressing the mails inside the mdbox and not having
> Bacula compress them for me reduces the backup time further (and speeds
> up access through IMAP and POP3).

Again, no surprise here.  When files exist on disk already compressed it
takes less IO bandwidth to read the file data for a given actual file
size.  So if you have say 10MB files that compress down to 5MB, you can
read twice as many files when the pipe is saturated, twice as much file
data.

> So this is the way to go, I think, regardless of which way I implement
> the backend mail server.

Which is why I asked my questions. :)  mdbox would have been one of my
recommendations, but you already discovered it.

>> 7.  Operating system
>
> Debian Linux Lenny, currently with kernel 2.6.39

:)  Debian, XFS, Dovecot, FC SAN storage--I like your style.  Lenny with
2.6.39?  Is that a backport or rolled kernel?  Not Squeeze?
Interesting.  I'm running Squeeze with rolled vanilla 2.6.38.6.  It's
been about 6 months so it's 'bout time I roll a new one. :)

>>> Instead of telling us what you think the solution to your unidentified
>>> bottleneck is and then asking "yeah or nay", tell us what the problem is
>>> and allow us to recommend solutions.
>
> I am not asking for "yay or nay", I just pointed out my idea, but I am
> open to other suggestions.

I think you've already discovered the best suggestions on your own.

> If the general idea is to buy a new big single storage system, I am more
> than happy to do just this, because this will prevent any problems I might
> have with a distributed one before they can even occur.

One box is definitely easier to administer and troubleshoot.  Though I
must say that even though it's more complex, I think the VM architecture
I described is worth a serious look.  If your current 12x1.5TB SAN array
is being retired as well, you could piggy back onto the array(s) feeding
the VMware farm, or expand them if necessary/possible.  Adding drives is
usually much cheaper than buying a new populated array chassis.  Given
your service contract comments it's unlikely you're the type to build
your own servers.  Being a hardwarefreak, I nearly always build my
servers and storage from scratch.  This may be worth a look merely for
educational purposes.
I just happened to have finished spec'ing out a new high volume 20TB
IMAP server recently which should handle 5000 concurrent users without
breaking a sweat, for only ~$7500 USD:

Full parts list:
http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=17069985

Summary:
2GHz 8-core 12MB L3 cache Magny Cours Opteron
SuperMicro MBD-H8SGL-O w/32GB qualified quad channel reg ECC DDR3/1333
dual Intel 82574 GbE ports
LSI 512MB PCIe 2.0 x8 RAID, 24 port SAS expander, 20x1TB 7.2k WD RE4
20 bay SAS/SATA 6G hot swap Norco chassis

Create a RAID1 pair for /boot, the root filesystem, a swap partition of
say 8GB, and a 2GB partition for an external XFS log; that should leave
~900GB for utilitarian purposes.  Configure two spares.  Configure the
remaining 16 drives as RAID10 with a 64KB stripe size (8KB, 16 sector
strip size), yielding 8TB raw for the XFS backed mdbox mailstore.
Enable the BBWC write cache (dang, forgot the battery module, +$175).
This should yield approximately 8*150 = 1200 IOPS peak to/from disk,
many thousands to BBWC, more than plenty for 5000 concurrent users given
the IO behavior of most MUAs.  Channel bond the NICs to the switch, or
round robin DNS the two IPs if pathing for redundancy.

What's that?  You want to support 10K users?  Simply drop in another 4
sticks of the 8GB Kingston Reg ECC RAM for 64GB total, and plug one of
these into the external SFF8088 port on the LSI card:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816133047
populated with 18 of the 1TB RE4 drives.  Configure 16 drives the same
as the primary array, grow it into your existing XFS.  Since you have
two identical arrays comprising the filesystem, sunit/swidth values are
still valid so you don't need to add mount options.  Configure 2 drives
as hot spares.

The additional 16 drive RAID10 doubles our disk IOPS to ~2400,
maintaining our concurrent user to IOPS ratio at ~4:1, and doubles our
mail storage to ~16TB.  This expansion hardware will run an additional
~$6200.  Grand total to support ~10K concurrent users (maybe more) with
a quality DIY build is just over $14K USD, or ~$1.40 per mailbox.  Not
too bad for an 8-core, 64GB server with 32TB of hardware RAID10 mailbox
storage and 38 total 1TB disks.

I haven't run the numbers for a comparable HP system, but an educated
guess says it would be quite a bit more expensive, not the server so
much, but the storage.  HP's disk drive prices are outrageous, though
not approaching anywhere near the level of larceny EMC commits with its
drive sales.  $2400 for a $300 Seagate drive wearing an EMC cape?
Please....

> Maybe two HP DL180s (one for production and one as test/standby-system)
> with an SAS attached enclosure for storage?

If you're hooked on 1U chassis (I hate em) go with the DL165 G7.  If not
I'd go 2U, the DL385 G7.  Magny Cours gives you more bang for the buck
in this class of machines.  The performance is excellent, and, if
everybody buys Intel, AMD goes bankrupt, and then Chipzilla charges
whatever it desires.  They've already been sanctioned and fined by the
FTC at least twice.  They paid Intergraph $800 million in an antitrust
settlement in 2000 after they forced them out of the hardware business.
They recently paid AMD $1 Billion in an antitrust settlement.  They're
just like Microsoft, putting competitors out of business by any and all
means necessary, even if their conduct is illegal.  Yes, I'd much rather
give AMD my business, given they had superior CPUs to Intel for many
years, and their current chips are still more than competitive.
/end rant. ;)
> Keeping in mind the new system has to work for some time (again 5 to 7
> years) I have to be able to extend the storage space without too much
> hassle.

Given you're currently only using ~1.3TB of ~15TB do you really see this
as an issue?  Will you be changing your policy or quotas?  Will the
university double its enrollment?  If not I would think a new 12-16TB
raw array would be more than plenty.  If you really want growth
potential get a SATABeast and start with 14 2TB SATA drives.  You'll
still have 28 empty SAS/SATA slots in the 4U chassis, 42 total.  Max
capacity is 84TB.  You get dual 8Gb/s FC LC ports and dual GbE iSCSI
ports per controller, all ports active, two controllers max.  The really
basic SKU runs about $20-25K USD with the single controller and a few
small drives, before institutional/educational discounts.
www.nexsan.com/satabeast

I've used the SATABlade and SATABoy models (8 and 14 drives) and really
like the simplicity of design and the HTTP management interface.  Good
products, and one of the least expensive and feature rich in this class.

Sorry this was so windy.  I am the hardwarefreak after all. :)

-- 
Stan

From xamiw at arcor.de  Sun Jan 8 17:37:10 2012
From: xamiw at arcor.de (xamiw at arcor.de)
Date: Sun, 8 Jan 2012 16:37:10 +0100 (CET)
Subject: [Dovecot] uid / gid and systemusers
Message-ID: <1809497881.1135529.1326037030206.JavaMail.ngmail@webmail10.arcor-online.net>

Hi all,

I'm facing a problem when a user (q or g in this example) logs in to
dovecot.  Can anybody give me a hint?  Thanks in advance.

George

/var/log/mail.log:
...
Jan 8 16:18:28 test dovecot: User q is missing UID (see mail_uid setting)
Jan 8 16:18:28 test dovecot: imap-login: Internal login failure (auth failed, 1 attempts): user=, method=PLAIN, rip=AAA.BBB.CCC.DDD, lip=EEE.FFF.GGG.HHH TLS <--- edited by me
Jan 8 16:18:28 test dovecot: dovecot: User g is missing UID (see mail_uid setting)
Jan 8 16:18:28 test dovecot: imap-login: Internal login failure (auth failed, 1 attempts): user=, method=PLAIN, rip=AAA.BBB.CCC.DDD, lip=EEE.FFF.GGG.HHH TLS <--- edited by me

/etc/dovecot/dovecot.conf:
protocols = imaps
disable_plaintext_auth = yes
shutdown_clients = yes
log_timestamp = "%Y-%m-%d %H:%M:%S "
ssl = yes
ssl_cert_file = /etc/ssl/certs/dovecot.pem
ssl_key_file = /etc/ssl/private/dovecot.pem
mail_location = mbox:~/mail:INBOX=/var/mail/%u
mail_privileged_group = mail
mbox_write_locks = fnctl dotlock
auth default {
  mechanisms = plain
  passdb shadow {
  }
}

/etc/passwd:
...
g:x:1000:1000:test1,,,:/home/g:/bin/bash
q:x:1001:1001:test2,,,:/home/q:/bin/bash

/etc/group:
...
g:x:1000:
q:x:1001:

From sven at svenhartge.de  Sun Jan 8 17:39:45 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Sun, 8 Jan 2012 16:39:45 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de>
 <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de>
 <4F09956C.1030109@hardwarefreak.com>
Message-ID: <88fev069kbv8@mids.svenhartge.de>

Stan Hoeppner wrote:
> On 1/7/2012 7:55 PM, Sven Hartge wrote:
>> Stan Hoeppner wrote:
>>
>>> It's highly likely your problems can be solved without the drastic
>>> architecture change, and new problems it will introduce, that you
>>> describe below.
>>
>> The main reason is I need to replace the hardware as its service
>> contract ends this year and I am not able to extend it further.
>>
>> The box so far is fine, there are normally no problems during normal
>> operations with speed or responsiveness towards the end-user.
>> Sometimes, higher peak loads tend to strain the system a bit and this is
>> starting to occur more often.
> ...
>> First thought was to move this setup into our VMware cluster (yeah, I
>> know, spare me the screams), since the hardware used there is way more
>> powerful than the hardware used now and I wouldn't have to buy new
>> servers for my mail system (which is kind of painful to do in a
>> university environment, especially in Germany, if you want to invest
>> money above a certain amount).

> What's wrong with moving it onto VMware?  This actually seems like a
> smart move given your description of the node hardware.  It also gives
> you much greater backup flexibility with VCB (or whatever they call it
> today).  You can snapshot the LUN over the SAN during off peak hours to
> a backup server and do the actual backup to the library at your leisure.
> Forgive me if the software names have changed as I've not used VMware
> since ESX3 back in 07.

VCB as it was back in the days is dead.  But yes, one of the reasons to
use a VM was to be able to easily back up the whole shebang.

>> But then I thought about the problems with VMs this size and got to the
>> idea with the distributed setup, splitting the one server into 4 or 6
>> backend servers.

> Not sure what you mean by "VMs this size".  Do you mean memory
> requirements or filesystem size?  If the nodes have enough RAM that's no
> issue.

Memory size.  I am a bit hesitant to deploy a VM with 16GB of RAM.  My
cluster nodes each have 48GB, so no problem on this side though.

> And surely you're not thinking of using a .vmdk for the mailbox
> storage.  You'd use an RDM SAN LUN.

No, I was not planning to use a VMDK backed disk for this.

> In fact you should be able to map in the existing XFS storage LUN and
> use it as is.  Assuming it's not going into retirement as well.

It is going to be retired as well, as it is as old as the server.  It is
also not connected to any SAN, only locally attached to the backend
server.  And our VMware SAN is iSCSI based, so there is no way to plug
FC-based storage into it.

> If an individual VMware node doesn't have sufficient RAM you could build
> a VM based Dovecot cluster, run these two VMs on separate nodes, and
> thin out the other VMs allowed to run on these nodes.  Since you can't
> directly share XFS, build a tiny Debian NFS server VM and map the XFS
> LUN to it, export the filesystem to the two Dovecot VMs.  You could
> install the Dovecot director on this NFS server VM as well.  Converting
> from maildir to mdbox should help eliminate the NFS locking problems.  I
> would do the conversion before migrating to this VM setup with NFS.
> Also, run the NFS server VM on the same physical node as one of the
> Dovecot servers.  The NFS traffic will be a memory-memory copy instead
> of going over the GbE wire, decreasing IO latency and increasing
> performance for that Dovecot server.  If it's possible to have Dovecot
> director or your fav load balancer weight more connections to one
> Dovecot node, funnel 10-15% more connections to this one.  (I'm no
> director guru, in fact haven't used it yet.)

So, this reads like my idea in the first place.

Only you place all the mails on the NFS server, whereas my idea was to
just share the shared folders from a central point and keep the normal
user dirs local to the different nodes, thus reducing network impact for
the way more common user access.
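To make the sharing part concrete: the central box would only need to
export one filesystem to the backends.  A minimal sketch, assuming a
stock Linux NFS server VM; the hostname, network and export path below
are made up for illustration:

# /etc/exports on the shared-folder server
/srv/shared  10.1.0.0/24(rw,sync,no_subtree_check)

# on each Dovecot backend
$ mount -t nfs sharedsrv:/srv/shared /srv/shared

The per-user mailboxes never cross the wire this way; only the
comparatively rare shared-folder accesses do.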
> Assuming the CPUs in the VMware cluster nodes are clocked a decent
> amount higher than 1.8GHz I wouldn't monkey with configuring virtual smp
> for these two VMs, as they'll be IO bound not CPU bound.

2.3GHz for most VMware nodes.

>>>> Ideas? Suggestions? Nudges in the right direction?
>>
>>> Yes.  We need more real information.  Please provide:
>>
>>> 1.  Mailbox count, total maildir file count and size
>>
>> about 10,000 Maildir++ boxes
>>
>> 900GB for 1300GB used, "df -i" says 11 million inodes used

> Converting to mdbox will take a large burden off your storage, as you've
> seen.  With ~1.3TB consumed of ~15TB you should have plenty of space to
> convert to mdbox while avoiding filesystem fragmentation.

You got the numbers wrong.  And I got a word wrong ;)

Should have read "900GB _of_ 1300GB used".  I am using 900GB of 1300GB.
The disks are SATA1.5 (not SATA3 or SATA6), as in the data transfer
rate.  The disks each are 150GB in size, so the maximum storage size of
my underlying VG is 1500GB.

root at ms1:~# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  vg01   1   6   0 wz--n- 70.80G  40.80G
  vg02   1   1   0 wz--n-  1.45T 265.00G
  vg03   1   1   0 wz--n-  1.09T       0

Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg02-home_lv    1.2T  867G  357G  71% /home
/dev/mapper/vg03-backup_lv  1.1T  996G  122G  90% /backup

So not much wiggle room left.

But modifications to our systems have been made which allow me to
temporarily disable a user, convert and move his mailbox, and re-enable
him.  This allows me to move them one at a time from the old system to
the new one, without losing a mail or disrupting service too long or too
often.

>> Right now, in the middle of the night (2:30 AM here) on a Sunday, thus a
>> low point in the usage pattern:
>>
>>              total       used       free     shared    buffers     cached
>> Mem:      12335820    9720252    2615568          0      53112     680424
>> -/+ buffers/cache:    8986716    3349104
>> Swap:      5855676      10916    5844760

> Ugh... "-m" and "-g" options exist for a reason. :)  So this box has
> 12GB RAM, currently ~2.5GB free during off peak hours.  It would be
> interesting to see free RAM and swap usage values during peak.  That
> would tell us whether we're CPU or RAM starved.  If both turned up
> clean then we'd need to look at iowait.  If you're not RAM starved then
> moving to VMware nodes with 16/24/32GB RAM should work fine, as long as
> you don't stack many other VMs on top.  Enabling memory dedup may help a
> little.

Well, peak hours are somewhere between 10:00 and 14:00.  I will check
then.

>> System reaches its 7th year this summer, which is the end of its service
>> contract.

> Enjoy your retirement old workhorse. :)

>>>> 4.  Storage configuration--total spindles, RAID level, hard or soft RAID
>>
>> RAID 6 with 12 SATA1.5 disks, external 4Gbit FC

> I assume this means a LUN on a SAN array somewhere on the other end of
> that multi-mode cable, yes?  Can you tell us what brand/model the box is?

This is a Transtec Provigo 610, a 24-disk enclosure: 12 disks with 150GB
(7,200 rpm) each for the main mail storage in RAID6 and another 10 disks
with 150GB (5,400 rpm) for a backup LUN.  I daily rsnapshot my /home
onto this local backup (20 days of retention), because it is easier to
restore from than firing up Bacula, which has the long retention time of
90 days.  But most users need a restore of mails from $yesterday or
$the_day_before.

>> Back in 2005, a SAS enclosure was way too expensive for us to afford.

> How one affords an FC SAN array but not a less expensive direct attach
> SAS enclosure is a mystery... :)
Well, it was either Parallel-SCSI or FC back then, as far as I can
remember.  The price difference between the U320 version and the FC one
was not so big, and I wanted to avoid having to route those big
SCSI-U320 cables through my racks.

>>>> 5.  Filesystem type
>>
>> XFS in an LVM to allow snapshots for backup

> XFS is the only way to fly, IMNSHO.

>> I of course aligned the partitions on the RAID correctly and of course
>> created a filesystem with the correct parameters wrt. spindles, chunk
>> size, etc.

> Which is critical for mitigating the RMW penalty of parity RAID.
> Speaking of which, why RAID6 for maildir?  Given that your array is 90%
> vacant, why didn't you go with RAID10 for 3-5 times the random write
> performance?

See above, not 1500GB disks, but 150GB ones.  RAID6, because I wanted
the security of double parity.  I have been kind of burned by the
previous system and I tend to get nervous while thinking about data loss
in my mail storage, because I know my users _will_ give me hell if that
happens.

>>>> 6.  Backup software/method
>>
>> Full backup with Bacula, taking about 24 hours right now.  Because of
>> this, I switched to virtual full backups, only ever doing incremental
>> and differential backups off of the real system and creating synthetic
>> full backups inside Bacula.  Works fine though, incremental taking 2
>> hours, differential about 4 hours.

> Move to VMware and use VCB.  You'll fall in love.

>> The main problem of the backup time is Maildir++. During a test, I
>> copied the mail storage to a spare box, converted it to mdbox (50MB
>> file size) and the backup was lightning fast compared to the Maildir++
>> format.

> Well of course.  You were surprised by this?

No, I was not surprised by the speedup itself; I _knew_ mdbox would back
up faster.  What surprised me was just how big the speedup was.  That a
backup of 100 big files is faster than a backup of 100,000 little files
is not exactly rocket science.

> How long has it been since you used mbox?  mbox backs up even faster
> than mdbox.  Why?  Larger files and fewer of them.  Which means the
> disks can actually do streaming reads, and don't have to beat their
> heads to death jumping all over the platters to read maildir files,
> which are scattered all over the place when created.  Which is why
> maildir is described as a "random" IO workload.

I never used mbox as an admin.  The box before the box before this one
used uw-imapd with mbox, and I experienced the system as a user; it was
horrific.  Most users back then had never heard of IMAP folders and just
stored their mails inside of INBOX, which of course got huge.  If one of
those users with a big mbox then deleted mails, it would literally lock
the box up for everyone, as uw-imapd was copying (for example) a 600MB
mbox file around to delete one mail.

Of course, this was mostly because of the crappy uw-imapd and secondly
because of some poor design choices in the server itself (underpowered
RAID controller, too small a cache and a RAID5 setup, low RAM in the
server).

So the first thing we did back then, in 2004, was to change to Courier
and convert from mbox to maildir, which made the mailsystem fly again,
even on the same hardware, only the disk setup changed to RAID10.

Then we bought new hardware (the one previous to the current one), this
time with more RAM, better RAID controller, smarter disk setup.  We
outgrew this one really fast and a disk upgrade was not possible; it
lasted only 2 years.

So the next one got this external 24 disk array with 12 disks used at
deployment.
But Courier is showing its age and things like Sieve are only possible
with great pain, so I want to avoid it.

>> So this is the way to go, I think, regardless of which way I implement
>> the backend mail server.

> Which is why I asked my questions. :)  mdbox would have been one of my
> recommendations, but you already discovered it.

And this is why I am kind of holding this upgrade back until Dovecot 2.1
is released, as it has some optimizations here.

>>>> 7.  Operating system
>>
>> Debian Linux Lenny, currently with kernel 2.6.39

> :)  Debian, XFS, Dovecot, FC SAN storage--I like your style.  Lenny with
> 2.6.39?  Is that a backport or rolled kernel?  Not Squeeze?

That is a BPO kernel.  Not yet Squeeze.  I admin over 150 different
systems here, plus I am the main VMware and SAN admin.  So upgrades take
some time until I grow an extra pair of eyes and arms. ;)

And since I have been planning to re-implement the mailsystem for some
time now, I held the update to the storage backends back.  No use in
disrupting service for the end user if I'm going to replace the whole
thing with a new one in the end.

>>>> Instead of telling us what you think the solution to your unidentified
>>>> bottleneck is and then asking "yeah or nay", tell us what the problem is
>>>> and allow us to recommend solutions.
>>
>> I am not asking for "yay or nay", I just pointed out my idea, but I am
>> open to other suggestions.

> I think you've already discovered the best suggestions on your own.

>> If the general idea is to buy a new big single storage system, I am more
>> than happy to do just this, because this will prevent any problems I might
>> have with a distributed one before they can even occur.

> One box is definitely easier to administer and troubleshoot.  Though I
> must say that even though it's more complex, I think the VM architecture
> I described is worth a serious look.  If your current 12x1.5TB SAN array
> is being retired as well, you could piggy back onto the array(s) feeding
> the VMware farm, or expand them if necessary/possible.  Adding drives is
> usually much cheaper than buying a new populated array chassis.  Given
> your service contract comments it's unlikely you're the type to build
> your own servers.  Being a hardwarefreak, I nearly always build my
> servers and storage from scratch.

Naa, I have been doing this for too long.  While I am perfectly capable
of building such a server myself, I am now the kind of guy who wants to
"yell" at a vendor when their hardware fails.

Which does not mean I am using any "Express" package or preconfigured
server; I still read the specs, pick the parts which make the most sense
for the job, and then have that machine custom built by HP or IBM or
Dell or ...

Personally built PCs and servers made from individual parts have been
nothing but a nightmare for me.  And my coworkers need to be able to
service them as well while I am not available, and they are not the
hardware aficionados I am.

So "professional" hardware with a 5 to 7 year support contract is the
way to go for me.

> If you're hooked on 1U chassis (I hate em) go with the DL165 G7.  If not
> I'd go 2U, the DL385 G7.  Magny Cours gives you more bang for the buck
> in this class of machines.

I have plenty of space for 2U systems and already use DL385 G7s.  I am
not fixed on Intel or AMD, I'll gladly use whichever is the best fit for
a given job.

Grüße,
Sven

-- 
Sigmentation fault. Core dumped.
From sven at svenhartge.de  Sun Jan 8 22:15:22 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Sun, 8 Jan 2012 21:15:22 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de>
 <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de>
 <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de>
Message-ID: 

Sven Hartge wrote:
> Stan Hoeppner wrote:

>> If an individual VMware node doesn't have sufficient RAM you could build
>> a VM based Dovecot cluster, run these two VMs on separate nodes, and
>> thin out the other VMs allowed to run on these nodes.  Since you can't
>> directly share XFS, build a tiny Debian NFS server VM and map the XFS
>> LUN to it, export the filesystem to the two Dovecot VMs.  You could
>> install the Dovecot director on this NFS server VM as well.  Converting
>> from maildir to mdbox should help eliminate the NFS locking problems.  I
>> would do the conversion before migrating to this VM setup with NFS.
>> Also, run the NFS server VM on the same physical node as one of the
>> Dovecot servers.  The NFS traffic will be a memory-memory copy instead
>> of going over the GbE wire, decreasing IO latency and increasing
>> performance for that Dovecot server.  If it's possible to have Dovecot
>> director or your fav load balancer weight more connections to one
>> Dovecot node, funnel 10-15% more connections to this one.  (I'm no
>> director guru, in fact haven't used it yet.)

> So, this reads like my idea in the first place.

> Only you place all the mails on the NFS server, whereas my idea was to
> just share the shared folders from a central point and keep the normal
> user dirs local to the different nodes, thus reducing network impact for
> the way more common user access.

To be a bit more concrete on this one:

a) X backend servers which my frontend (being perdition or dovecot
director) redirects users to, fixed, no random redirects.

I might start with 4 backend servers, but I can easily scale them,
either vertically by adding more RAM or vCPUs or horizontally by
adding more VMs and reshuffling some mailboxes during the night.

Why 4 and not 2? If I'm going to build a cluster, I already have to do
the work to implement this, and with 4 backends I can distribute the
load even further without much additional administrative overhead.
But the load impact on each node gets lower with more nodes, if I am
able to evenly spread my users across those nodes (like md5'ing the
username and using the first 2 bits from that to determine which
node the user resides on).

b) 1 backend server for the public shared mailboxes, exporting them via
NFS to the user backend servers

Configuration like this, from
http://wiki2.dovecot.org/SharedMailboxes/Public

,----
| # User's private mail location
| mail_location = mdbox:~/mdbox
|
| # When creating any namespaces, you must also have a private namespace:
| namespace {
|   type = private
|   separator = .
|   prefix = INBOX.
|   #location defaults to mail_location.
|   inbox = yes
| }
|
| namespace {
|   type = public
|   separator = .
|   prefix = #shared.
|   location = mdbox:/srv/shared/
|   subscriptions = no
| }
`----

With /srv/shared being the NFS mountpoint from my central public shared
mailbox server.

This setup would keep the amount of data transferred via NFS small (only
a tiny fraction of my 10,000 users have access to a shared folder,
mostly users in the IT-Team or in the administration of the university).
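For completeness, the md5 trick from a) is tiny in practice.  An
untested sketch (the username is made up; not literally the first 2
bits, but the same idea, using the first hex digit of the hash):

$ user=jdoe
$ echo $(( 0x$(printf '%s' "$user" | md5sum | cut -c1) % 4 ))
# first hex digit of the md5, modulo the node count: a stable node
# index (0-3) per username, roughly evenly spread, with no lookup
# table to maintain

(bash/ksh arithmetic; POSIX sh does not guarantee the 0x prefix.)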
Wouldn't such a setup be the "Best of Both Worlds"?  Having the main
traffic going to local disks (being RDMs) and also being able to provide
shared folders to every user who needs them, without the need to move
those users onto one server?

Grüße,
Sven.

-- 
Sigmentation fault. Core dumped.

From sven at svenhartge.de  Sun Jan 8 23:07:11 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Sun, 8 Jan 2012 22:07:11 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de>
 <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de>
 <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de>
Message-ID: 

Sven Hartge wrote:
> Sven Hartge wrote:
>> Stan Hoeppner wrote:

>>> If an individual VMware node doesn't have sufficient RAM you could build
>>> a VM based Dovecot cluster, run these two VMs on separate nodes, and
>>> thin out the other VMs allowed to run on these nodes.  Since you can't
>>> directly share XFS, build a tiny Debian NFS server VM and map the XFS
>>> LUN to it, export the filesystem to the two Dovecot VMs.  You could
>>> install the Dovecot director on this NFS server VM as well.  Converting
>>> from maildir to mdbox should help eliminate the NFS locking problems.  I
>>> would do the conversion before migrating to this VM setup with NFS.
>>> Also, run the NFS server VM on the same physical node as one of the
>>> Dovecot servers.  The NFS traffic will be a memory-memory copy instead
>>> of going over the GbE wire, decreasing IO latency and increasing
>>> performance for that Dovecot server.  If it's possible to have Dovecot
>>> director or your fav load balancer weight more connections to one
>>> Dovecot node, funnel 10-15% more connections to this one.  (I'm no
>>> director guru, in fact haven't used it yet.)

>> So, this reads like my idea in the first place.

>> Only you place all the mails on the NFS server, whereas my idea was to
>> just share the shared folders from a central point and keep the normal
>> user dirs local to the different nodes, thus reducing network impact for
>> the way more common user access.

> To be a bit more concrete on this one:

> a) X backend servers which my frontend (being perdition or dovecot
> director) redirects users to, fixed, no random redirects.

> I might start with 4 backend servers, but I can easily scale them,
> either vertically by adding more RAM or vCPUs or horizontally by
> adding more VMs and reshuffling some mailboxes during the night.

> Why 4 and not 2? If I'm going to build a cluster, I already have to do
> the work to implement this, and with 4 backends I can distribute the
> load even further without much additional administrative overhead.
> But the load impact on each node gets lower with more nodes, if I am
> able to evenly spread my users across those nodes (like md5'ing the
> username and using the first 2 bits from that to determine which
> node the user resides on).

Ah, I forgot: I _already_ have the mechanisms in place to statically
redirect/route accesses for users to different backends, since some of
the users are already redirected to a different mailsystem at another
location of my university.

So using this mechanism to also redirect/route users internal to _my_
location is no big deal.

This is what got me into the idea of several independent backend
storages which do not need to share the _whole_ storage, just the shared
folders for some users.

(Are my words making any sense? I get the feeling I'm writing German
with English words and nobody is really understanding anything ...)

Grüße,
Sven.
-- 
Sigmentation fault. Core dumped.

From dmiller at amfes.com  Mon Jan 9 01:40:48 2012
From: dmiller at amfes.com (Daniel L. Miller)
Date: Sun, 08 Jan 2012 15:40:48 -0800
Subject: [Dovecot] Possible mdbox corruption
In-Reply-To: <4F075A76.1040807@amfes.com>
References: <4F04EDC8.6060809@amfes.com>
 <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com>
 <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com>
Message-ID: 

On 1/6/2012 12:32 PM, Daniel L. Miller wrote:
> On 1/6/2012 9:36 AM, Timo Sirainen wrote:
>> On 6.1.2012, at 19.30, Daniel L. Miller wrote:
>>
>>> Jan 6 09:22:42 bubba dovecot: indexer-worker(user1 at domain.com):
>>> Error: fts_solr: Indexing failed: 400 Illegal character ((CTRL-CHAR,
>>> code 18)) at [row,col {unknown-source}]: [482765,16]
>>> Jan 6 09:22:42 bubba dovecot: indexer-worker: Error:
>>>
>>> Google seems to indicate that Solr cannot handle "invalid"
>>> characters - and that it is the responsibility of the calling
>>> program to strip them out.  A quick search shows me both an
>>> individual character comparison in Java and a regex used for the
>>> purpose.  Is there any "illegal character protection" in the Dovecot
>>> Solr plugin?
>> Yes, there is. So I'm not really sure what it's complaining about.
>> Are you using the "solr" or "solr_old" backend?
>>
>>
> "Solr".
>
> plugin {
>   fts = solr
>   fts_solr = url=http://localhost:8983/solr/
> }
>

Now seeing:

Jan 8 15:40:09 bubba dovecot: imap(user1 at domain.com): Error: fts_solr:
Lookup failed: 400 undefined field CC
Jan 8 15:40:09 bubba dovecot: imap: Error:

-- 
Daniel

From dmiller at amfes.com  Mon Jan 9 01:48:29 2012
From: dmiller at amfes.com (Daniel L. Miller)
Date: Sun, 08 Jan 2012 15:48:29 -0800
Subject: [Dovecot] Solr plugin
In-Reply-To: <4F0A2980.7050003@amfes.com>
References: <4F04EDC8.6060809@amfes.com>
 <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com>
 <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com>
 <4F0A2980.7050003@amfes.com>
Message-ID: 

On 1/8/2012 3:40 PM, Daniel L. Miller wrote:
> On 1/6/2012 12:32 PM, Daniel L. Miller wrote:
>> On 1/6/2012 9:36 AM, Timo Sirainen wrote:
>>> On 6.1.2012, at 19.30, Daniel L. Miller wrote:
>>>
>
> Jan 8 15:40:09 bubba dovecot: imap(user1 at domain.com): Error:
> fts_solr: Lookup failed: 400 undefined field CC
> Jan 8 15:40:09 bubba dovecot: imap: Error:
>
>
Looking at the Solr output, it looks like the CC parameter is being
capitalized while all the other field names are lowercase.

-- 
Daniel

From Ralf.Hildebrandt at charite.de  Mon Jan 9 09:40:57 2012
From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt)
Date: Mon, 9 Jan 2012 08:40:57 +0100
Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well?
Message-ID: <20120109074057.GC22506@charite.de>

Today I encountered these errors:

Jan 9 08:30:06 mail dovecot: lmtp(31174, backup at backup.invalid): Error:
Log synchronization error at seq=858,offset=44672 for
/home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID
282388, but next_uid = 282389
Jan 9 08:30:06 mail dovecot: lmtp(31819, backup at backup.invalid): Error:
Log synchronization error at seq=858,offset=44672 for
/home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID
282388, but next_uid = 282389
Jan 9 08:30:06 mail dovecot: lmtp(32148, backup at backup.invalid): Error:
Log synchronization error at seq=858,offset=44672 for
/home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID
282388, but next_uid = 282389

After that, the SAVEDON date for all mails was reset to today:

mail:~# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-09 | wc -l
75650
mail:~# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-08 | wc -l
0
mail:~# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-07 | wc -l
0

Before, I was running this:

vorgestern=`date -d "-2 day" +"%Y-%m-%d"`
doveadm expunge -u backup at backup.invalid mailbox INBOX SAVEDBEFORE $vorgestern
doveadm purge -u backup at backup.invalid

Is there a way of restoring the SAVEDON info?

-- 
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebrandt at charite.de | http://www.charite.de

From junk4 at klunky.co.uk  Mon Jan 9 11:16:58 2012
From: junk4 at klunky.co.uk (J4K)
Date: Mon, 09 Jan 2012 10:16:58 +0100
Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts)
Message-ID: <4F0AB08A.7050605@klunky.co.uk>

Morning everyone,

On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired.  I
replaced it with a new one on the 9th of Jan.  I tested this with
Thunderbird and all is well.

This morning people tell me they cannot get their email using their
mobile telephones: K9 Mail

I have reverted the SSL cert back to the old one just in case.
Thunderbird still works.

Dovecot 1:1.2.15-7 running on Debian 6

The messages in the logs are:

Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth
attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth
attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected

In dovecot.conf I have this set:

disable_plaintext_auth = no

And the auth default mechanisms are set to:
mechanisms = plain login

What is strange is the only item that changed is the SSL cert, which has
since been changed back to the old one (which has expired... ^^).

Any ideas where I may look or change?

Regards, S

From robert at schetterer.org  Mon Jan 9 11:27:26 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Mon, 09 Jan 2012 10:27:26 +0100
Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts)
In-Reply-To: <4F0AB08A.7050605@klunky.co.uk>
References: <4F0AB08A.7050605@klunky.co.uk>
Message-ID: <4F0AB2FE.8000306@schetterer.org>

Am 09.01.2012 10:16, schrieb J4K:
> Morning everyone,
>
> On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired.  I
> replaced it with a new one on the 9th of Jan.  I tested this with
> Thunderbird and all is well.
>
> This morning people tell me they cannot get their email using their
> mobile telephones: K9 Mail
>
> I have reverted the SSL cert back to the old one just in case.
> Thunderbird still works.
>
> Dovecot 1:1.2.15-7 running on Debian 6
>
> The messages in the logs are:
>
> Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth
> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
> Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth
> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
>
> In dovecot.conf I have this set:
>
> disable_plaintext_auth = no
>
> And the auth default mechanisms are set to:
> mechanisms = plain login
>
> What is strange is the only item that changed is the SSL cert, which has
> since been changed back to the old one (which has expired... ^^).
>
> Any ideas where I may look or change?
>
> Regards, S

if you only changed the crt etc. and you're sure you did everything
right, perhaps you have forgotten to add a needed intermediate cert?

read here
http://www.trustico.co.uk/install/how-to-install-ssl-certificate.php

Required Intermediate Certificates (CA Certificates)

To successfully install your SSL Certificate you may be required to
install an Intermediate CA Certificate. Please review the above
installation instructions carefully to determine if an Intermediate CA
Certificate is required, how to obtain it and correctly import it into
your system. For more information please Contact Us.
Alternatively, and for systems not covered by the above installation
instructions, please use our Intermediate Certificate Wizard to find the
correct CA Certificate or Root Bundle that is required for your SSL
Certificate to function correctly. Find Out More Information

-- 
Best Regards

MfG Robert Schetterer

Germany/Munich/Bavaria

From junk4 at klunky.co.uk  Mon Jan 9 11:39:24 2012
From: junk4 at klunky.co.uk (J4K)
Date: Mon, 09 Jan 2012 10:39:24 +0100
Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts)
In-Reply-To: <4F0AB2FE.8000306@schetterer.org>
References: <4F0AB08A.7050605@klunky.co.uk>
 <4F0AB2FE.8000306@schetterer.org>
Message-ID: <4F0AB5CC.108@klunky.co.uk>

On 09/01/12 10:27, Robert Schetterer wrote:
> Am 09.01.2012 10:16, schrieb J4K:
>> Morning everyone,
>>
>> On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired.  I
>> replaced it with a new one on the 9th of Jan.  I tested this with
>> Thunderbird and all is well.
>>
>> This morning people tell me they cannot get their email using their
>> mobile telephones: K9 Mail
>>
>> I have reverted the SSL cert back to the old one just in case.
>> Thunderbird still works.
>>
>> Dovecot 1:1.2.15-7 running on Debian 6
>>
>> The messages in the logs are:
>>
>> Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth
>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
>> Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth
>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
>>
>> In dovecot.conf I have this set:
>>
>> disable_plaintext_auth = no
>>
>> And the auth default mechanisms are set to:
>> mechanisms = plain login
>>
>> What is strange is the only item that changed is the SSL cert, which has
>> since been changed back to the old one (which has expired... ^^).
>>
>> Any ideas where I may look or change?
>>
>> Regards, S
> if you only changed the crt etc. and you're sure you did everything
> right, perhaps you have forgotten to add a needed intermediate cert?
>
> read here
> http://www.trustico.co.uk/install/how-to-install-ssl-certificate.php
>
> Required Intermediate Certificates (CA Certificates)
>
> To successfully install your SSL Certificate you may be required to
> install an Intermediate CA Certificate. Please review the above
> installation instructions carefully to determine if an Intermediate CA
> Certificate is required, how to obtain it and correctly import it into
> your system. For more information please Contact Us.
> Alternatively, and for systems not covered by the above installation
> instructions, please use our Intermediate Certificate Wizard to find the
> correct CA Certificate or Root Bundle that is required for your SSL
> Certificate to function correctly. Find Out More Information

You may have some email problems with the mobile phone because of the
certificates.  Thunderbird and webmail are fine.  One only has to accept
its complaint about an unknown certificate.  I am working on the
certificate problem.

From robert at schetterer.org  Mon Jan 9 11:41:18 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Mon, 09 Jan 2012 10:41:18 +0100
Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts)
In-Reply-To: <4F0AB556.3040103@klunky.co.uk>
References: <4F0AB08A.7050605@klunky.co.uk>
 <4F0AB2FE.8000306@schetterer.org> <4F0AB556.3040103@klunky.co.uk>
Message-ID: <4F0AB63E.4020205@schetterer.org>

Am 09.01.2012 10:37, schrieb Simon Loewenthal:
> On 09/01/12 10:27, Robert Schetterer wrote:
>> Am 09.01.2012 10:16, schrieb J4K:
>>> Morning everyone,
>>>
>>> On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired.  I
>>> replaced it with a new one on the 9th of Jan.  I tested this with
>>> Thunderbird and all is well.
>>>
>>> This morning people tell me they cannot get their email using their
>>> mobile telephones: K9 Mail
>>>
>>> I have reverted the SSL cert back to the old one just in case.
>>> Thunderbird still works.
>>>
>>> Dovecot 1:1.2.15-7 running on Debian 6
>>>
>>> The messages in the logs are:
>>>
>>> Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth
>>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
>>> Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth
>>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
>>>
>>> In dovecot.conf I have this set:
>>>
>>> disable_plaintext_auth = no
>>>
>>> And the auth default mechanisms are set to:
>>> mechanisms = plain login
>>>
>>> What is strange is the only item that changed is the SSL cert, which has
>>> since been changed back to the old one (which has expired... ^^).
>>>
>>> Any ideas where I may look or change?
>>>
>>> Regards, S
>> if you only changed the crt etc. and you're sure you did everything
>> right, perhaps you have forgotten to add a needed intermediate cert?
>>
>> read here
>> http://www.trustico.co.uk/install/how-to-install-ssl-certificate.php
>>
>> Required Intermediate Certificates (CA Certificates)
>>
>> To successfully install your SSL Certificate you may be required to
>> install an Intermediate CA Certificate. Please review the above
>> installation instructions carefully to determine if an Intermediate CA
>> Certificate is required, how to obtain it and correctly import it into
>> your system. For more information please Contact Us.
>> Alternatively, and for systems not covered by the above installation
>> instructions, please use our Intermediate Certificate Wizard to find the
>> correct CA Certificate or Root Bundle that is required for your SSL
>> Certificate to function correctly. Find Out More Information

> I know that the intermediate certs are messed up, which is why I rolled
> back to the old expired certificate.  I did not expect an expired
> certificate to block authentication, and it does not necessarily do so.
> The problem may be elsewhere.

that might be a k9 problem (older versions) or an android problem in
older versions; is there an "ignore ssl failure" option as a workaround?

what does thunderbird tell you about the new cert?

but for sure the problem may be elsewhere

>
> -- 
> PGP is optional: 4BA78604
> simon @ klunky . org
> simon @ klunky . co.uk
> I won't accept your confidentiality
> agreement, and your Emails are kept.
> ~???~
>

-- 
Best Regards

MfG Robert Schetterer

Germany/Munich/Bavaria

From junk4 at klunky.co.uk  Mon Jan 9 11:52:22 2012
From: junk4 at klunky.co.uk (J4K)
Date: Mon, 09 Jan 2012 10:52:22 +0100
Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts)
In-Reply-To: <4F0AB63E.4020205@schetterer.org>
References: <4F0AB08A.7050605@klunky.co.uk>
 <4F0AB2FE.8000306@schetterer.org> <4F0AB556.3040103@klunky.co.uk>
 <4F0AB63E.4020205@schetterer.org>
Message-ID: <4F0AB8D6.8060204@klunky.co.uk>

On 09/01/12 10:41, Robert Schetterer wrote:
> Am 09.01.2012 10:37, schrieb Simon Loewenthal:
>> On 09/01/12 10:27, Robert Schetterer wrote:
>>> Am 09.01.2012 10:16, schrieb J4K:
>>>> Morning everyone,
>>>>
>>>> On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired.  I
>>>> replaced it with a new one on the 9th of Jan.  I tested this with
>>>> Thunderbird and all is well.
>>>>
>>>> This morning people tell me they cannot get their email using their
>>>> mobile telephones: K9 Mail
>>>>
>>>> I have reverted the SSL cert back to the old one just in case.
>>>> Thunderbird still works.
>>>>
>>>> Dovecot 1:1.2.15-7 running on Debian 6
>>>>
>>>> The messages in the logs are:
>>>>
>>>> Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth
>>>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
>>>> Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth
>>>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
>>>>
>>>> In dovecot.conf I have this set:
>>>>
>>>> disable_plaintext_auth = no
>>>>
>>>> And the auth default mechanisms are set to:
>>>> mechanisms = plain login
>>>>
>>>> What is strange is the only item that changed is the SSL cert, which has
>>>> since been changed back to the old one (which has expired... ^^).
>>>>
>>>> Any ideas where I may look or change?
>>>>
>>>> Regards, S
>>> if you only changed the crt etc. and you're sure you did everything
>>> right, perhaps you have forgotten to add a needed intermediate cert?
>>>
>>> read here
>>> http://www.trustico.co.uk/install/how-to-install-ssl-certificate.php
>>>
>>> Required Intermediate Certificates (CA Certificates)
>>>
>>> To successfully install your SSL Certificate you may be required to
>>> install an Intermediate CA Certificate. Please review the above
>>> installation instructions carefully to determine if an Intermediate CA
>>> Certificate is required, how to obtain it and correctly import it into
>>> your system. For more information please Contact Us.
>>> Alternatively, and for systems not covered by the above installation
>>> instructions, please use our Intermediate Certificate Wizard to find the
>>> correct CA Certificate or Root Bundle that is required for your SSL
>>> Certificate to function correctly. Find Out More Information

>> I know that the intermediate certs are messed up, which is why I rolled
>> back to the old expired certificate.  I did not expect an expired
>> certificate to block authentication, and it does not necessarily do so.
>> The problem may be elsewhere.

> that might be a k9 problem (older versions) or an android problem in
> older versions; is there an "ignore ssl failure" option as a workaround?
>
> what does thunderbird tell you about the new cert?
>
> but for sure the problem may be elsewhere

>> -- 
>> PGP is optional: 4BA78604
>> simon @ klunky . org
>> simon @ klunky . co.uk
>> I won't accept your confidentiality
>> agreement, and your Emails are kept.
>> ~???~
>>

TB says unknown, and I know why.  I have set the class 1 and class 2
certificate chain keys to the same key, when these should be different.
Damn, StartCom's certs are difficult to set up.

Workaround for K9 (latest version) is to go to the Account Settings ->
Fetching -> Incoming Server, and click Next.  It will attempt to
authenticate and then complain about the certificate.  One can ignore
the warning and accept the certificate.

Cheers all.

Simon

From janm at transactionware.com  Sun Jan 8 10:38:04 2012
From: janm at transactionware.com (Jan Mikkelsen)
Date: Sun, 8 Jan 2012 19:38:04 +1100
Subject: [Dovecot] Building 2.1.rc1 with clucene, but without libstemmer
In-Reply-To: <1324377324.3597.47.camel@innu>
References: <1324377324.3597.47.camel@innu>
Message-ID: <8D81449C-C294-4983-961E-17907EBDBF6A@transactionware.com>

On 20/12/2011, at 9:35 PM, Timo Sirainen wrote:
> [...]
>> and libtextcat is dovecot 2.1.rc1 intended to be used against?
>
> http://www.let.rug.nl/vannoord/TextCat/ probably.. Basically I've just
> used the libstemmer and libtextcat that are in Debian.

Hmm. That seems to have been turned into libtextcat here:
http://software.wise-guys.nl/libtextcat/

Dovecot builds against this version, so I'm hopeful it will work OK.

Thanks for the answers, I'm going to test out 2.1-rc3 tomorrow.

Regards,

Jan.

From mpapet at yahoo.com  Mon Jan 9 07:34:31 2012
From: mpapet at yahoo.com (Michael Papet)
Date: Sun, 8 Jan 2012 21:34:31 -0800 (PST)
Subject: [Dovecot] Newbie: LDA Isn't Logging
Message-ID: <1326087271.17295.YahooMailClassic@web125406.mail.ne1.yahoo.com>

I did some testing on a Debian testing VM.  I built 2.0.17 from sources
and copied the config straight over from the malfunctioning machine.
LDA logging worked.  So, it could be something about my system.  But,
running /usr/lib/dovecot/deliver still doesn't return a value on the
command line as documented on the wiki.

I've attached strace files from both the malfunctioning Debian packages
machine and the built-from-sources VM.  Unfortunately, I'm a new strace
user, so I don't know what it all means.

Michael

--- On Tue, 1/3/12, Timo Sirainen wrote:

> From: Timo Sirainen
> Subject: Re: [Dovecot] Newbie: LDA Isn't Logging
> To: "Michael"
> Cc: dovecot at dovecot.org
> Date: Tuesday, January 3, 2012, 4:15 AM
> On Mon, 2012-01-02 at 22:48 -0800,
> Michael wrote:
> > Hi,
> >
> > I'm a newbie having some trouble getting deliver to
> log anything.  Related to this, there are no return
> values unless the -d is missing.  I'm using LDAP to
> store virtual domain and user account information.
> > Test #1: /usr/lib/dovecot/deliver -e -f mpapet at yahoo.com
> -d zed at mailswansong.dom < bad.mail
> > Expected result: supposed to fail, there's no zed
> account via ldap lookup and supposed to get a return code
> per the wiki at http://wiki2.dovecot.org/LDA.
> Supposed to log too.
> > Actual result: nothing gets delivered, no return code,
> nothing is logged.

> As in return code is 0? Something's definitely wrong there
> then.
>
> First check that deliver at least reads the config file.
> Add something
> broken in there, such as: "foo=bar" at the beginning of
> dovecot.conf.
> Does deliver fail now?
>
> Also running deliver via strace could show something
> useful.
>
>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: sources_2.0.17_strace.txt
URL: 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: malfunctioning_debian_strace.txt
URL: 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: sources_2.0.17_no-user.txt
URL: 

From bind at enas.net  Mon Jan 9 12:18:50 2012
From: bind at enas.net (Urban Loesch)
Date: Mon, 09 Jan 2012 11:18:50 +0100
Subject: [Dovecot] Proxy login failures
Message-ID: <4F0ABF0A.1080404@enas.net>

Hi,

I'm using two dovecot pop3/imap proxies in front of our dovecot servers.
For some days now I have been seeing many of the following errors in the
logs of the two proxy servers:

...
dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected:
Connection closed: Connection reset by peer (state=0): user=,
method=PLAIN, rip=remote-ip, lip=localip
...
dovecot: imap-login: Error: proxy: Remote "IPV6-IP":143 disconnected:
Connection closed: Connection reset by peer (state=0): user=,
method=PLAIN, rip=remote-ip, lip=localip
...

When this happens the client gets the following error from the proxy:
-ERR [IN-USE] Account is temporarily unavailable.

System details:
OS: Debian Linux
Proxy: 2.0.5-0~auto+23
Backend: 2.0.13-0~auto+54

Have you any idea what could cause this type of error?
Thanks and regards
Urban Loesch

doveconf -n from one of our backend servers:

# 2.0.13 (02d97fb66047): /etc/dovecot/dovecot.conf
# OS: Linux 2.6.38.8-vs2.3.0.37-rc17-rol-em64t-timerp x86_64 Debian 6.0.2 ext4
auth_cache_negative_ttl = 0
auth_cache_size = 40 M
auth_cache_ttl = 12 hours
auth_mechanisms = plain login
auth_username_format = %Lu
auth_verbose = yes
deliver_log_format = msgid=%m: %$ %p %w
disable_plaintext_auth = no
login_trusted_networks = our Proxy IP's (v4 and v6)
mail_gid = mailstore
mail_location = mdbox:/home/vmail/%d/%n:INDEX=/home/dovecotindex/%d/%n
mail_plugins = " quota mail_log notify zlib"
mail_uid = mailstore
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave imapflags notify
mdbox_rotate_size = 5 M
passdb {
  args = /etc/dovecot/dovecot-sql-account.conf
  driver = sql
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size from
  mail_log_group_events = no
  quota = dict:Storage used::file:%h/dovecot-quota
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  sieve_extensions = +notify +imapflags
  sieve_max_redirects = 10
  zlib_save = gz
  zlib_save_level = 5
}
protocols = imap pop3 lmtp sieve
service imap-login {
  inet_listener imap {
    port = 143
  }
  service_count = 0
  vsz_limit = 256 M
}
service lmtp {
  inet_listener lmtp {
    address = *
    port = 24
  }
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0666
    user = postfix
  }
  vsz_limit = 512 M
}
service pop3-login {
  inet_listener pop3 {
    port = 110
  }
  service_count = 0
  vsz_limit = 256 M
}
ssl = no
ssl_cert = 

References: <4F0AB08A.7050605@klunky.co.uk>
 <4F0AB2FE.8000306@schetterer.org> <4F0AB556.3040103@klunky.co.uk>
 <4F0AB63E.4020205@schetterer.org> <4F0AB8D6.8060204@klunky.co.uk>
Message-ID: <4F0AD221.6060007@arx.net>

> TB says unknown, and I know why.  I have set the class 1 and class 2
> certificate chain keys to the same key, when these should be different.
> Damn, StartCom's certs are difficult to set up.

read this: http://binblog.info/2010/02/02/lengthy-chains/

basically, you start with YOUR cert and work your way up to the root CA
with

openssl x509 -in your_servers.{crt|pem} -subject -issuer > server-allinone.crt
openssl x509 -in intermediate_authority.{crt|pem} -subject -issuer >> server-allinone.crt
openssl x509 -in root_ca.{crt|pem} -subject -issuer >> server-allinone.crt

then, in dovecot.conf

---8<---
ssl_cert_file = /path/to/server-allinone.crt
ssl_key_file = /path/to/private.key
---8<---

It works for me but YMMV of course.  Android versions before 2.2 do not
have StartCom as a trusted CA and will complain anyhow.

Best Regards,
Thanos Chatziathanassiou

>
> Workaround for K9 (latest version) is to go to the Account Settings ->
> Fetching -> Incoming Server, and click Next.  It will attempt to
> authenticate and then complain about the certificate.  One can ignore
> the warning and accept the certificate.
>
> Cheers all.
>
> Simon
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4271 bytes
Desc: S/MIME cryptographic signature
From stan at hardwarefreak.com Mon Jan 9 14:28:55 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Mon, 09 Jan 2012 06:28:55 -0600
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <88fev069kbv8@mids.svenhartge.de>
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de>
Message-ID: <4F0ADD87.5080103@hardwarefreak.com>

On 1/8/2012 9:39 AM, Sven Hartge wrote:
> Memory size. I am a bit hesitant to deploy a VM with 16GB of RAM. My cluster nodes each have 48GB, so no problem on this side though.

Shouldn't be a problem if you're going to spread the load over 2 to 4 cluster nodes. 16/2 = 8GB per VM, 16/4 = 4GB per Dovecot VM. This, assuming you are able to evenly spread user load.

> And our VMware SAN is iSCSI based, so no way to plug a FC-based storage into it.

There are standalone FC-iSCSI bridges, but they're marketed to bridge FC SAN islands over an IP WAN. Director class SAN switches can connect anything to anything; just buy the cards you need. Both of these are rather pricey. Neither would make sense in your environment. I'm just pointing out that it can be done.

> So, this reads like my idea in the first place.
>
> Only you place all the mails on the NFS server, whereas my idea was to just share the shared folders from a central point and keep the normal user dirs local to the different nodes, thus reducing network impact for the way more common user access.

To be quite honest, after thinking this through a bit, many traditional advantages of a single shared mail store start to disappear. Whether you use NFS or a cluster FS, or 'local' disk (RDMs), all IO goes to the same array, so the traditional IO load balancing advantage disappears. The other main advantage, replacing a dead hardware node by simply mapping the LUNs to the new one and booting it up, also disappears due to VMware's unique abilities, including vmotion. Efficient use of storage isn't an issue, as you can just as easily slice off a small LUN to each of 2/4 Dovecot VMs as a larger one to the NFS VM. So the only disadvantages I see are with the 'local' disk RDM mailstore location: 'manual' connection/mailbox/size balancing, all increasing administrator burden.

> 2.3GHz for most VMware nodes.

How many total cores per VMware node (all sockets)?

> You got the numbers wrong. And I got a word wrong ;)
>
> Should have read "900GB _of_ 1300GB used".

My bad. I misunderstood.

> So not much wiggle room left.

And that one is retiring anyway, as you state below. So do you have plenty of space on your VMware SAN arrays? If not, can you add disks or do you need another array chassis?

> But modifications to our systems are made, which allow me to temp-disable a user, convert and move his mailbox and re-enable him, which allows me to move them one at a time from the old system to the new one, without losing a mail or disrupting service too long and too often.

As it should be.

> This is a Transtec Provigo 610. This is a 24-disk enclosure, 12 disks with 150GB (7,200 rpm) each for the main mail storage in RAID6 and another 10 disks with 150GB (5,400 rpm) for a backup LUN. I daily rsnapshot my /home onto this local backup (20 days of retention), because it is easier to restore from than firing up Bacula, which has the long retention time of 90 days. But most users need a restore of mails from $yesterday or $the_day_before.
And your current iSCSI SAN array(s) backing the VMware farm? Total disks? Is it monolithic, or do you have multiple array chassis from one or multiple vendors? > Well, it was either Parallel-SCSI or FC back then, as far as I can > remember. The price difference between the U320 version and the FC one > was not so big and I wanted to avoid having to route those big SCSI-U320 > through my racks. Can't blame you there. I take it you hadn't built the iSCSI SAN yet at that point? > See above, not 1500GB disks, but 150GB ones. RAID6, because I wanted the > double security. I have been kind of burned by the previous system and I > tend to get nervous while tinking about data loss in my mail storage, > because I know my users _will_ give me hell if that happens. And as it turns out RAID10 wouldn't have provided you enough bytes. > I never used mbox as an admin. The box before the box before this one > uses uw-imapd with mbox and I experienced the system as a user and it > was horriffic. Most users back then never heard of IMAP folders and just > stored their mails inside of INBOX, which of course got huge. If one of > those users with a big mbox then deleted mails, it would literally lock > the box up for everyone, as uw-imapd was copying (for example) a 600MB > mbox file around to delete one mail. Yeah, ouch. IMAP with mbox works pretty well when users are marginally smart about organizing their mail, or a POP then delete setup. I'd bet if that was maildir in that era on that box it would have slowed things way down as well. Especially if the filesystem was XFS, which had horrible, abysmal really, unlink performance until 2.6.35 (2009). > Of course, this was mostly because of the crappy uw-imapd and secondly > by some poor design choices in the server itself (underpowered RAID > controller, to small cache and a RAID5 setup, low RAM in the server). That's a recipe for disaster. > So the first thing we did back then, in 2004, was to change to Courier > and convert from mbox to maildir, which made the mailsystem fly again, > even on the same hardware, only the disk setup changed to RAID10. I wonder how much gain you'd have seen if you stuck with RAID5 instead... > Then we bought new hardware (the one previous to the current one), this > time with more RAM, better RAID controller, smarter disk setup. We > outgrew this one really fast and a disk upgrade was not possible; it > lasted only 2 years. Did you need more space or more spindles? > But Courier is showing its age and things like Sieve are only possible > with great pain, so I want to avoid it. Don't blame ya. Lots of people migrate from Courier for Dovecot for similar reasons. > And this is why I kind of hold this upgrade back until dovecot 2.1 is > released, as it has some optimizations here. Sounds like it's going to be a bit more than an 'upgrade'. ;) > That is a BPO-kernel. Not-yet Squeeze. I admin over 150 different > systems here, plus I am the main VMware and SAN admin. So upgrades take > some time until I grow an extra pair of eyes and arms. ;) /me nods > And since I have been planning to re-implement the mailsystem for some > time now, I held the update to the storage backends back. No use in > disrupting service for the end user if I'm going to replace the whole > thing with a new one in the end. /me nods > Naa, I have been doing this for too long. While I am perfectly capable > of building such a server myself, I am now the kind of guy who wants to > "yell" at a vendor, when their hardware fails. 
At your scale it would simply be impractical, and impossible from a time management standpoint. > Personal build PCs and servers out of single parts have been nothing > than a nightmare for me. I've had nothing but good luck with "DIY" systems. My background is probably a bit different than most though. Hardware has been in my blood since I was a teenager in about '86. I used to design and build relatively high end custom -48vdc white box servers and SCSI arrays for telcos back in the day, along with standard 115v servers for SMBs. Also, note the RHS of my email address. ;) That is a nickname given to me about 13 years ago. I decided to adopt it for my vanity domain. > And: my cowworkers need to be able to service > them as well while I am not available and they are not as a hardware > aficionado as I am. That's the biggest reason right there. DIY is only really feasible if you run your own show, and will likely continue to be running it for a while. Or if staff is similarly skilled. Most IT folks these days aren't hardware oriented people. > So "professional" hardware with a 5 to 7 year support contract is the > way to go for me. Definitely. > I have plenty space for 2U systems and already use DL385 G7s, I am not > fixed on Intel or AMD, I'll gladly use the one which is the most fit for > a given jobs. Just out of curiosity do you have any Power or SPARC systems, or all x86? -- Stan From stan at hardwarefreak.com Mon Jan 9 15:13:40 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Mon, 09 Jan 2012 07:13:40 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> Message-ID: <4F0AE804.5070002@hardwarefreak.com> On 1/8/2012 2:15 PM, Sven Hartge wrote: > Wouldn't such a setup be the "Best of Both Worlds"? Having the main > traffic going to local disks (being RDMs) and also being able to provide > shared folders to every user who needs them without the need to move > those users onto one server? The only problems I can see at this time are: 1. Some users will have much larger mailboxes than others. Each year ~1/4 of your student population rotates, so if you manually place existing mailboxes now based on current size you have no idea who the big users are in the next freshman class, or the next. So you may have to do manual re-balancing of mailboxes, maybe frequently. 2. If you lose a Dovecot VM guest due to image file or other corruption, or some other rare cause, you can't restart that guest, but will have to build a new image from a template. This could cause either minor or significant downtime for ~1/4 of your mail users w/4 nodes. This is likely rare enough it's not worth consideration. 3. You will consume more SAN volumes and LUNs. Most arrays have a fixed number of each. May or may not be an issue. 
-- Stan From stan at hardwarefreak.com Mon Jan 9 15:38:20 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Mon, 09 Jan 2012 07:38:20 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> Message-ID: <4F0AEDCC.10109@hardwarefreak.com> On 1/8/2012 3:07 PM, Sven Hartge wrote: > Ah, I forgot: I _already_ have the mechanisms in place to statically > redirect/route accesses for users to different backends, since some of > the users are already redirected to a different mailsystem at another > location of my university. I assume you mean IMAP/POP connections, not SMTP. > So using this mechanism to also redirect/route users internal to _my_ > location is no big deal. > > This is what got me into the idea of several independant backend > storages without the need to share the _whole_ storage, but just the > shared folders for some users. > > (Are my words making any sense? I got the feeling I'm writing German with > English words and nobody is really understanding anything ...) You're making perfect sense, and frankly, if not for the .de TLD in your email address, I'd have thought you were an American. Your written English is probably better than mine, and it's my only language. To be fair to the Brits, I speak/write American English. ;) I'm guessing no one else has interest in this thread, or maybe simply lost interest as the replies have been lengthy, and not wholly Dovecot related. I accept some blame for that. -- Stan From sven at svenhartge.de Mon Jan 9 15:48:22 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 14:48:22 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0ADD87.5080103@hardwarefreak.com> Message-ID: <08fhdkhkv5v8@mids.svenhartge.de> Stan Hoeppner wrote: > On 1/8/2012 9:39 AM, Sven Hartge wrote: >> Memory size. I am a bit hesistant to deploy a VM with 16GB of RAM. My >> cluster nodes each have 48GB, so no problem on this side though. > Shouldn't be a problem if you're going to spread the load over 2 to 4 > cluster nodes. 16/2 = 8GB per VM, 16/4 = 4GB per Dovecot VM. This, > assuming you are able to evenly spread user load. I think I will be able to do that. If I devide my users by using a hash like MD5 or SHA1 over their username, this should give me an even distribution. >> So, this reads like my idea in the first place. >> >> Only you place all the mails on the NFS server, whereas my idea was to >> just share the shared folders from a central point and keep the normal >> user dirs local to the different nodes, thus reducing network impact for >> the way more common user access. > To be quite honest, after thinking this through a bit, many traditional > advantages of a single shared mail store start to disappear. Whether > you use NFS or a clusterFS, or 'local' disk (RDMs), all IO goes to the > same array, so the traditional IO load balancing advantage disappears. > The other main advantage, replacing a dead hardware node, simply mapping > the LUNs to the new one and booting it up, also disappears due to > VMware's unique abilities, including vmotion. 
> Efficient use of storage isn't an issue as you can just as easily slice off a small LUN to each of 2/4 Dovecot VMs as a larger one to the NFS VM.

Yes. Plus I can much more easily increase a LUN's size if the need arises.

> So the only disadvantages I see are with the 'local' disk RDM mailstore location: 'manual' connection/mailbox/size balancing, all increasing administrator burden.

Well, I don't see size balancing as a problem, since I can increase the size of the disk for a node very easily. Load should be fairly even if I distribute the 10,000 users across the nodes. Even if there is a slight imbalance, the systems should have enough power to smooth that out.

I could measure the load every user creates and use that as a distribution key, but I believe this to be a wee bit over-engineered for my scenario.

Initial placement of a new user will be automatic, during the activation of the account, so no administrative burden there.

It seems my initial idea was not so bad after all ;) Now I "just" need to build a little test setup, put some dummy users on it and see if anything bad happens while accessing the shared folders, and how the system reacts should the shared folder server be down.

>> 2.3GHz for most VMware nodes.
>
> How many total cores per VMware node (all sockets)?

8

>> You got the numbers wrong. And I got a word wrong ;)
>>
>> Should have read "900GB _of_ 1300GB used".
>
> My bad. I misunderstood.

Here are the memory statistics at 14:30:

             total       used       free     shared    buffers     cached
Mem:         12046      11199        847          0         88       7926
-/+ buffers/cache:       3185       8861
Swap:         5718         10       5707

>> So not much wiggle room left.
>
> And that one is retiring anyway as you state below. So do you have plenty of space on your VMware SAN arrays? If not can you add disks or do you need another array chassis?

The SAN has plenty of space: over 70TiB at this time, with another 70TiB having just arrived and waiting to be connected.

>> This is a Transtec Provigo 610. This is a 24-disk enclosure, 12 disks with 150GB (7,200 rpm) each for the main mail storage in RAID6 and another 10 disks with 150GB (5,400 rpm) for a backup LUN. I daily rsnapshot my /home onto this local backup (20 days of retention), because it is easier to restore from than firing up Bacula, which has the long retention time of 90 days. But most users need a restore of mails from $yesterday or $the_day_before.
>
> And your current iSCSI SAN array(s) backing the VMware farm? Total disks? Is it monolithic, or do you have multiple array chassis from one or multiple vendors?

The iSCSI storage nodes (HP P4500) use 600GB SAS6 disks at 15k rpm, 12 disks per node, configured in 2 RAID5 sets with 6 disks each. But this is internal to each storage node; the nodes are kind of a black box and have to be treated as such.

The HP P4500 is a bit unique, since it does not consist of a head node with storage arrays connected to it, but of individual storage nodes forming a self-balancing iSCSI cluster. (The nodes consist of DL320s G2.) So far I have had no performance or other problems with this setup, and it scales quite nicely, as you buy as you grow.

And again, price was also a factor: deploying an FC SAN would have cost us more than three times what the deployment of an iSCSI solution did, because the latter is "just" Ethernet, while the former would have needed a lot more totally new components.

>> Well, it was either Parallel-SCSI or FC back then, as far as I can remember. The price difference between the U320 version and the FC one was not so big, and I wanted to avoid having to route those big SCSI-U320 cables through my racks.
>
> Can't blame you there. I take it you hadn't built the iSCSI SAN yet at that point?

No, at that time (2005/2006) nobody thought of a SAN. That is a fairly "new" idea here, first implemented for the VMware cluster in 2008.

>> Then we bought new hardware (the one previous to the current one), this time with more RAM, better RAID controller, smarter disk setup. We outgrew this one really fast and a disk upgrade was not possible; it lasted only 2 years.
>
> Did you need more space or more spindles?

More space. The IMAP usage became more prominent, which caused a steep rise in space needed on the mail storage server. But 74GiB SCA drives were expensive and 130GiB SCA drives were not available at that time.

>> And this is why I kind of hold this upgrade back until dovecot 2.1 is released, as it has some optimizations here.
>
> Sounds like it's going to be a bit more than an 'upgrade'. ;)

Well, yes. It is more a re-implementation than an upgrade.

>> I have plenty of space for 2U systems and already use DL385 G7s. I am not fixed on Intel or AMD; I'll gladly use the one which is the best fit for a given job.
>
> Just out of curiosity, do you have any Power or SPARC systems, or all x86?

Central IT here these days only uses x86-based systems. There were some Sun SPARC systems, but both have been decommissioned. New SPARC hardware is just too expensive for our scale. And if you want to use virtualization, you can either use only SPARC systems and partition them, or use x86-based systems. And then there is the need to virtualize Windows, so x86 is the only option.

Most bigger universities in Germany make nearly exclusive use of SPARC systems, but they have had a central IT with big iron (IBM, HP, etc.) since back in the 1960s, so naturally they continue on that path.

Grüße,
Sven.

--
Sigmentation fault. Core dumped.

From philip at turmel.org Mon Jan 9 15:50:49 2012
From: philip at turmel.org (Phil Turmel)
Date: Mon, 09 Jan 2012 08:50:49 -0500
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <4F0AEDCC.10109@hardwarefreak.com>
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AEDCC.10109@hardwarefreak.com>
Message-ID: <4F0AF0B9.7030406@turmel.org>

On 01/09/2012 08:38 AM, Stan Hoeppner wrote:
> On 1/8/2012 3:07 PM, Sven Hartge wrote:
[...]
>> (Are my words making any sense? I got the feeling I'm writing German with English words and nobody is really understanding anything ...)
>
> You're making perfect sense, and frankly, if not for the .de TLD in your email address, I'd have thought you were an American. Your written English is probably better than mine, and it's my only language. To be fair to the Brits, I speak/write American English. ;)

Concur. My American ear is also perfectly happy.

> I'm guessing no one else has interest in this thread, or maybe simply lost interest as the replies have been lengthy, and not wholly Dovecot related. I accept some blame for that.

I've been following this thread with great interest, but no advice to offer. The content is entirely appropriate, and appreciated. Don't be embarrassed by your enthusiasm, Stan.

Sven, a follow-up report when you have it all working as desired would also be appreciated (and appropriate).
Thanks,

Phil

From sven at svenhartge.de Mon Jan 9 15:52:27 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 14:52:27 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AEDCC.10109@hardwarefreak.com>
Message-ID: <18fhg73kv5v8@mids.svenhartge.de>

Stan Hoeppner wrote:
> On 1/8/2012 3:07 PM, Sven Hartge wrote:
>> Ah, I forgot: I _already_ have the mechanisms in place to statically redirect/route accesses for users to different backends, since some of the users are already redirected to a different mailsystem at another location of my university.
>
> I assume you mean IMAP/POP connections, not SMTP.

Yes. perdition uses its popmap feature to redirect users of the other location to the IMAP/POP servers there. So we only need one central mailserver for the users to configure, while we are able to physically store their mails at different datacenters.

> I'm guessing no one else has interest in this thread, or maybe simply lost interest as the replies have been lengthy, and not wholly Dovecot related. I accept some blame for that.

I will open a new thread with more concrete problems/questions after I set up my test environment. This will be more technical and less philosophical, I hope :)

Grüße,
Sven

--
Sigmentation fault. Core dumped.

From sven at svenhartge.de Mon Jan 9 16:08:12 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 15:08:12 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AE804.5070002@hardwarefreak.com>
Message-ID: <28fhgdvkv5v8@mids.svenhartge.de>

Stan Hoeppner wrote:
> On 1/8/2012 2:15 PM, Sven Hartge wrote:
>> Wouldn't such a setup be the "Best of Both Worlds"? Having the main traffic going to local disks (being RDMs) and also being able to provide shared folders to every user who needs them, without the need to move those users onto one server?
>
> The only problems I can see at this time are:
>
> 1. Some users will have much larger mailboxes than others. Each year ~1/4 of your student population rotates, so if you manually place existing mailboxes now based on current size, you have no idea who the big users are in the next freshman class, or the next. So you may have to do manual re-balancing of mailboxes, maybe frequently.

The quota for students is 1GiB here. If I provide each of my 4 nodes with 500GiB of storage space, this gives me 2TiB now, which should be sufficient. If a node fills, I increase its storage space. Only if it fills too fast may I have to rebalance users.

And I never wanted to place the users based on their current size. I knew this was not going to work, because of the reasons you mentioned. I just want to hash their username and use that as the function to distribute the users, keeping it simple and stupid.
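A minimal sketch of such a hash-based placement, assuming bash with coreutils md5sum and four backend nodes (the username and node names are placeholders):

# take the first 8 hex digits of the MD5 of the username,
# interpret them as a number and reduce it modulo the node count
user="jdoe"
node=$(( 0x$(printf '%s' "$user" | md5sum | cut -c1-8) % 4 ))
echo "user $user goes to backend0$node"

Since the mapping depends only on the username, every component (proxy, delivery, admin scripts) can compute it independently; the trade-off is that changing the node count reshuffles nearly all users.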
> 2. If you lose a Dovecot VM guest due to image file or other corruption, or some other rare cause, you can't restart that guest, but will have to build a new image from a template. This could cause either minor or significant downtime for ~1/4 of your mail users w/4 nodes. This is likely rare enough it's not worth consideration.

Yes, I know. But right now, if I lose my one and only mail storage server, all users' mailboxes will be offline until I am either a) able to repair the server, b) able to move the disks to my identical backup system (or the backup system to the location of the failed one), or c) willing to start the backup system and lose all mails not rsynced since the last rsync run.

It is not easy designing a mail system without a SPoF which still performs under load.

For example, once upon a time I had a DRBD (active/passive) setup between the two storage systems. This would allow me to start my standby system without losing (nearly) any mail. But it was awfully slow and sluggish.

> 3. You will consume more SAN volumes and LUNs. Most arrays have a fixed number of each. May or may not be an issue.

Not really an issue here. The SAN is exclusive to the VMware cluster, so most LUNs are quite big (1TiB to 2TiB), but there are not many of them.

Grüße,
Sven.

--
Sigmentation fault. Core dumped.

From tom at elysium.ltd.uk Mon Jan 9 16:41:55 2012
From: tom at elysium.ltd.uk (Tom Clark)
Date: Mon, 9 Jan 2012 14:41:55 -0000
Subject: [Dovecot] Resetting a UID
Message-ID: <025c01cccedc$d5ccd680$81668380$@elysium.ltd.uk>

Hi,

We've got a client with a BlackBerry that has deleted his emails off his BlackBerry device. The BES won't re-download the messages as it believes it has already downloaded them (apparently it matches on UID).

Is there any way of resetting a folder (and the messages in the folder) UID? I know in Courier you used to be able to touch the directory.

Thanks,

Tom Clark

From tss at iki.fi Mon Jan 9 16:43:00 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 09 Jan 2012 16:43:00 +0200
Subject: [Dovecot] Postfix user map
Message-ID: <4F0AFCF4.1050506@iki.fi>

http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements "postmap" type sockets, which follow Postfix's tcp_table(5) protocol. So you can ask:

get user at domain

and Dovecot answers one of:

- 200 1
- 500 User not found
- 400 Internal failure

So you can use this with Postfix:

virtual_mailbox_maps = tcp:127.0.0.1:1234

With Dovecot you can enable it with:

service auth {
  inet_listener postmap {
    listen = 127.0.0.1
    port = 1234
  }
}

Anyone have ideas if this could be improved, or used for some other purposes?
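One way to exercise such a tcp_table socket by hand is Postfix's own postmap(1) client, or a raw netcat session speaking the protocol described above; the port matches the example listener and the addresses are placeholders:

$ postmap -q "user at domain" tcp:127.0.0.1:1234
1
$ printf 'get nosuchuser at domain\n' | nc 127.0.0.1 1234
500 User not found

postmap prints the value ("1") and exits 0 when the key exists, and exits 1 without output on "not found", so the socket can be sanity-checked before wiring it into virtual_mailbox_maps.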
From tss at iki.fi Mon Jan 9 16:51:07 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 16:51:07 +0200
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <68fd4hi9kbv8@mids.svenhartge.de>
References: <68fd4hi9kbv8@mids.svenhartge.de>
Message-ID:

Too much text in the rest of this thread, so I haven't read it, but:

On 8.1.2012, at 0.20, Sven Hartge wrote:
> Right now, I am pondering with using an additional server with just the shared folders on it and using NFS (or a cluster FS) to mount the shared folder filesystem to each backend storage server, so each user has potential access to a shared folder's data.

With NFS you'll run into problems with caching (http://wiki2.dovecot.org/NFS). Some cluster fs might work better.

The "proper" solution for this that I've been thinking about would be to use v2.1's imapc backend with master users. So that when user A wants to access user B's shared folder, Dovecot connects to B's IMAP server using a master user login, and accesses the mailbox via IMAP. Probably wouldn't be a big job to implement; mainly I'd need to figure out how this should be configured..

From tss at iki.fi Mon Jan 9 16:57:02 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 16:57:02 +0200
Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well?
In-Reply-To: <20120109074057.GC22506@charite.de>
References: <20120109074057.GC22506@charite.de>
Message-ID: <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi>

On 9.1.2012, at 9.40, Ralf Hildebrandt wrote:
> Today I encountered these errors:
>
> Jan 9 08:30:06 mail dovecot: lmtp(31174, backup at backup.invalid): Error: Log synchronization error at seq=858,offset=44672 for /home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID 282388, but next_uid = 282389

Any idea why this happened?

> After that, the SAVEDON date for all mails was reset to today:

Yeah. The "save date" is stored only in the index, and an index rebuild drops all those fields. I guess this could/should be fixed in index rebuild.

> Is there a way of restoring the SAVEDON info?

Not currently without extra code (and even then you could only restore it to e.g. its received date).

From sven at svenhartge.de Mon Jan 9 16:58:50 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 15:58:50 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de>
Message-ID: <38fhk6pkv5v8@mids.svenhartge.de>

Timo Sirainen wrote:
> On 8.1.2012, at 0.20, Sven Hartge wrote:
>> Right now, I am pondering with using an additional server with just the shared folders on it and using NFS (or a cluster FS) to mount the shared folder filesystem to each backend storage server, so each user has potential access to a shared folder's data.
>
> With NFS you'll run into problems with caching (http://wiki2.dovecot.org/NFS). Some cluster fs might work better.
>
> The "proper" solution for this that I've been thinking about would be to use v2.1's imapc backend with master users. So that when user A wants to access user B's shared folder, Dovecot connects to B's IMAP server using a master user login, and accesses the mailbox via IMAP. Probably wouldn't be a big job to implement; mainly I'd need to figure out how this should be configured..

Luckily, in my case, user A does not access anything from user B; instead both user A and user B access the same public folder, which is different from any folder of user A and user B.

Grüße,
Sven.

--
Sigmentation fault. Core dumped.

From tss at iki.fi Mon Jan 9 17:00:21 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 17:00:21 +0200
Subject: [Dovecot] Solr plugin
In-Reply-To:
References: <4F04EDC8.6060809@amfes.com> <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com> <4F0A2980.7050003@amfes.com>
Message-ID:

On 9.1.2012, at 1.48, Daniel L. Miller wrote:
> On 1/8/2012 3:40 PM, Daniel L. Miller wrote:
>> On 1/6/2012 12:32 PM, Daniel L. Miller wrote:
>>> On 1/6/2012 9:36 AM, Timo Sirainen wrote:
>>>> On 6.1.2012, at 19.30, Daniel L. Miller wrote:
>>
>> Jan 8 15:40:09 bubba dovecot: imap(user1 at domain.com): Error: fts_solr: Lookup failed: 400 undefined field CC
>> Jan 8 15:40:09 bubba dovecot: imap: Error:
>
> Looking at the Solr output - looks like the CC parameter is being capitalized while all the other fieldnames are lowercase.

Did you look at the input? Looking at the code, it should be lowercased. Maybe Solr just uppercases it for some reason. Are you using a Solr schema that has a "cc" field?
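For comparison, the Solr schema used with fts-solr is expected to declare the searchable header fields in lowercase; a "cc" declaration along these lines (the attribute values are assumptions, not copied from the stock solr-schema.xml) has to be present for the lookup above to succeed:

<field name="cc" type="text" indexed="true" stored="false"/>

If the field is missing, or declared with different casing, Solr rejects the query with exactly the "400 undefined field CC" error quoted above.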
From Ralf.Hildebrandt at charite.de Mon Jan 9 17:02:49 2012
From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt)
Date: Mon, 9 Jan 2012 16:02:49 +0100
Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well?
In-Reply-To: <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi>
References: <20120109074057.GC22506@charite.de> <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi>
Message-ID: <20120109150249.GH22506@charite.de>

* Timo Sirainen :
> On 9.1.2012, at 9.40, Ralf Hildebrandt wrote:
>
>> Today I encountered these errors:
>>
>> Jan 9 08:30:06 mail dovecot: lmtp(31174, backup at backup.invalid): Error: Log synchronization error at seq=858,offset=44672 for /home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID 282388, but next_uid = 282389
>
> Any idea why this happened?

I was running these commands:

# new style (dovecot)
vorgestern=`date -d "-2 day" +"%Y-%m-%d"`
doveadm expunge -u backup at backup.invalid mailbox INBOX SAVEDBEFORE $vorgestern
doveadm purge -u backup at backup.invalid

>> After that, the SAVEDON date for all mails was reset to today:
>
> Yeah. The "save date" is stored only in index. And index rebuild drops all those fields. I guess this could/should be fixed in index rebuild.

It's OK. Right now it only affects my expiry method.

--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebrandt at charite.de | http://www.charite.de

From CMarcus at Media-Brokers.com Mon Jan 9 17:14:37 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Mon, 09 Jan 2012 10:14:37 -0500
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To:
References: <68fd4hi9kbv8@mids.svenhartge.de>
Message-ID: <4F0B045D.1010101@Media-Brokers.com>

On 2012-01-09 9:51 AM, Timo Sirainen wrote:
> The "proper" solution for this that I've been thinking about would be to use v2.1's imapc backend with master users. So that when user A wants to access user B's shared folder, Dovecot connects to B's IMAP server using master user login, and accesses the mailbox via IMAP. Probably wouldn't be a big job to implement, mainly I'd need to figure out how this should be configured.

Sounds interesting... would this be the new officially supported method for sharing mailboxes in all cases? Or is this just for shared mailboxes on NFS shares?

It sounds like this might be a proper (fully supported, without kludges) way to get what I had asked about before, with respect to expanding on the concept of master users for sharing an entire account with one or more other users...

--
Best regards,
Charles

From noeldude at gmail.com Mon Jan 9 17:32:01 2012
From: noeldude at gmail.com (Noel)
Date: Mon, 09 Jan 2012 09:32:01 -0600
Subject: [Dovecot] Postfix user map
In-Reply-To: <4F0AFCF4.1050506@iki.fi>
References: <4F0AFCF4.1050506@iki.fi>
Message-ID: <4F0B0871.6040500@gmail.com>

On 1/9/2012 8:43 AM, Timo Sirainen wrote:
> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements "postmap" type sockets, which follow Postfix's tcp_table(5) protocol.
So you can ask: > > get user at domain > > and Dovecot answers one of: > > - 200 1 > - 500 User not found > - 400 Internal failure > > So you can use this with Postfix: > > virtual_mailbox_maps = tcp:127.0.0.1:1234 > > With Dovecot you can enable it with: > > service auth { > inet_listener postmap { > listen = 127.0.0.1 > port = 1234 > } > } > > Anyone have ideas if this could be improved, or used for some > other purposes? Cool. Does this just check for valid user existence, or can it also check for over-quota (and respond 500 overquota I suppose)? -- Noel Jones From robert at schetterer.org Mon Jan 9 17:37:32 2012 From: robert at schetterer.org (Robert Schetterer) Date: Mon, 09 Jan 2012 16:37:32 +0100 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0B0871.6040500@gmail.com> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> Message-ID: <4F0B09BC.3010300@schetterer.org> Am 09.01.2012 16:32, schrieb Noel: > On 1/9/2012 8:43 AM, Timo Sirainen wrote: >> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements >> "postmap" type sockets, which follow Postfix's tcp_table(5) >> protocol. So you can ask: >> >> get user at domain >> >> and Dovecot answers one of: >> >> - 200 1 >> - 500 User not found >> - 400 Internal failure >> >> So you can use this with Postfix: >> >> virtual_mailbox_maps = tcp:127.0.0.1:1234 >> >> With Dovecot you can enable it with: >> >> service auth { >> inet_listener postmap { >> listen = 127.0.0.1 >> port = 1234 >> } >> } >> >> Anyone have ideas if this could be improved, or used for some >> other purposes? > > > Cool. > Does this just check for valid user existence, or can it also check > for over-quota (and respond 500 overquota I suppose)? if you use dove lmtp with postfix it allready works "like that way" for over quota > > > -- Noel Jones -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From noeldude at gmail.com Mon Jan 9 17:46:44 2012 From: noeldude at gmail.com (Noel) Date: Mon, 09 Jan 2012 09:46:44 -0600 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0B09BC.3010300@schetterer.org> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <4F0B09BC.3010300@schetterer.org> Message-ID: <4F0B0BE4.8010907@gmail.com> On 1/9/2012 9:37 AM, Robert Schetterer wrote: > Am 09.01.2012 16:32, schrieb Noel: >> On 1/9/2012 8:43 AM, Timo Sirainen wrote: >>> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements >>> "postmap" type sockets, which follow Postfix's tcp_table(5) >>> protocol. So you can ask: >>> >>> get user at domain >>> >>> and Dovecot answers one of: >>> >>> - 200 1 >>> - 500 User not found >>> - 400 Internal failure >>> >>> So you can use this with Postfix: >>> >>> virtual_mailbox_maps = tcp:127.0.0.1:1234 >>> >>> With Dovecot you can enable it with: >>> >>> service auth { >>> inet_listener postmap { >>> listen = 127.0.0.1 >>> port = 1234 >>> } >>> } >>> >>> Anyone have ideas if this could be improved, or used for some >>> other purposes? >> >> Cool. >> Does this just check for valid user existence, or can it also check >> for over-quota (and respond 500 overquota I suppose)? > if you use dove lmtp with postfix it allready works "like that way" > for over quota That can reject over-quota users during the postfix SMTP conversation? 
-- Noel Jones From robert at schetterer.org Mon Jan 9 17:50:49 2012 From: robert at schetterer.org (Robert Schetterer) Date: Mon, 09 Jan 2012 16:50:49 +0100 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0B0BE4.8010907@gmail.com> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <4F0B09BC.3010300@schetterer.org> <4F0B0BE4.8010907@gmail.com> Message-ID: <4F0B0CD9.3090402@schetterer.org> Am 09.01.2012 16:46, schrieb Noel: > On 1/9/2012 9:37 AM, Robert Schetterer wrote: >> Am 09.01.2012 16:32, schrieb Noel: >>> On 1/9/2012 8:43 AM, Timo Sirainen wrote: >>>> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements >>>> "postmap" type sockets, which follow Postfix's tcp_table(5) >>>> protocol. So you can ask: >>>> >>>> get user at domain >>>> >>>> and Dovecot answers one of: >>>> >>>> - 200 1 >>>> - 500 User not found >>>> - 400 Internal failure >>>> >>>> So you can use this with Postfix: >>>> >>>> virtual_mailbox_maps = tcp:127.0.0.1:1234 >>>> >>>> With Dovecot you can enable it with: >>>> >>>> service auth { >>>> inet_listener postmap { >>>> listen = 127.0.0.1 >>>> port = 1234 >>>> } >>>> } >>>> >>>> Anyone have ideas if this could be improved, or used for some >>>> other purposes? >>> >>> Cool. >>> Does this just check for valid user existence, or can it also check >>> for over-quota (and respond 500 overquota I suppose)? >> if you use dove lmtp with postfix it allready works "like that way" >> for over quota > > > That can reject over-quota users during the postfix SMTP conversation? jep ,it does, i was glad having/testing this feature in dove 2 release, avoiding overquota backscatter etc > > > > -- Noel Jones -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From stan at hardwarefreak.com Mon Jan 9 17:56:36 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Mon, 09 Jan 2012 09:56:36 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <28fhgdvkv5v8@mids.svenhartge.de> References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AE804.5070002@hardwarefreak.com> <28fhgdvkv5v8@mids.svenhartge.de> Message-ID: <4F0B0E34.8080901@hardwarefreak.com> On 1/9/2012 8:08 AM, Sven Hartge wrote: > Stan Hoeppner wrote: > The quota for students is 1GiB here. If I provide each of my 4 nodes > with 500GiB of storage space, this gives me 2TiB now, which should be > sufficient. If a nodes fills, I increase its storage space. Only if it > fills too fast, I may have to rebalance users. That should work. > And I never wanted to place the users based on their current size. I > knew this was not going to work because of the reasons you mentioned. > > I just want to hash their username and use this as a function to > distribute the users, keeping it simple and stupid. My apologies Sven. I just re-read your first messages and you did mention this method. > Yes, I know. But right now, if I lose my one and only mail storage > servers, all users mailboxes will be offline, until I am either a) able > to repair the server, b) move the disks to my identical backup system (or > the backup system to the location of the failed one) or c) start the > backup system and lose all mails not rsynced since the last rsync-run. True. 3/4 of users remaining online is much better than none. :) > It is not easy designing a mail system without a SPoF which still > performs under load. And many other systems for that matter. 
> For example, once a time I had a DRDB (active/passive( setup between the > two storage systems. This would allow me to start my standby system > without losing (nearly) any mail. But this was awful slow and sluggish. Eric Rostetter at University of Texas at Austin has reported good performance with his twin Dovecot DRBD cluster. Though in his case he's doing active/active DRBD with GFS2 sitting on top, so there is no failover needed. DRBD is obviously not an option for your current needs. >> 3. You will consume more SAN volumes and LUNs. Most arrays have a >> fixed number of each. May or may not be an issue. > > Not really an issue here. The SAN is exclusive for the VMware cluster, > so most LUNs are quite big (1TiB to 2TiB) but there are not many of > them. I figured this wouldn't be a problem. I'm just trying to be thorough, mentioning anything I can think of that might be an issue. The more I think about your planned architecture the more it reminds me of a "shared nothing" database cluster--even a relatively small one can outrun a well tuned mainframe, especially doing decision support/data mining workloads (TPC-H). As long as you're prepared for the extra administration, which you obviously are, this setup will yield better performance than the NFS setup I recommended. Performance may not be quite as good as 4 physical hosts with local storage, but you haven't mentioned the details of your SAN storage nor the current load on it, so obviously I can't say with any certainty. If the controller currently has plenty of spare IOPS then the performance difference would be minimal. And using the SAN allows automatic restart of a VM if a physical node dies. As with Phil, I'm anxious to see how well it works in production. When you send an update please CC me directly as sometimes I don't read all the list mail. I hope my participation was helpful to you Sven, even if only to a small degree. Best of luck with the implementation. -- Stan From sven at svenhartge.de Mon Jan 9 18:16:14 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 17:16:14 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AE804.5070002@hardwarefreak.com> <28fhgdvkv5v8@mids.svenhartge.de> <4F0B0E34.8080901@hardwarefreak.com> Message-ID: <48fhoafkv5v8@mids.svenhartge.de> Stan Hoeppner wrote: > The more I think about your planned architecture the more it reminds > me of a "shared nothing" database cluster--even a relatively small one > can outrun a well tuned mainframe, especially doing decision > support/data mining workloads (TPC-H). > As long as you're prepared for the extra administration, which you > obviously are, this setup will yield better performance than the NFS > setup I recommended. Performance may not be quite as good as 4 > physical hosts with local storage, but you haven't mentioned the > details of your SAN storage nor the current load on it, so obviously I > can't say with any certainty. If the controller currently has plenty > of spare IOPS then the performance difference would be minimal. This is the beauty of the HP P4500: every node is a controller, load is automagically balanced between all nodes of a storage cluster. The more nodes (up to ten) you add, the more performance you get. 
So far, I have not been able to push our current SAN to its limits, even with totally artificial benchmarks, so I am quite confident in its performance for the given task. But if everything fails and the performance is not good, I can still go ahead and buy dedicated hardware for the mailsystem. The only thing left is the NFS problem with caching Timo mentioned, but since the accesses to a central public shared folder will be only a minor portion of a clients access, I am hoping the impact will be minimal. Only testing will tell. Gr??e, Sven. -- Sigmentation fault. Core dumped. From robert at schetterer.org Mon Jan 9 18:19:20 2012 From: robert at schetterer.org (Robert Schetterer) Date: Mon, 09 Jan 2012 17:19:20 +0100 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0B0CD9.3090402@schetterer.org> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <4F0B09BC.3010300@schetterer.org> <4F0B0BE4.8010907@gmail.com> <4F0B0CD9.3090402@schetterer.org> Message-ID: <4F0B1388.209@schetterer.org> Am 09.01.2012 16:50, schrieb Robert Schetterer: > Am 09.01.2012 16:46, schrieb Noel: >> On 1/9/2012 9:37 AM, Robert Schetterer wrote: >>> Am 09.01.2012 16:32, schrieb Noel: >>>> On 1/9/2012 8:43 AM, Timo Sirainen wrote: >>>>> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements >>>>> "postmap" type sockets, which follow Postfix's tcp_table(5) >>>>> protocol. So you can ask: >>>>> >>>>> get user at domain >>>>> >>>>> and Dovecot answers one of: >>>>> >>>>> - 200 1 >>>>> - 500 User not found >>>>> - 400 Internal failure >>>>> >>>>> So you can use this with Postfix: >>>>> >>>>> virtual_mailbox_maps = tcp:127.0.0.1:1234 >>>>> >>>>> With Dovecot you can enable it with: >>>>> >>>>> service auth { >>>>> inet_listener postmap { >>>>> listen = 127.0.0.1 >>>>> port = 1234 >>>>> } >>>>> } >>>>> >>>>> Anyone have ideas if this could be improved, or used for some >>>>> other purposes? >>>> >>>> Cool. >>>> Does this just check for valid user existence, or can it also check >>>> for over-quota (and respond 500 overquota I suppose)? >>> if you use dove lmtp with postfix it allready works "like that way" >>> for over quota >> >> >> That can reject over-quota users during the postfix SMTP conversation? 
> jep, it does, i was glad having/testing this feature in dove 2 release, avoiding overquota backscatter etc

I am afraid I wasn't totally correct here. In fact, I haven't seen over-quota backscatter on my servers since using Dovecot LMTP with Postfix, but I guess there may be cases left in which it could happen; you should ask Timo for the exact technical answer.

The Postfix answer always was "write some policy daemon for it" (which I found extremely complicated when I tried, and gave up). But I guess it is always a problem comparing the size of a mail with the space left in the mail store, i.e. with many recipients of one mail etc., whatever technical solution is used.

So I should have said: Dovecot LMTP is the best/easiest solution for over-quota that I know of at present, and my problems with it are solved for now.

> -- Noel Jones

--
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria

From tss at iki.fi Mon Jan 9 20:09:46 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 20:09:46 +0200
Subject: [Dovecot] Postfix user map
In-Reply-To: <4F0B0871.6040500@gmail.com>
References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com>
Message-ID: <8A109A75-164C-41B4-A13B-19C3F1D01E12@iki.fi>

On 9.1.2012, at 17.32, Noel wrote:
> On 1/9/2012 8:43 AM, Timo Sirainen wrote:
>> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements "postmap" type sockets, which follow Postfix's tcp_table(5) protocol. So you can ask:
>>
>> get user at domain
>>
>> and Dovecot answers one of:
>>
>> - 200 1
>> - 500 User not found
>> - 400 Internal failure
>>
>> Anyone have ideas if this could be improved, or used for some other purposes?
>
> Cool.
> Does this just check for valid user existence, or can it also check for over-quota (and respond 500 overquota I suppose)?

Hmm. That looked potentially useful, but Postfix doesn't seem to support it, at least not that way, since the message to the SMTP client is the same regardless of what I add after the 500 reply. Also, that would have required me to move the code somewhere else from the auth process, since auth doesn't know the quota usage. And internally Dovecot would still have had to do the auth lookup separately, so there's really no benefit in doing this vs. having Postfix do two lookups.

From tss at iki.fi Mon Jan 9 20:12:38 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 20:12:38 +0200
Subject: [Dovecot] Postfix user map
In-Reply-To: <4F0B1388.209@schetterer.org>
References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <4F0B09BC.3010300@schetterer.org> <4F0B0BE4.8010907@gmail.com> <4F0B0CD9.3090402@schetterer.org> <4F0B1388.209@schetterer.org>
Message-ID: <70A95B54-98CE-4B4D-8B76-CDA279353202@iki.fi>

On 9.1.2012, at 18.19, Robert Schetterer wrote:
> I am afraid I wasn't totally correct here. In fact, I haven't seen over-quota backscatter on my servers since using Dovecot LMTP with Postfix.

LMTP shouldn't matter here. In most configs mails are put into the queue first, and only from there are they sent to LMTP; if LMTP then rejects a mail, backscatter is sent. Maybe the difference you're seeing is that it's now Postfix sending the bounce (or perhaps skipping it?) instead of dovecot-lda (unless you gave the -e parameter).
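For reference, the usual wiring that produces this behaviour (Postfix queues the mail first, then hands it to Dovecot over LMTP, and an over-quota user is rejected during that LMTP conversation) looks roughly like this; the socket path mirrors the unix_listener from the doveconf output above and should be treated as an assumption for other setups:

# Postfix main.cf: deliver all virtual users via Dovecot LMTP
virtual_transport = lmtp:unix:private/dovecot-lmtp

# Dovecot: the matching LMTP socket inside Postfix's queue directory
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0666
    user = postfix
  }
}

With this in place the quota check happens at final delivery: Postfix has already accepted and queued the mail, so a rejection at LMTP time makes Postfix (not Dovecot) generate any non-delivery notice, which matches the explanation above.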
From tss at iki.fi Mon Jan 9 20:15:00 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:15:00 +0200 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <4F0B045D.1010101@Media-Brokers.com> References: <68fd4hi9kbv8@mids.svenhartge.de> <4F0B045D.1010101@Media-Brokers.com> Message-ID: On 9.1.2012, at 17.14, Charles Marcus wrote: > On 2012-01-09 9:51 AM, Timo Sirainen wrote: >> The "proper" solution for this that I've been thinking about would be >> to use v2.1's imapc backend with master users. So that when user A >> wants to access user B's shared folder, Dovecot connects to B's IMAP >> server using master user login, and accesses the mailbox via IMAP. >> Probably wouldn't be a big job to implement, mainly I'd need to >> figure out how this should be configured. > > Sounds interesting... would this be the new officially supported method for sharing mailboxes in all cases? Or is this just for shared mailboxes on NFS shares? Well, it would be one officially supported way to do it. It would also help when using multiple UIDs. From sven at svenhartge.de Mon Jan 9 20:25:55 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 19:25:55 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> Message-ID: <68fi0aakv5v8@mids.svenhartge.de> Timo Sirainen wrote: > On 8.1.2012, at 0.20, Sven Hartge wrote: >> Right now, I am pondering with using an additional server with just >> the shared folders on it and using NFS (or a cluster FS) to mount the >> shared folder filesystem to each backend storage server, so each user >> has potential access to a shared folders data. > With NFS you'll run into problems with caching > (http://wiki2.dovecot.org/NFS). Some cluster fs might work better. Can "mmap_disable = yes" and the other NFS options be set per namespace or only globally? Gr??e, Sven. -- Sigmentation fault. Core dumped. From tss at iki.fi Mon Jan 9 20:35:20 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:35:20 +0200 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <68fi0aakv5v8@mids.svenhartge.de> References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> Message-ID: On 9.1.2012, at 20.25, Sven Hartge wrote: > Timo Sirainen wrote: >> On 8.1.2012, at 0.20, Sven Hartge wrote: > >>> Right now, I am pondering with using an additional server with just >>> the shared folders on it and using NFS (or a cluster FS) to mount the >>> shared folder filesystem to each backend storage server, so each user >>> has potential access to a shared folders data. > >> With NFS you'll run into problems with caching >> (http://wiki2.dovecot.org/NFS). Some cluster fs might work better. > > Can "mmap_disable = yes" and the other NFS options be set per namespace > or only globally? Currently only globally. From tss at iki.fi Mon Jan 9 20:36:36 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:36:36 +0200 Subject: [Dovecot] Resetting a UID In-Reply-To: <025c01cccedc$d5ccd680$81668380$@elysium.ltd.uk> References: <025c01cccedc$d5ccd680$81668380$@elysium.ltd.uk> Message-ID: <002EEBD6-7E83-41EF-B2DC-BAA101FA92D5@iki.fi> On 9.1.2012, at 16.41, Tom Clark wrote: > We've got a client with a Blackberry that hass deleted his emails off his > Blackberry device. The BES won't re-download the messages as it believes it > has already downloaded them (apparently it matches on UID). 
You can delete dovecot.index* and dovecot-uidlist files. Assuming you're using maildir. > Is there any way of resetting a folder (and messages in the folder) UID? I > know in courier you used to be able to touch the directory. I doubt Courier would do that without deleting courierimapuiddb. From tss at iki.fi Mon Jan 9 20:40:01 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:40:01 +0200 Subject: [Dovecot] Proxy login failures In-Reply-To: <4F0ABF0A.1080404@enas.net> References: <4F0ABF0A.1080404@enas.net> Message-ID: <69796D8B-5CFE-48A2-A092-B1A32331BC1F@iki.fi> On 9.1.2012, at 12.18, Urban Loesch wrote: > I'm using two dovecot pop3/imap proxies in front of our dovecot servers. > Since some days I see many of the following errors in the logs of the two proxy-servers: > > dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0): user=, method=PLAIN, rip=remote-ip, lip=localip > > When this happens the Client gets the following error from the proxy: > -ERR [IN-USE] Account is temporarily unavailable. The connection to remote server dies before authentication finishes. The reason for why that happens should be logged by the backend server. Sounds like it crashes. Check for ANY error messages in backend servers. From tss at iki.fi Mon Jan 9 20:43:09 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:43:09 +0200 Subject: [Dovecot] Newbie: LDA Isn't Logging In-Reply-To: <1326087271.17295.YahooMailClassic@web125406.mail.ne1.yahoo.com> References: <1326087271.17295.YahooMailClassic@web125406.mail.ne1.yahoo.com> Message-ID: <98056774-CE97-4A39-AFEF-3FB22330D430@iki.fi> On 9.1.2012, at 7.34, Michael Papet wrote: > LDA logging worked. So, it could be something about my system. But, running /usr/lib/dovecot/deliver still doesn't return a value on the command line as documented on the wiki. > > I've attached strace files from both the malfunctioning Debian packages machine and the built from sources VM. Unfortunately, I'm a new strace user, so I don't know what it all means. The last line in the malfunctioning deliver: exit_group(67) = ? So Dovecot exits with value 67, which means EX_NOUSER. Looks like everything is working correctly. Are you maybe running a wrapper script that hides the exit code? Or in some other way checking it wrong.. From tss at iki.fi Mon Jan 9 20:44:07 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:44:07 +0200 Subject: [Dovecot] uid / gid and systemusers In-Reply-To: <1809497881.1135529.1326037030206.JavaMail.ngmail@webmail10.arcor-online.net> References: <1809497881.1135529.1326037030206.JavaMail.ngmail@webmail10.arcor-online.net> Message-ID: On 8.1.2012, at 17.37, xamiw at arcor.de wrote: > Jan 8 16:18:28 test dovecot: User q is missing UID (see mail_uid setting) > Jan 8 16:18:28 test dovecot: imap-login: Internal login failure (auth failed, 1 attempts): user=, method=PLAIN, rip=AAA.BBB.CCC.DDD, lip=EEE.FFF.GGG.HHH TLS <--- edited by me .. > auth default { > mechanisms = plain > passdb shadow { > } > } You have passdb, but no userdb. > /etc/passwd: > ... 
> g:x:1000:1000:test1,,,:/home/g:/bin/bash > q:x:1001:1001:test2,,,:/home/q:/bin/bash To use /etc/passwd as userdb, you need to add userdb passwd {} From sven at svenhartge.de Mon Jan 9 20:47:09 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 19:47:09 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> Message-ID: <78fi1g2kv5v8@mids.svenhartge.de> Timo Sirainen wrote: > On 9.1.2012, at 20.25, Sven Hartge wrote: >> Timo Sirainen wrote: >>> On 8.1.2012, at 0.20, Sven Hartge wrote: >>>> Right now, I am pondering with using an additional server with just >>>> the shared folders on it and using NFS (or a cluster FS) to mount >>>> the shared folder filesystem to each backend storage server, so >>>> each user has potential access to a shared folders data. >> >>> With NFS you'll run into problems with caching >>> (http://wiki2.dovecot.org/NFS). Some cluster fs might work better. >> >> Can "mmap_disable = yes" and the other NFS options be set per >> namespace or only globally? > Currently only globally. Ah, too bad. Back to the drawing board then. Implementing my idea in my environment using a cluster filesystem would be a very big pain in the lower back, so I need a different idea to share the shared folders with all nodes but still keeping the user specific mailboxes fixed and local to a node. The imapc backed namespace you mentioned sounds very interesting, but this is not implemented right now for shared folders, is it? Gr??e, Sven. -- Sigmentation fault. Core dumped. From tss at iki.fi Mon Jan 9 20:59:03 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:59:03 +0200 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <4F07BDBB.3060204@gmail.com> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> <4F07332B.70708@gmail.com> <4F07BDBB.3060204@gmail.com> Message-ID: <491E7C43-2C87-4FD6-8AC0-E79F22E9749F@iki.fi> On 7.1.2012, at 5.36, Yubao Liu wrote: > In old version, "auth->passdbs" contains all passdbs, this revision > changes "auth->passdbs" to only contain non-master passdbs. > > I'm not sure which fix is better or even my proposal is correct or fully: > a) in src/auth/auth.c:auth_passdb_preinit(), insert master passdb to > auth->passdbs too, and remove duplicate code for masterdbs > in auth_init() and auth_deinit(). Not a good idea. The master passdb needs to be treated specially, otherwise you might accidentally allow regular users logging in as other users. > b) add similar code for masterdbs in auth_passdb_list_have_verify_plain(), > auth_passdb_list_have_lookup_credentials(), auth_passdb_list_have_set_credentials(). Kind of annoying code duplication, but .. I guess it can't really be helped. Added: http://hg.dovecot.org/dovecot-2.0/rev/bed15faedfd4 > Another related question is "pass" option in master passdb, if I set it to "yes", > the authentication fails: .. > My normal passdb is a PAM passdb, it doesn't support credential lookups, that's > reasonable, Right. > but I feel the comment for "pass" option is confusing: > > # Unless you're using PAM, you probably still want the destination user to > # be looked up from passdb that it really exists. pass=yes does that. > pass = yes > } > > According the comment, it's to check whether the real user exists, why not > to check userdb but another passdb? Well.. 
It is going to check the userdb eventually anyway, so it would still fail, just a bit later and maybe with a different error message.

> Even if it must check against a passdb,
> in this case it's obviously not necessary to look up credentials; it's enough
> to look up the user name only.

There's currently no passdb that supports a "does user exist?" lookup but doesn't support credentials lookup, so this is more of a theoretical issue. (I guess maybe PAM could be abused in some configurations to do the check, but that's rather ugly..)

From noeldude at gmail.com Mon Jan 9 21:04:13 2012
From: noeldude at gmail.com (Noel)
Date: Mon, 09 Jan 2012 13:04:13 -0600
Subject: [Dovecot] Postfix user map
In-Reply-To: <8A109A75-164C-41B4-A13B-19C3F1D01E12@iki.fi>
References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <8A109A75-164C-41B4-A13B-19C3F1D01E12@iki.fi>
Message-ID: <4F0B3A2D.4020301@gmail.com>

On 1/9/2012 12:09 PM, Timo Sirainen wrote:
> On 9.1.2012, at 17.32, Noel wrote:
>> Cool.
>> Does this just check for valid user existence, or can it also check
>> for over-quota (and respond 500 overquota I suppose)?
>
> Hmm. That looked potentially useful, but Postfix doesn't seem to support it, at least not that way, since the message to the SMTP client is the same regardless of what I add after the 500 reply. Also that would have required me to move the code somewhere else from the auth process, since auth doesn't know the quota usage. And internally Dovecot would still have had to do the auth lookup separately, so there's really no benefit in doing this vs. having Postfix do two lookups.

How about a separate TCP lookup for quota status? This would be really useful for sites that don't have that information in a shared SQL table (or have no SQL in Postfix at all), and it would get rid of the kludgy policy services used to check quota status.

This would be used with a check_recipient_access table; the response would be something like:

200 DUNNO (quota OK)
200 REJECT user over quota
500 user not found

-- Noel Jones

From david at paperclipsystems.com Mon Jan 9 21:15:36 2012
From: david at paperclipsystems.com (David Egbert)
Date: Mon, 09 Jan 2012 12:15:36 -0700
Subject: [Dovecot] failed: Too many levels of symbolic links
In-Reply-To: <4A0E9695-E78A-487F-AE53-888D27981EF1@iki.fi>
References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> <4F076A70.3090905@paperclipsystems.com> <4F077158.4000500@paperclipsystems.com> <4A0E9695-E78A-487F-AE53-888D27981EF1@iki.fi>
Message-ID: <4F0B3CD8.8050501@paperclipsystems.com>

On 1/6/2012 3:30 PM, Timo Sirainen wrote:
> On 7.1.2012, at 0.10, David Egbert wrote:
>>> Anyway, readdir() is failing with ELOOP. Does it always fail with "Too many levels of symbolic links" or is it sometimes different? This sounds like a bug in the Linux NFS client code. You can reproduce this always with this one user's Maildir? Can you do "ls" in the directory?
>>>
>> Sorry about the X's... it is a client directory. We support many domains and their privacy is paramount. You are correct, it is in the /cur directory. I can ls all of the directories without problems. This user has 10+GB in his mailbox spread across 352 subscribed folders. As for the logs, it is always the same directory, always the same error.
> Try the attached test program. Run it as: ./readdir /path/to/Maildir/cur
>
> Does it also give a non-zero error?
>
I ran it, and it returned:

readdir() errno = 0

The user backed up their data and then removed the folder from the server. The error is now gone, so I am assuming there was some corrupt file in the directory.
Thanks for all of the help.

David Egbert
Paperclip Systems, LLC

--- This message, its contents, and attachments are confidential and are only authorized for the intended recipient. Disclosure, re-distribution, or use of said information is strictly prohibited, and may be excluded from disclosure by applicable law. If you are not the intended recipient, or their intermediary, please notify the sender and delete this message.

From tss at iki.fi Mon Jan 9 21:16:09 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 21:16:09 +0200
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <78fi1g2kv5v8@mids.svenhartge.de>
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de>
Message-ID: 

On 9.1.2012, at 20.47, Sven Hartge wrote:

>>> Can "mmap_disable = yes" and the other NFS options be set per
>>> namespace or only globally?
>
>> Currently only globally.
>
> Ah, too bad.
>
> Back to the drawing board then.

mmap_disable=yes works pretty well even if you're only using it for local filesystems. It just spends some more memory when reading dovecot.index.cache files.

> Implementing my idea in my environment using a cluster filesystem would
> be a very big pain in the lower back, so I need a different idea to
> share the shared folders with all nodes while still keeping the
> user-specific mailboxes fixed and local to a node.
>
> The imapc-backed namespace you mentioned sounds very interesting, but
> this is not implemented right now for shared folders, is it?

Well.. If you don't need users sharing mailboxes with each other, then you can probably already do this with Dovecot v2.1:

1. Configure the user Dovecots:

namespace {
  type = public
  prefix = Shared/
  location = imapc:~/imapc-shared
}
imapc_host = sharedmails.example.com
imapc_password = master-user-password

# With latest v2.1 hg you can do:
imapc_user = shareduser
imapc_master_user = %u

# With v2.1.rc2 and older you need to do:
imapc_user = shareduser*%u
auth_master_user_separator = *

2. Configure the shared Dovecot:

You need a master passdb that allows all existing users to log in as the "shareduser" user. You can probably simply do (not tested):

passdb {
  type = static
  args = user=shareduser
  master = yes
}

The "shareduser" owns all of the actual shared mailboxes and has the necessary ACLs set up for individual users. ACLs use the master username (= the real username in this case) to do the ACL checks.

From tss at iki.fi Mon Jan 9 21:19:34 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 21:19:34 +0200
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: 
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de>
Message-ID: <08C9B341-1292-44F4-AB6B-D6D804ED60BE@iki.fi>

On 9.1.2012, at 21.16, Timo Sirainen wrote:

> passdb {
> type = static
> args = user=shareduser

Of course you should also require a password:

args = user=shareduser pass=master-user-password

From tss at iki.fi Mon Jan 9 21:31:00 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 21:31:00 +0200
Subject: [Dovecot] change initial permissions on creation of mail folder
In-Reply-To: <4F072342.1090901@ngong.de>
References: <4F072342.1090901@ngong.de>
Message-ID: <061F5BDF-A47F-40F5-8B86-E42C585B9EBB@iki.fi>

On 6.1.2012, at 18.37, mailinglist wrote:

> Installed dovecot from a Debian .deb file. Creating a new account for system users sets permissions to user-only.
> Where to change the initial permissions on creation of the mail folder and other subdirectories?

Permissions for folders are taken from the mail root directory. http://wiki2.dovecot.org/SharedMailboxes/Permissions has details. Permissions for a newly created mail root directory are always 0700. If you want something else, create the mail directory with the wanted permissions at the same time as you create the user.

> Installed dovecot using "apt-get install dovecot-imapd dovecot-pop3d". Any time I create a new account in my mail client for a system user, Dovecot tries to create ~/mail/.imap/INBOX. The permissions for mail and .imap are set to 0700. With these permissions INBOX cannot be created, leading to an error message in the log files. When I manually change the permissions to 0770, INBOX is created.

I don't really understand why INBOX couldn't be created. 0700 should be enough for most installations. Unless you have a very good reason, you shouldn't use 0770 for mails (sounds more like you've a weirdly configured mail setup).

From tss at iki.fi Mon Jan 9 21:31:51 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 21:31:51 +0200
Subject: [Dovecot] FTS-Solr plugin
In-Reply-To: 
References: 
Message-ID: <51D1C049-8E87-4AB7-9A20-6BDB0748A569@iki.fi>

On 6.1.2012, at 19.35, Daniel L. Miller wrote:

> Solr plugin appears to break when mailbox names have an ampersand in the name. The messages appear to indicate '&' gets translated to '&--'.

What message? With fts=solr (not solr_old) the mailbox name isn't used in Solr at all. It uses mailbox GUIDs.

From sven at svenhartge.de Mon Jan 9 21:31:58 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 20:31:58 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de>
Message-ID: <98fi3r4kv5v8@mids.svenhartge.de>

Timo Sirainen wrote:
> On 9.1.2012, at 20.47, Sven Hartge wrote:

>>>> Can "mmap_disable = yes" and the other NFS options be set per
>>>> namespace or only globally?
>>
>>> Currently only globally.
>>
>> Ah, too bad.
>>
>> Back to the drawing board then.

> mmap_disable=yes works pretty well even if you're only using it for local filesystems. It just spends some more memory when reading dovecot.index.cache files.

>> Implementing my idea in my environment using a cluster filesystem would
>> be a very big pain in the lower back, so I need a different idea to
>> share the shared folders with all nodes while still keeping the
>> user-specific mailboxes fixed and local to a node.
>>
>> The imapc-backed namespace you mentioned sounds very interesting, but
>> this is not implemented right now for shared folders, is it?

> Well.. If you don't need users sharing mailboxes with each other,

God heavens, no! If I allowed users to share their mailboxes with other users, hell would break loose. Nononono, just shared folders set up by the admin team, statically assigned to groups of users (for example, the central postmaster@ mail alias ends in such a shared folder).

> then you can probably already do this with Dovecot v2.1:

> 1. Configure the user Dovecots:
> namespace {
> type = public
> prefix = Shared/
> location = imapc:~/imapc-shared
> }
> imapc_host = sharedmails.example.com
> imapc_password = master-user-password
> # With latest v2.1 hg you can do:
> imapc_user = shareduser
> imapc_master_user = %u
> # With v2.1.rc2 and older you need to do:
> imapc_user = shareduser*%u
> auth_master_user_separator = *

So, in my case, this would look like this:

,----
| # User's private mail location
| mail_location = mdbox:~/mdbox
|
| # When creating any namespaces, you must also have a private namespace:
| namespace {
| type = private
| separator = .
| prefix = INBOX.
| #location defaults to mail_location.
| inbox = yes
| }
|
| namespace {
| type = public
| separator = .
| prefix = #shared.
| location = imapc:~/imapc-shared
| subscriptions = no
| }
|
| imapc_host = m-st-sh-01.foo.bar
| imapc_password = master-user-password
| imapc_user = shareduser
| imapc_master_user = %u
`----

Where do I add "list = children"? In the user Dovecots' shared namespace or in the shared Dovecot's private namespace?

> 2. Configure the shared Dovecot:

> You need a master passdb that allows all existing users to log in as the "shareduser" user. You can probably simply do (not tested):

> passdb {
> type = static
> args = user=shareduser pass=master-user-password
> master = yes
> }

> The "shareduser" owns all of the actual shared mailboxes and has the
> necessary ACLs set up for individual users. ACLs use the master
> username (= the real username in this case) to do the ACL checks.

So this is kind of "backwards", since normally the imapc_master_user would be the static user and imapc_user would be dynamic, right?

All in all, a _very_ interesting configuration.

Grüße,
Sven.

-- 
Sigmentation fault. Core dumped.

From tss at iki.fi Mon Jan 9 21:38:59 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 21:38:59 +0200
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <98fi3r4kv5v8@mids.svenhartge.de>
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de>
Message-ID: <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi>

On 9.1.2012, at 21.31, Sven Hartge wrote:

> ,----
> | # User's private mail location
> | mail_location = mdbox:~/mdbox
> |
> | # When creating any namespaces, you must also have a private namespace:
> | namespace {
> | type = private
> | separator = .
> | prefix = INBOX.
> | #location defaults to mail_location.
> | inbox = yes
> | }
> |
> | namespace {
> | type = public
> | separator = .
> | prefix = #shared.

I'd probably just use "Shared." as the prefix, since it is visible to users. Anyway, if you want to use # you need to put the value in "quotes" or it's treated as a comment.

> | location = imapc:~/imapc-shared
> | subscriptions = no

list = children here

> | }
> |
> | imapc_host = m-st-sh-01.foo.bar
> | imapc_password = master-user-password
> | imapc_user = shareduser
> | imapc_master_user = %u
> `----
>
> Where do I add "list = children"? In the user Dovecots' shared namespace
> or in the shared Dovecot's private namespace?

The shared Dovecot always has mailboxes (at least INBOX), so list=children would equal list=yes.

>
>> 2. Configure the shared Dovecot:
>
>> You need a master passdb that allows all existing users to log in as the "shareduser" user. You can probably simply do (not tested):
>
>> passdb {
>> type = static
>> args = user=shareduser pass=master-user-password
>> master = yes
>> }
>
>> The "shareduser" owns all of the actual shared mailboxes and has the
>> necessary ACLs set up for individual users. ACLs use the master
>> username (= the real username in this case) to do the ACL checks.
>
> So this is kind of "backwards", since normally the imapc_master_user would be
> the static user and imapc_user would be dynamic, right?

Right. Also in this Dovecot you want a regular namespace without prefix:

namespace inbox {
  separator = /
  list = yes
  inbox = yes
}

You might as well use the proper separator here in case you ever change it for users.

From sven at svenhartge.de Mon Jan 9 21:45:12 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 20:45:12 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi>
Message-ID: 

Timo Sirainen wrote:
> On 9.1.2012, at 21.31, Sven Hartge wrote:

>> ,----
>> | # User's private mail location
>> | mail_location = mdbox:~/mdbox
>> |
>> | # When creating any namespaces, you must also have a private namespace:
>> | namespace {
>> | type = private
>> | separator = .
>> | prefix = INBOX.
>> | #location defaults to mail_location.
>> | inbox = yes
>> | }
>> |
>> | namespace {
>> | type = public
>> | separator = .
>> | prefix = #shared.

> I'd probably just use "Shared." as the prefix, since it is visible to
> users. Anyway, if you want to use # you need to put the value in
> "quotes" or it's treated as a comment.

I have to use "#shared.", because this is what Courier uses. Unfortunately I have to stick to the prefixes and separators currently in use.

>> | location = imapc:~/imapc-shared

What is the syntax of this location? What does "imapc-shared" do in this case?

>> | subscriptions = no

> list = children here

>> | }
>> |
>> | imapc_host = m-st-sh-01.foo.bar
>> | imapc_password = master-user-password
>> | imapc_user = shareduser
>> | imapc_master_user = %u
>> `----
>>
>> Where do I add "list = children"? In the user Dovecots' shared namespace
>> or in the shared Dovecot's private namespace?

> The shared Dovecot always has mailboxes (at least INBOX), so list=children would equal list=yes.

OK, seems logical.

>>
>>> 2. Configure the shared Dovecot:
>>
>>> You need a master passdb that allows all existing users to log in as the "shareduser" user. You can probably simply do (not tested):
>>
>>> passdb {
>>> type = static
>>> args = user=shareduser pass=master-user-password
>>> master = yes
>>> }
>>
>>> The "shareduser" owns all of the actual shared mailboxes and has the
>>> necessary ACLs set up for individual users. ACLs use the master
>>> username (= the real username in this case) to do the ACL checks.
>>
>> So this is kind of "backwards", since normally the imapc_master_user would be
>> the static user and imapc_user would be dynamic, right?

> Right. Also in this Dovecot you want a regular namespace without prefix:

> namespace inbox {
> separator = /
> list = yes
> inbox = yes
> }

> You might as well use the proper separator here in case you ever change it for users.

Is this separator converted to '.' on the frontend? The department supporting our users will give me hell if anything visible changes in the layout of the folders for the end user.

Grüße,
Sven.

-- 
Sigmentation fault. Core dumped.
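For readers following the thread, here is the whole two-sided setup in one place. This is only a sketch assembled from Timo's suggestions above, which he explicitly marked as untested; the hostname, the master password and the index path are placeholders, and the passdb is written with "driver =", the spelling that doveconf prints for v2.x configs elsewhere in this archive.

User-backend dovecot.conf (sketch):

  namespace {
    type = public
    separator = .
    prefix = "#shared."              # '#' must be quoted or it starts a comment
    location = imapc:~/imapc-shared  # local directory for the imapc index files
    list = children
    subscriptions = no
  }
  imapc_host = sharedmails.example.com   # placeholder
  imapc_password = master-user-password  # placeholder
  imapc_user = shareduser                # the one owner of all shared mailboxes
  imapc_master_user = %u                 # real username, used for the ACL checks

Shared-folder dovecot.conf (sketch):

  namespace inbox {
    separator = /
    list = yes
    inbox = yes
  }
  passdb {
    driver = static
    args = user=shareduser pass=master-user-password
    master = yes
  }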
From tss at iki.fi Mon Jan 9 22:05:48 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 22:05:48 +0200
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: 
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi>
Message-ID: 

On 9.1.2012, at 21.45, Sven Hartge wrote:

>>> | location = imapc:~/imapc-shared
>
> What is the syntax of this location? What does "imapc-shared" do in this
> case?

It's the directory for the index files. The backend IMAP server is used as rather dumb storage, so if for example you do a FETCH 1:* BODYSTRUCTURE command, all of the message bodies are downloaded to the user's Dovecot server, which parses them. But with indexes this is done only once (same as with any other mailbox format). If you want SEARCH BODY to be fast, you'd also need to use some kind of full-text search indexes.

If your users share the same UID (or 0666 mode would probably work too), you could share the index files rather than make them per-user. Then you could use imapc:/shared/imapc or something.

BTW. All message flags are shared between users. If you want per-user flags you'd need to modify the code.

From sven at svenhartge.de Mon Jan 9 22:13:23 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 21:13:23 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi>
Message-ID: 

Timo Sirainen wrote:
> On 9.1.2012, at 21.45, Sven Hartge wrote:

>>>>> | location = imapc:~/imapc-shared
>>>
>>> What is the syntax of this location? What does "imapc-shared" do in this
>>> case?

> It's the directory for the index files. The backend IMAP server is used as
> rather dumb storage, so if for example you do a FETCH 1:*
> BODYSTRUCTURE command, all of the message bodies are downloaded to the
> user's Dovecot server, which parses them. But with indexes this is done
> only once (same as with any other mailbox format). If you want SEARCH
> BODY to be fast, you'd also need to use some kind of full-text search
> indexes.

The bodies are downloaded but not stored, right? Just the index files are stored locally.

> If your users share the same UID (or 0666 mode would probably work
> too), you could share the index files rather than make them per-user.
> Then you could use imapc:/shared/imapc or something.

Hmm. Yes, this is a fully virtual setup; every user's mail is owned by the virtmail user. Does this sharing of index files have any security or privacy issues? Not every user sees every shared folder, so an information leak has to be avoided at all costs.

> BTW. All message flags are shared between users. If you want per-user
> flags you'd need to modify the code.

No, I need shared message flags, as this is the reason we introduced shared folders, so one user can see if a mail has already been read or replied to.

>>> Right.
Also in this Dovecot you want a regular namespace without prefix:
>>
>>> namespace inbox {
>>> separator = /
>>> list = yes
>>> inbox = yes
>>> }
>>
>>> You might as well use the proper separator here in case you ever change it for users.
>>
>> Is this separator converted to '.' on the frontend?

> Yes, as long as you explicitly specify the separator setting to the
> public namespace.

OK, good to know, one for my documentation with an '!' behind it.

Grüße,
Sven

-- 
Sigmentation fault. Core dumped.

From tss at iki.fi Mon Jan 9 22:20:44 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 22:20:44 +0200
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: 
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi>
Message-ID: <8002CDFC-88BB-47D2-96D2-8F7EFB26DD86@iki.fi>

On 9.1.2012, at 22.13, Sven Hartge wrote:

> Timo Sirainen wrote:
>> On 9.1.2012, at 21.45, Sven Hartge wrote:
>
>>>>> | location = imapc:~/imapc-shared
>>>
>>> What is the syntax of this location? What does "imapc-shared" do in this
>>> case?
>
>> It's the directory for the index files. The backend IMAP server is used as
>> rather dumb storage, so if for example you do a FETCH 1:*
>> BODYSTRUCTURE command, all of the message bodies are downloaded to the
>> user's Dovecot server, which parses them. But with indexes this is done
>> only once (same as with any other mailbox format). If you want SEARCH
>> BODY to be fast, you'd also need to use some kind of full-text search
>> indexes.
>
> The bodies are downloaded but not stored, right? Just the index files
> are stored locally.

Right.

>> If your users share the same UID (or 0666 mode would probably work
>> too), you could share the index files rather than make them per-user.
>> Then you could use imapc:/shared/imapc or something.
>
> Hmm. Yes, this is a fully virtual setup; every user's mail is owned by
> the virtmail user. Does this sharing of index files have any security or
> privacy issues?

There are no privacy issues, at least currently, since there is no per-user data. If you had wanted per-user flags this wouldn't have worked.

> Not every user sees every shared folder, so an information leak has to
> be avoided at all costs.

Oh, that reminds me, it doesn't actually work :) Because Dovecot deletes those directories it doesn't see on the remote server. You might be able to use imapc:~/imapc:INDEX=/shared/imapc though. The nice thing about shared imapc indexes is that each user doesn't have to re-index the message.

From bind at enas.net Mon Jan 9 22:23:18 2012
From: bind at enas.net (Urban Loesch)
Date: Mon, 09 Jan 2012 21:23:18 +0100
Subject: [Dovecot] Proxy login failures
In-Reply-To: <69796D8B-5CFE-48A2-A092-B1A32331BC1F@iki.fi>
References: <4F0ABF0A.1080404@enas.net> <69796D8B-5CFE-48A2-A092-B1A32331BC1F@iki.fi>
Message-ID: <4F0B4CB6.2080703@enas.net>

On 09.01.2012 19:40, Timo Sirainen wrote:
> On 9.1.2012, at 12.18, Urban Loesch wrote:
>
>> I'm using two dovecot pop3/imap proxies in front of our dovecot servers.
>> Since some days I see many of the following errors in the logs of the two proxy-servers:
>>
>> dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0): user=, method=PLAIN, rip=remote-ip, lip=localip
>>
>> When this happens the client gets the following error from the proxy:
>> -ERR [IN-USE] Account is temporarily unavailable.
> The connection to the remote server dies before authentication finishes. The reason why that happens should be logged by the backend server. Sounds like it crashes. Check for ANY error messages in the backend servers.
>

I still did that, but I found nothing in the logs.

The only thing I could think of is that all 7 backend servers are virtual servers (using technology from http://linux-vserver.org) and they all are running on the same physical machine (DELL PER610 with 32GB RAM, RAID 10 SAS - load between 0.5 and 2.0, iowait about 1-5%). So they are sharing the same kernel.

Also all servers are connected to a mysql server, running on a different machine in the same subnet. Could it be that either the kernel needs some tcp tuning, or perhaps that the answers from the remote mysql server could be too slow in some cases?

Now I switched 2 of the 7 backend servers to the backup mysql slave server. Should be no problem because dovecot is only reading from it. If it helps I will see tomorrow, and I'll let you know.

thanks
Urban

From sven at svenhartge.de Mon Jan 9 22:24:09 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 21:24:09 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi> <8002CDFC-88BB-47D2-96D2-8F7EFB26DD86@iki.fi>
Message-ID: 

Timo Sirainen wrote:
> On 9.1.2012, at 22.13, Sven Hartge wrote:
>> Timo Sirainen wrote:
>>> On 9.1.2012, at 21.45, Sven Hartge wrote:

>>>>>> | location = imapc:~/imapc-shared
>>>>
>>>> What is the syntax of this location? What does "imapc-shared" do in
>>>> this case?
>>
>>> It's the directory for the index files. The backend IMAP server is used
>>> as rather dumb storage, so if for example you do a FETCH 1:*
>>> BODYSTRUCTURE command, all of the message bodies are downloaded to
>>> the user's Dovecot server, which parses them. But with indexes this
>>> is done only once (same as with any other mailbox format). If you
>>> want SEARCH BODY to be fast, you'd also need to use some kind of
>>> full-text search indexes.
>>
>>> If your users share the same UID (or 0666 mode would probably work
>>> too), you could share the index files rather than make them
>>> per-user. Then you could use imapc:/shared/imapc or something.

>> Hmm. Yes, this is a fully virtual setup; every user's mail is owned by
>> the virtmail user. Does this sharing of index files have any security
>> or privacy issues?

> There are no privacy issues, at least currently, since there is no
> per-user data. If you had wanted per-user flags this wouldn't have
> worked.

OK. I think I will go with the per-user index files for now and pay the extra in bandwidth and processing power needed. All in all, of 10,000 users, only about 100 use shared folders.

Grüße,
Sven.

-- 
Sigmentation fault. Core dumped.
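For reference, the difference between the two index strategies weighed in this thread comes down to the location string of the public namespace. A minimal sketch of both variants, assuming the paths are placeholders; the shared variant is the one Timo suggested "might" work (untested) and requires a common UID or world-writable index files, as he notes above:

  # Per-user indexes (what Sven chose): each user's Dovecot downloads and
  # indexes the shared mails once, under that user's own home directory.
  location = imapc:~/imapc-shared

  # Shared indexes (Timo's suggestion): keep the imapc root per-user so
  # mailbox directory handling still works, but point only INDEX at a
  # common path so messages are indexed once for everybody.
  location = imapc:~/imapc:INDEX=/shared/imapc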
From xamiw at arcor.de Tue Jan 10 00:30:21 2012
From: xamiw at arcor.de (xamiw at arcor.de)
Date: Mon, 9 Jan 2012 23:30:21 +0100 (CET)
Subject: [Dovecot] uid / gid and systemusers
In-Reply-To: 
References: <1809497881.1135529.1326037030206.JavaMail.ngmail@webmail10.arcor-online.net>
Message-ID: <778892216.47622.1326148221293.JavaMail.ngmail@webmail16.arcor-online.net>

That's it, thanks a lot.

----- Original Message ----
From: Timo Sirainen
To: xamiw at arcor.de
Date: 09.01.2012 19:44
Subject: Re: [Dovecot] uid / gid and systemusers

> On 8.1.2012, at 17.37, xamiw at arcor.de wrote:
>
> > Jan 8 16:18:28 test dovecot: User q is missing UID (see mail_uid
> setting)
> > Jan 8 16:18:28 test dovecot: imap-login: Internal login failure (auth
> failed, 1 attempts): user=, method=PLAIN, rip=AAA.BBB.CCC.DDD,
> lip=EEE.FFF.GGG.HHH TLS <--- edited by me
> ..
> > auth default {
> > mechanisms = plain
> > passdb shadow {
> > }
> > }
>
> You have a passdb, but no userdb.
>
> > /etc/passwd:
> > ...
> > g:x:1000:1000:test1,,,:/home/g:/bin/bash
> > q:x:1001:1001:test2,,,:/home/q:/bin/bash
>
> To use /etc/passwd as the userdb, you need to add userdb passwd {}
>
>

From tss at iki.fi Tue Jan 10 00:39:00 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 10 Jan 2012 00:39:00 +0200
Subject: [Dovecot] Proxy login failures
In-Reply-To: <4F0B4CB6.2080703@enas.net>
References: <4F0ABF0A.1080404@enas.net> <69796D8B-5CFE-48A2-A092-B1A32331BC1F@iki.fi> <4F0B4CB6.2080703@enas.net>
Message-ID: <27646CE2-F912-4D61-9016-F6BBE0DA9C56@iki.fi>

On 9.1.2012, at 22.23, Urban Loesch wrote:

>>> I'm using two dovecot pop3/imap proxies in front of our dovecot servers.
>>> Since some days I see many of the following errors in the logs of the two proxy-servers:
>>>
>>> dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0): user=, method=PLAIN, rip=remote-ip, lip=localip
>>>
>>> When this happens the client gets the following error from the proxy:
>>> -ERR [IN-USE] Account is temporarily unavailable.
>> The connection to the remote server dies before authentication finishes. The reason why that happens should be logged by the backend server. Sounds like it crashes. Check for ANY error messages in the backend servers.
>>
>
> I still did that, but I found nothing in the logs.

It's difficult to guess then. At the very least there should be an "Info" message about a new connection at the time when this failure happened. If there's not even that, then maybe the problem is network related.

> The only thing I could think of is that all 7 backend servers are virtual servers (using technology from http://linux-vserver.org) and they all are running on the same physical machine (DELL PER610 with 32GB RAM, RAID 10 SAS - load between 0.5 and 2.0, iowait about 1-5%). So they are sharing the same kernel.

For testing, or what's the point in doing that? :) But the load is low enough that I doubt it has anything to do with it.

> Also all servers are connected to a mysql server, running on a different machine in the same subnet. Could it be that either the kernel needs some tcp tuning, or perhaps that the answers from the remote mysql server
> could be too slow in some cases?

A MySQL server problem would show up with a different error message. TCP tuning is also unlikely to help, since the connection probably dies within a second. Actually it would be a good idea to log the duration.
This patch adds it:
http://hg.dovecot.org/dovecot-2.0/raw-rev/8438f66433a6

These are the only explanations that I can think of for the error:

* Remote Dovecot crashes / kills the connection (it would log an error message)
* Remote Dovecot server is too busy handling existing connections (it would log a warning)
* Network trouble, something in the middle disconnecting the connection
* Source/destination OS trouble, disconnecting the connection
* Some hang that results in eventual disconnection. The duration patch would show if this is the case.

From dmiller at amfes.com Tue Jan 10 02:21:32 2012
From: dmiller at amfes.com (Daniel L. Miller)
Date: Mon, 09 Jan 2012 16:21:32 -0800
Subject: [Dovecot] Solr plugin
In-Reply-To: 
References: <4F04EDC8.6060809@amfes.com> <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com> <4F0A2980.7050003@amfes.com> <4F0A2B4D.2040106@amfes.com>
Message-ID: 

On 1/9/2012 7:00 AM, Timo Sirainen wrote:
> On 9.1.2012, at 1.48, Daniel L. Miller wrote:
>
>> On 1/8/2012 3:40 PM, Daniel L. Miller wrote:
>>> On 1/6/2012 12:32 PM, Daniel L. Miller wrote:
>>>> On 1/6/2012 9:36 AM, Timo Sirainen wrote:
>>>>> On 6.1.2012, at 19.30, Daniel L. Miller wrote:
>>>>>
>>> Jan 8 15:40:09 bubba dovecot: imap(user1 at domain.com): Error: fts_solr: Lookup failed: 400 undefined field CC
>>> Jan 8 15:40:09 bubba dovecot: imap: Error:
>>>
>>
>> Looking at the Solr output - looks like the CC parameter is being capitalized while all the other fieldnames are lowercase.
> Did you look at the input? Looking at the code, it should be lowercased. Maybe Solr just uppercases it for some reason. Are you using a Solr schema that has a "cc" field?
>

I see the following in a running Solr instance. This is generated from a Windoze Thunderbird 8.0 client:

Jan 9, 2012 4:20:13 PM org.apache.solr.core.SolrCore execute
INFO: [] webapp=/solr path=/select params={fl=uid,score&sort=uid+asc&fq=%2Bbox:c1af150abfc9df4d7f7a00003bc41c5f+%2Buser:"dmiller at amfes.com"&q=from:"test"+OR+to:"test"+OR+CC:"test"+OR+subject:"test"+OR+body:"test"&rows=9038} status=400 QTime=4

That's where I see the uppercased CC.
-- 
Daniel

From tss at iki.fi Tue Jan 10 02:28:46 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 10 Jan 2012 02:28:46 +0200
Subject: [Dovecot] Solr plugin
In-Reply-To: 
References: <4F04EDC8.6060809@amfes.com> <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com> <4F0A2980.7050003@amfes.com> <4F0A2B4D.2040106@amfes.com>
Message-ID: <78E1EDA2-62A8-4CD3-BA82-6239FDC975EB@iki.fi>

On 10.1.2012, at 2.21, Daniel L. Miller wrote:

>> Did you look at the input? Looking at the code, it should be lowercased. Maybe Solr just uppercases it for some reason. Are you using a Solr schema that has a "cc" field?
>
> I see the following in a running Solr instance. This is generated from a Windoze Thunderbird 8.0 client:
>
> Jan 9, 2012 4:20:13 PM org.apache.solr.core.SolrCore execute
> INFO: [] webapp=/solr path=/select params={fl=uid,score&sort=uid+asc&fq=%2Bbox:c1af150abfc9df4d7f7a00003bc41c5f+%2Buser:"dmiller at amfes.com"&q=from:"test"+OR+to:"test"+OR+CC:"test"+OR+subject:"test"+OR+body:"test"&rows=9038} status=400 QTime=4

Oh, you were talking about the searching part, not indexing. Yeah, there it wasn't necessarily lowercased.
Fixed: http://hg.dovecot.org/dovecot-2.1/rev/075591a4b6a8

From stan at hardwarefreak.com Tue Jan 10 04:19:22 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Mon, 09 Jan 2012 20:19:22 -0600
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <08fhdkhkv5v8@mids.svenhartge.de>
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0ADD87.5080103@hardwarefreak.com> <08fhdkhkv5v8@mids.svenhartge.de>
Message-ID: <4F0BA02A.3050405@hardwarefreak.com>

On 1/9/2012 7:48 AM, Sven Hartge wrote:

> It seems my initial idea was not so bad after all ;)

Yeah, but you didn't know how "not so bad" it really was until you had me analyze it, flesh it out, and confirm it. ;)

> Now I "just" need to
> build a little test setup, put some dummy users on it and see if
> anything bad happens while accessing the shared folders and how the
> reaction of the system is, should the shared folder server be down.

It won't be down. Because instead of using NFS you're going to use GFS2 for the shared folder LUN so each user accesses the shared folders locally just as they do their mailbox. Pat yourself on the back Sven, you just eliminated a SPOF. ;)

>> How many total cores per VMware node (all sockets)?
>
> 8

Fairly beefy. Dual socket quad core Xeons I'd guess.

> Here are the memory statistics at 14:30:
>
>              total       used       free     shared    buffers     cached
> Mem:         12046      11199        847          0         88       7926
> -/+ buffers/cache:       3185       8861
> Swap:         5718         10       5707

That doesn't look too bad. How many IMAP user connections at that time? Is that a high average or low for that day? The RAM numbers in isolation only paint a partial picture...

> The SAN has plenty of space. Over 70TiB at this time, with another 70TiB
> having just arrived and waiting to be connected.

140TB of 15k storage. Wow, you're so underprivileged. ;)

> The iSCSI storage nodes (HP P4500) use 600GB SAS6 at 15k rpm with 12
> disks per node, configured in 2 RAID5 sets with 6 disks each.
>
> But this is internal to each storage node, which is kind of a black box
> and has to be treated as such.

I cringe every time I hear 'black box'...

> The HP P4500 is a bit unique, since it does not consist of a head node
> with storage arrays connected to it, but of individual storage nodes
> forming a self-balancing iSCSI cluster. (The nodes consist of DL320s G2.)

The 'black box' is Lefthand Networks' SAN/iQ software stack. I wasn't that impressed with it when I read about it 8 or so years ago. IIRC, load balancing across cluster nodes is accomplished by resending host packets from a receiving node to another node after performing special-sauce calculations regarding cluster load. Hence the need, apparently, for a full-power, hot-running, multi-core x86 CPU instead of an embedded low-power/wattage type CPU such as MIPS, PPC, i960-descended IOP3xx, or even the Atom if they must stick with x86 binaries. If this choice was merely due to economy of scale of their server boards, they could have gone with a single-socket board instead of the dual, which would have saved money. So this choice of a dual-socket Xeon board wasn't strictly based on cost or ease of manufacture.

Many/most purpose-built SAN arrays on the market don't use full-power x86 chips, but embedded RISC chips, to cut cost, power draw, and heat generation.
These RISC chips are typically in-order designs, don't have branch prediction or register-renaming logic circuits, and they have tiny caches. This is because block-moving code handles streams of data and doesn't typically branch nor have many conditionals. For streaming apps, data caches simply get in the way, although an instruction cache is beneficial. HP's choice of full-power CPUs that have such features suggests branching conditional code is used. Which makes sense when running algorithms that attempt to calculate the least busy node.

Thus, this 'least busy node' calculation and packet shipping adds non-trivial latency to host SCSI IO command completion, compared to traditional FC/iSCSI SAN arrays, or DAS, and thus has implications for high-IOPS workloads and especially those making heavy use of FSYNC, such as SMTP and IMAP servers. FSYNC performance may not be an issue if the controller instantly acks FSYNC before data hits the platter, but then you may run into bigger problems as you have no guarantee data hit the disk. Or, you may not run into perceptible performance issues at all given the number of P4500s you have and the proportionally light IO load of your 10K mail users. Sheer horsepower alone may prove sufficient. Just in case, it may prove beneficial to fire up ImapTest or some other synthetic mail workload generator to see if array response times are acceptable under heavy mail loads.

> So far, I had no performance or other problems with this setup and it
> scales quite nicely, as you buy as you grow.

I'm glad the Lefthand units are working well for you so far. Are you hitting the arrays with any high random-IOPS workloads as of yet?

> And again, price was also a factor. Deploying a FC-SAN would have cost
> us more than three times what the deployment of an iSCSI
> solution did, because the latter is "just" ethernet, while the former
> would have needed a lot more totally new components.

I guess that depends on the features you need, such as PIT backups, remote replication, etc. I expanded a small FC SAN about 5 years ago for the same cost as an iSCSI array, simply due to the fact that the least expensive _quality_ unit with a good reputation happened to have both iSCSI and FC ports included. It was a 1U 8x500GB Nexsan Satablade, their smallest unit (since discontinued). Ran about $8K USD IIRC. Nexsan continues to offer excellent products. For anyone interested in high-density, high-performance FC+iSCSI SAN arrays at a midrange price, add Nexsan to your vendor research list: http://www.nexsan.com

> No, at that time (2005/2006) nobody thought of a SAN. That is a fairly
> "new" idea here, first implemented for the VMware cluster in 2008.

You must have slower adoption on that side of the pond. As I just mentioned, I was expanding an already existing small FC SAN in 2006 that had been in place since 2004 IIRC. And this was at a small private 6-12 school with enrollment of about 500. iSCSI SANs took off like a rocket in the States around 06/07, in tandem with VMware ESX going viral here.

> More space. The IMAP usage became more prominent, which caused a steep
> rise in space needed on the mail storage server. But 74GiB SCA drives
> were expensive and 130GiB SCA drives were not available at that time.

With 144TB of HP Lefthand 15K SAS drives it appears you're no longer having trouble funding storage purchases. ;)

>>> And this is why I kind of hold this upgrade back until dovecot 2.1 is
>>> released, as it has some optimizations here.
>
>> Sounds like it's going to be a bit more than an 'upgrade'. ;)
>
> Well, yes. It is more a re-implementation than an upgrade.

It actually sounds like fun. To me anyway. ;) I love this stuff.

> Central IT here these days only uses x86-based systems. There were some Sun
> SPARC systems, but both have been decommissioned. New SPARC hardware is
> just too expensive for our scale. And if you want to use virtualization,
> you can either use only SPARC systems and partition them or use x86
> based systems. And then there is the need to virtualize Windows, so x86
> is the only option.

Definitely a trend for a while now.

> Most bigger universities in Germany make nearly exclusive use of SPARC
> systems, but they had a central IT with big irons (IBM, HP, etc.) since
> back in the 1960's, so naturally they continue on that path.

Siemens/Fujitsu machines or SUN machines? I've been under the impression that Fujitsu sold more SPARC boxen in Europe, or at least Germany, than SUN did, due to the Siemens partnership. I could be wrong here.

-- 
Stan

From robert at schetterer.org Tue Jan 10 08:06:38 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Tue, 10 Jan 2012 07:06:38 +0100
Subject: [Dovecot] Postfix user map
In-Reply-To: <70A95B54-98CE-4B4D-8B76-CDA279353202@iki.fi>
References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <4F0B09BC.3010300@schetterer.org> <4F0B0BE4.8010907@gmail.com> <4F0B0CD9.3090402@schetterer.org> <4F0B1388.209@schetterer.org> <70A95B54-98CE-4B4D-8B76-CDA279353202@iki.fi>
Message-ID: <4F0BD56E.9090808@schetterer.org>

On 09.01.2012 19:12, Timo Sirainen wrote:
> On 9.1.2012, at 18.19, Robert Schetterer wrote:
>
>> I'm afraid I wasn't totally correct here;
>> in fact I haven't seen backscatter on overquota on my servers
>> since using Dovecot LMTP with Postfix.
>
> LMTP shouldn't matter here. In most configs mails are put to the queue first, and only from there they are sent to LMTP, and if LMTP rejects a mail then backscatter is sent. Maybe the difference you're seeing is that it's now Postfix sending the bounce (or perhaps skipping it?) instead of dovecot-lda (unless you gave the -e parameter).
>

Hi Timo, thanks for clearing that up; anyway, backscatter with overquota was always rare, so no big problem.

-- 
Best Regards

MfG Robert Schetterer

Germany/Munich/Bavaria

From yubao.liu at gmail.com Tue Jan 10 08:58:37 2012
From: yubao.liu at gmail.com (Liu Yubao)
Date: Tue, 10 Jan 2012 14:58:37 +0800
Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs
In-Reply-To: <491E7C43-2C87-4FD6-8AC0-E79F22E9749F@iki.fi>
References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> <4F07332B.70708@gmail.com> <4F07BDBB.3060204@gmail.com> <491E7C43-2C87-4FD6-8AC0-E79F22E9749F@iki.fi>
Message-ID: 

On Tue, Jan 10, 2012 at 2:59 AM, Timo Sirainen wrote:
> On 7.1.2012, at 5.36, Yubao Liu wrote:
>
>> In the old version, "auth->passdbs" contains all passdbs; this revision
>> changes "auth->passdbs" to only contain non-master passdbs.
>>
>> I'm not sure which fix is better, or even whether my proposal is correct or complete:
>> a) in src/auth/auth.c:auth_passdb_preinit(), insert the master passdb into
>>    auth->passdbs too, and remove the duplicate code for masterdbs
>>    in auth_init() and auth_deinit().
>
> Not a good idea. The master passdb needs to be treated specially, otherwise you might accidentally allow regular users logging in as other users.
>

Sorry, I don't understand well.
This scheme adds all master dbs to auth->passdbs; auth->masterdbs is not changed and still contains only the master passdbs. I guess dovecot looks up auth->masterdbs for master users and auth->passdbs for regular users; regular users don't know the master users' passwords, so they can't log in as other users.

http://wiki2.dovecot.org/Authentication/MasterUsers

The "Example configuration" already shows that a master user account can be added to auth->passdbs too. This scheme does bring an unexpected issue: the master users can't have separate passwords for regular logins as themselves (because masterdbs are also added to passdbs), so the risk of a password leak increases a lot, but I don't think it's good practice to do regular logins with a master user account anyway.

Quoted from the same wiki page (I really enjoy the wonderful Dovecot wiki; it's the most well-organized and well-documented wiki among open source projects, thank you very much!):

"If you want master users to be able to log in as themselves, you'll need to either add the user to the normal passdb or add the passdb to dovecot.conf twice, with and without master=yes. Note that if the passdbs point to different locations, the user can have a different password when logging in as other users than when logging in as himself. This is a good idea since it can avoid accidentally logging in as someone else."

Anyway, scheme B is much less risky and much simpler, just a little annoying code duplication :-)

>> b) add similar code for masterdbs in auth_passdb_list_have_verify_plain(),
>>    auth_passdb_list_have_lookup_credentials(), auth_passdb_list_have_set_credentials().
>
> Kind of annoying code duplication, but .. I guess it can't really be helped. Added:
> http://hg.dovecot.org/dovecot-2.0/rev/bed15faedfd4
>

Thank you very much, I don't have to maintain my private package :-)

>> Another related question is the "pass" option in the master passdb; if I set it to "yes",
>> the authentication fails:
> ..
>> My normal passdb is a PAM passdb; it doesn't support credential lookups, that's
>> reasonable,
>
> Right.
>
>> but I feel the comment for the "pass" option is confusing:
>>
>>  # Unless you're using PAM, you probably still want the destination user to
>>  # be looked up from passdb that it really exists. pass=yes does that.
>>  pass = yes
>> }
>>
>> According to the comment, it's to check whether the real user exists, so why not
>> check the userdb instead of another passdb?
>
> Well.. It is going to check the userdb eventually anyway, so it would still fail, just a bit later and maybe with a different error message.

If Dovecot doesn't check the password for the real user against a passdb (actually it doesn't have the real user's password, because it's doing master user proxy authorization), it won't fail on the userdb lookup, because the userdb does contain the real user; in my case, the real user is a system user and absolutely exists.

>
>> Even if it must check against a passdb,
>> in this case it's obviously not necessary to look up credentials; it's enough
>> to look up the user name only.
>
> There's currently no passdb that supports a "does user exist?" lookup but doesn't support credentials lookup, so this is more of a theoretical issue. (I guess maybe PAM could be abused in some configurations to do the check, but that's rather ugly..)

I don't understand why master user proxy authorization in Dovecot has to check the real user against his credentials; does that mean "user*master" has to authenticate twice,
once for the master and once for the user? But often the client can't provide two passwords in a single login, and a regular passdb such as PAM doesn't support credentials lookup. So I feel it's better if Dovecot checks only the destination user name in the passdbs or userdbs, after the master user authentication part succeeds, to decide whether the destination user exists, just as the comment for "pass=yes" describes. This may not be a bug; IMHO it's just a confusing feature.

Regards,
Yubao Liu

From l.chelchowski at slupsk.eurocar.pl Tue Jan 10 11:34:37 2012
From: l.chelchowski at slupsk.eurocar.pl (l.chelchowski)
Date: Tue, 10 Jan 2012 10:34:37 +0100
Subject: [Dovecot] Quota-warning and setresgid
Message-ID: 

Hi!

Please help me with this. The problem exists when quota-warning is executing:

LOG:
Jan 10 10:15:06 lmtp(85973): Debug: none: root=, index=, control=, inbox=, alt=
Jan 10 10:15:06 lmtp(85973): Info: Connect from local
Jan 10 10:15:06 lmtp(85973): Debug: Loading modules from directory: /usr/local/lib/dovecot
Jan 10 10:15:06 lmtp(85973): Debug: Module loaded: /usr/local/lib/dovecot/lib10_quota_plugin.so
Jan 10 10:15:06 lmtp(85973): Debug: Module loaded: /usr/local/lib/dovecot/lib90_sieve_plugin.so
Jan 10 10:15:06 lmtp(85973): Debug: auth input: tester at domain.eu home=/home/vmail/domain.eu/tester/ mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public uid=101 gid=12 quota_rule=*:storage=2097 acl_groups=
Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public
Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: plugin/quota_rule=*:storage=2097
Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: plugin/acl_groups=
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Effective uid=101, gid=12, home=/home/vmail/domain.eu/tester/
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota root: name=user backend=dict args=:proxy::quotadict
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: root=user mailbox=* bytes=2147328 messages=0
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: root=user mailbox=Trash bytes=+429465 (20%) messages=0
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: root=user mailbox=SPAM bytes=+429465 (20%) messages=0
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: bytes=1717862 (80%) messages=0 reverse=no command=quota-warning 80 tester at domain.eu
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: bytes=1932595 (90%) messages=0 reverse=no command=quota-warning 90 tester at domain.eu
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: bytes=2039961 (95%) messages=0 reverse=no command=quota-warning 95 tester at domain.eu
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: dict quota: user=tester at domain.eu, uri=proxy::quotadict, noenforcing=0
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : type=private, prefix=, sep=/, inbox=yes, hidden=no, list=yes, subscriptions=yes location=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: maildir++: root=/home/vmail/domain.eu/tester, index=/var/mail/vmail/domain.eu/tester at domain.eu/index/public, control=, inbox=/home/vmail/domain.eu/tester, alt=
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace :
type=public, prefix=Public/, sep=/, inbox=no, hidden=no, list=children, subscriptions=yes location=maildir:/home/vmail/public/:CONTROL=/var/mail/vmail/domain.eu/tester/control/public:INDEX=/var/mail/vmail/domain.eu/tester/index/public:LAYOUT=fs
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: fs: root=/home/vmail/public, index=/var/mail/vmail/domain.eu/tester/index/public, control=/var/mail/vmail/domain.eu/tester/control/public, inbox=, alt=
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : type=shared, prefix=Shared/%u/, sep=/, inbox=no, hidden=no, list=children, subscriptions=no location=maildir:%h/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/shared/%u
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: shared: root=/var/run/dovecot, index=, control=, inbox=, alt=
...
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: quota: Executing warning: quota-warning 95 tester at domain.eu
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Info: bLUfAJoBDE/VTwEA9hAjDg: sieve: msgid=<4F0C0180.3040704 at domain.eu>: stored mail into mailbox 'INBOX'
Jan 10 10:15:06 lmtp(85973): Info: Disconnect from local: Client quit (in reset)
Jan 10 10:15:06 lda: Debug: Loading modules from directory: /usr/local/lib/dovecot
Jan 10 10:15:06 lda: Debug: Module loaded: /usr/local/lib/dovecot/lib01_acl_plugin.so
Jan 10 10:15:06 lda: Debug: Module loaded: /usr/local/lib/dovecot/lib10_quota_plugin.so
Jan 10 10:15:06 lda: Debug: Module loaded: /usr/local/lib/dovecot/lib90_sieve_plugin.so
Jan 10 10:15:06 lda: Debug: auth input: tester at domain.eu home=/home/vmail/domain.eu/tester/ mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public uid=101 gid=12 quota_rule=*:storage=2097 acl_groups=
Jan 10 10:15:06 lda: Debug: Added userdb setting: mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public
Jan 10 10:15:06 lda: Debug: Added userdb setting: plugin/quota_rule=*:storage=2097
Jan 10 10:15:06 lda: Debug: Added userdb setting: plugin/acl_groups=
Jan 10 10:15:06 lda(tester at domain.eu): Fatal: setresgid(12(mail),12(mail),101(vmail)) failed with euid=101(vmail): Operation not permitted
Jan 10 10:15:06 master: Error: service(quota-warning): child 85974 returned error 75

dovecot -n
# 2.0.16: /usr/local/etc/dovecot/dovecot.conf
# OS: FreeBSD 8.2-RELEASE-p3 amd64
auth_master_user_separator = *
auth_mechanisms = plain login cram-md5
auth_username_format = %Lu
dict {
  quotadict = mysql:/usr/local/etc/dovecot/dovecot-dict-sql.conf
}
disable_plaintext_auth = no
first_valid_gid = 12
first_valid_uid = 101
log_path = /var/log/dovecot.log
mail_debug = yes
mail_gid = vmail
mail_plugins = " quota acl"
mail_privileged_group = vmail
mail_uid = vmail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date
namespace {
  inbox = yes
  location = 
  prefix = 
  separator = /
  type = private
}
namespace {
  list = children
  location = maildir:/home/vmail/public/:CONTROL=/var/mail/vmail/%d/%n/control/public:INDEX=/var/mail/vmail/%d/%n/index/public:LAYOUT=fs
  prefix = Public/
  separator = /
  subscriptions = yes
  type = public
}
namespace {
  list = children
  location = maildir:%%h/:INDEX=/var/mail/vmail/%d/%u/index/shared/%%u
  prefix = Shared/%%u/
  separator = /
  subscriptions = no
  type = shared
}
passdb {
  args = /usr/local/etc/dovecot/dovecot-sql.conf
  driver = sql
}
passdb {
  args = /usr/local/etc/dovecot/passwd.masterusers
  driver = passwd-file
  master = yes
  pass = yes
}
plugin {
  acl = vfile:/usr/local/etc/dovecot/acls
  acl_shared_dict = file:/usr/local/etc/dovecot/shared/shared-mailboxes.db
  autocreate = Trash
  autocreate2 = Junk
  autocreate3 = Sent
  autocreate4 = Drafts
  autocreate5 = Archives
  autosubscribe = Trash
  autosubscribe2 = Junk
  autosubscribe3 = Sent
  autosubscribe4 = Drafts
  autosubscribe5 = Public/Poczta
  autosubscribe6 = Archives
  fts = squat
  fts_squat = partial=4 full=10
  quota = dict:user::proxy::quotadict
  quota_rule2 = Trash:storage=+20%%
  quota_rule3 = SPAM:storage=+20%%
  quota_warning = storage=80%% quota-warning 80 %u
  quota_warning2 = storage=90%% quota-warning 90 %u
  quota_warning3 = storage=95%% quota-warning 95 %u
  sieve = ~/.dovecot.sieve
  sieve_before = /usr/local/etc/dovecot/sieve/default.sieve
  sieve_dir = ~/sieve
  sieve_global_dir = /usr/local/etc/dovecot/sieve
  sieve_global_path = /usr/local/etc/dovecot/sieve/default.sieve
}
protocols = imap pop3 sieve lmtp
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = mail
    mode = 0660
    user = postfix
  }
  unix_listener auth-userdb {
    group = mail
    mode = 0660
    user = vmail
  }
}
service dict {
  unix_listener dict {
    mode = 0600
    user = vmail
  }
}
service imap {
  executable = imap postlogin
}
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0660
    user = postfix
  }
}
service managesieve {
  drop_priv_before_exec = yes
}
service pop3 {
  drop_priv_before_exec = yes
}
service postlogin {
  executable = script-login rawlog
}
service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  unix_listener quota-warning {
    user = vmail
  }
  user = vmail
}
ssl = no
userdb {
  args = /usr/local/etc/dovecot/dovecot-sql.conf
  driver = sql
}
verbose_proctitle = yes
protocol imap {
  imap_client_workarounds = delay-newmail tb-extra-mailbox-sep
  mail_plugins = " acl imap_acl autocreate fts fts_squat quota imap_quota"
}
protocol lmtp {
  mail_plugins = quota sieve
}
protocol pop3 {
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
  pop3_uidl_format = %08Xu%08Xv
}
protocol lda {
  deliver_log_format = msgid=%m: %$
  mail_plugins = sieve acl quota
  postmaster_address = postmaster at domain.eu
  sendmail_path = /usr/sbin/sendmail
}

-- 
Łukasz

From bind at enas.net Tue Jan 10 14:22:27 2012
From: bind at enas.net (Urban Loesch)
Date: Tue, 10 Jan 2012 13:22:27 +0100
Subject: [Dovecot] Proxy login failures
In-Reply-To: <27646CE2-F912-4D61-9016-F6BBE0DA9C56@iki.fi>
References: <4F0ABF0A.1080404@enas.net> <69796D8B-5CFE-48A2-A092-B1A32331BC1F@iki.fi> <4F0B4CB6.2080703@enas.net> <27646CE2-F912-4D61-9016-F6BBE0DA9C56@iki.fi>
Message-ID: <4F0C2D83.4010108@enas.net>

On 09.01.2012 23:39, Timo Sirainen wrote:
> On 9.1.2012, at 22.23, Urban Loesch wrote:
>
>>>> I'm using two dovecot pop3/imap proxies in front of our dovecot servers.
>>>> Since some days I see many of the following errors in the logs of the two proxy-servers:
>>>>
>>>> dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0): user=, method=PLAIN, rip=remote-ip, lip=localip
>>>>
>>>> When this happens the client gets the following error from the proxy:
>>>> -ERR [IN-USE] Account is temporarily unavailable.
>>> The connection to the remote server dies before authentication finishes. The reason why that happens should be logged by the backend server. Sounds like it crashes.
>>> Check for ANY error messages in backend servers.
>>
>> I already did that, but I found nothing in the logs.
>
> It's difficult to guess then. At the very least there should be an "Info" message about a new connection at the time when this failure happened. If there's not even that, then maybe the problem is network related.

No, there is nothing.

>
>> The only thing I could think of is that all 7 backend servers are virtual servers (using technology from http://linux-vserver.org) and they all are running
>> on the same physical machine (DELL PER610 with 32GB RAM, RAID 10 SAS - load between 0.5 and 2.0, iowait about 1-5%). So they are sharing the same kernel.
>
> For testing, or what's the point in doing that? :) But the load is low enough that I doubt it has anything to do with it.

This is because the hardware is fast enough to handle about 40,000 mail accounts (both IMAP and POP). That tells me that Dovecot is a really good piece of software - very performant in my eyes.

>
>> Also all servers are connected to a mysql server, running on a different machine in the same subnet. Could it be that either the kernel needs some tcp tuning, or perhaps the answers from the remote mysql server
>> could be too slow in some cases?
>
> MySQL server problem would show up with a different error message. TCP tuning is also unlikely to help, since the connection probably dies within a second. Actually it would be a good idea to log the duration. This patch adds it:
> http://hg.dovecot.org/dovecot-2.0/raw-rev/8438f66433a6
>

I installed the patch on my proxies and I got this:

...
Jan 10 09:30:45 imap2 dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0, duration=0s): user=, method=PLAIN, rip=remote-ip, lip=local-ip
Jan 10 09:45:21 imap2 dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0, duration=1s): user=, method=PLAIN, rip=remote-ip, lip=local-ip
...

As you can see, the duration is between 0 and 1 seconds. While these errors occurred, a tcpdump was running on proxy #2 (imap2 in the above logs). In the time range of "09:30:45:00 - 09:30:46:00" I saw that the backend server had reset the connection (RST flag set). Since Dovecot on the backend server writes nothing to the log, I think the connection is being reset at a lower level. Here is what Wireshark tells me about that:
No.    Source             Time                        Destination         Protocol Info
101235 IPv6-Proxy-Server  2012-01-10 09:29:38.015073  IPv6-Backend-Server TCP  35341 > pop3 [SYN] Seq=0 Win=14400 Len=0 MSS=1440 SACK_PERM=1 TSV=1925901864 TSER=0 WS=7
101236 IPv6-Backend-Server 2012-01-10 09:29:38.015157 IPv6-Proxy-Server   TCP  pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309225565 TSER=1925901864 WS=7
101248 IPv6-Proxy-Server  2012-01-10 09:29:38.233046  IPv6-Backend-Server POP  [TCP ACKed lost segment] [TCP Previous segment lost] C: UIDL
101249 IPv6-Backend-Server 2012-01-10 09:29:38.233312 IPv6-Proxy-Server   POP  S: +OK
101250 IPv6-Proxy-Server  2012-01-10 09:29:38.233328  IPv6-Backend-Server TCP  35341 > pop3 [ACK] Seq=57 Ack=50 Win=14464 Len=0 TSV=1925901886 TSER=309225587
101263 IPv6-Proxy-Server  2012-01-10 09:29:38.452210  IPv6-Backend-Server POP  C: LIST
101264 IPv6-Backend-Server 2012-01-10 09:29:38.452403 IPv6-Proxy-Server   POP  S: +OK 0 messages:
101265 IPv6-Proxy-Server  2012-01-10 09:29:38.452426  IPv6-Backend-Server TCP  35341 > pop3 [ACK] Seq=63 Ack=70 Win=14464 Len=0 TSV=1925901908 TSER=309225609
101324 IPv6-Proxy-Server  2012-01-10 09:29:38.671209  IPv6-Backend-Server POP  C: QUIT
101325 IPv6-Backend-Server 2012-01-10 09:29:38.671566 IPv6-Proxy-Server   POP  S: +OK Logging out.
101326 IPv6-Proxy-Server  2012-01-10 09:29:38.671678  IPv6-Backend-Server TCP  35341 > pop3 [FIN, ACK] Seq=69 Ack=89 Win=14464 Len=0 TSV=1925901930 TSER=309225631
101327 IPv6-Backend-Server 2012-01-10 09:29:38.671759 IPv6-Proxy-Server   TCP  pop3 > 35341 [ACK] Seq=89 Ack=70 Win=14336 Len=0 TSV=309225631 TSER=1925901930
134205 IPv6-Proxy-Server  2012-01-10 09:30:45.477314  IPv6-Backend-Server TCP  [TCP Port numbers reused] 35341 > pop3 [SYN] Seq=0 Win=14400 Len=0 MSS=1440 SACK_PERM=1 TSV=1925908610 TSER=0 WS=7
134206 IPv6-Backend-Server 2012-01-10 09:30:45.477458 IPv6-Proxy-Server   TCP  pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309232311 TSER=1925908610 WS=7
134207 IPv6-Proxy-Server  2012-01-10 09:30:45.477499  IPv6-Backend-Server TCP  35341 > pop3 [ACK] Seq=1 Ack=1 Win=14464 Len=0 TSV=1925908610 TSER=309232311
134208 IPv6-Backend-Server 2012-01-10 09:30:45.477589 IPv6-Proxy-Server   TCP  pop3 > 35341 [RST] Seq=1 Win=0 Len=0
136052 IPv6-Backend-Server 2012-01-10 09:30:49.477950 IPv6-Proxy-Server   TCP  pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309232712 TSER=1925908610 WS=7
136053 IPv6-Proxy-Server  2012-01-10 09:30:49.477978  IPv6-Backend-Server TCP  35341 > pop3 [RST] Seq=1 Win=0 Len=0
138363 IPv6-Backend-Server 2012-01-10 09:30:55.877899 IPv6-Proxy-Server   TCP  pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309233352 TSER=1925908610 WS=7
138364 IPv6-Proxy-Server  2012-01-10 09:30:55.877925  IPv6-Backend-Server TCP  35341 > pop3 [RST] Seq=1 Win=0 Len=0
143154 IPv6-Backend-Server 2012-01-10 09:31:08.678005 IPv6-Proxy-Server   TCP  pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309234632 TSER=1925908610 WS=7
152353 IPv6-Backend-Server 2012-01-10 09:31:32.678103 IPv6-Proxy-Server   TCP  pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309237032 TSER=1925908610 WS=7
165891 IPv6-Backend-Server 2012-01-10 09:32:20.688324 IPv6-Proxy-Server   TCP  pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309241833 TSER=1925908610 WS=7

From Seq-No. 101235 - 101327 the session looks ok to me.
But on Seq-No. 134205 Wireshark tells me that the TCP source port "35341" is reused, and on Seq-No. 134208 (after the TCP session has been established correctly - see Seq-No. 134205 to 134207) the backend server sends an RST packet for the session, and the proxy logs the error message that the connection has been reset by the peer. I have no idea whether Dovecot is sending the TCP reset or the kernel is doing it by itself.

About 1.5 hours ago I changed the kernel flag "/proc/sys/net/ipv4/tcp_tw_recycle" to "1" on the physical backend machine. Since then I have had no more error messages on the proxies. Changing the default values of "tcp_fin_timeout" or "tcp_tw_reuse" had no effect. Only "tcp_tw_recycle" seems to help.

Thanks
Urban

> These are the only explanations that I can think of for the error:
>
> * Remote Dovecot crashes / kills the connection (it would log an error message)
> * Remote Dovecot server is full of handling existing connections (It would log a warning)
> * Network trouble, something in the middle disconnecting the connection
> * Source/destination OS trouble, disconnecting the connection
> * Some hang that results in eventual disconnection. The duration patch would show if this is the case.
>

From Ralf.Hildebrandt at charite.de Tue Jan 10 15:06:48 2012
From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt)
Date: Tue, 10 Jan 2012 14:06:48 +0100
Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well?
In-Reply-To: <20120109150249.GH22506@charite.de>
References: <20120109074057.GC22506@charite.de> <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi> <20120109150249.GH22506@charite.de>
Message-ID: <20120110130648.GD6686@charite.de>

* Ralf Hildebrandt :
> * Timo Sirainen :
> > On 9.1.2012, at 9.40, Ralf Hildebrandt wrote:
> >
> > > Today I encountered these errors:
> > >
> > > Jan 9 08:30:06 mail dovecot: lmtp(31174, backup at backup.invalid): Error: Log synchronization error at seq=858,offset=44672 for
> > > /home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID 282388, but next_uid = 282389
> >
> > Any idea why this happened?
> > I was running those commands: > > # new style (dovecot) > vorgestern=`date -d "-2 day" +"%Y-%m-%d"` > doveadm expunge -u backup at backup.invalid mailbox INBOX SAVEDBEFORE $vorgestern > doveadm purge -u backup at backup.invalid So today: # doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-08 | wc -l 0 # doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-09 | wc -l 0 # doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-10 | wc -l 45724 # doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-11 | wc -l 0 Then: doveadm expunge -u backup at backup.invalid mailbox INBOX SAVEDBEFORE 2012-01-08 && \ doveadm purge -u backup at backup.invalid resulted in: doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3f/4d/3f4d8043d87e248a2e97f87be1f604301573be49-72e4a90683d70a4fc47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3f/4d/3f4d8043d87e248a2e97f87be1f604301573be49-afef6f1bf1d40a4f6773000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/32/7f/327f6d3cccc7aceb42da69ee7f3baea3267d631f-f4f5b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/21/f4/21f48fad649f1b7249f9aab98b7c079b6ac19b5b-9a4fcb1e83d70a4fcd7e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/21/f4) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-fe508a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-a04fcb1e83d70a4fcd7e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-beba543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-52c15a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-c4ba543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/00/04/00048d4ec98f654ad681a97b07d2e806a09c1641-22a9531683d70a4fc97e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/00/04) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c4/ae/c4aebf70927db7997eb8755c61a490581aff94a6-27bb543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bb/91/bb913960266ce20c2fea64ceaed1fb29eab868ce-4ba9531683d70a4fc97e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/34/0b/340b8ae1e2c6ccbfba161475440b172caaff92b3-1d518a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/4c/1e/4c1e264df5d168ed4e676267a4dcf38cd82e9797-1e518a2195d30a4fb86f000063bdf393) failed: No such file or directory 
doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/4c/1e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/ca/a7/caa75263442d125e08493b237c332351604b651a-1f518a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/ca/a7) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c7/75/c775a5736e1800e3c654291b42f942ebebc6e343-c2327907cad70a4fd47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c7/75) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/1b/da/1bdaede5f6b4175e577fa4148a1d2c75b6291047-c3327907cad70a4fd47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/1b/da) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/e4/83/e4838792800058921c4dce395f5c038e3072f053-c4327907cad70a4fd47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/e4/83) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f6/bc/f6bc6d4a0127e275a61e0f8c3c56407240547bd6-4850cb1e83d70a4fcd7e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f6/bc) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7f/9d/7f9dcd43a8a04aa0a0e438d1568129baf6d66105-c104472383d70a4fd17e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f7/4d/f74df38ff421889090e995383b5c81912c15879b-db04472383d70a4fd17e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f7/4d) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bf/ef/bfef000b86fd483daefce6472bec6e1694aaac94-9a416f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/bf/ef) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bf/ef/bfef000b86fd483daefce6472bec6e1694aaac94-55bb543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/bf/ef) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-5bbb543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-61bb543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-62f6b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-ec327907cad70a4fd47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-b9a9531683d70a4fc97e000063bdf393) failed: 
No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/24/c4/24c4692ab968bfd94cf1ca62fb46a88b7dcd78f1-df71181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/24/c4) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9f/86/9f862a2ed2f9c8f9cffbfea60883da513abf390d-67c35a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/9f/86) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/31/bf/31bf583bd7db531f5b634f6f2220eb8c803f720d-50bd543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/31/bf) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/41/97/4197a3de49f40e5f6c641be08b4e710c02a8a9f4-28e78c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/41/97) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/fc/f7/fcf791a4e521548aceae0a62b3924b075f1c7b31-63ab531683d70a4fc97e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bd/18/bd18eecdcc9e9f17b851a1742c7ca6f8f7badfe7-2872181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bb/1d/bb1da62d3688f09d4223188d0e16986a57458b91-2e72181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/bb/1d) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/eb/6e/eb6ebe01f3e1feaa1f5635cef5b8286e375dfdb1-3de78c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/eb/6e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f1/94/f1943d0e581f54fe68f3ae052e3d2eba75ff3822-76c45a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f1/94) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/38/27/3827505dc4412178b87c758a4f5d697644260e9e-0ee88c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/38/27) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/ca/4e/ca4eec1630af1e986c66593bce522c32db4060cb-2b2f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/ca/4e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/1b/46/1b4691175805ffb373f5c8406f33f79b41dceed2-c772181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/1b/46) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/78/94/789426ef58857e12e448e590201cf81acda1d3f0-af528a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at 
backup.invalid): Error: unlink(/path/to/attachments/b2/01/b201c8727f5d286fd3f99f61710e900aaae42bcf-0f446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7a/58/7a58d847b53f9980365be256607df90bd4885152-10446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/83/21/8321d504845fb7fa518ffbbe9e65ba79357dc40d-11446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2b/90/2b90a014dadfb625859720a930743d76ff1dc960-1ad49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/2b/90) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/26/bb/26bb6ef66a1c9374cde9dd4ee40c03d52a37a078-1bd49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/26/bb) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7e/b6/7eb6cf6ecf3375708e922879cb3695c45c943650-6e73181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/7e/b6) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3e/56/3e56af1afce990d2633c722e5c0b241064be0908-6f73181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/3e/56) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9e/bd/9ebdb50383e3f1166b2aa753e78b855fae505528-21d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/9e/bd) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/24/c4/24c4692ab968bfd94cf1ca62fb46a88b7dcd78f1-88e88c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/24/c4) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/24/22/2422e76785185795d11ff19cd23c10af2df4aee3-9373181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/09/3b/093bbae088c17039975e55fe49f683ab5ac79f89-0112a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/09/3b) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/5a/30/5a30cbae4a3900fdb2bb20e405db5d00ab93ffe3-0c12a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/5a/30) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/1b/7f/1b7f03005f41026e42e354cb3a8dd805d793720e-5ed49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/1b/7f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/fd/51/fd51d22ba92f2e018f842851149fffb81f1f1264-64d49606d5d90a4f9f06000063bdf393) failed: No such file or 
directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/fd/51) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/70/5c/705c1daf8f35ca7bf9f7bbf2fdf1b29de33766f8-65d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/70/5c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/e4/e0/e4e08863ae910339a3809ea51ddefb0a4db9c646-66d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/e4/e0) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6c/cc/6ccc5e659c6de92852315bfe977cab24b6238dc9-67d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/6c/cc) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7f/9c/7f9c841c810561bde8a5e3b3e51c55de53620f47-68d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/7f/9c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/d3/7e/d37eb19d8379eb292971bb931068e34ece403f1f-aac55a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/d3/7e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/70/6b/706b010991ced768476e9921efd1d8baef6af103-abc55a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/70/6b) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/98/4e/984e36f32eb16f85349bafe5ef5b7c2367a30d45-bc73181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/98/4e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/60/8d/608da5e9d2b4eb43705b62a7605068c886bc486b-8cd49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/60/8d) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f9/e7/f9e75a76c6aacb9259e3055f5dffee9b7b37179d-eb73181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f9/e7) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c2/68/c268bd0abe0e334e64cf40e3b4e571eae6415c40-bbc55a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c2/68) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c5/aa/c5aac7e0a301a798a4eedc0140bd3f71329046df-ffbd543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/26/f82660e313a8a22e0152bf457231cb5a535eebcb-cbe88c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f8/26) failed: No such file or directory 
doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9d/8f/9d8f0e6d58d86e5672876a4a5ac0d626f01b2653-6812a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/9d/8f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f7/b5/f7b51d6500a594ea870151e1f6845ae1ca4dfa88-73538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/43/73/43737f46f935ea8a2077ebb3c4bc356572bb07ff-7e538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/43/73) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/4e/29/4e29fdbf309d66faba520a4a3e3f87ead728c7af-3774181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/4e/29) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/12/34/123406129b4eb0c148093a81dc07d63f01d6d409-8712a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/12/34) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/78/8c/788c84e6f0aa744ba996e3adad4912547b85860d-90446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/78/8c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9f/68/9f688dfa87bd86e0302544a6f4051be4c0ebe9f3-2cf9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/19/86/198610c4bc9753908fcfe6c1bd6b330d8df7f7af-2df9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3b/58/3b58ae30de03e06cba4520328dcaf8461321361f-4274181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/3b/58) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2f/ec/2fecc3dfa406921e622a638d382f123826008e68-c12f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/08/e6/08e61656d670261693ada0e71514a17c523dc239-c22f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/d4/9d/d49dccea098551240dcae8a6454738a103d050eb-cd2f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/d4/9d) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/26/a8/26a8bd4bfa69aa7e699d9c9749443ed2b72bbbd9-96446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/26/a8) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6d/f5/6df5a5c2dee317fa2c9d2aa2de9730ef0e086912-47f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: 
unlink(/path/to/attachments/ba/f0/baf0274f01c12252224fc0d67a7018a6323127ff-48f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/26/f82660e313a8a22e0152bf457231cb5a535eebcb-4ef9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f8/26) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/5f/91/5f91067608300c30a80b6441b4bfe5d2e7ac3ab5-9c446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/5f/91) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/8d/d4/8dd4d6fd05df137fe72ae6bea359c4c740b41bdb-9712a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c8/97/c8979042d406fa3db0f0d5ab9d8e40fc5087c116-4d74181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6b/24/6b2434e0486a4e033a5057e323c7fa76baced4aa-a7538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/6b/24) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/05/16/051673f94b6dbc17a265e6cb8a0f189f5a47518d-a8538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9c/c1/9cc1bebd4a576aa28bf4a54f54206bf447c1e31c-87be543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/9c/c1) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/04/00/04008a41d5d6d43a67ab5b823f9d80852b6f828e-f32f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/04/00) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/d0/b1/d0b1d7351d2a174541124b35f5471e57dd480795-fe2f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/d0/b1) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3e/76/3e7645ee60d67e21c96d79ad0d961c7d9f5ca074-68f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/3e/76) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/5e/5e/5e5e0db722cbdf59d7b23a50ab9613cd41585861-04e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/5e/5e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/1c/32/1c32564e82ebe7acc876ea5a1fd7e9f00a695c97-52ce1105c0db0a4f3c0c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/1c/32) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7a/bf/7abfe0943de9fa8e6cc468a3017705d1e44c9af4-15e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup 
at backup.invalid): Error: rmdir(/path/to/attachments/7a/bf) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/ca/22/ca22fb0b943c243bae14f43c08454765679805a5-25e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/ca/22) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/44/3a/443ad09efc00aec5fd5579d1bfa741efcc54625c-9974181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/24/c4/24c4692ab968bfd94cf1ca62fb46a88b7dcd78f1-d4538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/24/c4) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/00/b3/00b3c0397b0f24dcc44410e60984147f1f6dbd4c-76ce1105c0db0a4f3c0c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/00/b3) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2d/e5/2de509ec0507087ceec4d16dc39260f4b369886f-86ce1105c0db0a4f3c0c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/2d/e5) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/66/63/666325747133a0e84525bc402bb2984339037e31-c912a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/66/63) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/22/f9/22f9241435febe0df6556f7d681792f1cc5b1637-ca12a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/fa/1c/fa1c7138221dc54ab7b6d833970b6d230304fff3-91f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/b5/44/b54476a0510cb83fd0144397169d603a72b3d8db-54e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/b5/44) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/61/41/6141377ddbeaaa2d2f641392993723ac09dab7af-201cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2e/a3/2ea3fffc796512c7bf235885f3b37b0ec9c4c620-211cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/85/cd/85cd1f6f08d3becd97d64653129bb71e513aa265-b0f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/85/cd) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bf/3c/bf3c51578bfee97ad56b9f0b1f6f74bbc8b30316-311cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/bf/3c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c3/15/c315fdf651d9bc65d468634598ea1a8a5ef2f0dc-321cb50cb6db0a4f370c000063bdf393) failed: No such file or 
directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c3/15) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c9/6f/c96ff3f27a51bb19282196cf416a51a079c5f75e-2d30152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c9/6f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/61/f86119c665d89911b256e68c77354502389180eb-2e30152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/a3/7c/a37c6f0966bc866dcb44b5304610604675bbb81b-2f30152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/a3/7c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/72/20/7220923da82b479c5381bdc7efe8a392c890b09d-02bf543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/72/20) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c9/6f/c96ff3f27a51bb19282196cf416a51a079c5f75e-5975181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c9/6f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/61/f86119c665d89911b256e68c77354502389180eb-5a75181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/a3/7c/a37c6f0966bc866dcb44b5304610604675bbb81b-5b75181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/a3/7c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c1/47/c14767dfeaadbd2a8205767e9a274e2854c1b97c-451cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c1/47) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6b/33/6b33d1b0fb794a97b1185158797bf360b96d3e62-461cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/6b/33) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c9/6f/c96ff3f27a51bb19282196cf416a51a079c5f75e-91e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c9/6f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/61/f86119c665d89911b256e68c77354502389180eb-92e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/a3/7c/a37c6f0966bc866dcb44b5304610604675bbb81b-93e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/a3/7c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/21/57/2157e48b0f4d145909a05dc88dd9f4ab5eacba92-7215b60eb6db0a4f380c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): 
Error: rmdir(/path/to/attachments/21/57) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2a/3a/2a3aade76b96474ff4d625ed2ecd9261dde5098e-e412a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6e/26/6e26032f9b2e9bb3eb6d042710b3a593a0ef4a6d-5b1cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/6e/26) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7a/7f/7a7fce2fad3b3046d2744f271e676f84b7bc931e-611cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/7a/7f) failed: No such file or directory doveadm(backup at backup.invalid): Error: Corrupted dbox file /path/to/mdbox/storage/m.434 (around offset=61672172): purging found mismatched offsets (61672142 vs 61665615, 7661/10801) doveadm(backup at backup.invalid): Warning: mdbox /path/to/mdbox/storage: rebuilding indexes doveadm(backup at backup.invalid): Error: Purging namespace '' failed: Internal error occurred. Refer to server log for more information. [2012-01-10 13:59:52]

After that:

# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-08 | wc -l
0
# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-09 | wc -l
0
# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-10 | wc -l
189
# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-11 | wc -l
0
# fgrep dovecot: /var/log/mail.log |grep -v "dovecot: lmtp"
# Nothing!

--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebrandt at charite.de | http://www.charite.de

From ath at b-one.net Tue Jan 10 16:05:14 2012
From: ath at b-one.net (Anders)
Date: Tue, 10 Jan 2012 15:05:14 +0100
Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH
Message-ID: <20120110140514.57EB0E51BC96F@bmail-n01.one.com>

Hi,

I have been looking at search and sorting with dovecot and have run into some things. The first one I think may be a minor bug, because a set of commands results in the socket connection being closed without warning:

UID SEARCH RETURN (SAVE COUNT) CHARSET UTF-8 (UNDELETED TEXT "foo")
UID SEARCH RETURN (COUNT MIN) CHARSET UTF-8 () $

The empty parentheses before the reference to the previous search result ($) are not legal IMAP, but should not cause the socket to be closed, I think.

Then I have a question about RFC5267 and the announcement of CONTEXT=SEARCH in the capabilities. I think this RFC is supported by dovecot, or maybe just part of the RFC is supported? At least when I include the CONTEXT ADDTO or REMOVEFROM keywords I get an error, but UPDATE and CANCELUPDATE seem to be supported. The RFC has been updated by the RFC describing the NOTIFY extension to IMAP, so maybe it has been decided to not add these keywords until a later time?

I am using dovecot version 2.0.15 (with patches from Apple).

Best Regards
Anders
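For comparison, the saved-result reference from RFC 5182 is meant to be used as a bare search key rather than wrapped in parentheses. A minimal illustrative session - the tags, the search criteria and the result values here are made up, not taken from the report above:

C: a1 UID SEARCH RETURN (SAVE) CHARSET UTF-8 UNDELETED TEXT "foo"
S: a1 OK SEARCH completed.
C: a2 UID SEARCH RETURN (COUNT MIN) $
S: * ESEARCH (TAG "a2") UID COUNT 42 MIN 17
S: a2 OK SEARCH completed.

With that syntax the second command should succeed; only the malformed "()" variant shown above should earn a BAD response rather than a dropped connection.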
From bschmidt at cms.hu-berlin.de Tue Jan 10 16:16:14 2012
From: bschmidt at cms.hu-berlin.de (Burckhard Schmidt)
Date: Tue, 10 Jan 2012 15:16:14 +0100
Subject: [Dovecot] rewriting mail_location
Message-ID: <4F0C482E.7000900@cms.hu-berlin.de>

Hello,

I have LDAP as userdb, with entries containing the attributes mail=alias.user1 at some.domain.de and uid=user1. Mail to alias.user1 at some.domain.de gets delivered into /datatest/user/alias.user1 instead of /datatest/user/user1 by the LDA. I have

userdb {
  args = /usr/dovecot/etc/ldapuser.conf
  driver = ldap
}

with a ldapuser.conf:

hosts ...
base ...
user_filter = (&(|(mail=%u)(mail=%n at some.domain) (uid=%u))(objectClass=posixAccount))
user_attrs = uid=mail_location=maildir:/datatest/user/%$, uidNumber=29,gidNumber=133

I hoped the local part of the mail attribute could be replaced by the uid for local delivery with Dovecot's LDA. Any hints on how to do that? (With postfix I could rewrite the address to uid at host and use local_transport = dovecot.) postfix has virtual_transport = dovecot.

LDAP entry:
mail: alias.user1 at some.domain.de
uid: user1
homeDirectory: /dev/null
uidNumber: 464
gidNumber: 100

mail to alias.user1 at some.domain.de:

Jan 10 14:03:24 ubu1004 postfix/qmgr[25221]: C434D1EE: from=, size=239, nrcpt=1 (queue active)
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Loading modules from directory: /usr/dovecot/lib/dovecot
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Module loaded: /usr/dovecot/lib/dovecot/lib20_autocreate_plugin.so
Jan 10 14:03:24 ubu1004 dovecot: auth: Debug: master in: USER#0111#011alias.user1#011service=lda
Jan 10 14:03:24 ubu1004 dovecot: auth: Debug: ldap(alias.user1): user search: base=ou=users,ou=...,c=de scope=subtree filter=(&(|(mail=alias.user1)(mail=alias.user1 at some.domain.de)(uid=alias.user1))(objectClass=posixAccount)) fields=uid,uidNumber,gidNumber

some substitutions are visible:

Jan 10 14:03:24 ubu1004 dovecot: auth: Debug: ldap(alias.user1): result: uid(location=maildir:/datatest/user/%$/maildir)=user1 gidNumber(133)=100 uidNumber(29)=464
Jan 10 14:03:24 ubu1004 dovecot: auth: Debug: master out: USER#0111#011alias.user1#011location=maildir:/datatest/user/user1/maildir#011133=100#01129=464
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: auth input: alias.user1 location=maildir:/datatest/user/user1/maildir 133=100 29=464
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Added userdb setting: plugin/location=maildir:/datatest/user/user1/maildir

but the alias "alias.user1" is still used for delivery:

Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Added userdb setting: plugin/133=100
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Added userdb setting: plugin/29=464
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: Effective uid=29, gid=133, home=
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: Namespace : type=private, prefix=, sep=/, inbox=yes, hidden=no, list=yes, subscriptions=yes location=maildir:/datatest/user/alias.user1/maildir:INDEX=/datatest/addons/index/alias.user1:CONTROL=/datatest/user/alias.user1/control:LAYOUT=fs
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: fs: root=/datatest/user/alias.user1/maildir, index=/datatest/addons/index/alias.user1, control=/datatest/user/alias.user1/control, inbox=/datatest/user/alias.user1/maildir, alt=
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: Namespace : Using permissions from /datatest/user/alias.user1/maildir: mode=0700 gid=-1
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: none: root=, index=, control=, inbox=, alt=
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: Destination address: alias.user1 at ubu1004 (source: user at hostname)
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): msgid=unspecified: saved mail to INBOX
Jan 10 14:03:24 ubu1004 postfix/pipe[25226]: C434D1EE: to=, relay=dovecot, delay=14,
delays=14/0.01/0/0.02, dsn=2.0.0, status=sent (delivered via dovecot service) Jan 10 14:03:24 ubu1004 postfix/qmgr[25221]: C434D1EE: removed dovecot -n # 2.0.17 (684381041dc4+): /usr/dovecot/etc/dovecot/dovecot.conf # OS: Linux 2.6.32-34-generic-pae i686 Ubuntu 10.04.3 LTS ext4 mail_gid = sysdov mail_location = maildir:/datatest/user/%n/maildir:INDEX=/datatest/addons/index/%n:CONTROL=/datatest/user/%n/control:LAYOUT=fs mail_plugins = autocreate mail_uid = sysdov passdb { args = failure_show_msg=yes imap driver = pam } service auth { client_limit = 30000 unix_listener auth-userdb { group = sysdov #effective 133 mode = 01204 user = sysdov #effective 29 } } userdb { args = /usr/dovecot/etc/ldapuser.conf driver = ldap } protocol lda { mail_plugins = autocreate } and ldapuser.conf: hosts ... base ... user_filter = (&(|(mail=%u)(mail=%n at some.domain) (uid=%u))(objectClass=posixAccount)) user_attrs = uid=mail_location=maildir:/datatest/user/%$, uidNumber=29,gidNumber=133 local part of mail should be replaced by uid for local delivery -- Regards --- Burckhard Schmidt From tss at iki.fi Tue Jan 10 16:16:51 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 10 Jan 2012 16:16:51 +0200 Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH In-Reply-To: <20120110140514.57EB0E51BC96F@bmail-n01.one.com> References: <20120110140514.57EB0E51BC96F@bmail-n01.one.com> Message-ID: <1326205011.6987.90.camel@innu> On Tue, 2012-01-10 at 15:05 +0100, Anders wrote: > the socket connection being closed without warning: > > UID SEARCH RETURN (SAVE COUNT) CHARSET UTF-8 (UNDELETED TEXT "foo") You mean it closes with above also? It works fine with me. > UID SEARCH RETURN (COUNT MIN) CHARSET UTF-8 () $ This was fixed in v2.0.17. > Then I have question about RFC5267 and the announcement of > CONTEXT=SEARCH > > in the capabilities. I think this RFC is supported by dovecot, or maybe > just > > part of the RFC is supported? All of it is supported, as far as I know. > At least when I include the CONTEXT ADDTO or REMOVEFROM keywords I get > an error, These are server notifications. Clients aren't supposed to send them. From divizio at exentrica.it Tue Jan 10 17:16:17 2012 From: divizio at exentrica.it (Luca Di Vizio) Date: Tue, 10 Jan 2012 16:16:17 +0100 Subject: [Dovecot] little bug with Director in 2.1? Message-ID: Hi, in 2.1rc3 the "director_servers" setting does not accept hostnames as documented (with ip no problems). It works correctly in 2.0.17. Greetings, Luca From Juergen.Obermann at hrz.uni-giessen.de Tue Jan 10 17:32:07 2012 From: Juergen.Obermann at hrz.uni-giessen.de (=?iso-8859-1?b?SvxyZ2Vu?= Obermann) Date: Tue, 10 Jan 2012 16:32:07 +0100 Subject: [Dovecot] Panic: file mbox-sync.c: line 1348: assertion failed Message-ID: <20120110163207.182538xtgzoxjg8w@webmail.hrz.uni-giessen.de> Hallo, I have the following problem with doveadm: # gdb --args /opt/local/bin/doveadm -v mailbox status -u userxy/g029 'messages' "Software-alle/AK-Software-Tagung" GNU gdb 5.3 Copyright 2002 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "sparc-sun-solaris2.8"... 
(gdb) run Starting program: /opt/local/bin/doveadm -v mailbox status -u g029 messages Software-alle/AK-Software-Tagung warning: Lowest section in /lib/libthread.so.1 is .dynamic at 00000074 warning: Lowest section in /lib/libdl.so.1 is .hash at 000000b4 doveadm(g029): Panic: file mbox-sync.c: line 1348: assertion failed: (file_size >= sync_ctx->expunged_space + trailer_size) doveadm(g029): Error: Raw backtrace: 0xff1cbc30 -> 0xff319544 -> 0xff319fa8 -> 0xff31add8 -> 0xff31b278 -> 0xff2a69b0 -> 0xff2a6bac -> 0x16808 -> 0x1b8fc -> 0x16ba0 -> 0x177cc -> 0x17944 -> 0x17a50 -> 0x204e8 -> 0x165c8 Program received signal SIGABRT, Aborted. 0xfe94dcdc in _lwp_kill () from /lib/libc.so.1 (gdb) bt full #0 0xfe94dcdc in _lwp_kill () from /lib/libc.so.1 No symbol table info available. #1 0xfe8e6fb4 in raise () from /lib/libc.so.1 No symbol table info available. #2 0xfe8c2078 in abort () from /lib/libc.so.1 No symbol table info available. #3 0xff1cb984 in default_fatal_finish () from /opt/local/lib/dovecot/libdovecot.so.0 No symbol table info available. #4 0xff1cbc38 in i_panic () from /opt/local/lib/dovecot/libdovecot.so.0 No symbol table info available. #5 0xff31954c in mbox_sync_handle_eof_updates () from /opt/local/lib/dovecot/libdovecot-storage.so.0 No symbol table info available. #6 0xff319fb0 in mbox_sync_do () from /opt/local/lib/dovecot/libdovecot-storage.so.0 No symbol table info available. #7 0xff31ade0 in mbox_sync_int () from /opt/local/lib/dovecot/libdovecot-storage.so.0 No symbol table info available. #8 0xff31b280 in mbox_storage_sync_init () from /opt/local/lib/dovecot/libdovecot-storage.so.0 No symbol table info available. #9 0xff2a69b8 in mailbox_sync_init () from /opt/local/lib/dovecot/libdovecot-storage.so.0 No symbol table info available. #10 0xff2a6bb4 in mailbox_sync () from /opt/local/lib/dovecot/libdovecot-storage.so.0 No symbol table info available. #11 0x00016810 in doveadm_mailbox_find_and_sync () No symbol table info available. #12 0x0001b904 in cmd_mailbox_status_run () No symbol table info available. #13 0x00016ba8 in doveadm_mail_next_user () No symbol table info available. #14 0x000177d4 in doveadm_mail_cmd () No symbol table info available. #15 0x0001794c in doveadm_mail_try_run_multi_word () No symbol table info available. #16 0x00017a58 in doveadm_mail_try_run () No symbol table info available. #17 0x000204f0 in main () No symbol table info available. (gdb) quit The program is running. Exit anyway? 
(y or n) y

My configuration is as follows:

# /opt/local/bin/doveconf -n
# 2.0.16: /opt/local/etc/dovecot/dovecot.conf
# OS: SunOS 5.10 sun4v
auth_verbose = yes
disable_plaintext_auth = no
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes
listen = imap.hrz.uni-giessen.de localhost
mail_location = mbox:~/Mail:INBOX=/var/mail/%u
mail_plugins = mail_log notify zlib
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
mdbox_rotate_interval = 1 days
mdbox_rotate_size = 16 M
namespace {
  inbox = yes
  location =
  prefix =
  separator = /
  type = private
}
namespace {
  hidden = yes
  list = no
  location =
  prefix = Mail/
  separator = /
  subscriptions = yes
  type = private
}
passdb {
  driver = pam
}
passdb {
  args = /opt/local/etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
plugin {
  autocreate = Trash
  autocreate2 = caughtspam
  autocreate3 = Sent
  autocreate4 = Drafts
  autosubscribe = Trash
  autosubscribe2 = caughtspam
  autosubscribe3 = Sent
  autosubscribe4 = Drafts
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  zlib_save = gz
  zlib_save_level = 3
}
postmaster_address = postmaster at hrz.uni-giessen.de
quota_full_tempfail = yes
sendmail_path = /usr/lib/sendmail
service auth {
  client_limit = 11120
}
service imap-login {
  process_min_avail = 16
  service_count = 0
  vsz_limit = 640 M
}
service imap {
  process_limit = 4096
  vsz_limit = 1 G
}
ssl_cert = References: <3a8f9df5e523c0391c41964ae3d09d1b@imapproxy.hrz> <677F82FE-850B-43EC-86C1-6B99ED74642A@iki.fi>
Message-ID:

On 20.12.2011 06:45, Timo Sirainen wrote:
> On 16.12.2011, at 0.00, Jürgen Obermann wrote:
>
>> Hello,
>> when I try to convert from mbox to mdbox with dsync with one user it
>> always panics:
>>
>> # /opt/local/bin/dsync -v -u userxy backup ssh root at minerva1
>> /opt/local/bin/dsync -v -u userxy
>> dsync-remote(userxy): Panic: Trying to allocate 2147483648 bytes
>
> Well, this is clearly the problem.. But it's difficult to guess where
> it's allocating that. I'd need a gdb backtrace. Does it write a core
> file to userxy's home dir? If not, try replacing dsync with a script
> that runs "ulimit -c unlimited" first and then execs dsync.
> http://dovecot.org/bugreport.html tells what to do with core once you
> have it.
>
> Alternative idea: Does it crash also when dsyncing locally?
> gdb --args dsync -u userxy backup mdbox:/tmp/foobar
> run
> bt full

Sorry, this problem is gone, I cannot reproduce it any more, neither locally nor with remote dsync. I found out that the user has one huge mail in his drafts folder with a 1GB video object attachment, but he surely never could send this mail because the mail size is limited to 50MB.

Greetings,
Jürgen

--
Jürgen Obermann
Hochschulrechenzentrum der Justus-Liebig-Universität Gießen
Heinrich-Buff-Ring 44
Tel. 0641-9913054
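The wrapper Timo suggests in the quoted text is a one-off shell script. A minimal sketch - the paths are illustrative, and "dsync.real" is a hypothetical name for the real binary after moving it aside:

#!/bin/sh
# raise the core dump size limit for this process and its children,
# then replace this shell with the real dsync, passing all arguments on
ulimit -c unlimited
exec /opt/local/bin/dsync.real "$@"

Installed in place of the original binary (and made executable), this makes the next panic leave a core file in the user's home directory, which is what the backtrace instructions on http://dovecot.org/bugreport.html need.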
From mark at msapiro.net Tue Jan 10 18:34:20 2012
From: mark at msapiro.net (Mark Sapiro)
Date: Tue, 10 Jan 2012 08:34:20 -0800
Subject: [Dovecot] Clients show .subscriptions folder
Message-ID:

Since upgrading from dovecot-2.1.rc1 to dovecot-2.1.rc3, some clients are showing a .subscriptions file in the user's mbox path as a folder. Some clients such as T'bird on Mac OS X create this file listing subscribed mbox files. Other clients such as T'bird on Windows XP show this file as a folder in the folder list even though it cannot be accessed as a folder (dovecot returns CANNOT Mailbox is not a valid mbox file).

I think this may be a result of uncommenting the inbox namespace in conf.d/10-mail.conf. Is there a way to suppress exposing this file to clients that don't use it?

# dovecot -n
# 2.1.rc3: /usr/local/etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-8.1.14.el5 i686 CentOS release 5 (Final)
auth_mechanisms = plain apop login
auth_worker_max_count = 5
mail_location = mbox:~/Mail:INBOX=/var/spool/mail/%u
mail_privileged_group = mail
mbox_write_locks = fcntl dotlock
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    special_use = \Trash
  }
  prefix =
}
passdb {
  args = /usr/local/etc/dovecot.passwd
  driver = passwd-file
}
passdb {
  driver = pam
}
protocols = imap pop3
service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
  }
}
ssl_cert = The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan
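If the file in question is the subscriptions file Dovecot itself maintains for the mbox namespace, one approach that might help - untested, and the control path below is only an example - is to move the control files out of the listed mail root with the CONTROL parameter of mail_location:

mail_location = mbox:~/Mail:INBOX=/var/spool/mail/%u:CONTROL=~/.mbox-control

With the control files (including .subscriptions) kept outside ~/Mail, clients that enumerate the mbox directory would no longer see them; whether this also covers files created directly by clients depends on where those clients write them.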
> > Many thanks in advance > > Mark look here for examples; for webmail, e.g. Roundcube, Horde and SquirrelMail are widely used http://wiki.dovecot.org/HowTo From joseba.torre at ehu.es Wed Jan 11 13:12:16 2012 From: joseba.torre at ehu.es (Joseba Torre) Date: Wed, 11 Jan 2012 12:12:16 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <4F0AF0B9.7030406@turmel.org> References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AEDCC.10109@hardwarefreak.com> <4F0AF0B9.7030406@turmel.org> Message-ID: <4F0D6E90.5010603@ehu.es> El 09/01/12 14:50, Phil Turmel escribió: > I've been following this thread with great interest, but no advice to offer. > The content is entirely appropriate, and appreciated. Don't be embarrassed > by your enthusiasm, Stan. +1 From sven at svenhartge.de Wed Jan 11 14:50:54 2012 From: sven at svenhartge.de (Sven Hartge) Date: Wed, 11 Jan 2012 13:50:54 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> Message-ID: Sven Hartge wrote: > I am currently in the planning stage for a "new and improved" mail > system at my university. OK, executive summary of the design ideas so far: - deployment of X (starting with 4, but easily scalable) virtual servers on VMware ESX - storage will be backed by an RDM on our iSCSI SAN. + main mailbox storage will be on 15k SAS6 600GB disks + backup rsnapshot storage will be on 7.2k SAS6 2TB disks - XFS filesystem on LVM, allowing easy local snapshots for rsnapshot - sharing folders from one user to another is not needed - central public shared folders reside on their own storage server and are accessed through the imapc-backend configured for the "#shared."-namespace (needs dovecot 2.1~rc3 or higher) - mdbox with compression (23h lifetime, 50MB max size) - quota in MySQL, allowing my MXes to check the quota for a user _before_ accepting any mail for him. This is a much needed feature, currently not possible and thus leading to backscatter right now. - + Backup with bacula for file level backup every 24 hours (120 days retention) + rsnapshot to node local backup space for easier access (14 days retention) + possibly SAN-based remote snapshots to different storage tier. Because sharing an RDM (or VMDK) with multiple VMs pins the VM to an ESX server and prohibits HA and DRS in the ESX cluster, and because of my bad experience with cluster FS, I want to avoid one and use only local storage for the personal mailboxes of the users. Each user is fixed to one server; routing/redirecting of IMAP/POP3 connections happens via perdition (popmap feature via LDAP lookup) in a frontend server (this component has already been working for some three years). So each node is isolated from the other nodes, knows only its users and does not care about users on other nodes. This prevents usage of the dovecot director, which only works if all nodes are able to access all mailboxes (correct?) I am aware this creates a SPoF for a 1/X portion of my users in the case of a VM failure, but this is deemed acceptable, since the use of VMs will allow me to quickly deploy a new one and reattach the RDM. (And if my whole iSCSI storage or ESX cluster fails, I have other, bigger problems than a non-functional mail system.) Comments? Grüße, Sven. -- Sigmentation fault. Core dumped.
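For the "#shared." part of the plan above, the imapc-backed public namespace would look roughly like this on each backend node under dovecot 2.1 (a sketch only, not Sven's actual config: the host, master user, password and local index path are placeholders, and the exact imapc settings should be checked against the 2.1 documentation):

namespace {
  type = public
  prefix = #shared.
  separator = .
  # local index/cache location; the mail itself stays on the central server
  location = imapc:~/imapc-shared
  list = children
}

# connection to the central shared-folder server (placeholder values)
imapc_host = shared-store.example.edu
imapc_user = %u
imapc_master_user = shared-proxy
imapc_password = secret

With a snippet like this on every node, the shared folders live only on the central server while personal mailboxes stay node-local, which sidesteps the all-nodes-see-all-mailboxes requirement of the director.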
From forumer at smartmobili.com Wed Jan 11 16:11:12 2012 From: forumer at smartmobili.com (forumer at smartmobili.com) Date: Wed, 11 Jan 2012 15:11:12 +0100 Subject: [Dovecot] Log imap commands Message-ID: Hi, I am trying to optimize an imap library and I am comparing it with some existing webmail clients; for instance, from Roundcube I can log the IMAP commands in the following format: [11-Jan-2012 14:22:55 +0100]: [DBD1] S: * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN AUTH=DIGEST-MD5 AUTH=CRAM-MD5] Dovecot ready. [11-Jan-2012 14:22:55 +0100]: [DBD1] C: A0001 ID ("name" "Roundcube Webmail" "version" "0.6" "php" "5.3.5-1ubuntu7.4" "os" "Linux" "command" "/") [11-Jan-2012 14:22:55 +0100]: [DBD1] S: * ID NIL [11-Jan-2012 14:22:55 +0100]: [DBD1] S: A0001 OK ID completed. [11-Jan-2012 14:22:55 +0100]: [DBD1] C: A0002 AUTHENTICATE CRAM-MD5 [11-Jan-2012 14:22:55 +0100]: [DBD1] S: + RDM1MTE1NjkxOTQzODE4NDEuMTMyNjI4ODE3NUBzZC0zMDYzNT4= [11-Jan-2012 14:22:55 +0100]: [DBD1] C: d2ViZ3Vlc3RAc21hcnRtb2JpbGkuY29tIDczODMxNjUzZmVlYzdjNDVlNzRkYTg1YjIwMjk2NWM0 [11-Jan-2012 14:22:55 +0100]: [DBD1] S: A0002 OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS FUZZY] Logged in [11-Jan-2012 14:22:55 +0100]: [DBD1] C: A0003 NAMESPACE [11-Jan-2012 14:22:55 +0100]: [DBD1] S: * NAMESPACE (("" ".")) NIL NIL [11-Jan-2012 14:22:55 +0100]: [DBD1] S: A0003 OK Namespace completed. [11-Jan-2012 14:22:55 +0100]: [DBD1] C: A0004 LOGOUT [11-Jan-2012 14:22:55 +0100]: [DBD1] S: * BYE Logging out ... And now I would like to do the same from my imap library, so I have started Wireshark, but it's a bit messy and difficult to compare. I was wondering if dovecot allows logging of IMAP communications? Thanks From wgillespie+dovecot at es2eng.com Wed Jan 11 16:19:44 2012 From: wgillespie+dovecot at es2eng.com (Willie Gillespie) Date: Wed, 11 Jan 2012 07:19:44 -0700 Subject: [Dovecot] Log imap commands In-Reply-To: References: Message-ID: <4F0D9A80.2020707@es2eng.com> On 1/11/2012 7:11 AM, forumer at smartmobili.com wrote: > I was wondering if dovecot allows logging of IMAP communications? You could look at Rawlog http://wiki.dovecot.org/Debugging/Rawlog http://wiki2.dovecot.org/Debugging/Rawlog From gerv at esrf.fr Wed Jan 11 17:04:06 2012 From: gerv at esrf.fr (Didier Gervaise) Date: Wed, 11 Jan 2012 16:04:06 +0100 Subject: [Dovecot] How to solve a "Connection queue full problem" - dovecot version 2.0.16 Message-ID: <4F0DA4E6.8040205@esrf.fr> Hello, I put dovecot in production yesterday. After an hour, nobody could log in ("Max number of imap connection" error message in Thunderbird). Afterward, I found these messages in the logs: Jan 10 09:21:20 mailsrv dovecot: [ID 583609 mail.info] imap-login: Disconnected: Connection queue full (no auth attempts): rip=xxx.xxx.xxx.xxx, lip=xxx.xxx.xxx.xxx In the panic, I changed these values in /usr/local/etc/dovecot/conf.d/10-master.conf default_process_limit = 20000 default_client_limit = 20000 This apparently solved the problem but now I have these messages when I start dovecot: Jan 11 14:41:08 mailsrvspare dovecot: [ID 583609 mail.info] master: Dovecot v2.0.15 starting up Jan 11 14:41:08 mailsrvspare dovecot: [ID 583609 mail.warning] config: Warning: service auth { client_limit=4096 } is lower than required under max.
load (103024) Jan 11 14:41:08 mailsrvspare dovecot: [ID 583609 mail.warning] config: Warning: service anvil { client_limit=20000 } is lower than required under max. load (60003) What should I do ? - adding "service_count = 0" in service imap-login { ... } and removing the modifications I did in 10-master.conf ? or - should I configure default_process_limit and default_client_limit differently ? It is a small site (about 1000 users). Currently I have 666 imap processes and 136 imap-login processes. Additional info: The server is a Solaris 10 Sun X4540 with 32GB RAM mailsrv:~ % /usr/local/sbin/dovecot -n # 2.0.16: /usr/local/etc/dovecot/dovecot.conf doveconf: Warning: service auth { client_limit=4096 } is lower than required under max. load (103024) doveconf: Warning: service anvil { client_limit=20000 } is lower than required under max. load (60003) # OS: SunOS 5.10 i86pc default_client_limit = 20000 default_process_limit = 20000 disable_plaintext_auth = no first_valid_uid = 100 mail_debug = yes mail_plugins = " quota" mail_privileged_group = mail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave passdb { driver = pam } plugin { quota = maildir:User quota quota_rule = *:storage=4G quota_rule2 = Trash:storage=+100M quota_warning = storage=95%% quota-warning 95 %u quota_warning2 = storage=90%% quota-warning 90 %u quota_warning3 = storage=80%% quota-warning 80 %u sieve = ~/.dovecot.sieve sieve_dir = ~/ } postmaster_address = postmaster at esrf.fr protocols = imap pop3 lmtp sieve service imap-login { inet_listener imap { port = 143 } inet_listener imaps { port = 993 } } service imap { process_limit = 2000 } service managesieve-login { inet_listener sieve { port = 4190 } } service pop3-login { inet_listener pop3 { port = 110 } inet_listener pop3s { port = 995 } } service quota-warning { executable = script /usr/local/bin/quota-warning.sh unix_listener quota-warning { user = dovecot } user = dovecot } ssl_cert = References: <4F0D9A80.2020707@es2eng.com> Message-ID: <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com> Le 11.01.2012 15:19, Willie Gillespie a écrit : > On 1/11/2012 7:11 AM, forumer at smartmobili.com wrote: >> I was wondering if dovecot allows logging of IMAP communications? > > You could look at Rawlog > http://wiki.dovecot.org/Debugging/Rawlog > http://wiki2.dovecot.org/Debugging/Rawlog Ok so I suppose I need to rebuild dovecot with the --with-rawlog option but I am under ubuntu and I was using a dovecot-2.x source package hosted here: http://xi.rename-it.nl/debian/ But now it seems to be dead, any idea where I could find a deb-src for dovecot 2.x ? From CMarcus at Media-Brokers.com Wed Jan 11 17:23:55 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Wed, 11 Jan 2012 10:23:55 -0500 Subject: [Dovecot] Log imap commands In-Reply-To: <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com> References: <4F0D9A80.2020707@es2eng.com> <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com> Message-ID: <4F0DA98B.4080705@Media-Brokers.com> On 2012-01-11 10:09 AM, forumer at smartmobili.com wrote: > Le 11.01.2012 15:19, Willie Gillespie a écrit : >> On 1/11/2012 7:11 AM, forumer at smartmobili.com wrote: >>> I was wondering if dovecot allows logging of IMAP communications?
>> >> You could look at Rawlog >> http://wiki.dovecot.org/Debugging/Rawlog >> http://wiki2.dovecot.org/Debugging/Rawlog > > Ok so I suppose I need to rebuild dovecot with the --with-rawlog option > but I am under ubuntu > and I was using a dovecot-2.x source package hosted here: > http://xi.rename-it.nl/debian/ > But now it seems to be dead, any idea where I could find a deb-src for > dovecot 2.x ? Another option that shouldn't require recompiling might be the MailLog plugin: http://wiki2.dovecot.org/Plugins/MailLog -- Best regards, Charles From Frank.Post at pallas.com Wed Jan 11 17:35:42 2012 From: Frank.Post at pallas.com (Frank Post) Date: Wed, 11 Jan 2012 16:35:42 +0100 Subject: [Dovecot] sieve under lmtp using wrong homedir ? Message-ID: Hi, I have a problem with dovecot-2.0.15. All is working well except lmtp. Sieve scripts are correctly saved under /var/vmail/test.com/test/sieve, but under lmtp sieve will use /var/vmail//testuser/ Uid testuser has mail=test at test.com configured in ldap. As I could see in the debug logs, there is a difference between the auth "master out" lines, but why ? working if managesieve stores scripts: Jan 11 15:02:42 auth: Debug: master in: REQUEST 3533701121 23001 1 7ec31d3c65cb934785e8eb0f33a182ae Jan 11 15:02:42 auth: Debug: ldap(test at test.com,10.234.201.4): result: mail(user)=test at test.com Jan 11 15:02:42 auth: Debug: master out: USER 3533701121 test at test.com home=/var/vmail/test.com/test uid=5000 gid=5000 Jan 11 15:02:42 managesieve(test at test.com): Debug: Effective uid=5000, gid=5000, home=/var/vmail/test.com/test but under lmtp not: Jan 11 14:39:53 auth: Debug: master in: USER 1 testuser service=lmtp lip=10.234.201.9 rip=10.234.201.4 Jan 11 14:39:53 auth: Debug: auth(testuser,10.234.201.4): username changed testuser -> test at test.com Jan 11 14:39:53 auth: Debug: ldap(test at test.com,10.234.201.4): result: mail(user)=test at test.com Jan 11 14:39:53 auth: Debug: master out: USER 1 test at test.com home=/var/vmail//testuser uid=5000 gid=5000 Jan 11 14:39:53 lmtp(8499): Debug: auth input: test at test.com home=/var/vmail//testuser uid=5000 gid=5000 Jan 11 14:39:53 lmtp(8499): Debug: changed username to test at test.com Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Effective uid=5000, gid=5000, home=/var/vmail//testuser Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Quota root: name=User quota backend=maildir args= Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Quota rule: root=User quota mailbox=* bytes=2147483648 messages=0 Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Quota warning: bytes=1932735283 (90%) messages=0 reverse=no command=quota-warning 90 test at test.com Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: maildir++: root=/var/vmail/test.com/test/Maildir, index=/var/dovecot/indexes/test.com/test, control=, inbox=/var/vmail/test.com/test/Maildir, alt= Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: trash: No trash setting - plugin disabled Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: sieve: include: sieve_global_dir is not set; it is currently not possible to include `:global' scripts.
Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: pla8CymRDU8zIQAAFrfQGQ: sieve: user's script path /var/vmail//testuser/.dovecot.sieve doesn't exist (using global script path in stead) Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: pla8CymRDU8zIQAAFrfQGQ: sieve: user has no valid personal script Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: pla8CymRDU8zIQAAFrfQGQ: sieve: no scripts to execute: reverting to default delivery. Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Namespace : Using permissions from /var/vmail/test.com/test/Maildir: mode=0700 gid=-1 Thanks for your help. Frank -------------- next part -------------- A non-text attachment was scrubbed... Name: dovecot-front.conf Type: application/octet-stream Size: 4126 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: dovecot-back.conf Type: application/octet-stream Size: 3394 bytes Desc: not available URL: From ath at b-one.net Wed Jan 11 17:57:46 2012 From: ath at b-one.net (Anders) Date: Wed, 11 Jan 2012 16:57:46 +0100 Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH Message-ID: <20120111155746.BD7BDDA030B2B@bmail06.one.com> On Tue, 2012-01-10 at 15:05 +0100, Anders wrote: > > the socket connection being closed without warning: > > UID SEARCH RETURN (SAVE COUNT) CHARSET UTF-8 (UNDELETED TEXT "foo") > > You mean it closes with above also? It works fine with me. No, that also works fine here :-) > > UID SEARCH RETURN (COUNT MIN) CHARSET UTF-8 () $ > > This was fixed in v2.0.17. Great, thanks! > > Then I have question about RFC5267 and the announcement of > > CONTEXT=SEARCH > > in the capabilities. I think this RFC is supported by dovecot, or maybe > > just part of the RFC is supported? > > All of it is supported, as far as I know. > > At least when I include the CONTEXT ADDTO or REMOVEFROM keywords I get > > an error, > These are server notifications. Clients aren't supposed to send them. Sorry, apparently I was a bit too fast there. ADDTO and REMOVEFROM should not be sent by a client, but I think that a client can send CONTEXT as a hint to the server, see http://tools.ietf.org/html/rfc5267#section-4.2 Thanks! Regards Anders From forumer at smartmobili.com Wed Jan 11 19:19:28 2012 From: forumer at smartmobili.com (forumer at smartmobili.com) Date: Wed, 11 Jan 2012 18:19:28 +0100 Subject: [Dovecot] Log imap commands In-Reply-To: <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com> References: <4F0D9A80.2020707@es2eng.com> <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com> Message-ID: <1f53a15cea08527fb79bd71037fa161f@smartmobili.com> Le 11.01.2012 16:09, forumer at smartmobili.com a écrit : > Le 11.01.2012 15:19, Willie Gillespie a écrit : >> On 1/11/2012 7:11 AM, forumer at smartmobili.com wrote: >>> I was wondering if dovecot allows logging of IMAP communications? >> >> You could look at Rawlog >> http://wiki.dovecot.org/Debugging/Rawlog >> http://wiki2.dovecot.org/Debugging/Rawlog > > Ok so I suppose I need to rebuild dovecot with the --with-rawlog > option but I am under ubuntu > and I was using a dovecot-2.x source package hosted here: > http://xi.rename-it.nl/debian/ > But now it seems to be dead, any idea where I could find a deb-src > for dovecot 2.x ? Actually I finally found that repository; it is still working. From adrian.minta at gmail.com Wed Jan 11 19:30:47 2012 From: adrian.minta at gmail.com (Adrian Minta) Date: Wed, 11 Jan 2012 19:30:47 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ?
In-Reply-To: <4F06F0E7.904@gmail.com> References: <4F06D5D9.20001@gmail.com> <4F06DFF5.40707@hardwarefreak.com> <4F06F0E7.904@gmail.com> Message-ID: <4F0DC747.4070505@gmail.com> Hello, I tested with "mail_location = whatever-you-have-now:INDEX=MEMORY" and it seems to help, but in the meantime I found another option, completely undocumented, that seems to do exactly what I wanted: protocol lda { mailbox_list_index_disable = yes } Does anyone know exactly what "mailbox_list_index_disable" does and if it is still available in the 2.0 and 2.1 branches? From kadafax at gmail.com Wed Jan 11 20:00:37 2012 From: kadafax at gmail.com (huret deffgok) Date: Wed, 11 Jan 2012 19:00:37 +0100 Subject: [Dovecot] Dovecot LDA and address extensions - folders flood Message-ID: Hi list, This post is slightly OT, I hope no one will take offense. I was following the wiki on using dovecot LDA with postfix and implemented, for our future mail server, the address extensions mechanism: an email sent to "validUser+foldername at mydomain.com" will have dovecot-lda automagically create and subscribe the "foldername" folder. With some basic scripting I was able to create hundreds of folders in a few seconds. So my question is how do you implement this great feature in a secure way so that funny random people out there can't flood your mailbox with gigatons of folders. Thanks, kfx From CMarcus at Media-Brokers.com Wed Jan 11 20:04:49 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Wed, 11 Jan 2012 13:04:49 -0500 Subject: [Dovecot] Dovecot LDA and address extensions - folders flood In-Reply-To: References: Message-ID: <4F0DCF41.7040204@Media-Brokers.com> On 2012-01-11 1:00 PM, huret deffgok wrote: > Hi list, > > This post is slightly OT, I hope no one will take offense. > I was following the wiki on using dovecot LDA with postfix and implemented, > for our future mail server, the address extensions mechanism: an email sent > to "validUser+foldername at mydomain.com" will have dovecot-lda automagically > create and subscribe the "foldername" folder. With some basic scripting I > was able to create hundreds of folders in a few seconds. So my question is > how do you implement this great feature in a secure way so that funny > random people out there can't flood your mailbox with gigatons of folders. Don't have it autocreate the folder... Seriously, there is no way to provide that functionality and have the system determine when it is *you* doing it or someone else... But I think it is a non-problem... how often do you receive plus-addressed spam?? -- Best regards, Charles From forumer at smartmobili.com Wed Jan 11 20:29:26 2012 From: forumer at smartmobili.com (forumer at smartmobili.com) Date: Wed, 11 Jan 2012 19:29:26 +0100 Subject: [Dovecot] Log imap commands In-Reply-To: <1f53a15cea08527fb79bd71037fa161f@smartmobili.com> References: <4F0D9A80.2020707@es2eng.com> <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com> <1f53a15cea08527fb79bd71037fa161f@smartmobili.com> Message-ID: I have added the following lines to the dovecot configuration (/etc/dovecot/conf.d/10-master.conf): ... service pop3 { # Max. number of POP3 processes (connections) #process_limit = 1024 } service postlogin { executable = script-login -d rawlog unix_listener postlogin { } } ... and I have created a dovecot.rawlog folder, as shown below: root at vf-12345:/home/vmail/smartmobili.com/webguest# ls -la ...
drwxrwxrwx 2 vmail vmail 4096 2012-01-11 19:11 dovecot.rawlog/ -rw------- 1 vmail vmail 19002 2011-12-27 13:01 dovecot-uidlist -rw------- 1 vmail vmail 8 2012-01-11 12:52 dovecot-uidvalidity ... And after that I have restarted dovecot and logged in with the webguest account but cannot see any logs. What am I doing wrong ? From geoffb at corp.sonic.net Wed Jan 11 20:53:30 2012 From: geoffb at corp.sonic.net (Geoffrey Broadwell) Date: Wed, 11 Jan 2012 10:53:30 -0800 Subject: [Dovecot] (no subject) Message-ID: <1326308010.2329.47.camel@rover> I'm working on a Dovecot plugin, but I'm pretty new to Dovecot, so there's a LOT to learn about the code base and it's pretty slow going. I've got a few things coded so far, but I want to make sure I'm headed down the right path and get some advice before I go too much further. A couple years ago, I wrote some code for our Courier implementation that sent a magic UDP packet to a small server each time a user modified their voicemail IMAP folder. That UDP server would then connect back to Courier via IMAP again and check whether the folder had any unread messages left in it. Finally, it would contact our phone switches to modify the state of the message waiting indicator (MWI) on that user's phone line appropriately. Fast forward to now, and we want to migrate wholesale to Dovecot 2.x. The servers are all in place, they've been well tested and burned in (with Dovecot 2.0.15 I believe), and the final migration is pretty much waiting on a port to Dovecot of the MWI update functionality. The good news is that I originally spent some effort to isolate the UDP packet generation and delivery, and I used purely standard portable code as per APUE2, so I think that chunk of code should be reusable with only minor modifications. I'm aware that internally Dovecot has its own memory, buffer, and string management functions, but it doesn't feel like a win to try to convert the existing code. It's small, completely isolated, and well reviewed -- I'd be more afraid of using the new (to me) Dovecot API incorrectly than I am that the existing code has bugs in buffer handling. By cribbing from other plugins and editing appropriately, I've also created the skeleton for my plugin: Makefile, docs, conf snippet, .spec (I'll be deploying the plugin as an RPM), and so on. I've got the beginnings of the .h and .c written, just enough to init and deinit the plugin by calling mail_storage_hooks_{add,remove}() with some stub hook functions. This all seems good so far; test builds are error-free and seem sane. So now the hard part is writing the piece that I can't just crib from elsewhere -- making sure that I hook every place in Dovecot that the user's voicemail folder can be changed in a way that would change it between having one or more unread messages, and not having any unread messages at all (or vice-versa, of course). At the same time, I want to minimize the performance impact to Dovecot (and the load on the UDP server) by only hooking the places I need to, filtering out as many false positives as I can without introducing massive complexity, and only pinging the UDP server when it's most likely to notice a change in the state of that user's voicemail server. It seems to me that I need to at least capture mailbox_allocated from the mail_storage hooks, for a couple reasons: 1. The state of the voicemail folder could be changed because the entire folder is created, destroyed, or renamed. 2. I want to only do further checks when I'm sure I'm looking at the voicemail folder. 
There's no reason to do work when the user is working with any other folder. So now the questions: Does all of the above seem sane so far? Do I need to hook mail_allocated as well, or will I be able to see any change I need to monitor just from the mailbox? Finally, I'm lost about what operations on the mailbox and the mails within it I need to check. Can anyone offer some advice (or doc pointers) on this? Thank you! -'f From geoffb at corp.sonic.net Wed Jan 11 20:57:11 2012 From: geoffb at corp.sonic.net (Geoffrey Broadwell) Date: Wed, 11 Jan 2012 10:57:11 -0800 Subject: [Dovecot] Need help with details for new Dovecot plugin In-Reply-To: <1326308010.2329.47.camel@rover> References: <1326308010.2329.47.camel@rover> Message-ID: <1326308231.2329.50.camel@rover> My sincere apologies for the subjectless email (my MUA should have caught that!); the above is the corrected subject line. -'f On Wed, 2012-01-11 at 10:53 -0800, Geoffrey Broadwell wrote: > I'm working on a Dovecot plugin, but I'm pretty new to Dovecot, so > there's a LOT to learn about the code base and it's pretty slow going. > I've got a few things coded so far, but I want to make sure I'm headed > down the right path and get some advice before I go too much further. > > A couple years ago, I wrote some code for our Courier implementation > that sent a magic UDP packet to a small server each time a user modified > their voicemail IMAP folder. That UDP server would then connect back to > Courier via IMAP again and check whether the folder had any unread > messages left in it. Finally, it would contact our phone switches to > modify the state of the message waiting indicator (MWI) on that user's > phone line appropriately. > > Fast forward to now, and we want to migrate wholesale to Dovecot 2.x. > The servers are all in place, they've been well tested and burned in > (with Dovecot 2.0.15 I believe), and the final migration is pretty much > waiting on a port to Dovecot of the MWI update functionality. > > The good news is that I originally spent some effort to isolate the UDP > packet generation and delivery, and I used purely standard portable code > as per APUE2, so I think that chunk of code should be reusable with only > minor modifications. I'm aware that internally Dovecot has its own > memory, buffer, and string management functions, but it doesn't feel > like a win to try to convert the existing code. It's small, completely > isolated, and well reviewed -- I'd be more afraid of using the new (to > me) Dovecot API incorrectly than I am that the existing code has bugs in > buffer handling. > > By cribbing from other plugins and editing appropriately, I've also > created the skeleton for my plugin: Makefile, docs, conf snippet, .spec > (I'll be deploying the plugin as an RPM), and so on. I've got the > beginnings of the .h and .c written, just enough to init and deinit the > plugin by calling mail_storage_hooks_{add,remove}() with some stub hook > functions. This all seems good so far; test builds are error-free and > seem sane. > > So now the hard part is writing the piece that I can't just crib from > elsewhere -- making sure that I hook every place in Dovecot that the > user's voicemail folder can be changed in a way that would change it > between having one or more unread messages, and not having any unread > messages at all (or vice-versa, of course). 
At the same time, I want to > minimize the performance impact to Dovecot (and the load on the UDP > server) by only hooking the places I need to, filtering out as many > false positives as I can without introducing massive complexity, and > only pinging the UDP server when it's most likely to notice a change in > the state of that user's voicemail server. > > It seems to me that I need to at least capture mailbox_allocated from > the mail_storage hooks, for a couple reasons: > > 1. The state of the voicemail folder could be changed because > the entire folder is created, destroyed, or renamed. > > 2. I want to only do further checks when I'm sure I'm looking at > the voicemail folder. There's no reason to do work when the > user is working with any other folder. > > So now the questions: > > Does all of the above seem sane so far? > > Do I need to hook mail_allocated as well, or will I be able to see any > change I need to monitor just from the mailbox? > > Finally, I'm lost about what operations on the mailbox and the mails > within it I need to check. Can anyone offer some advice (or doc > pointers) on this? > > Thank you! > > > -'f > > From nicolas.kowalski at gmail.com Wed Jan 11 21:01:18 2012 From: nicolas.kowalski at gmail.com (Nicolas KOWALSKI) Date: Wed, 11 Jan 2012 20:01:18 +0100 Subject: [Dovecot] proxy, managesieve and ssl? Message-ID: <20120111190118.GD14492@petole.demisel.net> Hello, On a dovecot 2.0.14 proxy, I found that proxying managesieve works well when using 'starttls' option in pass_attrs, but does not work when using 'ssl' option. The backend server is also dovecot 2.0.14; when using the ssl option, it reports "no auth attempts" in the logs about managesieve-login, and meanwhile the MUA, Thunderbird with sieve plugin, reports [TRYLATER] account is temporary disabled; no problem when using starttls option on the proxy, all works well. I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to backend, and have Managesieve still working. Is this supported? Thanks, -- Nicolas From kadafax at gmail.com Wed Jan 11 21:05:43 2012 From: kadafax at gmail.com (huret deffgok) Date: Wed, 11 Jan 2012 20:05:43 +0100 Subject: [Dovecot] Dovecot LDA and address extensions - folders flood In-Reply-To: <4F0DCF41.7040204@Media-Brokers.com> References: <4F0DCF41.7040204@Media-Brokers.com> Message-ID: On Wed, Jan 11, 2012 at 7:04 PM, Charles Marcus wrote: > On 2012-01-11 1:00 PM, huret deffgok wrote: >> Hi list, >> >> This post is slightly OT, I hope no one will take offense. >> I was following the wiki on using dovecot LDA with postfix and >> implemented, >> for our future mail server, the address extensions mechanism: an email >> sent >> to "validUser+foldername@**mydomain.com" >> will have dovecot-lda automagically >> create and subscribe the "foldername" folder. With some basic scripting I >> was able to create hundreds of folders in a few seconds. So my question is >> how do you implement this great feature in a secure way so that funny >> random people out there can't flood your mailbox with gigatons of folders. >> > > Don't have it autocreate the folder... > > Seriously, there is no way to provide that functionality and have the > system determine when it is *you* doing it or someone else... > > But I think it is a non-problem... how often do you receive plus-addressed > spam?? None so far. But I was thinking about something like malice rather than spamming. For me it's an open door to DOS the service.
What about a functionality that would throttle the rate of creation of folders from one IP address, with a ban in case of abuse ? Or maybe I should look at the file system level. From CMarcus at Media-Brokers.com Wed Jan 11 21:25:24 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Wed, 11 Jan 2012 14:25:24 -0500 Subject: [Dovecot] Dovecot LDA and address extensions - folders flood In-Reply-To: References: <4F0DCF41.7040204@Media-Brokers.com> Message-ID: <4F0DE224.8000900@Media-Brokers.com> On 2012-01-11 2:05 PM, huret deffgok wrote: > On Wed, Jan 11, 2012 at 7:04 PM, Charles Marcus wrote: >> On 2012-01-11 1:00 PM, huret deffgok wrote: >>> This post is slightly OT, I hope no one will take offense. I was >>> following the wiki on using dovecot LDA with postfix and >>> implemented, for our future mail server, the address extensions >>> mechanism: an email sent to >>> "validUser+foldername@**mydomain.com" >>> will have dovecot-lda automagically create and subscribe the >>> "foldername" folder. With some basic scripting I was able to >>> create hundreds of folders in a few seconds. So my question is >>> how do you implement this great feature in a secure way so that >>> funny random people out there can't flood your mailbox with >>> gigatons of folders. >> Don't have it autocreate the folder... >> >> Seriously, there is no way to provide that functionality and have the >> system determine when it is *you* doing it or someone else... >> >> But I think it is a non-problem... how often do you receive plus-addressed >> spam?? > None so far. But I was thinking about something like malice rather than > spamming. For me it's an open door to DOS the service. > What about a functionality that would throttle the rate of creation of > folders from one IP address, with a ban in case of abuse ? Or maybe I should > look at the file system level. Again - and no offense - but I think you are tilting at windmills... If you get hit by this, you will not only have thousands or millions of folders, you'll have one email for each folder. So, the question is, how do you prevent being flooded with spam... and the answer is, decent anti-spam measures. I prefer ASSP, but I just wish you could use it as an after-queue content filter (for its most excellent content filtering and more importantly quarantine management/block reporting features/functionality). That said, postfix, with sane anti-spam measures, along with the most excellent new postscreen (available in 2.8+ I believe) is good enough to stop most anything like this that you may be worried about. Like I said, set up postfix (or your smtp server) right, and this is a non-issue. -- Best regards, Charles From tss at iki.fi Wed Jan 11 22:34:33 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 11 Jan 2012 22:34:33 +0200 Subject: [Dovecot] proxy, managesieve and ssl? In-Reply-To: <20120111190118.GD14492@petole.demisel.net> References: <20120111190118.GD14492@petole.demisel.net> Message-ID: <95F23E50-BD64-4844-8838-04E5BB9033A7@iki.fi> On 11.1.2012, at 21.01, Nicolas KOWALSKI wrote: > On a dovecot 2.0.14 proxy, I found that proxying managesieve works well > when using 'starttls' option in pass_attrs, but does not work when using > 'ssl' option.
The backend server is also dovecot 2.0.14; when using the > ssl option, it reports "no auth attempts" in the logs about > managesieve-login, and meanwhile the MUA, Thunderbird with sieve plugin, > reports [TRYLATER] account is temporary disabled; no problem when using > starttls option on the proxy, all works well. > > I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to > backend, and have Managesieve still working. Is this supported? You'll need to kludge it a little bit. I guess you're using LDAP, since you mentioned pass_attrs? protocol sieve { passdb { args = ldap-with-starttls.conf } } protocol !sieve { passdb { args = ldap-with-ssl.conf } } From stephan at rename-it.nl Wed Jan 11 23:06:51 2012 From: stephan at rename-it.nl (Stephan Bosch) Date: Wed, 11 Jan 2012 22:06:51 +0100 Subject: [Dovecot] proxy, managesieve and ssl? In-Reply-To: <20120111190118.GD14492@petole.demisel.net> References: <20120111190118.GD14492@petole.demisel.net> Message-ID: <4F0DF9EB.50605@rename-it.nl> On 1/11/2012 8:01 PM, Nicolas KOWALSKI wrote: > Hello, > > On a dovecot 2.0.14 proxy, I found that proxying managesieve works well > when using 'starttls' option in pass_attrs, but does not work when using > 'ssl' option. The backend server is also dovecot 2.0.14; when using the > ssl option, it reports "no auth attempts" in the logs about > managesieve-login, and meanwhile the MUA, Thunderbird with sieve plugin, > reports [TRYLATER] account is temporary disabled; no problem when using > starttls option on the proxy, all works well. > > I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to > backend, and have Managesieve still working. Is this supported? Although there is no such thing as a standard sieveS protocol, you can make Dovecot v2.x talk SSL from the start at a ManageSieve socket. Since normally people will not use something like this, it is not available by default. In conf.d/20-managesieve.conf you can adjust the service definition of ManageSieve as follows: service managesieve-login { inet_listener sieve { port = 4190 } inet_listener sieves { port = 5190 ssl = yes } } This starts the normal protocol on port 4190 and the direct-SSL version on an alternative port. You can also put the ssl=yes directly in the port 4190 listener, as long as no client will have to connect to this server directly (no client will support it). Regards, Stephan. From michael.abbott at apple.com Thu Jan 12 01:09:17 2012 From: michael.abbott at apple.com (Mike Abbott) Date: Wed, 11 Jan 2012 17:09:17 -0600 Subject: [Dovecot] MASTER_AUTH_MAX_DATA_SIZE Message-ID: <1BCAD28D-8120-45C9-BAA2-B6597C34545A@apple.com> In 2.0.17 you increased LOGIN_MAX_INBUF_SIZE from 1024 to 4096. Should you also have increased MASTER_AUTH_MAX_DATA_SIZE from (1024*2) to (4096*2)? /* This should be kept in sync with LOGIN_MAX_INBUF_SIZE. Multiply it by two to make sure there's space to transfer the command tag */ From dlie76 at yahoo.com.au Thu Jan 12 04:30:49 2012 From: dlie76 at yahoo.com.au (Daminto Lie) Date: Wed, 11 Jan 2012 18:30:49 -0800 (PST) Subject: [Dovecot] could not start dovecot - unknown section type Message-ID: <1326335449.87714.YahooMailNeo@web113411.mail.gq1.yahoo.com> Hi, I was wondering if I could get some help with the following error when trying to start dovecot service on Ubuntu Server 10.04. The error message is as follows: * Starting IMAP/POP3 mail server dovecot
Error: Error in configuration file /usr/local/etc/dovecot/dovecot.conf line 15: Unknown section type Fatal: Invalid configuration in /usr/local/etc/dovecot/dovecot.conf [fail] I have just managed to upgrade it from 1.2.19 to 2.0.17. Then, I tried to start dovecot by running the command $ sudo /etc/init.d/dovecot start And I received the above message. Below is the configuration for dovecot.conf # 2.0.17 (684381041dc4+): /usr/local/etc/dovecot/dovecot.conf # OS: Linux 2.6.32-37-generic-pae i686 Ubuntu 10.04.3 LTS ext4 auth_debug = yes auth_debug_passwords = yes auth_mechanisms = plain login auth_username_format = %Lu auth_verbose = yes base_dir = /var/run/dovecot disable_plaintext_auth = no first_valid_uid = 1001 last_valid_uid = 2000 log_timestamp = "%Y-%m-%d %H:%M:%S " mail_location = maildir:/home/vmail/%u/Maildir mail_privileged_group = mail passdb { driver = pam } passdb { args = /usr/local/etc/dovecot/dovecot-ldap.conf driver = ldap } plugin { quota = maildir quota_rule = *:storage=3GB quota_rule2 = Trash:storage=20%% quota_rule3 = Spam:storage=10%% quota_warning = storage=95%% /usr/local/bin/quota-warning.sh 95 quota_warning2 = storage=80%% /usr/local/bin/quota-warning.sh 80 } protocols = imap service auth { unix_listener /var/run/dovecot-auth-master { group = vmail mode = 0660 user = vmail } unix_listener /var/spool/postfix/private/auth { group = mail mode = 0660 user = postfix } user = root } service imap-login { chroot = login executable = /usr/lib/dovecot/imap-login inet_listener imap { address = * port = 143 } user = dovecot } service imap { executable = /usr/lib/dovecot/imap } service pop3-login { chroot = login user = dovecot } ssl = no userdb { driver = passwd } userdb { args = uid=1001 gid=1001 home=/home/vmail/%u allow_all_users=yes driver = static } verbose_proctitle = yes protocol imap { imap_client_workarounds = delay-newmail mail_plugins = quota imap_quota } protocol pop3 { pop3_uidl_format = %08Xu%08Xv } protocol lda { auth_socket_path = /var/run/dovecot-auth-master mail_plugins = quota postmaster_address = postmaster at example.com rejection_reason = Your message to <%t> was automatically rejected:%n%r sendmail_path = /usr/lib/sendmail } Any help would be greatly appreciated. Thank you From rob0 at gmx.co.uk Thu Jan 12 05:12:38 2012 From: rob0 at gmx.co.uk (/dev/rob0) Date: Wed, 11 Jan 2012 21:12:38 -0600 Subject: [Dovecot] could not start dovecot - unknown section type In-Reply-To: <1326335449.87714.YahooMailNeo@web113411.mail.gq1.yahoo.com> References: <1326335449.87714.YahooMailNeo@web113411.mail.gq1.yahoo.com> Message-ID: <201201112112.39736@harrier.slackbuilds.org> On Wednesday 11 January 2012 20:30:49 Daminto Lie wrote: > I was wondering if I could get some help with the following error > when trying to start dovecot service on Ubuntu Server 10.04. > > The error message is as follows > > * Starting IMAP/POP3 mail server > dovecot > > Error: Error in configuration file > /usr/local/etc/dovecot/dovecot.conf line 15: Unknown section type > Fatal: Invalid configuration in > /usr/local/etc/dovecot/dovecot.conf [fail] > > > I have just managed to upgrade it from 1.2.19 to 2.0.17. Then, I > tried to start dovecot by running the command > > > $ sudo /etc/init.d/dovecot start > > And I received the above message. It would seem that you did not upgrade the init script, and the old one is reading the config file and expecting a different format.
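A quick way to separate the two failure modes (assuming the default source-install prefix, so the 2.0.17 binaries live under /usr/local) is to bypass the init script entirely:

# parse the config with the new binary itself; a clean run prints the
# "# 2.0.17 ..." banner instead of an "Unknown section type" error
/usr/local/bin/doveconf -n -c /usr/local/etc/dovecot/dovecot.conf

# start dovecot directly, without the old init script
/usr/local/sbin/dovecot

If both of these work, the configuration itself is fine and only the leftover 1.x init script needs replacing.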
You used source to upgrade, which means you did not "upgrade" in the conventional sense -- you installed new software. Either fix the script or run without it: dovecot start See: http://wiki2.dovecot.org/CompilingSource http://wiki2.dovecot.org/RunningDovecot > Below is the configuration for dovecot.conf snip -- http://rob0.nodns4.us/ -- system administration and consulting Offlist GMX mail is seen only if "/dev/rob0" is in the Subject: From dlie76 at yahoo.com.au Thu Jan 12 07:19:42 2012 From: dlie76 at yahoo.com.au (Daminto Lie) Date: Wed, 11 Jan 2012 21:19:42 -0800 (PST) Subject: [Dovecot] could not start dovecot - unknown section type In-Reply-To: <201201112112.39736@harrier.slackbuilds.org> References: <1326335449.87714.YahooMailNeo@web113411.mail.gq1.yahoo.com> <201201112112.39736@harrier.slackbuilds.org> Message-ID: <1326345582.91512.YahooMailNeo@web113409.mail.gq1.yahoo.com> Thank you for your reply. Yes, you're right. I should not have called it an upgrade since I actually removed dovecot 1.2.9 completely and installed dovecot 2.0.17 from the source. Later, I mucked up the init file because I was still using the one from the old version. I'm sorry about this. I remember I tried to upgrade by running doveconf -n -c dovecot.conf > dovecot-2.conf, but I got an error message saying doveconf: command not found. Then, I tried to google it to find solutions but to no avail. This is why I decided to install it from scratch. Thank you for your help ________________________________ From: /dev/rob0 To: dovecot at dovecot.org Sent: Thursday, 12 January 2012 2:12 PM Subject: Re: [Dovecot] could not start dovecot - unknown section type On Wednesday 11 January 2012 20:30:49 Daminto Lie wrote: > I was wondering if I could get some help with the following error > when trying to start dovecot service on Ubuntu Server 10.04. > > The error message is as follows > > * Starting IMAP/POP3 mail server > dovecot > > Error: Error in configuration file > /usr/local/etc/dovecot/dovecot.conf line 15: Unknown section type > Fatal: Invalid configuration in > /usr/local/etc/dovecot/dovecot.conf [fail] > > > I have just managed to upgrade it from 1.2.19 to 2.0.17. Then, I > tried to start dovecot by running the command > > > $ sudo /etc/init.d/dovecot start > > And I received the above message. It would seem that you did not upgrade the init script, and the old one is reading the config file and expecting a different format. You used source to upgrade, which means you did not "upgrade" in the conventional sense -- you installed new software. Either fix the script or run without it: dovecot start See: http://wiki2.dovecot.org/CompilingSource http://wiki2.dovecot.org/RunningDovecot > Below is the configuration for dovecot.conf snip -- http://rob0.nodns4.us/ -- system administration and consulting Offlist GMX mail is seen only if "/dev/rob0" is in the Subject: From nicolas.kowalski at gmail.com Thu Jan 12 10:47:07 2012 From: nicolas.kowalski at gmail.com (Nicolas KOWALSKI) Date: Thu, 12 Jan 2012 09:47:07 +0100 Subject: [Dovecot] proxy, managesieve and ssl?
In-Reply-To: <95F23E50-BD64-4844-8838-04E5BB9033A7@iki.fi> References: <20120111190118.GD14492@petole.demisel.net> <95F23E50-BD64-4844-8838-04E5BB9033A7@iki.fi> Message-ID: <20120112084707.GE14492@petole.demisel.net> On Wed, Jan 11, 2012 at 10:34:33PM +0200, Timo Sirainen wrote: > On 11.1.2012, at 21.01, Nicolas KOWALSKI wrote: > > > I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to > > backend, and have Managesieve still working. Is this supported? > > You'll need to kludge it a little bit. I guess you're using LDAP, since you mentioned pass_attrs? Yes, I am using LDAP. > protocol sieve { > passdb { > args = ldap-with-starttls.conf > } > } When just adding the above, it works perfectly, Thanks! > protocol !sieve { > passdb { > args = ldap-with-ssl.conf > } > } Is this really needed? It looks like it works without it. When using it, I get this error: Jan 12 09:40:59 imap1 dovecot: auth: Fatal: No passdbs specified in configuration file. PLAIN mechanism needs one Jan 12 09:40:59 imap1 dovecot: master: Error: service(auth): command startup failed, throttling -- Nicolas From nicolas.kowalski at gmail.com Thu Jan 12 10:58:13 2012 From: nicolas.kowalski at gmail.com (Nicolas KOWALSKI) Date: Thu, 12 Jan 2012 09:58:13 +0100 Subject: [Dovecot] proxy, managesieve and ssl? In-Reply-To: <4F0DF9EB.50605@rename-it.nl> References: <20120111190118.GD14492@petole.demisel.net> <4F0DF9EB.50605@rename-it.nl> Message-ID: <20120112085813.GF14492@petole.demisel.net> On Wed, Jan 11, 2012 at 10:06:51PM +0100, Stephan Bosch wrote: > On 1/11/2012 8:01 PM, Nicolas KOWALSKI wrote: > > > >I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to > >backend, and have Managesieve still working. Is this supported? > > Although there is no such thing as a standard sieveS protocol, you > can make Dovecot v2.x talk SSL from the start at a ManageSieve > socket. Since normally people will not use something like this, it > is not available by default. > > In conf.d/20-managesieve.conf you can adjust the service definition > of ManageSieve as follows: > > service managesieve-login { > inet_listener sieve { > port = 4190 > } > > inet_listener sieves { > port = 5190 > ssl = yes > } > } This works well, when using (as Timo wrote) a different ldap pass_attrs for sieve, specifying this specific 5190 port. Thanks for your suggestion. > This starts the normal protocol on port 4190 and the direct-SSL > version on an alternative port. You can also put the ssl=yes > directly in the port 4190 listener, as long as no client will have > to connect to this server directly (no client will support it). Well, as this is non-standard, I guess I will not use it. I much prefer to stick with what has been RFCed. -- Nicolas From kjonca at o2.pl Thu Jan 12 12:39:06 2012 From: kjonca at o2.pl (Kamil Jońca) Date: Thu, 12 Jan 2012 11:39:06 +0100 Subject: [Dovecot] compressed mboxes very slow References: <87iptnoans.fsf@alfa.kjonca> Message-ID: <8739blw6gl.fsf@alfa.kjonca> kjonca at o2.pl (Kamil Jońca) writes: > I have some archive mails in gzipped mboxes. I could use them with > dovecot 1.x without problems. > But recently I have installed dovecot 2.0.12, and they are slow. very > slow.
Recently I had to read some compressed mboxes again, and no progress :( I took the 2.0.17 sources and put some i_debug ("#kjonca["__FILE__",%d,%s] %d", __LINE__,__func__,...some parameters ...); lines into istream-bzlib.c, istream-raw-mbox.c and istream-limit.c and found that: in istream-limit.c, in the function around lines 40-45: --8<---------------cut here---------------start------------->8--- i_stream_seek(stream->parent, lstream->istream.parent_start_offset + stream->istream.v_offset); stream->pos -= stream->skip; stream->skip = 0; --8<---------------cut here---------------end--------------->8--- seeks the stream (calling i_stream_raw_mbox_seek in file istream-raw-mbox.c) and then (line 50) --8<---------------cut here---------------start------------->8--- if ((ret = i_stream_read(stream->parent)) == -2) return -2; --8<---------------cut here---------------end--------------->8--- tries to read some data earlier in the stream, and with compressed mboxes this causes a reread of the file from the beginning. Then I commented out (just for testing) lines 40-45 from istream-limit.c and a bzipped mbox can be opened in reasonable time. (Moreover, I can read some randomly picked mails without problems.) Unfortunately, the meaning of the fields in the istream* structures (especially skip, pos and offset) is too unclear to me to write proper code myself. KJ -- http://sporothrix.wordpress.com/2011/01/16/usa-sie-krztusza-kto-nastepny/ If someone has bad luck, they'll break a tooth during oral sex (S. Sokół) From info_postfix at gmx.ch Thu Jan 12 12:00:52 2012 From: info_postfix at gmx.ch (maximus12) Date: Thu, 12 Jan 2012 02:00:52 -0800 (PST) Subject: [Dovecot] Server Time 45min ahead Message-ID: <33126760.post@talk.nabble.com> Hi, I have the issue that my server clock is 45min fast. Therefore I would like to install ntp. I read a lot on the internet about dovecot and ntp. My issue is that 45 min is a lot and I would like to minimize mail server downtimes as much as possible. I don't care if the time corrections with ntp take more than a few months. Does anyone know how I should proceed (e.g. how I have to set up ntp -> no time jump during installation and afterwards). Thanks a lot for your help! -- View this message in context: http://old.nabble.com/Server-Time-45min-ahead-tp33126760p33126760.html Sent from the Dovecot mailing list archive at Nabble.com. From Ralf.Hildebrandt at charite.de Thu Jan 12 13:43:50 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Thu, 12 Jan 2012 12:43:50 +0100 Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <33126760.post@talk.nabble.com> References: <33126760.post@talk.nabble.com> Message-ID: <20120112114350.GQ1341@charite.de> * maximus12 : > > Hi, > > I have the issue that my server clock is 45min fast. Therefore I would like > to install ntp. > I read a lot on the internet about dovecot and ntp. > My issue is that 45 min is a lot and I would like to minimize mail server > downtimes as much as possible. > I don't care if the time corrections with ntp take more than a few months. > > Does anyone know how I should proceed (e.g. how I have to set up ntp -> no > time jump during installation and afterwards). stop dovecot & postfix ntpdate timeserver start dovecot & postfix start ntpd the time jump is only really critical when the programs are running. -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel.
+49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From info_postfix at gmx.ch Thu Jan 12 13:49:15 2012 From: info_postfix at gmx.ch (maximus12) Date: Thu, 12 Jan 2012 03:49:15 -0800 (PST) Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <20120112114350.GQ1341@charite.de> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> Message-ID: <33127262.post@talk.nabble.com> Thanks a lot for your quick response. I thought that dovecot wouldn't start until the server time reaches the time before the "time jump". From your point of view, dovecot will start normally if I adjust the time while dovecot is stopped? Thanks a lot for the clarification. Ralf Hildebrandt wrote: > > * maximus12 : >> >> Hi, >> >> I have the issue that my server clock is 45min fast. Therefore I would >> like >> to install ntp. >> I read a lot on the internet about dovecot and ntp. >> My issue is that 45 min is a lot and I would like to minimize mail server >> downtimes as much as possible. >> I don't care if the time corrections with ntp take more than a few >> months. >> >> Does anyone know how I should proceed (e.g. how I have to set up ntp -> no >> time jump during installation and afterwards). > > stop dovecot & postfix > ntpdate timeserver > start dovecot & postfix > start ntpd > > the time jump is only really critical when the programs are running. > > -- > Ralf Hildebrandt > Geschäftsbereich IT | Abteilung Netzwerk > Charité - Universitätsmedizin Berlin > Campus Benjamin Franklin > Hindenburgdamm 30 | D-12203 Berlin > Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 > ralf.hildebrandt at charite.de | http://www.charite.de > > > -- View this message in context: http://old.nabble.com/Server-Time-45min-ahead-tp33127262p33127262.html Sent from the Dovecot mailing list archive at Nabble.com. From Ralf.Hildebrandt at charite.de Thu Jan 12 13:57:05 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Thu, 12 Jan 2012 12:57:05 +0100 Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <33127262.post@talk.nabble.com> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> <33127262.post@talk.nabble.com> Message-ID: <20120112115705.GS1341@charite.de> * maximus12 : > > Thanks a lot for your quick response. > > I thought that dovecot wouldn't start until the server time reaches the time > before the "time jump". > > From your point of view, dovecot will start normally if I adjust the time > while dovecot is stopped? Don't take my word for it, but I think the behaviour is this: * dovecot is running, time jumps backwards -> dovecot exits * dovecot is not running, time jumps backwards -> dovecot can be started It also depends on your dovecot version, see http://wiki.dovecot.org/TimeMovedBackwards -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel.
+49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From Harlan.Stenn at pfcs.com Thu Jan 12 14:47:31 2012 From: Harlan.Stenn at pfcs.com (Harlan Stenn) Date: Thu, 12 Jan 2012 07:47:31 -0500 Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <20120112114350.GQ1341@charite.de> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> Message-ID: <20120112124731.42C752842A@gwc.pfcs.com> Ralf wrote: > stop dovecot & postfix > ntpdate timeserver > start dovecot & postfix > start ntpd Speaking as stenn at ntp.org, I recommend: - run 'ntpd -gN' as early as possible in the startup sequence (no need for ntpdate) then as late as possible in the startup sequence, run: - ntp-wait -v -s 1 ; start dovecot and postfix (and database servers) H From moseleymark at gmail.com Thu Jan 12 20:32:16 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Thu, 12 Jan 2012 10:32:16 -0800 Subject: [Dovecot] moving mail out of alt storage In-Reply-To: <87obylafsw.fsf_-_@algae.riseup.net> References: <87sjnya3z5.fsf@algae.riseup.net> <1316077133.12936.18.camel@hurina> <87obylafsw.fsf_-_@algae.riseup.net> Message-ID: On Thu, Sep 15, 2011 at 10:14 AM, Micah Anderson wrote: > Timo Sirainen writes: > >> On Wed, 2011-09-14 at 23:17 -0400, Micah Anderson wrote: >>> I moved some mail into the alt storage: >>> >>> doveadm altmove -u johnd at example.com seen savedbefore 1w >>> >>> and now I want to move it back to the regular INBOX, but I can't see how >>> I can do that with either 'altmove' or 'mailbox move'. >> >> Is this sdbox or mdbox? With sdbox you could simply "mv" the files. Or >> apply patch: http://hg.dovecot.org/dovecot-2.0/rev/1910c76a6cc9 > > This is mdbox, which is why I am not sure how to proceed, because I am > used to individual files as with maildir. > > micah > I'm curious about this too. Is moving the m.# file out of the ALT path's storage/ directory into the non-ALT storage/ directory sufficient? Or will that cause odd issues? From tss at iki.fi Thu Jan 12 22:20:06 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 12 Jan 2012 22:20:06 +0200 Subject: [Dovecot] MASTER_AUTH_MAX_DATA_SIZE In-Reply-To: <1BCAD28D-8120-45C9-BAA2-B6597C34545A@apple.com> References: <1BCAD28D-8120-45C9-BAA2-B6597C34545A@apple.com> Message-ID: <09EF3E7A-15A2-45EE-91BD-6EEFD1FD8049@iki.fi> On 12.1.2012, at 1.09, Mike Abbott wrote: > In 2.0.17 you increased LOGIN_MAX_INBUF_SIZE from 1024 to 4096. > Should you also have increased MASTER_AUTH_MAX_DATA_SIZE from (1024*2) to (4096*2)? > /* This should be kept in sync with LOGIN_MAX_INBUF_SIZE. Multiply it by two > to make sure there's space to transfer the command tag */ Well, yes.. Although I'd rather not do that. 1. The command tag length needs to be restricted to something reasonable, maybe 100 chars, so the buffer won't have to be multiplied by 2, just increased by 100 (+1 for NUL). 2. Maybe I can change the LOGIN_MAX_INBUF_SIZE back to its original size and change the AUTHENTICATE command handling to read the SASL initial response into a separate buffer. I'll try doing those next week.
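To make the sizing in point 1 concrete, the relationship would end up something like the following. This is only an illustration of the arithmetic Timo describes, not actual Dovecot source; LOGIN_MAX_TAG_SIZE is a made-up name:

/* illustration only; LOGIN_MAX_TAG_SIZE is hypothetical */
#define LOGIN_MAX_INBUF_SIZE 4096 /* as of 2.0.17 */
#define LOGIN_MAX_TAG_SIZE 100 /* proposed cap on the IMAP command tag */
/* room for the auth data plus the tag and a NUL,
   instead of doubling the whole input buffer */
#define MASTER_AUTH_MAX_DATA_SIZE (LOGIN_MAX_INBUF_SIZE + LOGIN_MAX_TAG_SIZE + 1)

With the tag capped, MASTER_AUTH_MAX_DATA_SIZE grows to 4197 bytes rather than the 8192 that straight doubling would give.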
From mcbdovecot at robuust.nl Fri Jan 13 01:10:59 2012 From: mcbdovecot at robuust.nl (Maarten Bezemer) Date: Fri, 13 Jan 2012 00:10:59 +0100 (CET) Subject: [Dovecot] Need help with details for new Dovecot plugin In-Reply-To: <1326308231.2329.50.camel@rover> References: <1326308010.2329.47.camel@rover> <1326308231.2329.50.camel@rover> Message-ID: >> A couple years ago, I wrote some code for our Courier implementation >> that sent a magic UDP packet to a small server each time a user modified >> their voicemail IMAP folder. That UDP server would then connect back to >> Courier via IMAP again and check whether the folder had any unread >> messages left in it. Finally, it would contact our phone switches to >> modify the state of the message waiting indicator (MWI) on that user's >> phone line appropriately. Using a Dovecot plugin for this would require mail delivery to go through Dovecot as well as all mail access. So, no postfix or exim or whatever doing mail delivery by itself (mbox/maildir), and no MUAs accessing mail locally. With courier, you probably had everything going through courier, but with Dovecot, that need not always be the case. So, using a dovecot-plugin for this may not even catch everything. Of course I don't know anything about the details of the project (number of users, requirements for speed of MWI updates, mail storage type, etc.) but if it's not a very large setup and mail storage is mbox or maildir, I'd probably go for cron-based external monitoring using find and stuff like that. Maybe even with login scripting for extra triggering. HTH... -- Maarten From tss at iki.fi Fri Jan 13 01:17:47 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 13 Jan 2012 01:17:47 +0200 Subject: [Dovecot] (no subject) In-Reply-To: <1326308010.2329.47.camel@rover> References: <1326308010.2329.47.camel@rover> Message-ID: On 11.1.2012, at 20.53, Geoffrey Broadwell wrote: > So now the hard part is writing the piece that I can't just crib from > elsewhere -- making sure that I hook every place in Dovecot that the > user's voicemail folder can be changed in a way that would change it > between having one or more unread messages, and not having any unread > messages at all (or vice-versa, of course). At the same time, I want to > minimize the performance impact to Dovecot (and the load on the UDP > server) by only hooking the places I need to, filtering out as many > false positives as I can without introducing massive complexity, and > only pinging the UDP server when it's most likely to notice a change in > the state of that user's voicemail server. I think notify plugin would help you do this the easiest way. See mail_log plugin for an example of how to use it. From noel.butler at ausics.net Fri Jan 13 03:15:13 2012 From: noel.butler at ausics.net (Noel Butler) Date: Fri, 13 Jan 2012 11:15:13 +1000 Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <20120112124731.42C752842A@gwc.pfcs.com> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> <20120112124731.42C752842A@gwc.pfcs.com> Message-ID: <1326417313.5785.3.camel@tardis> On Thu, 2012-01-12 at 07:47 -0500, Harlan Stenn wrote: > > then as late as possible in the startup sequence, run: > > - ntp-wait -v -s 1 ; start dovecot and postfix (and database servers) I'll +1 that advice, I introduced ntp-wait sometime ago when dovecot kept bitching, not a single glitch since. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: This is a digitally signed message part URL: From user+dovecot at localhost.localdomain.org Fri Jan 13 03:25:41 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Fri, 13 Jan 2012 02:25:41 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): doveadm mailbox list withholds child mailboxes Message-ID: <4F0F8815.8070609@localhost.localdomain.org> Probably I've overlooked something. But a quick search in `hg log -k doveadm` didn't show appropriate information. doveadm mailbox list -u user at example.com doesn't show child mailboxes. mailbox = N/A || \*: Sent Trash INBOX Drafts Junk-E-Mail Supplier mailbox = Supplier*: Supplier mailbox = Supplier/*: Supplier/Dell Supplier/VMware Supplier/? The same problem exists in `doveadm mailbox status` Regards, Pascal -- The trapper recommends today: defaced.1201301 at localdomain.org From moseleymark at gmail.com Fri Jan 13 04:00:08 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Thu, 12 Jan 2012 18:00:08 -0800 Subject: [Dovecot] MySQL server has gone away Message-ID: I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL server has gone away" errors, despite having multiple hosts defined in my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing the same thing with 2.0.16 on Debian Squeeze 64-bit. E.g.: Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying: MySQL server has gone away Our mail mysql servers are busy enough that wait_timeout is set to a whopping 30 seconds. On my regular boxes, I see a good deal of these in the logs. I've been doing a lot of mucking with doveadm/dsync (working on maildir->mdbox migration finally, yay!) on test boxes (same dovecot package & version) and when I get this error, despite the log saying it's retrying, it doesn't seem to be. Instead I get: dsync(root): Error: user ...: Auth USER lookup failed dsync(root): Fatal: User lookup failed: Internal error occurred. Refer to server log for more information. Watching tcpdump at the same time, it looks like it's going through some of the mysql servers, but all of them have by now disconnected and are in CLOSE_WAIT. Here's an (edited) example after doing a dsync that completes without errors, with tcpdump running in the background: # sleep 30; netstat -ant | grep 3306; dsync -C^ -u mailbox at test.com backup mdbox:~/mdbox tcp 1 0 10.1.15.129:57436 10.1.52.48:3306 CLOSE_WAIT tcp 1 0 10.1.15.129:49917 10.1.52.49:3306 CLOSE_WAIT tcp 1 0 10.1.15.129:35904 10.1.52.47:3306 CLOSE_WAIT 20:49:59.725005 IP 10.1.15.129.35904 > 10.1.52.47.3306: F 1126:1126(0) ack 807 win 1004 20:49:59.725459 IP 10.1.52.47.3306 > 10.1.15.129.35904: . ack 1127 win 123 20:49:59.725568 IP 10.1.15.129.57436 > 10.1.52.48.3306: F 1126:1126(0) ack 807 win 1004 20:49:59.725779 IP 10.1.52.48.3306 > 10.1.15.129.57436: . ack 1127 win 123 dsync(root): Error: user mailbox at test.com: Auth USER lookup failed dsync(root): Fatal: User lookup failed: Internal error occurred. Refer to server log for more information. 10.1.15.129 in this case is the dovecot server, and the 10.1.52.0/24 boxes are mysql servers. That's the same pattern I've seen almost every time. Just a FIN packet to two of the servers (ack'd by the mysql server) and then it fails. Is the retry mechanism supposed to transparently start a new connection, or is this how it works? 
In connecting remotely to these same servers (which aren't getting production traffic, so I'm the only person connecting to them), I get seemingly random disconnects via IMAP, always coinciding with a "MySQL server has gone away" error in the logs. This is non-production, so I'm happy to turn on whatever debugging would be useful. Here's doveconf -n from the box the tcpdump was on. This box is just configured for lmtp (but have seen the same thing on one configured for IMAP/POP as well), so it's pretty small, config-wise: # 2.0.17: /etc/dovecot/dovecot/dovecot.conf # OS: Linux 3.0.9-nx i686 Debian 5.0.9 auth_cache_negative_ttl = 0 auth_cache_ttl = 0 auth_debug = yes auth_failure_delay = 0 base_dir = /var/run/dovecot/ debug_log_path = /var/log/dovecot/debug.log default_client_limit = 3005 default_internal_user = doveauth default_process_limit = 1500 deliver_log_format = M=%m, F=%f, S="%s" => %$ disable_plaintext_auth = no first_valid_uid = 199 last_valid_uid = 201 lda_mailbox_autocreate = yes listen = * log_path = /var/log/dovecot/mail.log mail_debug = yes mail_fsync = always mail_location = maildir:~/Maildir:INDEX=/var/cache/dovecot/%2Mu/%2.2Mu/%u mail_nfs_index = yes mail_nfs_storage = yes mail_plugins = zlib quota mail_privileged_group = mail mail_uid = 200 managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave mdbox_rotate_interval = 1 days mmap_disable = yes namespace { hidden = no inbox = yes list = yes location = prefix = INBOX. separator = . subscriptions = yes type = private } passdb { args = /opt/dovecot/etc/lmtp/sql.conf driver = sql } plugin { info_log_path = /var/log/dovecot/dovecot-deliver.log log_path = /var/log/dovecot/dovecot-deliver.log quota = maildir:User quota quota_rule = *:bytes=25M quota_rule2 = INBOX.Trash:bytes=+10%% quota_rule3 = *:messages=3000 sieve = ~/sieve/dovecot.sieve sieve_before = /etc/dovecot/scripts/spam.sieve sieve_dir = ~/sieve/ zlib_save = gz zlib_save_level = 3 } protocols = lmtp sieve service auth-worker { unix_listener auth-worker { mode = 0666 } user = doveauth } service auth { client_limit = 8000 unix_listener login/auth { mode = 0666 } user = doveauth } service lmtp { executable = lmtp -L process_min_avail = 10 unix_listener lmtp { mode = 0666 } } ssl = no userdb { driver = prefetch } userdb { args = /opt/dovecot/etc/lmtp/sql.conf driver = sql } verbose_proctitle = yes protocol lmtp { mail_plugins = zlib quota sieve } Thanks! From henson at acm.org Fri Jan 13 04:51:29 2012 From: henson at acm.org (Paul B. Henson) Date: Thu, 12 Jan 2012 18:51:29 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <4F0F9C31.8070009@acm.org> On 1/12/2012 6:00 PM, Mark Moseley wrote: > Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying: > MySQL server has gone away I've actually been meaning to send a similar message for the last couple of months :). We run dovecot solely as a sasl authentication provider to postfix for smtp authentication. We're currently running 2.0.15 with a handful of patches from a few months ago when Timo fixed mysql failover. 
We also see sporadic messages like that in the logs: Jan 11 01:00:57 sparky dovecot: auth-worker: Error: mysql: Query failed, retrying: MySQL server has gone away We do have a timeout on the mysql servers, so I don't necessarily mind this message, except we also see some number of these: Jan 11 01:00:57 sparky dovecot: auth-worker: Error: sql(clgeurts,108.38.64.98): Password query failed: MySQL server has gone away The mysql servers have never been down or unresponsive; if it retries, it should succeed. I'm not sure what's happening here, perhaps it tries the query on one mysql server connection (we have two configured) which has timed out, and then tries the other one, and if the other one has also timed out just fails? I also see some auth timeouts: Jan 11 22:06:02 sparky dovecot: auth: CRAM-MD5(?,200.37.175.14): Request 10232.28 timeouted after 150 secs, state=2 I'm not sure if they're related to the mysql timeouts. There are also some postfix auth errors: Jan 11 23:55:41 sparky postfix/smtpd[20994]: warning: unknown[200.37.175.14]: SASL CRAM-MD5 authentication failed: Connection lost to authentication server Which I think happen when dovecot takes too long to respond. I haven't had time to dig into it or get any debugging info, but just thought I'd pipe up when I saw your similar question :). -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From user+dovecot at localhost.localdomain.org Fri Jan 13 05:10:31 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Fri, 13 Jan 2012 04:10:31 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb):dsync umlaut problems Message-ID: <4F0FA0A7.10909@localhost.localdomain.org> All umlauts in mailbox names are lost after converting mbox/Maildir mailboxes to mdbox. [location2 scp-ed from the old server] # ls -d /srv/import/Maildir/.Gel\&APY-schte\ Elemente/ /srv/import/Maildir/.Gel&APY-schte Elemente/ # dsync -u jane at example.com -v mirror maildir:/srv/import/Maildir/ ? dsync(jane at example.com): Info: Gelöschte Elemente: only in dest ? ? # doveadm mailbox list -u jane at example.com Gel* Gel__schte_Elemente # ls -d mdbox/mailboxes/Gel* mdbox/mailboxes/Gel__schte_Elemente Regards, Pascal -- The trapper recommends today: cafefeed.1201303 at localdomain.org From mark at msapiro.net Fri Jan 13 06:37:24 2012 From: mark at msapiro.net (Mark Sapiro) Date: Thu, 12 Jan 2012 20:37:24 -0800 Subject: [Dovecot] Clients show .subscriptions folder In-Reply-To: References: Message-ID: <4F0FB504.5070802@msapiro.net> Mark Sapiro wrote: > Since upgrading from dovecot-2.1.rc1 to dovecot-2.1.rc3, some clients > are showing a .subscriptions file in the user's mbox path as a folder. > > Some clients such as T'bird on Mac OS X create this file listing > subscribed mbox files. Other clients such as T'bird on Windows XP show > this file as a folder in the folder list even though it cannot be > accessed as a folder (dovecot returns CANNOT Mailbox is not a valid > mbox file). > > I think this may be a result of uncommenting the inbox namespace in > conf.d/10-mail.conf > . > > Is there a way to suppress exposing this file to clients that don't use > it? I worked around this by setting the client to show only subscribed folders. -- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B.
Dylan From kjonca at o2.pl Fri Jan 13 08:20:13 2012 From: kjonca at o2.pl (Kamil =?iso-8859-2?Q?Jo=F1ca?=) Date: Fri, 13 Jan 2012 07:20:13 +0100 Subject: [Dovecot] dovecot 2.0.15 - purge errors Message-ID: <87hb00run6.fsf@alfa.kjonca> Dovecot 2.0.15, debian package; have I lost some mails? How can I check what is in the *.broken file? --8<---------------cut here---------------start------------->8--- $doveadm -v purge doveadm(kjonca): Error: Corrupted dbox file /home/kjonca/Mail/0/storage/m.6469 (around offset=291530): purging found mismatched offsets (291500 vs 299692, 60/215) doveadm(kjonca): Warning: mdbox /home/kjonca/Mail/0/storage: rebuilding indexes doveadm(kjonca): Error: Corrupted dbox file /home/kjonca/Mail/0/storage/m.6469 (around offset=599914): metadata header has bad magic value doveadm(kjonca): Warning: dbox: Copy of the broken file saved to /home/kjonca/Mail/0/storage/m.6469.broken doveadm(kjonca): Warning: Transaction log file /home/kjonca/Mail/0/storage/dovecot.map.index.log was locked for 211 seconds doveadm(kjonca): Error: Purging namespace '' failed: Internal error occurred. Refer to server log for more information. [2012-01-13 06:45:07] --8<---------------cut here---------------end--------------->8--- doveconf -n --8<---------------cut here---------------start------------->8--- # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 2.6.38+3-64 x86_64 Debian wheezy/sid auth_debug = yes auth_mechanisms = digest-md5 cram-md5 login plain auth_verbose = yes listen = alfa log_path = /var/log/dovecot log_timestamp = "%Y-%m-%d %H:%M:%S " mail_debug = yes mail_location = mdbox:~/Mail/0 mail_log_prefix = "%Us(%u): " mail_plugins = zlib notify acl mail_privileged_group = mail namespace { hidden = no inbox = yes list = yes location = prefix = separator = / subscriptions = yes type = private } namespace { hidden = no inbox = no list = yes location = mbox:~/Mail/Old:CONTROL=~/Mail/.dovecot/control/Old:INDEX=~/Mail/.dovecot/index/Old prefix = "#Old/" separator = / subscriptions = yes type = private } passdb { args = scheme=PLAIN /etc/security/dovecot.pwd driver = passwd-file } plugin { acl = vfile mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename mail_log_fields = uid box msgid size zlib_save = bz2 zlib_save_level = 9 } protocols = imap service auth { user = root } service imap-login { process_limit = 2 process_min_avail = 1 } service imap { vsz_limit = 512 M } service pop3-login { process_limit = 2 process_min_avail = 1 } service pop3 { vsz_limit = 512 M } ssl = no userdb { driver = passwd } verbose_proctitle = yes protocol imap { mail_max_userip_connections = 20 mail_plugins = zlib imap_zlib mail_log notify acl } protocol pop3 { pop3_uidl_format = %08Xu%08Xv } protocol lda { deliver_log_format = msgid=%m: %$ log_path = ~/log/deliver.log postmaster_address = root at localhost } --8<---------------cut here---------------end--------------->8--- -- Gdyby ktoś miał zbędny Toshiba G450 - to chętnie przejmę ;) ---------------- Biologia poucza, ze jeśli cię coś ugryzło, to niemal pewne, ze była to samica. From goetz.reinicke at filmakademie.de Fri Jan 13 11:01:05 2012 From: goetz.reinicke at filmakademie.de (=?ISO-8859-15?Q?G=F6tz_Reinicke?=) Date: Fri, 13 Jan 2012 10:01:05 +0100 Subject: [Dovecot] more than 200 imap processes for one user Message-ID: <4F0FF2D1.4040909@filmakademie.de> Hi, recently I noticed that our dovecot server (RH EL 5.7 dovecot-1.0.7-7.el5_7.1) 'fires' up a lot of imap processes only for one user.
I counted 214 :-) most of them in the 'S' state and started nearly at the same time within 5 minutes. Usually users do have about 4 to 10 .... Does anyone have an idea what could be the cause? Thanks for any suggestion and best regards. Götz -- Götz Reinicke IT-Koordinator Tel. +49 7141 969 420 Fax +49 7141 969 55 420 E-Mail goetz.reinicke at filmakademie.de Filmakademie Baden-Württemberg GmbH Akademiehof 10 71638 Ludwigsburg www.filmakademie.de Eintragung Amtsgericht Stuttgart HRB 205016 Vorsitzender des Aufsichtsrats: Jürgen Walter MdL Staatssekretär im Ministerium für Wissenschaft, Forschung und Kunst Baden-Württemberg Geschäftsführer: Prof. Thomas Schadt -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5161 bytes Desc: S/MIME Kryptografische Unterschrift URL: From tss at iki.fi Fri Jan 13 11:36:38 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 13 Jan 2012 11:36:38 +0200 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: On 13.1.2012, at 4.00, Mark Moseley wrote: > I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL > server has gone away" errors, despite having multiple hosts defined in > my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing > the same thing with 2.0.16 on Debian Squeeze 64-bit. > > E.g.: > > Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying: > MySQL server has gone away > > Our mail mysql servers are busy enough that wait_timeout is set to a > whopping 30 seconds. On my regular boxes, I see a good deal of these > in the logs. I've been doing a lot of mucking with doveadm/dsync > (working on maildir->mdbox migration finally, yay!) on test boxes > (same dovecot package & version) and when I get this error, despite > the log saying it's retrying, it doesn't seem to be. Instead I get: > > dsync(root): Error: user ...: Auth USER lookup failed Try with only one host in the "connect" string? My guess: Both the connections have timed out, and the retrying fails as well (there is only one retry). Although if the retrying lookup fails, there should be an error logged about it also (you don't see one?) Also another idea to avoid them in the first place: service auth-worker { idle_kill = 20 } From tss at iki.fi Fri Jan 13 11:40:02 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 13 Jan 2012 11:40:02 +0200 Subject: [Dovecot] more than 200 imap processes for one user In-Reply-To: <4F0FF2D1.4040909@filmakademie.de> References: <4F0FF2D1.4040909@filmakademie.de> Message-ID: On 13.1.2012, at 11.01, Götz Reinicke wrote: > recently I noticed that our dovecot server (RH EL 5.7 > dovecot-1.0.7-7.el5_7.1) 'fires' up a lot of imap processes only for one > user. v1.1+ limits this to 10 processes by default. > Does anyone have an idea what could be the cause? Some client gone crazy.
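On current versions the per-user cap Timo mentions is tunable rather than fixed; a minimal v2.x-style sketch (the value simply mirrors the default he cites; note the setting counts connections per user per IP, as seen in the doveconf output quoted elsewhere in this digest):

protocol imap {
  # Maximum number of IMAP connections allowed for a user from each IP address.
  mail_max_userip_connections = 10
}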
From janfrode at tanso.net Fri Jan 13 12:26:56 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Fri, 13 Jan 2012 11:26:56 +0100 Subject: [Dovecot] dsync conversion and ldap attributes Message-ID: <20120113102656.GA12031@dibs.tanso.net> I have: mail_home = /srv/mailstore/%256RHu/%d/%n mail_location = maildir:~/:INDEX=/indexes/%1u/%1.1u/%u userdb { args = /etc/dovecot/dovecot-ldap.conf.ext driver = ldap } and the dovecot-ldap.conf.ext specifies: user_attrs = mailMessageStore=home, mailLocation=mail, mailQuota=quota_rule=*:storage=%$ Now I want to convert individual users to mdbox using dsync, but how do I tell location2 to not fetch "home" and "mail" from ldap and use a different mail_location (mdbox:~/mdbox)? I.e. I want converted accounts stored in mail_location mdbox:/srv/mailstore/%256RHu/%d/%n/mdbox. -jf From CMarcus at Media-Brokers.com Fri Jan 13 13:38:01 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Fri, 13 Jan 2012 06:38:01 -0500 Subject: [Dovecot] Need help with details for new Dovecot plugin In-Reply-To: References: <1326308010.2329.47.camel@rover> <1326308231.2329.50.camel@rover> Message-ID: <4F101799.6040002@Media-Brokers.com> On 2012-01-12 6:10 PM, Maarten Bezemer wrote: > Of course I don't know anything about the details of the project (number > of users, requirements for speed of MWI updates, mail storage type, > etc.) but if it's not a very large setup and mail storage is mbox or > maildir, I'd probably go for cron-based external monitoring using find > and stuff like that. Maybe even with login scripting for extra triggering. I know that dovecot supports inotify (not sure how or in what way, and ianap, so may be totally off base), so maybe that could be leveraged? -- Best regards, Charles From CMarcus at Media-Brokers.com Fri Jan 13 13:41:51 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Fri, 13 Jan 2012 06:41:51 -0500 Subject: [Dovecot] Need help with details for new Dovecot plugin - was: Re: (no subject) In-Reply-To: References: <1326308010.2329.47.camel@rover> Message-ID: <4F10187F.3010507@Media-Brokers.com> On 2012-01-12 6:17 PM, Timo Sirainen wrote: > On 11.1.2012, at 20.53, Geoffrey Broadwell wrote: >> So now the hard part is writing the piece that I can't just crib from >> elsewhere -- making sure that I hook every place in Dovecot that the >> user's voicemail folder can be changed in a way that would change it >> between having one or more unread messages, and not having any unread >> messages at all (or vice-versa, of course). At the same time, I want to >> minimize the performance impact to Dovecot (and the load on the UDP >> server) by only hooking the places I need to, filtering out as many >> false positives as I can without introducing massive complexity, and >> only pinging the UDP server when it's most likely to notice a change in >> the state of that user's voicemail server. > I think notify plugin would help you do this the easiest way. See > mail_log plugin for an example of how to use it. Oops, should have read all messages before replying (I usually skip messages with (no subject), but I try to read everything on some lists, and dovecot is one of them)... Timo - searching on 'inotify' or 'notify' on both wiki1 and wiki2 has 'no results'... maybe the search indexes need to be updated? Or, is it just that there really is no documentation of inotify on either of the wikis?
-- Best regards, Charles From joseba.torre at ehu.es Fri Jan 13 15:59:25 2012 From: joseba.torre at ehu.es (Joseba Torre) Date: Fri, 13 Jan 2012 14:59:25 +0100 Subject: [Dovecot] Dsync and compressed mailboxes Message-ID: <4F1038BD.1010605@ehu.es> Hi, I will begin two migrations next week, and in both cases I plan to use compressed mailboxes with mdbox format. But at the last minute one doubt has appeared: is dsync aware of compressed mailboxes? I'm not sure if dsync -u $USER mirror mdbox:compressed_mdbox_path works, or if I have to use something else (I guess that with a running dovecot dsync backup should work). Thanks. From ivo at crm.walltopia.com Fri Jan 13 19:11:30 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Fri, 13 Jan 2012 19:11:30 +0200 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation Message-ID: Hello to all members. I have been using Dovecot for 5 years, but this is my first post here. I am aware of the various autoresponder scripts for vacation autoreplies (I am using Virtual Vacation 3.1 by Mischa Peters). I have an issue with auto-replies - they are vulnerable to spamming with forged email addresses. Forging can be prevented with several Postfix settings, which I did in the past - but was forced to remove, because our company occasionally has clients with improper configurations and those settings prevent us from receiving their legitimate mail (and this of course is not good for the business). So I have thought of another idea. Since I use Dovecot-auth to verify mailbox existence - I just wonder whether it is possible to somehow indicate a specific error code (and hopefully descriptive text also) to Postfix (e.g. 450 or some other temporary failure) when the owner of the mailbox is currently on vacation? Best wishes, IVO GELOV From CMarcus at Media-Brokers.com Fri Jan 13 20:03:36 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Fri, 13 Jan 2012 13:03:36 -0500 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: References: Message-ID: <4F1071F8.4080202@Media-Brokers.com> On 2012-01-13 12:11 PM, IVO GELOV (CRM) wrote: > I am aware of the various autoresponder scripts for vacation autoreplies > (I am using Virtual Vacation 3.1 by Mischa Peters). > I have an issue with auto-replies - they are vulnerable to spamming with > forged email addresses. I think you are using an extremely old/outdated version... The latest version would not suffer this problem, because it has a lot of message types that it will *not* respond to, including messages appearing to be from yourself... Get the latest version from the postfixadmin package. However, I don't know how to use it without also using postfixadmin (it creates databases for storing the vacation message, etc)... -- Best regards, Charles From moseleymark at gmail.com Fri Jan 13 20:29:45 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Fri, 13 Jan 2012 10:29:45 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: On Fri, Jan 13, 2012 at 1:36 AM, Timo Sirainen wrote: > On 13.1.2012, at 4.00, Mark Moseley wrote: >> I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL >> server has gone away" errors, despite having multiple hosts defined in >> my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing >> the same thing with 2.0.16 on Debian Squeeze 64-bit.
>> >> E.g.: >> >> Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying: >> MySQL server has gone away >> >> Our mail mysql servers are busy enough that wait_timeout is set to a >> whopping 30 seconds. On my regular boxes, I see a good deal of these >> in the logs. I've been doing a lot of mucking with doveadm/dsync >> (working on maildir->mdbox migration finally, yay!) on test boxes >> (same dovecot package & version) and when I get this error, despite >> the log saying it's retrying, it doesn't seem to be. Instead I get: >> >> dsync(root): Error: user ...: Auth USER lookup failed > > Try with only one host in the "connect" string? My guess: Both the connections have timed out, and the retrying fails as well (there is only one retry). Although if the retrying lookup fails, there should be an error logged about it also (you don't see one?) > > Also another idea to avoid them in the first place: > > service auth-worker { > idle_kill = 20 > } > With just one 'connect' host, it seems to reconnect just fine (using the same tests as above) and I'm not seeing the same error. It worked every time that I tried, with no complaints of "MySQL server has gone away". If there are multiple hosts, it seems like the most robust thing to do would be to exhaust the existing connections and if none of those succeed, then start a new connection to one of them. It will probably result in much more convoluted logic but it'd probably match better what people expect from a retry. Alternatively, since in all my tests the mysql server has closed the connection prior to this, is the auth worker not recognizing its connection is already half-closed (in which case, it probably shouldn't even consider it a legitimate connection and just automatically reconnect, i.e. try #1, not the retry, which would happen after another failure)? I'll give the idle_kill a try too. I kind of like the idea of idle_kill for auth processes anyway, just to free up some connections on the mysql server. From ghandidrivesahumvee at rocketfish.com Fri Jan 13 20:59:02 2012 From: ghandidrivesahumvee at rocketfish.com (Dovecot-GDH) Date: Fri, 13 Jan 2012 10:59:02 -0800 Subject: [Dovecot] Dsync and compressed mailboxes In-Reply-To: <4F1038BD.1010605@ehu.es> References: <4F1038BD.1010605@ehu.es> Message-ID: <01D2B152-D1C3-4A89-8CE7-608357ADCBC2@rocketfish.com> The dsync process will be aware of whatever configuration file it refers to. The best thing to do is to set up a separate instance of Dovecot with compression enabled (really not that hard to do) and point dsync to that separate instance's configuration. Mailboxes written by dsync will be compressed. On Jan 13, 2012, at 5:59 AM, Joseba Torre wrote: > Hi, > > I will begin two migrations next week, and in both cases I plan to use compressed mailboxes with mdbox format. But at the last minute one doubt has appeared: is dsync aware of compressed mailboxes? I'm not sure if > > dsync -u $USER mirror mdbox:compressed_mdbox_path > > works, or if I have to use something else (I guess that with a running dovecot dsync backup should work). > > Thanks.
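A sketch of the separate-instance approach Dovecot-GDH describes (the config file name is illustrative; the zlib settings mirror ones quoted elsewhere in this digest, and it is assumed that dsync accepts -c to select a config file the way other Dovecot binaries do):

# /etc/dovecot/dovecot-compress.conf: a copy of the live config, plus:
#   mail_plugins = $mail_plugins zlib
#   plugin {
#     zlib_save = gz
#     zlib_save_level = 6
#   }

# Then point dsync at that instance's configuration while converting a user:
dsync -c /etc/dovecot/dovecot-compress.conf -u jane@example.com mirror mdbox:~/mdbox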
From robert at schetterer.org Fri Jan 13 21:38:28 2012 From: robert at schetterer.org (Robert Schetterer) Date: Fri, 13 Jan 2012 20:38:28 +0100 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <4F108834.60709@schetterer.org> Am 13.01.2012 19:29, schrieb Mark Moseley: > On Fri, Jan 13, 2012 at 1:36 AM, Timo Sirainen wrote: >> On 13.1.2012, at 4.00, Mark Moseley wrote: >> >>> I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL >>> server has gone away" errors, despite having multiple hosts defined in >>> my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing >>> the same thing with 2.0.16 on Debian Squeeze 64-bit. >>> >>> E.g.: >>> >>> Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying: >>> MySQL server has gone away >>> >>> Our mail mysql servers are busy enough that wait_timeout is set to a >>> whopping 30 seconds. On my regular boxes, I see a good deal of these >>> in the logs. I've been doing a lot of mucking with doveadm/dsync >>> (working on maildir->mdbox migration finally, yay!) on test boxes >>> (same dovecot package & version) and when I get this error, despite >>> the log saying it's retrying, it doesn't seem to be. Instead I get: >>> >>> dsync(root): Error: user ...: Auth USER lookup failed >> >> Try with only one host in the "connect" string? My guess: Both the connections have timed out, and the retrying fails as well (there is only one retry). Although if the retrying lookup fails, there should be an error logged about it also (you don't see one?) >> >> Also another idea to avoid them in the first place: >> >> service auth-worker { >> idle_kill = 20 >> } >> > > With just one 'connect' host, it seems to reconnect just fine (using > the same tests as above) and I'm not seeing the same error. It worked > every time that I tried, with no complaints of "MySQL server has gone > away". > > If there are multiple hosts, it seems like the most robust thing to do > would be to exhaust the existing connections and if none of those > succeed, then start a new connection to one of them. It will probably > result in much more convoluted logic but it'd probably match better > what people expect from a retry. > > Alternatively, since in all my tests, the mysql server has closed the > connection prior to this, is the auth worker not recognizing its > connection is already half-closed (in which case, it probably > shouldn't even consider it a legitimate connection and just > automatically reconnect, i.e. try #1, not the retry, which would > happen after another failure). > > I'll give the idle_kill a try too. I kind of like the idea of > idle_kill for auth processes anyway, just to free up some connections > on the mysql server. by the way , if you use sql for auth have you tried auth caching ? http://wiki.dovecot.org/Authentication/Caching i.e. # Authentication cache size (e.g. 10M). 0 means it's disabled. Note that # bsdauth, PAM and vpopmail require cache_key to be set for caching to be used. auth_cache_size = 10M # Time to live for cached data. After TTL expires the cached record is no # longer used, *except* if the main database lookup returns internal failure. # We also try to handle password changes automatically: If user's previous # authentication was successful, but this one wasn't, the cache isn't used. # For now this works only with plaintext authentication. auth_cache_ttl = 1 hour # TTL for negative hits (user not found, password mismatch). # 0 disables caching them completely. 
auth_cache_negative_ttl = 0 -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From moseleymark at gmail.com Fri Jan 13 22:45:03 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Fri, 13 Jan 2012 12:45:03 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <4F108834.60709@schetterer.org> References: <4F108834.60709@schetterer.org> Message-ID: On Fri, Jan 13, 2012 at 11:38 AM, Robert Schetterer wrote: > Am 13.01.2012 19:29, schrieb Mark Moseley: >> On Fri, Jan 13, 2012 at 1:36 AM, Timo Sirainen wrote: >>> On 13.1.2012, at 4.00, Mark Moseley wrote: >>> >>>> I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL >>>> server has gone away" errors, despite having multiple hosts defined in >>>> my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing >>>> the same thing with 2.0.16 on Debian Squeeze 64-bit. >>>> >>>> E.g.: >>>> >>>> Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying: >>>> MySQL server has gone away >>>> >>>> Our mail mysql servers are busy enough that wait_timeout is set to a >>>> whopping 30 seconds. On my regular boxes, I see a good deal of these >>>> in the logs. I've been doing a lot of mucking with doveadm/dsync >>>> (working on maildir->mdbox migration finally, yay!) on test boxes >>>> (same dovecot package & version) and when I get this error, despite >>>> the log saying it's retrying, it doesn't seem to be. Instead I get: >>>> >>>> dsync(root): Error: user ...: Auth USER lookup failed >>> >>> Try with only one host in the "connect" string? My guess: Both the connections have timed out, and the retrying fails as well (there is only one retry). Although if the retrying lookup fails, there should be an error logged about it also (you don't see one?) >>> >>> Also another idea to avoid them in the first place: >>> >>> service auth-worker { >>> ?idle_kill = 20 >>> } >>> >> >> With just one 'connect' host, it seems to reconnect just fine (using >> the same tests as above) and I'm not seeing the same error. It worked >> every time that I tried, with no complaints of "MySQL server has gone >> away". >> >> If there are multiple hosts, it seems like the most robust thing to do >> would be to exhaust the existing connections and if none of those >> succeed, then start a new connection to one of them. It will probably >> result in much more convoluted logic but it'd probably match better >> what people expect from a retry. >> >> Alternatively, since in all my tests, the mysql server has closed the >> connection prior to this, is the auth worker not recognizing its >> connection is already half-closed (in which case, it probably >> shouldn't even consider it a legitimate connection and just >> automatically reconnect, i.e. try #1, not the retry, which would >> happen after another failure). >> >> I'll give the idle_kill a try too. I kind of like the idea of >> idle_kill for auth processes anyway, just to free up some connections >> on the mysql server. > > by the way , if you use sql for auth have you tried auth caching ? > > http://wiki.dovecot.org/Authentication/Caching > > i.e. > > # Authentication cache size (e.g. 10M). 0 means it's disabled. Note that > # bsdauth, PAM and vpopmail require cache_key to be set for caching to > be used. > > auth_cache_size = 10M > > # Time to live for cached data. After TTL expires the cached record is no > # longer used, *except* if the main database lookup returns internal > failure. 
> # We also try to handle password changes automatically: If user's previous > # authentication was successful, but this one wasn't, the cache isn't used. > # For now this works only with plaintext authentication. > > auth_cache_ttl = 1 hour > > # TTL for negative hits (user not found, password mismatch). > # 0 disables caching them completely. > > auth_cache_negative_ttl = 0 Yup, we have caching turned on for our production boxes. On this particular box, I'd just shut off caching so that I could work on a script for converting from maildir->mdbox and run it repeatedly on the same mailbox. I got tired of restarting dovecot between each test :) From user+dovecot at localhost.localdomain.org Fri Jan 13 23:04:12 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Fri, 13 Jan 2012 22:04:12 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb):dsync umlaut problems In-Reply-To: <4F0FA0A7.10909@localhost.localdomain.org> References: <4F0FA0A7.10909@localhost.localdomain.org> Message-ID: <4F109C4C.5050402@localhost.localdomain.org> On 01/13/2012 04:10 AM Pascal Volk wrote: > All umlauts in mailbox names are lost after converting mbox/Maildir > mailboxes to mdbox. > > # ls -d /srv/import/Maildir/.Gel\&APY-schte\ Elemente/ > /srv/import/Maildir/.Gel&APY-schte Elemente/ > ? > # doveadm mailbox list -u jane at example.com Gel* > Gel__schte_Elemente Oh, and child mailboxes with umlauts becomes top level mailboxes: # ls -d /srv/import/Maildir/.INBOX.Projekte.K\&APY-ln /srv/import/Maildir/.INBOX.Projekte.K&APY-ln #ls -d mdbox/mailboxes/INBOX_Projekte_K__ln mdbox/mailboxes/INBOX_Projekte_K__ln Regards, Pascal -- The trapper recommends today: f007ba11.1201305 at localdomain.org From user+dovecot at localhost.localdomain.org Sat Jan 14 00:04:33 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Fri, 13 Jan 2012 23:04:33 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): Panic: file ostream.c: line 173 (o_stream_sendv): assertion failed: (stream->stream_errno != 0) Message-ID: <4F10AA71.6030901@localhost.localdomain.org> Hi Timo, today some imap processes are crashed. Regards, Pascal -- The trapper recommends today: f007ba11.1201322 at localdomain.org -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: core.imap.1326475521-24777_bt.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: doveconf-n.txt URL: From info_postfix at gmx.ch Sat Jan 14 00:15:02 2012 From: info_postfix at gmx.ch (maximus12) Date: Fri, 13 Jan 2012 14:15:02 -0800 (PST) Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <20120112115705.GS1341@charite.de> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> <33127262.post@talk.nabble.com> <20120112115705.GS1341@charite.de> Message-ID: <33137241.post@talk.nabble.com> Hi Ralf, Thanks for your help. Dovecot stop Change the server time Dovecot start Got a warning but it worked! Thanks a lot for your help. (With dovecot 1.x) -- View this message in context: http://old.nabble.com/Server-Time-45min-ahead-tp33126760p33137241.html Sent from the Dovecot mailing list archive at Nabble.com. From henson at acm.org Sat Jan 14 00:46:08 2012 From: henson at acm.org (Paul B. 
Henson) Date: Fri, 13 Jan 2012 14:46:08 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <20120113224607.GS4844@bender.csupomona.edu> On Fri, Jan 13, 2012 at 01:36:38AM -0800, Timo Sirainen wrote: > Also another idea to avoid them in the first place: > > service auth-worker { > idle_kill = 20 > } Ah, set the auth-worker timeout to less than the mysql timeout to prevent a stale mysql connection from ever being used. I'll try that, thanks. -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From moseleymark at gmail.com Sat Jan 14 01:19:28 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Fri, 13 Jan 2012 15:19:28 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <20120113224607.GS4844@bender.csupomona.edu> References: <20120113224607.GS4844@bender.csupomona.edu> Message-ID: On Fri, Jan 13, 2012 at 2:46 PM, Paul B. Henson wrote: > On Fri, Jan 13, 2012 at 01:36:38AM -0800, Timo Sirainen wrote: > >> Also another idea to avoid them in the first place: >> >> service auth-worker { >> ? idle_kill = 20 >> } > > Ah, set the auth-worker timeout to less than the mysql timeout to > prevent a stale mysql connection from ever being used. I'll try that, > thanks. I gave that a try. Sometimes it seems to kill off the auth-worker but not till after a minute or so (with idle_kill = 20). Other times, the worker stays around for more like 5 minutes (I gave up watching), despite being idle -- and I'm the only person connecting to it, so it's definitely idle. Does auth-worker perhaps only wake up every so often to check its idle status? To test, I kicked off a dsync, then grabbed a netstat: tcp 0 0 10.1.15.129:40070 10.1.52.47:3306 ESTABLISHED 29146/auth worker [ tcp 0 0 10.1.15.129:33369 10.1.52.48:3306 ESTABLISHED 29146/auth worker [ tcp 0 0 10.1.15.129:54083 10.1.52.49:3306 ESTABLISHED 29146/auth worker [ then kicked off this loop: # while true; do date; ps p 29146 |tail -n1; sleep 1; done Fri Jan 13 18:05:14 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb] Fri Jan 13 18:05:15 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb] .... More lines of the loop ... Fri Jan 13 18:05:35 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb] 18:05:36.252976 IP 10.1.52.48.3306 > 10.1.15.129.33369: F 77:77(0) ack 92 win 91 18:05:36.288549 IP 10.1.15.129.33369 > 10.1.52.48.3306: . ack 78 win 913 Fri Jan 13 18:05:36 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb] 18:05:37.196204 IP 10.1.52.49.3306 > 10.1.15.129.54083: F 806:806(0) ack 1126 win 123 18:05:37.228594 IP 10.1.15.129.54083 > 10.1.52.49.3306: . ack 807 win 1004 18:05:37.411955 IP 10.1.52.47.3306 > 10.1.15.129.40070: F 806:806(0) ack 1126 win 123 18:05:37.448573 IP 10.1.15.129.40070 > 10.1.52.47.3306: . ack 807 win 1004 Fri Jan 13 18:05:37 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb] ... more lines of the loop ... Fri Jan 13 18:10:13 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb] Fri Jan 13 18:10:14 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb] ^C at which point I bailed out. Looking again a couple of minutes later, it was gone. Nothing else was going on and the logs don't show any activity between 18:05:07 and 18:10:44. 
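To restate the knob being tested above as a sketch (values are illustrative; the point is only that idle_kill should sit below the MySQL server's wait_timeout, so an idle worker is reaped before the server silently drops its connection):

# dovecot.conf: reap idle auth workers after 20 seconds...
service auth-worker {
  idle_kill = 20
}

# ...which only helps if the MySQL side keeps idle connections longer
# than that. Check the server's setting with:
mysql -e "SHOW VARIABLES LIKE 'wait_timeout'"

As Mark's test shows, the reaping is not instantaneous, so leave a generous margin between the two timeouts.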
From henson at acm.org Sat Jan 14 02:19:12 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 13 Jan 2012 16:19:12 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <4F108834.60709@schetterer.org> References: <4F108834.60709@schetterer.org> Message-ID: <20120114001912.GZ4844@bender.csupomona.edu> On Fri, Jan 13, 2012 at 11:38:28AM -0800, Robert Schetterer wrote: > by the way , if you use sql for auth have you tried auth caching ? > > http://wiki.dovecot.org/Authentication/Caching Hmm, hadn't tried that, but flipped it on to see how it might work out. The only tradeoff is a potential delay between when an account is disabled and when it can stop authenticating. I set the timeout to 10 minutes for now, with an hour timeout for negative caching. That page says you can send a USR2 signal to the auth process for cache stats? That doesn't seem to work. OTOH, that page is for version 1, not 2; is there some other way to generate cache stats in version 2? Thanks... -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From henson at acm.org Sat Jan 14 03:54:29 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 13 Jan 2012 17:54:29 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <4F10E055.4030303@acm.org> On 1/13/2012 10:29 AM, Mark Moseley wrote: > connection prior to this, is the auth worker not recognizing its > connection is already half-closed (in which case, it probably > shouldn't even consider it a legitimate connection and just > automatically reconnect, i.e. try #1, not the retry, which would > happen after another failure). I don't think there's any way to tell from the mysql api that the server has closed the connection short of trying to use it and getting that specific error. I suppose that specific error could be special cased as an immediate "try again with no penalty" rather than considered a failure. -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From robert at schetterer.org Sat Jan 14 10:01:12 2012 From: robert at schetterer.org (Robert Schetterer) Date: Sat, 14 Jan 2012 09:01:12 +0100 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <20120114001912.GZ4844@bender.csupomona.edu> References: <4F108834.60709@schetterer.org> <20120114001912.GZ4844@bender.csupomona.edu> Message-ID: <4F113648.2000902@schetterer.org> Am 14.01.2012 01:19, schrieb Paul B. Henson: > On Fri, Jan 13, 2012 at 11:38:28AM -0800, Robert Schetterer wrote: > >> by the way , if you use sql for auth have you tried auth caching ? >> >> http://wiki.dovecot.org/Authentication/Caching > > Hmm, hadn't tried that, but flipped it on to see how it might work out. > The only tradeoff is a potential delay between when an account is > disabled and when it can stop authenticating. I set the timeout to 10 > minutes for now, with an hour timeout for negative caching. dont know if i unserstand you right perhaps this is what you mean, i use this with/cause fail2ban # TTL for negative hits (user not found, password mismatch). # 0 disables caching them completely. auth_cache_negative_ttl = 0 > > That page says you can send a USR2 signal to the auth process for cache > stats? That doesn't seem to work. 
OTOH, that page is for version 1, not 2; is there some other way to generate cache stats in version 2? Thanks... -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From henson at acm.org Sat Jan 14 03:54:29 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 13 Jan 2012 17:54:29 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <4F10E055.4030303@acm.org> On 1/13/2012 10:29 AM, Mark Moseley wrote: > connection prior to this, is the auth worker not recognizing its > connection is already half-closed (in which case, it probably > shouldn't even consider it a legitimate connection and just > automatically reconnect, i.e. try #1, not the retry, which would > happen after another failure). I don't think there's any way to tell from the mysql api that the server has closed the connection short of trying to use it and getting that specific error. I suppose that specific error could be special-cased as an immediate "try again with no penalty" rather than considered a failure. -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From robert at schetterer.org Sat Jan 14 10:01:12 2012 From: robert at schetterer.org (Robert Schetterer) Date: Sat, 14 Jan 2012 09:01:12 +0100 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <20120114001912.GZ4844@bender.csupomona.edu> References: <4F108834.60709@schetterer.org> <20120114001912.GZ4844@bender.csupomona.edu> Message-ID: <4F113648.2000902@schetterer.org> Am 14.01.2012 01:19, schrieb Paul B. Henson: > On Fri, Jan 13, 2012 at 11:38:28AM -0800, Robert Schetterer wrote: > >> by the way, if you use sql for auth have you tried auth caching? >> >> http://wiki.dovecot.org/Authentication/Caching > > Hmm, hadn't tried that, but flipped it on to see how it might work out. > The only tradeoff is a potential delay between when an account is > disabled and when it can stop authenticating. I set the timeout to 10 > minutes for now, with an hour timeout for negative caching. Don't know if I understand you right, but perhaps this is what you mean; I use this with (because of) fail2ban: # TTL for negative hits (user not found, password mismatch). # 0 disables caching them completely. auth_cache_negative_ttl = 0 > > That page says you can send a USR2 signal to the auth process for cache > stats? That doesn't seem to work.
out bytes=10/377 Regards, Yubao Liu -------------- next part -------------- A non-text attachment was scrubbed... Name: digest-md5-sasl-proxy-authorization.patch Type: text/x-patch Size: 2322 bytes Desc: not available URL: From AxelLuttgens at swing.be Sat Jan 14 19:03:22 2012 From: AxelLuttgens at swing.be (Axel Luttgens) Date: Sat, 14 Jan 2012 18:03:22 +0100 Subject: [Dovecot] v2.x services documentation In-Reply-To: <04D662E7-2A0A-448B-BA21-1E337A400CA6@iki.fi> References: <04D662E7-2A0A-448B-BA21-1E337A400CA6@iki.fi> Message-ID: <92A86804-CEEE-4EB6-9EE7-FC8B7905AA2C@swing.be> Le 7 d?c. 2011 ? 15:22, Timo Sirainen a ?crit : > If you've ever wanted to know everything about the service {} blocks, this should be quite helpful: http://wiki2.dovecot.org/Services Hello Timo, I know, I'm quite late at reading the messages, and this is really a nice and useful one; thanks! Up to now, I only had the opportunity to quickly read the wiki page, and have a small question; one may read: process_min_avail Minimum number of processes that always should be available to accept more client connections. For service_limit=1 processes this decreases the latency for handling new connections. For service_limit!=1 processes it could be set to the number of CPU cores on the system to balance the load among them. What's that service_limit setting? TIA, Axel From ivo at crm.walltopia.com Sat Jan 14 19:23:58 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Sat, 14 Jan 2012 19:23:58 +0200 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F1071F8.4080202@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> Message-ID: On Fri, 13 Jan 2012 20:03:36 +0200, Charles Marcus wrote: > On 2012-01-13 12:11 PM, IVO GELOV (CRM) wrote: >> I am aware of the various autoresponder scripts for vacation autoreplies >> (I am using Virtual Vacation 3.1 by Mischa Peters). >> I have an issue with auto-replies - it is vulnerable to spamming with >> forged email address. > > I think you are using an extremely old/outdated version... > > The latest version would not suffer this problem, because it has a lot > of message types that it will *not* respond to, including messages > appearing to be from yourself... > > Get the latest version fro the postfixadmin package. > > However, I don't know how to use it without also using postfixadmin (it > creates databases for storing the vacation message, etc)... > I have downloaded the latest version 4.0 - but it seems there is no way to prevent spammers to use forged email addresses. I decided to remove the vacation feature from our corporate mail server, because it actually opens a backdoor (even though only when someone decides to activate his vacation auto-reply) for spammers and puts a risk on the company (our server can be blacklisted). I still think that my idea with custom error codes is more useful - if the user is on vacation, the message is rejected immediately (no auto-reply is sent) and sender can see (hopefully, because most users just ignore error messages) the reason why the messages was rejected. Probably Dovecot-auth does not offer such flexibility right now - but it worths considering. 
From robert at schetterer.org Sat Jan 14 21:24:39 2012 From: robert at schetterer.org (Robert Schetterer) Date: Sat, 14 Jan 2012 20:24:39 +0100 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: References: <4F1071F8.4080202@Media-Brokers.com> Message-ID: <4F11D677.2040706@schetterer.org> Am 14.01.2012 18:23, schrieb IVO GELOV (CRM): > On Fri, 13 Jan 2012 20:03:36 +0200, Charles Marcus > wrote: > >> On 2012-01-13 12:11 PM, IVO GELOV (CRM) wrote: >>> I am aware of the various autoresponder scripts for vacation autoreplies >>> (I am using Virtual Vacation 3.1 by Mischa Peters). >>> I have an issue with auto-replies - it is vulnerable to spamming with >>> forged email address. >> >> I think you are using an extremely old/outdated version... >> >> The latest version would not suffer this problem, because it has a lot >> of message types that it will *not* respond to, including messages >> appearing to be from yourself... >> >> Get the latest version fro the postfixadmin package. >> >> However, I don't know how to use it without also using postfixadmin (it >> creates databases for storing the vacation message, etc)... >> > > I have downloaded the latest version 4.0 - but it seems there is no way > to prevent > spammers to use forged email addresses. I decided to remove the vacation > feature > from our corporate mail server, because it actually opens a backdoor > (even though > only when someone decides to activate his vacation auto-reply) for > spammers and > puts a risk on the company (our server can be blacklisted). > > I still think that my idea with custom error codes is more useful - if > the user is > on vacation, the message is rejected immediately (no auto-reply is sent) > and sender > can see (hopefully, because most users just ignore error messages) the > reason why > the messages was rejected. > > Probably Dovecot-auth does not offer such flexibility right now - but it > worths > considering. your right there is no way make perfekt sure that someone not uses your emailaddress "from and to" for spamming ( dkim and spf may help little ) now i hope i understand your problem right a good way is to use dove lmtp with sieve also good antispam in postfix, perhaps a before global antispam sieve filter rule, that catched spam is sorted in some special junk folder , and so its not handled by incomming in mailbox inbox with what userdefined sieve rule ( i.e Vacation ) ever look here http://wiki.dovecot.org/LDA/Sieve for ideas anyway if you use other vacation tecs, make sure allready flagged spam by i.e clamav, amavis, spamassassin etc in postfix stage is not handled by your vacation service , script etc. 
as far i remember i gave some patch to the postfixadmin vacation script doing exact this there is no ultimate way not to answer spammers by vacation or other auto script etc but if you do right , the problem goes nearly null the risk of beeing blacklisted by third party exist ever when i.e forwarding ( redirect ) mail to outside ( so antispam filter is a "must have" here ), a simple vacation message only, is no high or none risk, as long it does not include any part of the real spam message also vacation should only answer once in some time period, which should protect against loops and flooding others the corect answer to your subject would be if you want postfix simple to reject mails for some mailaddress with error code you like if the mailaddressowner is away, use a postfix reject table, if you want with i.e in/with mysql and some gui ( i.e. php ) so the mailaddressowner can edit the table himself anyway, i personally dont use vacation anymore for many reasons , but others find it hardly needed -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From mail at kinesis.me Sat Jan 14 22:17:58 2012 From: mail at kinesis.me (Charles Thompson) Date: Sat, 14 Jan 2012 12:17:58 -0800 Subject: [Dovecot] IMAP maillog error: file lib.c: line 37 (nearest_power): assertion failed: (num <= ((size_t)1 << (BITS_IN_SIZE_T-1))) Message-ID: Dear Mailing List, What does this error mean and how do I fix it? I am on a Centos 4.9 >From /var/log/maillog : Jan 14 11:54:51 hostname imap(username): file lib.c: line 37 (nearest_power): assertion failed: (num <= ((size_t)1 << (BITS_IN_SIZE_T-1))) Version information : root at hostname[/etc/rc.d/rc3.d]# dovecot --version ; dovecot -n ; cat /etc/*release* 0.99.11 Usage: dovecot [-F] [-c ] Fatal: Unknown argument: -n CentOS release 4.9 (Final) root at hostname[/etc/rc.d/rc3.d]# Thank you. -- Sincerely, Charles Thompson *UNIX & Linux Administrator* Tel* : *(650) 906-9156 Web : www.kinesis.me Mail: mail at kinesis.me From user+dovecot at localhost.localdomain.org Sat Jan 14 22:45:29 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Sat, 14 Jan 2012 21:45:29 +0100 Subject: [Dovecot] IMAP maillog error: file lib.c: line 37 (nearest_power): assertion failed: (num <= ((size_t)1 << (BITS_IN_SIZE_T-1))) In-Reply-To: References: Message-ID: <4F11E969.2000909@localhost.localdomain.org> On 01/14/2012 09:17 PM Charles Thompson wrote: > Dear Mailing List, > > What does this error mean and how do I fix it? I am on a Centos 4.9 > > From /var/log/maillog : > Jan 14 11:54:51 hostname imap(username): file lib.c: line 37 > (nearest_power): assertion failed: (num <= ((size_t)1 << > (BITS_IN_SIZE_T-1))) > > > Version information : > root at hostname[/etc/rc.d/rc3.d]# dovecot --version ; dovecot -n ; cat > /etc/*release* > 0.99.11 > Usage: dovecot [-F] [-c ] > Fatal: Unknown argument: -n > CentOS release 4.9 (Final) > root at hostname[/etc/rc.d/rc3.d]# > > Thank you. 
To make it short: Upgrade.

Regards,
Pascal
-- 
The trapper recommends today: cafefeed.1201421 at localdomain.org

From CMarcus at Media-Brokers.com Sun Jan 15 14:33:24 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Sun, 15 Jan 2012 07:33:24 -0500
Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation
In-Reply-To: References: <4F1071F8.4080202@Media-Brokers.com>
Message-ID: <4F12C794.6070609@Media-Brokers.com>

On 2012-01-14 12:23 PM, IVO GELOV (CRM) wrote:
> I have downloaded the latest version 4.0 - but it seems there is no
> way to prevent spammers from using forged email addresses. I decided to
> remove the vacation feature from our corporate mail server, because
> it actually opens a backdoor (even though only when someone decides
> to activate his vacation auto-reply) for spammers and puts a risk on
> the company (our server can be blacklisted).

Sorry, I misread your message...

However, (I *think*) there *is* a simple solution to your problem, if I
now understand it correctly...

Simply disallow anyone sending from an email address in your domain from
sending without SASL_AUTHing...

The way I do this is: in main.cf (I put all of my restrictions in
smtpd_recipient_restrictions) add:

check_sender_access ${hash}/nospoof,

somewhere after reject_unauth_destination (but *before* any RBL checks),
where nospoof contains:

# Prevent spoofing from domains that we own
allowed_address1 at example.com OK
allowed_address2 at example.com OK
example.com REJECT You must use sasl_auth to send from one of our example.com email addresses...

and of course be sure to postmap the nospoof database after making any
changes...

-- 
Best regards,
Charles

From CMarcus at Media-Brokers.com Sun Jan 15 14:40:05 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Sun, 15 Jan 2012 07:40:05 -0500
Subject: [Dovecot] IMAP maillog error: file lib.c: line 37 (nearest_power): assertion failed: (num <= ((size_t)1 << (BITS_IN_SIZE_T-1)))
In-Reply-To: References:
Message-ID: <4F12C925.4030008@Media-Brokers.com>

On 2012-01-14 3:17 PM, Charles Thompson wrote:
> Version information :
> root at hostname[/etc/rc.d/rc3.d]# dovecot --version ; dovecot -n ; cat
> /etc/*release*
> 0.99.11

0.99 is simply way, way, *way* too old to waste any time helping you.

The short answer is - *upgrade* to a more recent version (at *least* the
latest 1.2.x series, but preferably 2.0.16)...

Be sure to read all of the docs on upgrading, because you *will* have
some reconfiguring to do...

*Then*, if you have any questions/issues, by all means come back and ask...
-- Best regards, Charles From CMarcus at Media-Brokers.com Sun Jan 15 14:50:00 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 15 Jan 2012 07:50:00 -0500 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F12C794.6070609@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> Message-ID: <4F12CB78.6020602@Media-Brokers.com> On 2012-01-15 7:33 AM, Charles Marcus wrote: > check_sender_access ${hash}/nospoof, Oh - if you aren't using variables for the maps paths, just use: check_sender_access hash:/path/to/map/nospoof, -- Best regards, Charles From user+dovecot at localhost.localdomain.org Sun Jan 15 15:11:05 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Sun, 15 Jan 2012 14:11:05 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): doveadm mailbox list -> Segmentation fault Message-ID: <4F12D069.9060102@localhost.localdomain.org> Oops, I did it again. -- The trapper recommends today: c01dcofe.1201514 at localdomain.org -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: core.doveadm.1326628435-21046_bt.txt URL: From CMarcus at Media-Brokers.com Sun Jan 15 19:03:42 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 15 Jan 2012 12:03:42 -0500 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F12CB78.6020602@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F12CB78.6020602@Media-Brokers.com> Message-ID: <4F1306EE.3050907@Media-Brokers.com> On 2012-01-15 7:50 AM, Charles Marcus wrote: > On 2012-01-15 7:33 AM, Charles Marcus wrote: >> check_sender_access ${hash}/nospoof, > > Oh - if you aren't using variables for the maps paths, just use: > > check_sender_access hash:/path/to/map/nospoof, One last thing - this obviously requires one or both of: permit_sasl_authenticated permit_mynetworks *before* the check_sender_access check... -- Best regards, Charles From CMarcus at Media-Brokers.com Sun Jan 15 19:10:31 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 15 Jan 2012 12:10:31 -0500 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F1306EE.3050907@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F12CB78.6020602@Media-Brokers.com> <4F1306EE.3050907@Media-Brokers.com> Message-ID: <4F130887.1020304@Media-Brokers.com> On 2012-01-15 12:03 PM, Charles Marcus wrote: > On 2012-01-15 7:50 AM, Charles Marcus wrote: >> On 2012-01-15 7:33 AM, Charles Marcus wrote: >>> check_sender_access ${hash}/nospoof, >> Oh - if you aren't using variables for the maps paths, just use: >> >> check_sender_access hash:/path/to/map/nospoof, > One last thing - this obviously requires one or both of: > > permit_sasl_authenticated > permit_mynetworks > > *before* the check_sender_access check... spoke too soon... one more 'last thing'... This also obviously requires you to enforce a policy that all users must either sasl_auth or be on a system whose IP is included in my_networks... -- Best regards, Charles From henson at acm.org Sun Jan 15 23:20:29 2012 From: henson at acm.org (Paul B. 
Henson) Date: Sun, 15 Jan 2012 13:20:29 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <4F113648.2000902@schetterer.org> References: <4F108834.60709@schetterer.org> <20120114001912.GZ4844@bender.csupomona.edu> <4F113648.2000902@schetterer.org> Message-ID: <20120115212029.GC21623@bender.csupomona.edu> On Sat, Jan 14, 2012 at 12:01:12AM -0800, Robert Schetterer wrote: > > Hmm, hadn't tried that, but flipped it on to see how it might work out. > > The only tradeoff is a potential delay between when an account is > > disabled and when it can stop authenticating. I set the timeout to 10 > > minutes for now, with an hour timeout for negative caching. > > dont know if i unserstand you right Before I turned on auth caching, every attempted authentication hit our mysql database, which in addition to the password itself contains a flag indicating whether or not the account is enabled. So if somebody was abusing smtp authentication, our helpdesk could disable their account, and it would *immediately* stop working. Whereas with authentication caching enabled, there is a window the size of the ttl where an account that has been disabled can continue to successfully authenticate. > > That page says you can send a USR2 signal to the auth process for cache > > stats? That doesn't seem to work. OTOH, that page is for version 1, not > > 2; is there some other way to generate cache stats in version 2? > > auth cache works with dove 2, no idea about dove 1 ,didnt test, but i > guess it does I'm using dovecot 2; my question was that the documentation for dovecot 1 described a way to make dovecot dump the authentication cache statistics that doesn't seem to work for dovecot 2, and if there was some other way to get the cache statistics in dovecot 2. Thanks... From mark at msapiro.net Sun Jan 15 23:36:48 2012 From: mark at msapiro.net (Mark Sapiro) Date: Sun, 15 Jan 2012 13:36:48 -0800 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: References: Message-ID: <4F1346F0.6020908@msapiro.net> IVO GELOV (CRM) wrote: > I still think that my idea with custom error codes is more useful - if the user is > on vacation, the message is rejected immediately (no auto-reply is sent) and sender > can see (hopefully, because most users just ignore error messages) the reason why > the messages was rejected. A 4xx status will not do this. It should just cause the sending MTA to keep the message queued and keep retrying. Depending on the sending MTA's retry and notification policies, the sender may see no error or delay notification for several days. If you really want the sender to immediately see a rejection, you have to use a 5xx status. -- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan From mark at msapiro.net Sun Jan 15 23:50:02 2012 From: mark at msapiro.net (Mark Sapiro) Date: Sun, 15 Jan 2012 13:50:02 -0800 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F12C794.6070609@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> Message-ID: <4F134A0A.70804@msapiro.net> On 11:59 AM, Charles Marcus wrote: > On 2012-01-14 12:23 PM, IVO GELOV (CRM) wrote: >> I have downloaded the latest version 4.0 - but it seems there is no >> way to prevent spammers to use forged email addresses. 
I decided to >> remove the vacation feature from our corporate mail server, because >> it actually opens a backdoor (even though only when someone decides >> to activate his vacation auto-reply) for spammers and puts a risk on >> the company (our server can be blacklisted). > > Sorry, I misread your message... > > However, (I *think*) there *is* a simple solution to your problem, if I > now understand it correctly... > > Simply disallow anyone sending from an email address in your domain from > sending without SASL_AUTHing... I don't see how this will help. The scenario the OP is concerned about is spammer at foreign.domain sends a message with forged From: and maybe envelope sender victim at other.foreign.domain to his user on vacation. The vacation program sends an autoresponse to the victim. However, why worry about this minimal backscatter? A good vacation program will not send more that one autoresponse per long time (a week?) for a given sender/recipient and won't include the original spam payload. So, even though a spammer might use this backdoor to cause your server to send messages to multiple recipients, the messages should not have spam payloads and shouldn't be sent more that once to a given end recipient. -- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan From phessler at theapt.org Mon Jan 16 11:15:21 2012 From: phessler at theapt.org (Peter Hessler) Date: Mon, 16 Jan 2012 10:15:21 +0100 Subject: [Dovecot] per-user limit? Message-ID: <20120116091521.GA10944@gir.theapt.org> I am seeing a problem where users are limited to 6 imap logins total. One of my users has a bunch of phones and computers, and wants them all on at the same time. I'm looking through my configuration, and I cannot see a limit on how many times a single user can connect. He is connecting from different IPs. Any ideas? My logs show the following error when they attempt to auth for a 7th time: dovecot: imap-login: Disconnected (no auth attempts): rip=111.yy.zz.xx, lip=81.209.183.113, TLS $ dovecot -n # 2.0.16: /etc/dovecot/dovecot.conf # OS: OpenBSD 5.1 amd64 ffs auth_mechanisms = plain login base_dir = /var/dovecot/ listen = *, [::] mail_location = maildir:/usr/home/%u/Maildir managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave mbox_write_locks = fcntl passdb { driver = bsdauth } service auth { unix_listener /var/run/dovecot/auth-master { mode = 0600 } unix_listener /var/spool/postfix/private/auth { group = wheel mode = 0660 user = _postfix } user = root } service imap-login { process_limit = 128 process_min_avail = 6 service_count = 1 user = _dovecot } service pop3-login { process_limit = 64 process_min_avail = 6 service_count = 1 user = _dovecot } ssl_cert = References: <4F1346F0.6020908@msapiro.net> Message-ID: On Sun, 15 Jan 2012 23:36:48 +0200, Mark Sapiro wrote: > IVO GELOV (CRM) wrote: > >> I still think that my idea with custom error codes is more useful - if the user is >> on vacation, the message is rejected immediately (no auto-reply is sent) and sender >> can see (hopefully, because most users just ignore error messages) the reason why >> the messages was rejected. > > > A 4xx status will not do this. It should just cause the sending MTA to > keep the message queued and keep retrying. 
Depending on the sending > MTA's retry and notification policies, the sender may see no error or > delay notification for several days. > > If you really want the sender to immediately see a rejection, you have > to use a 5xx status. > Yes, you are right. The error code is the smallest difficulty :) From ivo at crm.walltopia.com Mon Jan 16 11:38:01 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Mon, 16 Jan 2012 11:38:01 +0200 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F134A0A.70804@msapiro.net> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F134A0A.70804@msapiro.net> Message-ID: On Sun, 15 Jan 2012 23:50:02 +0200, Mark Sapiro wrote: > On 11:59 AM, Charles Marcus wrote: >> On 2012-01-14 12:23 PM, IVO GELOV (CRM) wrote: >>> I have downloaded the latest version 4.0 - but it seems there is no >>> way to prevent spammers to use forged email addresses. I decided to >>> remove the vacation feature from our corporate mail server, because >>> it actually opens a backdoor (even though only when someone decides >>> to activate his vacation auto-reply) for spammers and puts a risk on >>> the company (our server can be blacklisted). >> >> Sorry, I misread your message... >> >> However, (I *think*) there *is* a simple solution to your problem, if I >> now understand it correctly... >> >> Simply disallow anyone sending from an email address in your domain from >> sending without SASL_AUTHing... > > > I don't see how this will help. The scenario the OP is concerned about > is spammer at foreign.domain sends a message with forged From: and maybe > envelope sender victim at other.foreign.domain to his user on vacation. The > vacation program sends an autoresponse to the victim. > > However, why worry about this minimal backscatter? A good vacation > program will not send more that one autoresponse per long time (a week?) > for a given sender/recipient and won't include the original spam > payload. So, even though a spammer might use this backdoor to cause your > server to send messages to multiple recipients, the messages should not > have spam payloads and shouldn't be sent more that once to a given end > recipient. > The limitation of 1 message per week for any unique combination of sender/recipient does not stop backscatter - because each message can come with a new forged FROM address, and from different compromised mail servers. The spammer does not have control over the body of the auto-replies (which is something like "I am not at the office, please write to my colleagues"), but it still may cause the victims to take some measures. From ivo at crm.walltopia.com Mon Jan 16 11:48:11 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Mon, 16 Jan 2012 11:48:11 +0200 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F12C794.6070609@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> Message-ID: On Sun, 15 Jan 2012 14:33:24 +0200, Charles Marcus wrote: > On 2012-01-14 12:23 PM, IVO GELOV (CRM) wrote: >> I have downloaded the latest version 4.0 - but it seems there is no >> way to prevent spammers to use forged email addresses. 
I decided to >> remove the vacation feature from our corporate mail server, because >> it actually opens a backdoor (even though only when someone decides >> to activate his vacation auto-reply) for spammers and puts a risk on >> the company (our server can be blacklisted). > > Sorry, I misread your message... > > However, (I *think*) there *is* a simple solution to your problem, if I > now understand it correctly... > > Simply disallow anyone sending from an email address in your domain from > sending without SASL_AUTHing... > > The way I do this is: > > in main.cf (I put all of my restrictions in > smtpd_recipient_restrictions) add: > > check_sender_access ${hash}/nospoof, > > somewhere after reject_unauth_destination *but before any RBL checks) > > where nospoof contains: > > # Prevent spoofing from domains that we own > allowed_address1 at example.com OK > allowed_address2 at example.com OK > example.com REJECT You must use sasl_auth to send from one of our > example.com email addresses... > > and of course be sure to postmap the nospoof database after making any > changes... > These are the restrictions I apply (or had been applying for some time). Anyway, for now I simply disabled the vacation plugin. smtpd_client_restrictions = permit_mynetworks, check_client_access mysql:/etc/postfix/sender_ip, permit_sasl_authenticated, reject_unknown_client #reject_rhsbl_client blackhole.securitysage.com, reject_rbl_client opm.blitzed.org, #smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, check_client_access mysql:/etc/postfix/client_sql, reject_rbl_client sbl.spamhaus.org, reject_rbl_client list.dsbl.org,reject_rbl_client cbl.abuseat.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client dnsbl.ahbl.org, permit #smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, check_client_access mysql:/etc/postfix/client_ok, reject_rbl_client sbl.spamhaus.org, reject_rbl_client list.dsbl.org,reject_rbl_client cbl.abuseat.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client dnsbl.ahbl.org, reject_unknown_client ###, check_policy_service inet:127.0.0.1:10040, reject_rbl_client sbl.spamhaus.org, reject_rbl_client cbl.abuseat.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client dnsbl.ahbl.org #,reject_rbl_client opm.blitzed.org, reject_rbl_client relays.ordb.org, reject_rbl_client dun.dnsrbl.net #REJECT_NON_FQDN_HOSTNAME - proverka dali HELO e pylno Domain ime (sus suffix) #smtpd_helo_restrictions = check_helo_access hash:/etc/postfix/helo_access, reject_invalid_hostname, reject_non_fqdn_hostname smtpd_helo_restrictions = reject_invalid_hostname smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_rhsbl_sender rhsbl.ahbl.org, reject_rhsbl_sender rhsbl.sorbs.net, reject_rhsbl_sender multi.surbl.org #reject_rhsbl_sender blackhole.securitysage.com, reject_rhsbl_sender opm.blitzed.org, #smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, check_sender_access mysql:/etc/postfix/sender_sql, reject_non_fqdn_sender, reject_unknown_sender_domain, reject_rhsbl_sender rhsbl.ahbl.org, reject_rhsbl_sender block.rhs.mailpolice.com, reject_rhsbl_sender rhsbl.sorbs.net, reject_rhsbl_sender multi.surbl.org, reject_rhsbl_sender dsn.rfc-ignorant.org, permit #, reject_rhsbl_sender dsn.rfc-ignorant.org, reject_rhsbl_sender relays.ordb.org, reject_rhsbl_sender dun.dnsrbl.net #smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination, reject_unauth_pipelining, 
check_recipient_access regexp:/etc/postfix/dspam_incoming smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination, reject_unauth_pipelining smtpd_data_restrictions = reject_unauth_pipelining From joseba.torre at ehu.es Mon Jan 16 11:50:49 2012 From: joseba.torre at ehu.es (Joseba Torre) Date: Mon, 16 Jan 2012 10:50:49 +0100 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F11D677.2040706@schetterer.org> References: <4F1071F8.4080202@Media-Brokers.com> <4F11D677.2040706@schetterer.org> Message-ID: <4F13F2F9.2070008@ehu.es> > anyway if you use other vacation tecs, make sure allready flagged spam > by i.e clamav, amavis, spamassassin etc in postfix stage is not handled > by your vacation service , script etc. > as far i remember i gave some patch to the postfixadmin vacation script > doing exact this If you're using any antispam soft that gives every mail a spam score (like spamassassin does), you can use a strong rule for vacation replies (like "only messages with a spam score under 5 are allowed, but only those under 3 may have a vacation reply"). From rasca at miamammausalinux.org Mon Jan 16 12:42:08 2012 From: rasca at miamammausalinux.org (RaSca) Date: Mon, 16 Jan 2012 11:42:08 +0100 Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2) Message-ID: <4F13FF00.1050108@miamammausalinux.org> Hi all, I'm trying to make quota work in Squeeze (Dovecot 1.2.15-7). The quota module is correctly loaded and, when receiving a message, from the log I see these messages: Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Loading modules from directory: /usr/lib/dovecot/modules/lda Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Module loaded: /usr/lib/dovecot/modules/lda/lib10_quota_plugin.so Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Module loaded: /usr/lib/dovecot/modules/lda/lib90_sieve_plugin.so Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): auth input: uid=5000 Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): auth input: gid=5000 Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): auth input: home=/mail/mailboxes//testquota Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Quota root: name=/mail/mailboxes//testquota backend=maildir args= Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): maildir: data=/mail/mailboxes//testquota@ Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): maildir++: root=/mail/mailboxes//testquota@, index=, control=, inbox=/mail/mailboxes//testquota@ Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): sieve: user's script path /mail/mailboxes//testquota/.dovecot.sieve doesn't exist (using global script path in stead) Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): sieve: using sieve path for user's script: /mail/sieve/globalsieverc Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): sieve: opening script /mail/sieve/globalsieverc Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): sieve: executing compiled script /mail/sieve/globalsieverc Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Namespace : Using permissions from /mail/mailboxes//testquota@: mode=0700 gid=-1 Jan 16 11:20:05 mail-1 dovecot: deliver(testquota@): sieve: msgid=<4F13F996.4000501 at seat.it>: stored mail into mailbox 'INBOX' Now, since I've got a message like this: Quota root: name=/mail/mailboxes//testquota@ backend=maildir args= it seems that something is checked, but even if this directory is over quota, nothing 
happens. This is my dovecot conf: protocols = imap pop3 disable_plaintext_auth = no log_timestamp = "%Y-%m-%d %H:%M:%S " mail_location = maildir:/mail/mailboxes/%d/%n@%d mail_privileged_group = mail mail_debug = yes mail_nfs_storage = yes mmap_disable=yes fsync_disable=no mail_nfs_index = yes protocol imap { mail_plugins = quota imap_quota } protocol pop3 { pop3_uidl_format = %08Xu%08Xv mail_plugins = quota } protocol managesieve { } protocol lda { auth_socket_path = /var/run/dovecot/auth-master postmaster_address = postmaster@ mail_plugins = sieve quota quota_full_tempfail = no log_path = } auth default { mechanisms = plain passdb sql { args = /etc/dovecot/dovecot-sql.conf } userdb passwd { } userdb static { args = uid=5000 gid=5000 home=/mail/mailboxes/%d/%n@%d allow_all_users=yes } user = root socket listen { master { path = /var/run/dovecot/auth-master mode = 0600 user = vmail } client { path = /var/spool/postfix/private/auth mode = 0660 user = postfix group = postfix } } } plugin { quota = maildir:/mail/mailboxes/%d/%n@%d sieve_global_path = /mail/sieve/globalsieverc } The db connection works, this is /etc/dovecot/dovecot-sql.conf: driver = mysql connect = host= dbname=mail user= password= default_pass_scheme = CRYPT password_query = SELECT username, password FROM mailbox WHERE username='%u' user_query = SELECT username AS user, maildir AS home, CONCAT('*:storage=', quota , 'B') AS quota_rule FROM mailbox WHERE username = '%u' AND active = '1' and for the user testquota the user_query results in this: +-------------------+----------------------------+--------------------+ | user | home | quota_rule | +-------------------+----------------------------+--------------------+ | testquota@ | /testquota@/ | *:storage=1024000B | +-------------------+----------------------------+--------------------+ everything else is ok, for example I'm using sieve for the spam filter, and the SPAM is correctly put in the .SPAM dir. I turned on debug on dovecot, but I can't see if the query in some way fails. Can you please help me to understand what am I doing wrong? -- RaSca Mia Mamma Usa Linux: Niente ? impossibile da capire, se lo spieghi bene! rasca at miamammausalinux.org http://www.miamammausalinux.org From jsxmoney at gmail.com Mon Jan 16 14:38:44 2012 From: jsxmoney at gmail.com (Jason X, Maney) Date: Mon, 16 Jan 2012 14:38:44 +0200 Subject: [Dovecot] Dovecot unable to locate mailbox Message-ID: Dear all, I hope someone can point me in the right direction. here. I have setup my Dovecot v2.0.13 on Ubuntu 11.10. The logs tells me that the mail location has failed as follows: ========= Jan 16 14:18:16 myservername dovecot: pop3-login: Login: user=, method=PLAIN, rip=aaa.bbb.ccc.ddd, lip=www.xxx.yyy.zzz, mpid=1360, TLS Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: user molla: Initialization failed: mail_location not set and autodetection failed: Mail storage autodetection failed with home=/home/userA Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: Invalid user settings. Refer to server log for more information. ========= Yet my config also come out strangely as below: ========= root at guyana:~# dovecot -n # 2.0.13: /etc/dovecot/dovecot.conf # OS: Linux 3.0.0-12-server x86_64 Ubuntu 11.10 passdb { driver = pam } protocols = " imap pop3" ssl_cert = References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F134A0A.70804@msapiro.net> Message-ID: <4F141D93.30406@Media-Brokers.com> On 2012-01-15 4:50 PM, Mark Sapiro wrote: > I don't see how this will help. 
The scenario the OP is concerned about
> is spammer at foreign.domain sends a message with forged From: and maybe
> envelope sender victim at other.foreign.domain to his user on vacation.

Guess I should read more carefully... for some reason I thought I
remembered him being worried about forged senders in his own domain(s)...

Sorry for the noise...

-- 
Best regards,
Charles

From kirill at shutemov.name Mon Jan 16 17:05:05 2012
From: kirill at shutemov.name (Kirill A. Shutemov)
Date: Mon, 16 Jan 2012 17:05:05 +0200
Subject: [Dovecot] v2.1.rc3 released
In-Reply-To: <1325878845.17774.38.camel@hurina>
References: <1325878845.17774.38.camel@hurina>
Message-ID: <20120116150504.GA28883@shutemov.name>

On Fri, Jan 06, 2012 at 09:40:44PM +0200, Timo Sirainen wrote:
> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc3.tar.gz
> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc3.tar.gz.sig
>
> Whops, rc2 was missing a file. I always run "make distcheck", which
> should catch these, but recently it has always failed due to clang
> static checking giving one "error" that I didn't really want to fix.
> Because of that the distcheck didn't finish and didn't check for the
> missing file.
>
> So, anyway, I've made clang happy again, and now that I see how bad idea
> it is to just ignore the failed distcheck, I won't do that again in
> future. :)
>

./autogen failed:

$ ./autogen.sh
libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.in and
libtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree.
libtoolize: Consider adding `-I m4' to ACLOCAL_AMFLAGS in Makefile.am.
src/plugins/fts/Makefile.am:52: `pkglibexecdir' is not a legitimate directory for `SCRIPTS'
Makefile.am:24: `pkglibdir' is not a legitimate directory for `DATA'
autoreconf: automake failed with exit status: 1

$ automake --version | head -1
automake (GNU automake) 1.11.2

-- 
Kirill A. Shutemov

From info at simonecaruso.com Mon Jan 16 17:40:59 2012
From: info at simonecaruso.com (Simone Caruso)
Date: Mon, 16 Jan 2012 16:40:59 +0100
Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2)
In-Reply-To: <4F13FF00.1050108@miamammausalinux.org>
References: <4F13FF00.1050108@miamammausalinux.org>
Message-ID: <4F14450B.8000903@simonecaruso.com>

On 16/01/2012 11:42, RaSca wrote:
> Hi all,
> I'm trying to make quota work in Squeeze (Dovecot 1.2.15-7).

try "auth_debug = yes"

-- 
Simone Caruso
IT Consultant
+39 349 65 90 805

From thomas at koch.ro Mon Jan 16 17:51:45 2012
From: thomas at koch.ro (Thomas Koch)
Date: Mon, 16 Jan 2012 16:51:45 +0100
Subject: [Dovecot] Trying to get metadata plugin working
Message-ID: <201201161651.46232.thomas@koch.ro>

Hi,

I'm working on a Kolab related project and wanted to use dovecot on my dev
machine. However I'm stuck with the metadata-plugin. I "solved" the
permissions problems but now I get

dict: Error: file dict commit: file_dotlock_open(~/Maildir/shared-metadata) failed: No such file or directory

Before that, I had

dict {
  metadata = file:/var/lib/dovecot/shared-metadata

but got problems since my normal user had no permission to access
/var/lib/dovecot. I compiled the plugin from the most recent commit. My
dovecot runs in a chroot. I can login with KMail and can create Groupware
(annotated) folders, but the metadata file dict won't get created and I
also can't set/get metadata via telnet.
doveconf -N # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 3.1.0-1-amd64 x86_64 Debian 6.0.3 auth_mechanisms = plain dict { metadata = file:~/Maildir/shared-metadata } mail_access_groups = dovecot mail_location = maildir:~/Maildir mail_plugins = " metadata" passdb { driver = pam } plugin { metadata_dict = proxy::metadata } protocols = " imap" service dict { unix_listener dict { group = dovecot mode = 0666 } } ssl_cert = References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F134A0A.70804@msapiro.net> Message-ID: <4F144E57.9060802@msapiro.net> On 11:59 AM, IVO GELOV (CRM) wrote: > > The limitation of 1 message per week for any unique combination of > sender/recipient > does not stop backscatter - because each message can come with a new > forged FROM address, > and from different compromised mail servers. > The spammer does not have control over the body of the auto-replies > (which is something > like "I am not at the office, please write to my colleagues"), but it still > may cause the victims to take some measures. All true, but the sender in the sender/recipient combination is the forged From: that ultimately receives the backscatter and the recipient is your local user who set the vacation autoresponse. If you only have one or two local users on vacation at a time, any given backscatter recipient could receive at most one or two backscatter messages per week regardless of how many compromised servers the spammer sends from. And this assumes the spam is initially sent to multiple local users on vacation and gets past your local spam filtering. I don't know about you, but I have more significant potential backscatter sources to worry about. -- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan From rasca at miamammausalinux.org Mon Jan 16 18:28:58 2012 From: rasca at miamammausalinux.org (RaSca) Date: Mon, 16 Jan 2012 17:28:58 +0100 Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2) In-Reply-To: <4F14450B.8000903@simonecaruso.com> References: <4F13FF00.1050108@miamammausalinux.org> <4F14450B.8000903@simonecaruso.com> Message-ID: <4F14504A.9010302@miamammausalinux.org> Il giorno Lun 16 Gen 2012 16:40:59 CET, Simone Caruso ha scritto: > On 16/01/2012 11:42, RaSca wrote: >> Hi all, >> I'm trying to make quota work in Squeeze (Dovecot 1.2.15-7). > try "auth_debug = yes" > In fact, enabling auth_debug gives me this: Jan 16 17:21:06 mail-2 dovecot: auth(default): master in: USER#0111#011testquota@#011service=deliver Jan 16 17:21:06 mail-2 dovecot: auth(default): passwd(testquota@): lookup Jan 16 17:21:06 mail-2 dovecot: auth(default): passwd(testquota@): unknown user Jan 16 17:21:06 mail-2 dovecot: auth(default): master out: USER#0111#011testquota@#011uid=5000#011gid=5000#011home=/mail/mailboxes//testquota@ But what I don't understand is that manually doing password_query and user_query works. So why I receive unknown user? Is there something else to set? -- RaSca Mia Mamma Usa Linux: Niente ? impossibile da capire, se lo spieghi bene! rasca at miamammausalinux.org http://www.miamammausalinux.org From greve at kolabsys.com Mon Jan 16 18:13:14 2012 From: greve at kolabsys.com (Georg C. F. 
Greve) Date: Mon, 16 Jan 2012 17:13:14 +0100 Subject: [Dovecot] [Kolab-devel] Trying to get metadata plugin working In-Reply-To: <201201161651.46232.thomas@koch.ro> References: <201201161651.46232.thomas@koch.ro> Message-ID: <2001652.RYW7Y0I4zo@katana.lair> On Monday 16 January 2012 16.51:45 Thomas Koch wrote: > I'm working on a Kolab related project and wanted to use dovecot on my dev > machine. Very interesting. Please document your findings in wiki.kolab.org once you're done. > dict: Error: file dict commit: file_dotlock_open(~/Maildir/shared-metadata) > failed: No such file or directory Can't really help with that one, I'm afraid. Best regards, Georg -- Georg C. F. Greve Chief Executive Officer Kolab Systems AG Z?rich, Switzerland e: greve at kolabsys.com t: +41 78 904 43 33 w: http://kolabsys.com pgp: 86574ACA Georg C. F. Greve -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 308 bytes Desc: This is a digitally signed message part. URL: From tss at iki.fi Mon Jan 16 19:16:57 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 16 Jan 2012 19:16:57 +0200 Subject: [Dovecot] Trying to get metadata plugin working In-Reply-To: <201201161651.46232.thomas@koch.ro> References: <201201161651.46232.thomas@koch.ro> Message-ID: <23312B5E-14CF-42D9-8A18-F995EDA874C4@iki.fi> On 16.1.2012, at 17.51, Thomas Koch wrote: > dict: Error: file dict commit: file_dotlock_open(~/Maildir/shared-metadata) > failed: No such file or directory It's not expanding ~/ > dict { > metadata = file:~/Maildir/shared-metadata Use %h/ instead of ~/ From thomas at koch.ro Mon Jan 16 20:26:12 2012 From: thomas at koch.ro (Thomas Koch) Date: Mon, 16 Jan 2012 19:26:12 +0100 Subject: [Dovecot] Trying to get metadata plugin working In-Reply-To: <23312B5E-14CF-42D9-8A18-F995EDA874C4@iki.fi> References: <201201161651.46232.thomas@koch.ro> <23312B5E-14CF-42D9-8A18-F995EDA874C4@iki.fi> Message-ID: <201201161926.12309.thomas@koch.ro> Timo Sirainen: > Use %h/ instead of ~/ Hi Timo, it doesn't expand either %h nor %%h. When I hardcode the path to my dev user's homedir I get a permission error. After hardcoding it to /tmp/shared-metadata the file gets at least written, but the content looks strange: shared/mailbox/7c2ae515102e144f172d0000d1887b74/shared//vendor/kolab/folder- test true Best regards, Thomas Koch, http://www.koch.ro From tss at iki.fi Mon Jan 16 20:33:44 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 16 Jan 2012 20:33:44 +0200 Subject: [Dovecot] Trying to get metadata plugin working In-Reply-To: <201201161926.12309.thomas@koch.ro> References: <201201161651.46232.thomas@koch.ro> <23312B5E-14CF-42D9-8A18-F995EDA874C4@iki.fi> <201201161926.12309.thomas@koch.ro> Message-ID: On 16.1.2012, at 20.26, Thomas Koch wrote: > Timo Sirainen: >> Use %h/ instead of ~/ > > Hi Timo, > > it doesn't expand either %h nor %%h. Oh, right, wrong place. If you make it go through proxy, it doesn't do any expansion. It's then accessed by the "dict" process (which probably runs as "dovecot" user). You could instead use something like: metadata_dict = file:%h/Maildir/shared-metadata > When I hardcode the path to my dev user's > homedir I get a permission error. After hardcoding it to /tmp/shared-metadata > the file gets at least written, but the content looks strange: > > shared/mailbox/7c2ae515102e144f172d0000d1887b74/shared//vendor/kolab/folder- > test > true I haven't really looked at what the metadata plugin actually does.. 
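Putting Timo's suggestion together as a config sketch (the metadata plugin
and the metadata_dict setting come from this thread; the point is dropping
the proxy::metadata indirection, since %variables are then expanded by the
mail process itself, with the logged-in user's home available):

plugin {
  # The imap process opens the file dict directly, so %h expands and
  # no shared "dict" process (and its permissions) is involved.
  metadata_dict = file:%h/Maildir/shared-metadata
}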
From buchholz at easystreet.net Tue Jan 17 00:41:46 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Mon, 16 Jan 2012 14:41:46 -0800 Subject: [Dovecot] imap-login process_limit reached Message-ID: <4F14A7AA.8010507@easystreet.net> I've been having some problems with IMAP user connections to the Dovecot (v2.0.8) server. The following message is being logged. Jan 16 10:51:36 postal dovecot: master: Warning: service(imap-login): process_limit reached, client connections are being dropped The server is running Red Hat Enterprise Linux release 4 (update 6). Dovecot is v2.0.8. We have only 29 user accounts in /etc/dovecot/users. There were 196 "dovecot/imap" processes and 6 other dovecot processes, for a total of 202 "dovecot" processes, listed in the 'ps aux' output when problems were being experienced. Stopping and restarting the Dovecot system fixes the problem -- for a while. The 'doveconf -n' output is attached. I have not set any "process_limit" values, and I don't think I'm getting anywhere close to the 1024 default, so I'm pretty confused as to what might be wrong. Any suggestions on what to do next are appreciated. Thanks, - Don ------------------------------------------------------------------------ -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: doveconf-n.txt URL: From lists at wildgooses.com Tue Jan 17 02:22:35 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 17 Jan 2012 00:22:35 +0000 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F04FAA9.3020908@localhost.localdomain.org> References: <4F0367A2.1000807@Media-Brokers.com> <4F04FAA9.3020908@localhost.localdomain.org> Message-ID: <4F14BF4B.5060804@wildgooses.com> On 05/01/2012 01:19, Pascal Volk wrote: > On 01/03/2012 09:40 PM Charles Marcus wrote: >> Hi everyone, >> >> Was just perusing this article about how trivial it is to decrypt >> passwords that are stored using most (standard) encryption methods (like >> MD5), and was wondering - is it possible to use bcrypt with >> dovecot+postfix+mysql (or posgres)? > Yes it is possible to use bcrypt with dovecot. Currently you have only > to write your password scheme plugin. The bcrypt algorithm is described > at http://en.wikipedia.org/wiki/Bcrypt. > > If you are using Dovecot>= 2.0 'doveadm pw' supports the schemes: > *BSD: Blowfish-Crypt > *Linux (since glibc 2.7): SHA-256-Crypt and SHA-512-Crypt > Some distributions have also added support for Blowfish-Crypt > See also: doveadm-pw(1) > > If you are using Dovecot< 2.0 you can also use any of the algorithms > supported by your system's libc. But then you have to prefix the hashes > with {CRYPT} - not {{BLF,SHA256,SHA512}-CRYPT}. > I'm a bit late, but the above is absolutely correct Basically the simplest solution is to pick a glibc which natively supports bcrypt (and the equivalent algorithm, but using SHA-256/512). Then you can effectively use any of these hashes in your /etc/{passwd,shadow} file. With the hash testing native in your glibc then a bunch of applications automatically acquire the ability to test passwords stored in these hash formats, dovecot being one of them To generate the hashes in that format, choose an appropriate library for your web interface or whatever generates the hashes for you. There are even command line utilities (mkpasswd) to do this for you. 
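For example, generating hashes with the tools mentioned above (a sketch;
which schemes are available depends on your Dovecot build and your libc,
as discussed):

$ doveadm pw -s SHA512-CRYPT
Enter new password:
Retype new password:
{SHA512-CRYPT}$6$...

$ doveadm pw -s BLF-CRYPT        # only where Blowfish-Crypt is supported
{BLF-CRYPT}$2a$...

$ mkpasswd -m sha-512 'secret'
$6$...

The resulting hash (with or without the {SCHEME} prefix, depending on your
passdb's default_pass_scheme) is what goes into the SQL or passwd-file
backend.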
I forget the config knobs (/etc/logins.def ?), but it's entirely possible to also have all your normal /etc/shadow hashes generated in this format going forward if you wish I posted some patches for uclibc recently for bcrypt and I think sha-256/512 already made it in. I believe several of the big names have similar patches for glibc. Just to attack some of the myths here: - Salting passwords basically means adding some random garbage at the front of the password before hashing. - Salting passwords prevents you using a big lookup table to cheat and instantly reverse the password - Salting has very little ability to stop you bruteforcing the password, ie it takes around the same time to figure out the SHA or blowfish hash of every word in some dictionary, regardless of whether you use the raw word or the word with some garbage in front of it - Using an iterated hash algorithm gives you a linear increase in difficulty in bruteforcing passwords. So if you do a million iterations on each password, then it takes a million times longer to bruteforce (probably there are shortcuts to be discovered, assume that this is best case, but it's still a good improvement). - Bear in mind that off the shelf GPU crackers will do of the order 100-300 million hashes per second!! http://www.cryptohaze.com/multiforcer.php The last statistic should be scary to someone who has some small knowledge of the number of unique words in the [english] language, even multiplying up for trivial permutations with numbers or punctuation... So in conclusion: everyone who stores passwords in hash form should make their way in an orderly fashion towards the door if they don't currently use an iterated hash function. No need to run, but it definitely should be on the todo list to apply where feasible. BCrypt is very common and widely implemented, but it would seem logical to consider SHA-256/512 (iterated) options where there is application support. Note I personally believe there are valid reasons to store plaintext passwords - this seems to cause huge criticism due to the ensuing disaster which can happen if the database is pinched, but it does allow for enhanced security in the password exchange, so ultimately it depends on where your biggest risk lies... Good luck Ed W From lists at wildgooses.com Tue Jan 17 02:28:32 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 17 Jan 2012 00:28:32 +0000 Subject: [Dovecot] compressed mboxes very slow In-Reply-To: <8739blw6gl.fsf@alfa.kjonca> References: <87iptnoans.fsf@alfa.kjonca> <8739blw6gl.fsf@alfa.kjonca> Message-ID: <4F14C0B0.9020709@wildgooses.com> On 12/01/2012 10:39, Kamil Jo?ca wrote: > kjonca at o2.pl (Kamil Jo?ca) writes: > >> I have some archive mails in gzipped mboxes. I could use them with >> dovecot 1.x without problems. >> But recently I have installed dovecot 2.0.12, and they are slow. very >> slow. 
> > Recently I have to read some compressed mboxes again, and no progress :( > I took 2.0.17 sources and put some > i_debug ("#kjonca["__FILE__",%d,%s] %d", __LINE__,__func__,...some parameters ...); > > lines into istream-bzlib.c, istream-raw-mbox.c and istream-limit.c > and found that: > > in istream-limit.c in function around lines 40-45: > --8<---------------cut here---------------start------------->8--- > i_stream_seek(stream->parent, lstream->istream.parent_start_offset + > stream->istream.v_offset); > stream->pos -= stream->skip; > stream->skip = 0; > --8<---------------cut here---------------end--------------->8--- > seeks stream, (calling i_stream_raw_mbox_seek in file istream-raw-mbox.c ) > > and then (line 50 ) > --8<---------------cut here---------------start------------->8--- > if ((ret = i_stream_read(stream->parent)) == -2) > return -2; > --8<---------------cut here---------------end--------------->8--- > > tries to read some data earlier in stream, and with compressed mboxes it > cause reread file from the beginning. > Just wanted to bump this since it seems interesting. Timo do you have a comment? I definitely see your point that skipping backwards in a compressed stream is going to be very CPU intensive. Ed W From moseleymark at gmail.com Tue Jan 17 03:17:26 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 16 Jan 2012 17:17:26 -0800 Subject: [Dovecot] LMTP Logging Message-ID: Just had a minor suggestion, with no clue how hard/easy it would be to implement: The %f flag in deliver_log_format seems to pick up the From: header, instead of the "MAIL FROM:<...>" arg. It'd be handy to have a %F that shows the "MAIL FROM" arg instead. I'm looking at tracking emails through logs from Exim to Dovecot easily. I know Message-ID can be used for correlation but it adds some complexity to searching, i.e. I can't just grep for the sender (as logged by Exim), unless I assume "MAIL FROM" always == From: From janfrode at tanso.net Tue Jan 17 10:36:19 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Tue, 17 Jan 2012 09:36:19 +0100 Subject: [Dovecot] resolve mail_home ? Message-ID: <20120117083619.GA21186@dibs.tanso.net> I now have "mail_home = /srv/mailstore/%256RHu/%d/%n". Is there any way of asking dovecot where a user's home directory is? It's not in "doveadm user": $ doveadm user -f home janfrode at lyse.net $ doveadm user janfrode at tanso.net userdb: janfrode at tanso.net mail : mdbox:~/mdbox quota_rule: *:storage=1048576 Alternatively, is there an easy way to calculate the %256RHu hash ? -jf From ivo at crm.walltopia.com Tue Jan 17 10:52:40 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Tue, 17 Jan 2012 10:52:40 +0200 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F144E57.9060802@msapiro.net> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F134A0A.70804@msapiro.net> <4F144E57.9060802@msapiro.net> Message-ID: On Mon, 16 Jan 2012 18:20:39 +0200, Mark Sapiro wrote: > On 11:59 AM, IVO GELOV (CRM) wrote: >> >> The limitation of 1 message per week for any unique combination of >> sender/recipient >> does not stop backscatter - because each message can come with a new >> forged FROM address, >> and from different compromised mail servers. 
>> The spammer does not have control over the body of the auto-replies
>> (which is something like "I am not at the office, please write to my
>> colleagues"), but it still may cause the victims to take some measures.
>
> All true, but the sender in the sender/recipient combination is the
> forged From: that ultimately receives the backscatter and the recipient
> is your local user who set the vacation autoresponse. If you only have
> one or two local users on vacation at a time, any given backscatter
> recipient could receive at most one or two backscatter messages per week
> regardless of how many compromised servers the spammer sends from. And
> this assumes the spam is initially sent to multiple local users on
> vacation and gets past your local spam filtering.
>
> I don't know about you, but I have more significant potential
> backscatter sources to worry about.

I see your point and I agree with you this is a minor problem.
Thanks for your time, Mark.

Best wishes,
Ivo Gelov

From ivo at crm.walltopia.com Tue Jan 17 11:59:14 2012
From: ivo at crm.walltopia.com (IVO GELOV (CRM))
Date: Tue, 17 Jan 2012 11:59:14 +0200
Subject: [Dovecot] Dovecot unable to locate mailbox
In-Reply-To: References: Message-ID:

On Mon, 16 Jan 2012 14:38:44 +0200, Jason X, Maney wrote:
> Dear all,
>
> I hope someone can point me in the right direction here. I have set up my
> Dovecot v2.0.13 on Ubuntu 11.10. The logs tell me that the mail location
> has failed as follows:
>
> =========
> Jan 16 14:18:16 myservername dovecot: pop3-login: Login: user=,
> method=PLAIN, rip=aaa.bbb.ccc.ddd, lip=www.xxx.yyy.zzz, mpid=1360, TLS
> Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: user molla:
> Initialization failed: mail_location not set and autodetection failed: Mail
> storage autodetection failed with home=/home/userA
> Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: Invalid user
> settings. Refer to server log for more information.
> =========
>
> Yet my config also comes out strangely, as below:
>
> # path given in the mail_location setting.
> # mail_location = maildir:~/Maildir
> # mail_location = mbox:~/mail:INBOX=/var/mail/%u
> # mail_location = mbox:/var/mail/%d/%1n/%n:INDEX=/var/indexes/%d/%1n/%n
> mail_location = maildir:~/Maildir
> # explicitly, ie. mail_location does nothing unless you have a namespace
> # mail_location, which is also the default for it.

Hi, Jason. I will describe my configuration and probably you will find
some useful information. I am using Postfix as MTA and have configured
Dovecot to be the LDA. I have several domains, so I am using the
following folder schema:

/var/mail/vhosts = the root of the mail storage
/var/mail/vhosts/domain_1 = first domain
/var/mail/vhosts/domain_1/user_1 = first mailbox in this domain
....
/var/mail/vhosts/domain_2 = another domain
/var/mail/vhosts/domain_2/user_1 = first mailbox in the other domain

This is achieved with the following setting in mail.conf:

mail_location = maildir:/var/mail/vhosts/%d/%n

But since I do not want to manually go and create the corresponding
folders each time I add a new user (I manage accounts through a MySQL
table), I also use the following settings in lda.conf:

lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes

Perhaps you only need to add the latter settings in lda.conf and
everything should run fine.
Best wishes, IVO GELOV From interfasys at gmail.com Tue Jan 17 13:07:28 2012 From: interfasys at gmail.com (=?UTF-8?B?aW50ZXJmYVN5cyBzw6BybA==?=) Date: Tue, 17 Jan 2012 11:07:28 +0000 Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1 Message-ID: <4F155670.6010905@gmail.com> Here is what I get when I try to compile the antispam plugin agaisnt Dovecot 2.1 ************** mailbox.c: In function 'antispam_save_begin': mailbox.c:138:12: error: 'struct mail_save_context' has no member named 'copying' mailbox.c: In function 'antispam_save_finish': mailbox.c:174:12: error: 'struct mail_save_context' has no member named 'copying' Failed to compile mailbox.c (plugin)! gmake[3]: *** [mailbox.plugin.o] Error 1 ************** The other objects compile fine. Cheers, Olivier From CMarcus at Media-Brokers.com Tue Jan 17 13:26:39 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 17 Jan 2012 06:26:39 -0500 Subject: [Dovecot] per-user limit? In-Reply-To: <20120116091521.GA10944@gir.theapt.org> References: <20120116091521.GA10944@gir.theapt.org> Message-ID: <4F155AEF.3080105@Media-Brokers.com> On 2012-01-16 4:15 AM, Peter Hessler wrote: > I'm looking through my configuration, and I cannot see a limit on how > many times a single user can connect. He is connecting from different > IPs. I think you're needing: http://wiki2.dovecot.org/Services#Service_limits -- Best regards, Charles From tss at iki.fi Tue Jan 17 16:20:13 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 17 Jan 2012 16:20:13 +0200 Subject: [Dovecot] little bug with Director in 2.1? In-Reply-To: References: Message-ID: <1326810013.11500.1.camel@innu> Hi, On Tue, 2012-01-10 at 16:16 +0100, Luca Di Vizio wrote: > in 2.1rc3 the "director_servers" setting does not accept hostnames as > documented (with ip no problems). > It works correctly in 2.0.17. The problem most likely was that v2.1 chroots the director process by default, but it did it a bit too early so hostname lookups failed. http://hg.dovecot.org/dovecot-2.1/rev/1d54d2963392 should fix it. From michael at orlitzky.com Tue Jan 17 16:23:47 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Tue, 17 Jan 2012 09:23:47 -0500 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F14A7AA.8010507@easystreet.net> References: <4F14A7AA.8010507@easystreet.net> Message-ID: <4F158473.1000901@orlitzky.com> First of all, feature request: doveconf -d show the default value of all settings On 01/16/12 17:41, Don Buchholz wrote: > > The 'doveconf -n' output is attached. I have not set any > "process_limit" values, and I don't think I'm getting anywhere close to > the 1024 default, so I'm pretty confused as to what might be wrong. > > Any suggestions on what to do next are appreciated. What makes you think 1024 is the default? We had to increase it. It shows up in doveconf -n output, so I don't think that's the default. # doveconf -n | grep limit default_process_limit = 1024 From phessler at theapt.org Tue Jan 17 16:27:31 2012 From: phessler at theapt.org (Peter Hessler) Date: Tue, 17 Jan 2012 15:27:31 +0100 Subject: [Dovecot] per-user limit? In-Reply-To: <4F155AEF.3080105@Media-Brokers.com> References: <20120116091521.GA10944@gir.theapt.org> <4F155AEF.3080105@Media-Brokers.com> Message-ID: <20120117142731.GF24394@gir.theapt.org> On 2012 Jan 17 (Tue) at 06:26:39 -0500 (-0500), Charles Marcus wrote: :On 2012-01-16 4:15 AM, Peter Hessler wrote: :>I'm looking through my configuration, and I cannot see a limit on how :>many times a single user can connect. 
He is connecting from different
:>IPs.
:
:I think you're needing:
:
:http://wiki2.dovecot.org/Services#Service_limits
:

Thanks for the pointer. However, this doesn't seem to help me. When I do
"doveconf | grep [foo]" I find that the limits are either '0' or '1',
except in "service imap-login { process_limit = 128 }". I had bumped that
up from 64, and now it is at 1024.

I don't have many users (about 6 that use imap), and nobody can use more
than 6. I also double checked my process limits, and they are either
unlimited, or measured in the ten-thousands.

-- 
Osborn's Law: Variables won't; constants aren't.

From duihi77 at gmail.com Tue Jan 17 16:31:01 2012
From: duihi77 at gmail.com (Duane Hill)
Date: Tue, 17 Jan 2012 14:31:01 +0000
Subject: [Dovecot] imap-login process_limit reached
In-Reply-To: <4F158473.1000901@orlitzky.com>
References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com>
Message-ID: <716809841.20120117143101@gmail.com>

On Tuesday, January 17, 2012 at 14:23:47 UTC, michael at orlitzky.com confabulated:

> First of all, feature request:

> doveconf -d
> show the default value of all settings

You mean like doveconf(1) ?

OPTIONS
  -a  Show all settings with their currently configured values.

-- 
If at first you don't succeed...
...so much for skydiving.

From michael at orlitzky.com Tue Jan 17 16:58:04 2012
From: michael at orlitzky.com (Michael Orlitzky)
Date: Tue, 17 Jan 2012 09:58:04 -0500
Subject: [Dovecot] imap-login process_limit reached
In-Reply-To: <716809841.20120117143101@gmail.com>
References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <716809841.20120117143101@gmail.com>
Message-ID: <4F158C7C.4070209@orlitzky.com>

On 01/17/12 09:31, Duane Hill wrote:
> On Tuesday, January 17, 2012 at 14:23:47 UTC, michael at orlitzky.com confabulated:
>
>> First of all, feature request:
>
>> doveconf -d
>> show the default value of all settings
>
> You mean like doveconf(1) ?
>
> OPTIONS
>   -a  Show all settings with their currently configured values.

Using -a shows you all settings, as they're running in your
installation. That's the defaults, except where they're overwritten by
your config.

I was asking for the defaults regardless of what's in my config file, so
that I don't have to deduce them from the combined doveconf output & my
config file.
In other words, I don't want to have to do this: mail2 ~ # touch empty-config.conf mail2 ~ # doveconf -a -c empty-config.conf | grep limit | head doveconf: Error: ssl enabled, but ssl_cert not set doveconf: Error: ssl enabled, but ssl_cert not set doveconf: Fatal: Error in configuration file empty-config.conf: ssl enabled, but ssl_cert not set default_client_limit = 1000 default_process_limit = 100 default_vsz_limit = 256 M recipient_delimiter = + client_limit = 0 process_limit = 1 vsz_limit = 18446744073709551615 B client_limit = 1 process_limit = 0 vsz_limit = 18446744073709551615 B to find out that the default process limit isn't 1000. From tss at iki.fi Tue Jan 17 17:27:15 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 17 Jan 2012 17:27:15 +0200 Subject: [Dovecot] resolve mail_home ? In-Reply-To: <20120117083619.GA21186@dibs.tanso.net> References: <20120117083619.GA21186@dibs.tanso.net> Message-ID: <1326814035.11500.9.camel@innu> On Tue, 2012-01-17 at 09:36 +0100, Jan-Frode Myklebust wrote: > I now have "mail_home = /srv/mailstore/%256RHu/%d/%n". Is there any way > of asking dovecot where a user's home directory is? No.. > It's not in "doveadm user": > > $ doveadm user -f home janfrode at lyse.net > $ doveadm user janfrode at tanso.net > userdb: janfrode at tanso.net > mail : mdbox:~/mdbox > quota_rule: *:storage=1048576 Right, because it's a default setting, not something that comes from a userdb lookup. > Alternatively, is there an easy way to calculate the %256RHu hash ? Nope.. Maybe a new command, or maybe a parameter to doveadm user that would show mail_uid/gid/home. Or maybe something that dumps config output with %vars expanded to the given user. Hmm. From tss at iki.fi Tue Jan 17 17:30:11 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 17 Jan 2012 17:30:11 +0200 Subject: [Dovecot] per-user limit? In-Reply-To: <20120116091521.GA10944@gir.theapt.org> References: <20120116091521.GA10944@gir.theapt.org> Message-ID: <1326814211.11500.11.camel@innu> On Mon, 2012-01-16 at 10:15 +0100, Peter Hessler wrote: > I am seeing a problem where users are limited to 6 imap logins total. > One of my users has a bunch of phones and computers, and wants them all > on at the same time. > > I'm looking through my configuration, and I cannot see a limit on how > many times a single user can connect. He is connecting from different > IPs. > > Any ideas? My logs show the following error when they attempt to auth > for a 7th time: > > dovecot: imap-login: Disconnected (no auth attempts): rip=111.yy.zz.xx, lip=81.209.183.113, TLS This means that the client simply didn't try to log in. If Dovecot reaches some kind of a limit, it logs about that. If there isn't anything else logged, I don't think the problem is in Dovecot itself. Can you reproduce this yourself by logging in with e.g. telnet? From javierdemiguel at us.es Tue Jan 17 17:35:17 2012 From: javierdemiguel at us.es (=?UTF-8?B?SmF2aWVyIE1pZ3VlbCBSb2Ryw61ndWV6?=) Date: Tue, 17 Jan 2012 16:35:17 +0100 Subject: [Dovecot] resolve mail_home ? In-Reply-To: <1326814035.11500.9.camel@innu> References: <20120117083619.GA21186@dibs.tanso.net> <1326814035.11500.9.camel@innu> Message-ID: <4F159535.1070701@us.es> That comand/paramater should be great for our backup scripts in our hashed mdboxes tree, we are using now slocate... Regards Javier > Nope.. > > Maybe a new command, or maybe a parameter to doveadm user that would > show mail_uid/gid/home. Or maybe something that dumps config output with > %vars expanded to the given user. Hmm. 
From CMarcus at Media-Brokers.com Tue Jan 17 18:13:44 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Tue, 17 Jan 2012 11:13:44 -0500
Subject: [Dovecot] imap-login process_limit reached
In-Reply-To: <4F158C7C.4070209@orlitzky.com>
References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <716809841.20120117143101@gmail.com> <4F158C7C.4070209@orlitzky.com>
Message-ID: <4F159E38.5020802@Media-Brokers.com>

On 2012-01-17 9:58 AM, Michael Orlitzky wrote:
> Using -a shows you all settings, as they're running in your
> installation. That's the defaults, except where they're overwritten by
> your config.
>
> I was asking for the defaults regardless of what's in my config file, so
> that I don't have to deduce them from the combined doveconf output & my
> config file.

Yeah, I had suggested this to Timo a long time ago when I suggested
doveconf -n (the way postfix does it), but I don't think he ever did the
-d option... maybe it got lost in the shuffle...

--
Best regards,
Charles

From buchholz at easystreet.net Tue Jan 17 20:15:28 2012
From: buchholz at easystreet.net (Don Buchholz)
Date: Tue, 17 Jan 2012 10:15:28 -0800
Subject: [Dovecot] imap-login process_limit reached
In-Reply-To: <4F158473.1000901@orlitzky.com>
References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com>
Message-ID: <4F15BAC0.3060003@easystreet.net>

Michael Orlitzky wrote:
> First of all, feature request:
>
> doveconf -d
> show the default value of all settings
>
>
> On 01/16/12 17:41, Don Buchholz wrote:
>
>> The 'doveconf -n' output is attached. I have not set any
>> "process_limit" values, and I don't think I'm getting anywhere close to
>> the 1024 default, so I'm pretty confused as to what might be wrong.
>>
>> Any suggestions on what to do next are appreciated.
>>
>
>
> What makes you think 1024 is the default? We had to increase it. It
> shows up in doveconf -n output, so I don't think that's the default.
>
> # doveconf -n | grep limit
> default_process_limit = 1024
>

What makes me think 1024 is the default?
The documentation:
--> http://wiki2.dovecot.org/Services?highlight=%28process_limit%29#imap.2C_pop3.2C_managesieve

From michael at orlitzky.com Tue Jan 17 20:30:02 2012
From: michael at orlitzky.com (Michael Orlitzky)
Date: Tue, 17 Jan 2012 13:30:02 -0500
Subject: [Dovecot] imap-login process_limit reached
In-Reply-To: <4F15BAC0.3060003@easystreet.net>
References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net>
Message-ID: <4F15BE2A.6010605@orlitzky.com>

On 01/17/12 13:15, Don Buchholz wrote:
>>
> What makes me think 1024 is the default?
> The documentation:
> -->
> http://wiki2.dovecot.org/Services?highlight=%28process_limit%29#imap.2C_pop3.2C_managesieve
>

That's only for those three services (imap, pop3, managesieve), not for
imap-login unfortunately. Check here for more info,

http://wiki2.dovecot.org/LoginProcess

but the good part is,

  Since one login process can handle only one connection, the
  service's process_limit setting limits the number of users that can
  be logging in at the same time (defaults to
  default_process_limit=100).
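That same wiki page also describes the remedies. As a rough sketch in
dovecot.conf terms (the numbers are illustrative, not recommendations):

  # either: keep one login process per connection, but allow more of them
  service imap-login {
    process_limit = 1000
  }

  # or: high-performance mode; with service_count = 0 each login process
  # is reused for many connections, so a few processes are enough
  service imap-login {
    service_count = 0
    process_min_avail = 4   # e.g. number of CPU cores
    vsz_limit = 1G          # long-lived shared processes need more headroom
  }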
From buchholz at easystreet.net Tue Jan 17 21:02:37 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Tue, 17 Jan 2012 11:02:37 -0800 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15BAC0.3060003@easystreet.net> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> Message-ID: <4F15C5CD.80904@easystreet.net> Don Buchholz wrote: > Michael Orlitzky wrote: >> First of all, feature request: >> >> doveconf -d >> show the default value of all settings >> >> >> On 01/16/12 17:41, Don Buchholz wrote: >> >>> The 'doveconf -n' output is attached. I have not set any >>> "process_limit" values, and I don't think I'm getting anywhere close to >>> the 1024 default, so I'm pretty confused as to what might be wrong. >>> >>> Any suggestions on what to do next are appreciated. >>> >> >> >> What makes you think 1024 is the default? We had to increase it. It >> shows up in doveconf -n output, so I don't think that's the default. >> >> # doveconf -n | grep limit >> default_process_limit = 1024 >> > What makes me think 1024 is the default? > The documentation: > --> > http://wiki2.dovecot.org/Services?highlight=%28process_limit%29#imap.2C_pop3.2C_managesieve > > But, Michael's right, documentation can be wrong. So, I dumped the entire configuration. Here are the values found on the running system. Both imap and pop3 services have "process_limit = 1024". | [root at postal ~]# doveconf -a | # 2.0.8: /etc/dovecot/dovecot.conf | # OS: Linux 2.6.9-67.0.1.ELsmp i686 Red Hat Enterprise Linux WS release 4 (Nahant Update 6) ext3 | ... | default_process_limit = 100 | ... | service anvil { | ... | process_limit = 1 | ... | } | service auth-worker { | ... | process_limit = 0 | ... | } | service auth { | ... | process_limit = 1 | ... | } | service config { | ... | process_limit = 0 | ... | } | service dict { | ... | process_limit = 0 | ... | } | service director { | ... | process_limit = 1 | ... | } | service dns_client { | ... | process_limit = 0 | ... | } | service doveadm { | ... | process_limit = 0 | ... | } | service imap-login { | ... | process_limit = 0 | ... | } | service imap { | ... | process_limit = 1024 | ... | } | service lmtp { | ... | process_limit = 0 | ... | } | service log { | ... | process_limit = 1 | ... | } | service pop3-login { | ... | process_limit = 0 | ... | } | service pop3 { | ... | process_limit = 1024 | ... | } | service ssl-params { | ... | process_limit = 0 | ... | } From michael at orlitzky.com Tue Jan 17 21:12:55 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Tue, 17 Jan 2012 14:12:55 -0500 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15C5CD.80904@easystreet.net> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> <4F15C5CD.80904@easystreet.net> Message-ID: <4F15C837.2020002@orlitzky.com> On 01/17/12 14:02, Don Buchholz wrote: >> > But, Michael's right, documentation can be wrong. So, I dumped the > entire configuration. Here are the values found on the running system. > Both imap and pop3 services have "process_limit = 1024". > You probably just posted this while my last message was in-flight, but just in case, 'imap' and 'imap-login' are different, and have different process limits. As the title of the thread suggests, you're out of imap-login processes, not imap ones. 
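Since doveconf -a prints every service block with its effective limits, a
one-liner along these lines (assuming only grep) shows at a glance which
service is capped at what:

  doveconf -a | egrep '^service |process_limit'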
From buchholz at easystreet.net Tue Jan 17 21:48:29 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Tue, 17 Jan 2012 11:48:29 -0800 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15BE2A.6010605@orlitzky.com> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> <4F15BE2A.6010605@orlitzky.com> Message-ID: <4F15D08D.4070209@easystreet.net> Michael Orlitzky wrote: > On 01/17/12 13:15, Don Buchholz wrote: > >>> >>> >> What makes me think 1024 is the default? >> The documentation: >> --> >> http://wiki2.dovecot.org/Services?highlight=%28process_limit%29#imap.2C_pop3.2C_managesieve >> >> > > That's only for those three services (imap, pop3, managesieve), not for > imap-login unfortunately. Check here for more info, > > http://wiki2.dovecot.org/LoginProcess > > but the good part is, > > Since one login process can handle only one connection, the > service's process_limit setting limits the number of users that can > be logging in at the same time (defaults to > default_process_limit=100). > Doh! Thanks, Michael. I wasn't looking at the original error message closely enough. I scanned too quickly and saw "service(imap)" and not "service(imap-login)". Now the failure when there are only ~200 (total) dovecot processes makes sense (because about half of the processes here are dovecot/imap-login). I've added the following to our configuration: service imap-login { process_limit = 500 process_min_avail = 2 } Thanks for your help ... and patience. - Don From buchholz at easystreet.net Tue Jan 17 21:52:19 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Tue, 17 Jan 2012 11:52:19 -0800 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15C837.2020002@orlitzky.com> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> <4F15C5CD.80904@easystreet.net> <4F15C837.2020002@orlitzky.com> Message-ID: <4F15D173.9090103@easystreet.net> Michael Orlitzky wrote: > On 01/17/12 14:02, Don Buchholz wrote: > >> But, Michael's right, documentation can be wrong. So, I dumped the >> entire configuration. Here are the values found on the running system. >> Both imap and pop3 services have "process_limit = 1024". >> >> > > You probably just posted this while my last message was in-flight, but > just in case, 'imap' and 'imap-login' are different, and have different > process limits. > > As the title of the thread suggests, you're out of imap-login processes, > not imap ones. > Yup! ... see reply on other branch in this thread. Thanks again! - Don From michael at orlitzky.com Tue Jan 17 22:13:22 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Tue, 17 Jan 2012 15:13:22 -0500 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15D08D.4070209@easystreet.net> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> <4F15BE2A.6010605@orlitzky.com> <4F15D08D.4070209@easystreet.net> Message-ID: <4F15D662.9030601@orlitzky.com> On 01/17/12 14:48, Don Buchholz wrote: >> > Doh! Thanks, Michael. I wasn't looking at the original error message > closely enough. I scanned too quickly and saw "service(imap)" and not > "service(imap-login)". Now the failure when there are only ~200 (total) > dovecot processes makes sense (because about half of the processes here > are dovecot/imap-login). > > ... > > Thanks for your help ... and patience. 
No problem, I went through the exact same process when we hit the limit.

From interfasys at gmail.com Wed Jan 18 03:03:46 2012
From: interfasys at gmail.com (=?UTF-8?B?aW50ZXJmYVN5cyBzw6BybA==?=)
Date: Wed, 18 Jan 2012 01:03:46 +0000
Subject: [Dovecot] [Dovecot 2.1] ACL plugin makes imap service crash when using some clients
Message-ID: <4F161A72.8030400@gmail.com>

Hello,

I've just noticed that when Horde is connecting to Dovecot 2.1, it
crashes the imap service if Dovecot is configured to use the ACL plugin.
I'm not sure what's so special about the command Horde sends, but it
shouldn't make Dovecot crash. Everything is fine when using Thunderbird.

Here is the message in Dovecot's logs
"Fatal: master: service(imap): child 89974 killed with signal 11 (core
not dumped)"

The message says that the core is not dumped, even though I did add
drop_priv_before_exec=yes to my config file.

I've tried connecting to the pid using gdb, but the process just hangs
as soon as I'm connected.

Cheers,
Olivier

From user+dovecot at localhost.localdomain.org Wed Jan 18 03:33:19 2012
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Date: Wed, 18 Jan 2012 02:33:19 +0100
Subject: [Dovecot] [Dovecot 2.1] ACL plugin makes imap service crash when using some clients
In-Reply-To: <4F161A72.8030400@gmail.com>
References: <4F161A72.8030400@gmail.com>
Message-ID: <4F16215F.5000909@localhost.localdomain.org>

On 01/18/2012 02:03 AM interfaSys sàrl wrote:
> Hello,
>
> I've just noticed that when Horde is connecting to Dovecot 2.1, it
> crashes the imap service if Dovecot is configured to use the ACL plugin.
> I'm not sure what's so special about the command Horde sends, but it
> shouldn't make Dovecot crash. Everything is fine when using Thunderbird.
>
> Here is the message in Dovecot's logs
> "Fatal: master: service(imap): child 89974 killed with signal 11 (core
> not dumped)"
>
> The message says that the core is not dumped, even though I did add
> drop_priv_before_exec=yes to my config file.

dovecot stop
ulimit -c unlimited
dovecot

Now connect with Horde and let it crash.

> I've tried connecting to the pid using gdb, but the process just hangs
> as soon as I'm connected.

continue
[wait for the crash]
bt full
detach
quit

Regards,
Pascal
--
The trapper recommends today: cafefeed.1201802 at localdomain.org

From gordon.grubert+lists at uni-greifswald.de Wed Jan 18 14:02:58 2012
From: gordon.grubert+lists at uni-greifswald.de (Gordon Grubert)
Date: Wed, 18 Jan 2012 13:02:58 +0100
Subject: [Dovecot] Dovecot crashes totally - SOLVED
In-Reply-To: <4EB6D845.7040208@uni-greifswald.de>
References: <4EA317B5.3090209@uni-greifswald.de> <1320435812.21919.150.camel@hurina> <4EB6D845.7040208@uni-greifswald.de>
Message-ID: <4F16B4F2.5050107@uni-greifswald.de>

On 11/06/2011 07:56 PM, Gordon Grubert wrote:
> On 11/04/2011 08:43 PM, Timo Sirainen wrote:
>> On Sat, 2011-10-22 at 21:21 +0200, Gordon Grubert wrote:
>>> Hello,
>>>
>>> our dovecot server crashes totally without any really useful
>>> log messages. The error log can be found in the attachment.
>>> The only way to get dovecot running again is a complete
>>> system restart.
>>
>> How often does it break? If really a "complete system restart" is needed
>> to fix it, it doesn't sound like a Dovecot problem. Check if it's enough
>> to stop dovecot and then make sure there aren't any dovecot processes
>> lying around afterwards.
> Currently, the problem occurred three times. The last time some days
> ago.
The last "crash" was in the night and, therefore, we used the > chance for a detailed debugging of the system. > > You could be right, that it's not a dovecot problem. Next to dovecot, > we found other processes hanging and could not be killed by "kill -9". > Additionally, we found a commonness of all of these processes: They > hanged while trying to access the mailbox volume. Therefore, we repaired > the filesystem. Now, we're watching the system ... > >>> Oct 11 09:55:23 mailserver2 dovecot: master: Error: service(imap): >>> Initial status notification not received in 30 seconds, killing the >>> process >>> Oct 11 09:56:23 mailserver2 dovecot: imap-login: Error: master(imap): >>> Auth request timed out (received 0/12 bytes) >> >> Kind of looks like auth process is hanging. You could see if stracing it >> shows anything useful. Also are any errors logged about LDAP? Is LDAP >> running on the same server? > Dovecot authenticates against postfix and postfix has an LDAP > connection. The LDAP is running on an external cluster. Here, > no errors are reported. > > We hope, that the filesystem error was the reason for the problem > and, that the problem is fixed by repairing it. During the last two month, no error occurred. Therefore, the problem in the filesystem seems to be the reason for the dovecot crash. Thx and best regards, Gordon -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5396 bytes Desc: S/MIME Cryptographic Signature URL: From tss at iki.fi Wed Jan 18 14:23:00 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 14:23:00 +0200 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F14A7AA.8010507@easystreet.net> References: <4F14A7AA.8010507@easystreet.net> Message-ID: <1326889380.11500.16.camel@innu> On Mon, 2012-01-16 at 14:41 -0800, Don Buchholz wrote: > I've been having some problems with IMAP user connections to the Dovecot > (v2.0.8) server. The following message is being logged. > > Jan 16 10:51:36 postal dovecot: master: Warning: > service(imap-login): process_limit reached, client connections are > being dropped Maybe this will help some in future: http://hg.dovecot.org/dovecot-2.1/rev/a4e61c99c7eb The new error message is: service(imap-login): process_limit (100) reached, client connections are being dropped From lee at standen.id.au Wed Jan 18 14:44:35 2012 From: lee at standen.id.au (Lee Standen) Date: Wed, 18 Jan 2012 20:44:35 +0800 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox Message-ID: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> Hi Guys, I've been desperately trying to find some comparative performance information about the different mailbox formats supported by Dovecot in order to make an assessment on which format is right for our environment. This is a brand new build, with customer mailboxes to be migrated in over the course of 3-4 months. 
Some details on our new environment:

* Approximately 1.6M+ mailboxes once all legacy systems are combined
* NetApp FAS6280 storage w/ 120TB usable for mail storage, 1TB of FlashCache
  in each controller
* All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames)
* Postfix will feed new email to Dovecot via LMTP
* Dovecot servers have been split based on their role
  - Dovecot LDA Servers (running LMTP protocol)
  - Dovecot POP/IMAP servers (running POP/IMAP protocols)
  - LDA & POP/IMAP servers are segmented into geographically split groups
    (so no server sees every single mailbox)
  - Nginx proxy used to terminate customer connections; connections are
    redirected to the appropriate geographic servers
* Apache Lucene indexes will be used to accelerate IMAP search for users

Our closest current live configuration (Qmail SMTP, Courier IMAP, Maildir)
has 600K mailboxes and pushes ~ 35,000 NFS operations per second at peak.

Some of the things I would like to know:

* Are we likely to see a reduction in IOPS/User by using Maildir alone
  under Dovecot?
* What kind of IOPS/User reduction could we expect to see under mdbox?
* Can someone give some technical reasoning behind why mdbox does less
  IOPS than Maildir?

I understand some of the reasons for the mdbox IOPS question, but I need
some more information so we can discuss internally and make a decision as
to whether we're comfortable going with mdbox from day one. We're very
familiar with Maildir, and there's just some uneasiness internally around
going to a new mail storage format.

Thanks!

From tss at iki.fi Wed Jan 18 14:58:15 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 14:58:15 +0200
Subject: [Dovecot] Dovecot Solutions company update
Message-ID: <1326891495.11500.32.camel@innu>

Hi,

A small update: My Dovecot support company finally has web pages:
http://www.dovecot.fi/

We've also started providing 24/7 support.

From robert at schetterer.org Wed Jan 18 15:05:57 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Wed, 18 Jan 2012 14:05:57 +0100
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
Message-ID: <4F16C3B5.80404@schetterer.org>

Am 18.01.2012 13:44, schrieb Lee Standen:
> Hi Guys,
>
> I've been desperately trying to find some comparative performance
> information about the different mailbox formats supported by Dovecot in
> order to make an assessment on which format is right for our environment.
>
> This is a brand new build, with customer mailboxes to be migrated in over
> the course of 3-4 months.
> Some details on our new environment:
>
> * Approximately 1.6M+ mailboxes once all legacy systems are combined
>
> * NetApp FAS6280 storage w/ 120TB usable for mail storage, 1TB of FlashCache
> in each controller
>
> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames)

NFS may not be optimal; a cluster filesystem might be better, but that is
a heavy, separate discussion.

> * Postfix will feed new email to Dovecot via LMTP

perfect

> * Dovecot servers have been split based on their role
>
> - Dovecot LDA Servers (running LMTP protocol)
>
> - Dovecot POP/IMAP servers (running POP/IMAP protocols)
>
> - LDA & POP/IMAP servers are segmented into geographically split groups
> (so no server sees every single mailbox)
>
> - Nginx proxy used to terminate customer connections; connections are
> redirected to the appropriate geographic servers
>
> * Apache Lucene indexes will be used to accelerate IMAP search for users

sounds ok

> Our closest current live configuration (Qmail SMTP, Courier IMAP, Maildir)
> has 600K mailboxes and pushes ~ 35,000 NFS operations per second at peak.

wow, that's big

> Some of the things I would like to know:
>
> * Are we likely to see a reduction in IOPS/User by using Maildir alone
> under Dovecot?
>
> * What kind of IOPS/User reduction could we expect to see under mdbox?

There should be people on the list who know this from migrations they
have done.

> * Can someone give some technical reasoning behind why mdbox does less
> IOPS than Maildir?

As far as I remember, mdbox takes 8 mails per file (I am not using it
currently, so I didn't investigate it); better wait for a more qualified
answer. Anyway, mdbox seems recommended in your case; in our last plans,
for about 25k mailboxes, we decided on mdbox, as far as I remember....

> I understand some of the reasons for the mdbox IOPS question, but I need
> some more information so we can discuss internally and make a decision as
> to whether we're comfortable going with mdbox from day one. We're very
> familiar with Maildir, and there's just some uneasiness internally around
> going to a new mail storage format.
>
> Thanks!

From my personal knowledge, storage I/O has the most influence on
performance, at least once all other parts of the setup are optimal.
Wait a little bit; I guess more matching answers will come up. After all,
you can hire someone, perhaps Timo, if you get stuck on something.

--
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria

From javierdemiguel at us.es Wed Jan 18 15:27:52 2012
From: javierdemiguel at us.es (=?ISO-8859-1?Q?Javier_Miguel_Rodr=EDguez?=)
Date: Wed, 18 Jan 2012 14:27:52 +0100
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
Message-ID: <4F16C8D8.1090804@us.es>

Spanish edu site here: 80k users, 4.5 TB of email, 6,000 IOPS (indexes)
+ 9,000 IOPS (mdboxes) in working hours.

We evaluated mdbox against Maildir and we found that with these settings
Dovecot 2 performs better than Maildir:

mdbox_rotate_interval = 1d
mdbox_rotate_size=60m
zlib_save_level = 9 # 1..9
zlib_save = gz # or bz2

We detected 40% fewer IOPS with this setup *in working hours (more info
below)*. Zlib saved some writes (15-30%). With mdbox, deletion of a
message is written to indexes (use SSD for this), and a nightly cronjob
deletes the real message from the mdbox; this saves us some IOPS in
working hours.
Also, backup software is MUCH happier handling hundreds of thousands of
files (mdbox) versus tens of millions (maildir).

Mdbox also has drawbacks: you have to be VERY careful with your indexes;
they contain data that cannot be rebuilt from mdboxes. The nightly
cronjob "purging" the mdboxes hammers the SAN. Full backup time is
reduced, but incremental backup space & time increase: if you delete a
message, after "purging" it from the mdbox the mdbox file changes (size
and date), so the incremental backup has to copy it again.

Regards

Javier

From email at randpoger.org Wed Jan 18 15:29:31 2012
From: email at randpoger.org (email at randpoger.org)
Date: Wed, 18 Jan 2012 14:29:31 +0100
Subject: [Dovecot] Dovecot did not accept Login from Host
Message-ID: <192f7dbb6b6c9e71bd44c41f08097a92-EhVcX1lATAFfWEQABwoYZ1dfaANWUkNeXEJbAVo1WEdQS1oIXkF3CEtXWV4wQEYAWVJQQ1tSWQ==-webmailer2@server06.webmailer.hosteurope.de>

Hi!

My Dovecot is running and I can connect + log in through telnet:

--------------------------------------------
>> telnet localhost 143
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE STARTTLS AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
>> 1 login user passwort
1 OK [...] Logged in
--------------------------------------------

But through my domain I can only connect; the login then gets an error:

--------------------------------------------
>> telnet domain.de 143
Trying xx.xxx.xxx.xx...
Connected to domain.de.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE STARTTLS AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
>> 1 login user passwort
1 NO [AUTHENTICATIONFAILED] Authentication failed.
--------------------------------------------

My dovecot.conf:

--------------------------------------------
protocols = imap imaps
ssl_cert_file = /etc/ssl/certs/dovecot.pem
ssl_key_file = /etc/ssl/private/dovecot.pem
mail_location = /var/mail/%u
log_path = /var/log/dovecot.log
log_timestamp = "%Y-%m-%d %H:%M:%S "
auth_verbose = yes
auth_debug = yes
protocol imap {
}
auth default {
  mechanisms = plain login
  passdb pam {
  }
  userdb passwd {
  }
  user = root
}
--------------------------------------------

If I try to connect + log in through the domain, dovecot writes NOTHING
into the log. Any ideas about this?

From tss at iki.fi Wed Jan 18 15:54:31 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 15:54:31 +0200
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
Message-ID: <1326894871.11500.45.camel@innu>

On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote:
> I've been desperately trying to find some comparative performance
> information about the different mailbox formats supported by Dovecot in
> order to make an assessment on which format is right for our environment.

Unfortunately there aren't really any. Everyone who seems to switch to
sdbox/mdbox usually also changes their hardware at the same time, so
there aren't really any before/after metrics. I've of course some
unrealistic synthetic benchmarks, but I don't think they are very
useful.

So, I would also be very interested in seeing some before/after graphs
of disk IO, CPU and memory usage of a Maildir -> dbox switch on the same
hardware.

Maildir is anyway definitely worse performance than sdbox or mdbox.
mdbox also uses less NFS operations, but I don't know how much faster (if any) it is with Netapps. > * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames) > > * Postfix will feed new email to Dovecot via LMTP > > * Dovecot servers have been split based on their role > > - Dovecot LDA Servers (running LMTP protocol) > > - Dovecot POP/IMAP servers (running POP/IMAP protocols) You're going to run into NFS caching troubles with the above split setup. I don't recommend it. You will see error messages about index corruption with it, and with dbox it can cause metadata loss. http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director > - LDA & POP/IMAP servers are segmented into geographically split groups > (so no server sees every single mailbox) > > - Nginx proxy used to terminate customer connections, connections are > redirected to the appropriate geographic servers Can the same mailbox still be accessed via multiple geographic servers? I've had some plans for doing this kind of access/replication using dsync.. > * Apache Lucene indexes will be used to accelerate IMAP search for users Dovecot's fts-solr or fts-lucene? > Our closest current live configuration (Qmail SMTP, Courier IMAP, Maildir) > has 600K mailboxes and pushes ~ 35,000 NFS operations per second at peak > > Some of the things I would like to know: > > * Are we likely to see a reduction in IOPS/User by using Maildir alone under > Dovecot? If you have webmail type of clients, definitely. For Outlook/Thunderbird you should still see improvement, but not necessarily as much. You didn't mention POP3. That isn't Dovecot's strong point. Its performance should be about the same as Courier-POP3, but could be less than QMail-POP3. Although if many of your POP3 users keep a lot of mails on server it > * If someone can give some technical reasoning behind why mdbox does less > IOPS than Maildir? Maildir renames files a lot. From new/ -> to cur/ and then every time message flag changes. That's why sdbox is faster. Why mdbox should be faster than sdbox is because mdbox puts (or should put) more mail data physically closer in disks to make reading it faster. > I understand some of the reasons for the mdbox IOPS question, but I need > some more information so we can discuss internally and make a decision as to > whether we're comfortable going with mdbox from day one. We're very > familiar with Maidlir, and there's just some uneasiness internally around > going to a new mail storage format. It's at least safer to first switch to Dovecot+Maildir to make sure that any problems you might find aren't related to the mailbox format.. From ebroch at whitehorsetc.com Wed Jan 18 16:20:31 2012 From: ebroch at whitehorsetc.com (Eric Broch) Date: Wed, 18 Jan 2012 07:20:31 -0700 Subject: [Dovecot] shared folder files not displaying in thunderbird Message-ID: <4F16D52F.2040907@whitehorsetc.com> Hello, I have dovecot installed with the configuration below. One of the subfolders created (using the email client) under the '/home/vpopmail/domains/mydomain.com/shared/projects' share no longer (it used to) displays the files located in it. There are about 150 folders under the '/home/vpopmail/domains/mydomain.com/shared/projects' share all of which display the files located in them, the one mentioned used to display the contents but no longer does. What would be the reason that one folder would no longer display existing files in the email client (Thunderbird) and the other folders would? And, how do I fix this? 
I've already tried unsubscribing and resubscribing the folder. This did
not work. Would it now be simply a matter of unsubscribing the folder,
deleting the dovecot files, and resubscribing to the folder?

Eric

# 2.0.11: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-238.19.1.el5 i686 CentOS release 5.7 (Final)
auth_cache_size = 32 M
auth_mechanisms = plain login digest-md5 cram-md5
auth_username_format = %Lu
disable_plaintext_auth = no
first_valid_uid = 89
log_path = /var/log/dovecot.log
login_greeting = Dovecot toaster ready.
namespace {
  inbox = yes
  location =
  prefix = INBOX.
  separator = .
  type = private
}
namespace {
  location = maildir:/home/vpopmail/domains/mydomain.com/shared/projects
  prefix = projects.
  separator = .
  type = public
}
passdb {
  args = cache_key=%u webmail=127.0.0.1
  driver = vpopmail
}
plugin/quota = maildir
protocols = imap
ssl_cert =

References: <1326891495.11500.32.camel@innu>
Message-ID: <4F16D607.5030800@schetterer.org>

Am 18.01.2012 13:58, schrieb Timo Sirainen:
> Hi,
>
> A small update: My Dovecot support company finally has web pages:
> http://www.dovecot.fi/
>
> We've also started providing 24/7 support.
>

Hi Timo, very cool !

--
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria

From tss at iki.fi Wed Jan 18 16:32:36 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 16:32:36 +0200
Subject: [Dovecot] Dovecot unable to locate mailbox
In-Reply-To:
References:
Message-ID: <1326897156.11500.51.camel@innu>

On Mon, 2012-01-16 at 14:38 +0200, Jason X, Maney wrote:
> Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: user molla:
> Initialization failed: mail_location not set and autodetection failed: Mail
> storage autodetection failed with home=/home/userA

As it says.

> Yet my config also come out strangely as below:
>
> =========
> root at guyana:~# dovecot -n
> # 2.0.13: /etc/dovecot/dovecot.conf
> # OS: Linux 3.0.0-12-server x86_64 Ubuntu 11.10
> passdb {
>   driver = pam
> }
> protocols = " imap pop3"
> ssl_cert =
> ssl_key =
> userdb {
>   driver = passwd
> }
> root at guyana:~#
> =========

There is no mail_location above. This is the configuration Dovecot sees.

> My mailbox location setting is as follows:
>
> =========
> cat conf.d/10-mail.conf |grep mail_location

Look at /etc/dovecot/dovecot.conf file. Do you see !include conf.d/*.conf
in there? Probably not, so those files aren't being read.

From tss at iki.fi Wed Jan 18 16:34:18 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 16:34:18 +0200
Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1
In-Reply-To: <4F155670.6010905@gmail.com>
References: <4F155670.6010905@gmail.com>
Message-ID: <1326897258.11500.53.camel@innu>

On Tue, 2012-01-17 at 11:07 +0000, interfaSys sàrl wrote:
> Here is what I get when I try to compile the antispam plugin against
> Dovecot 2.1
>
> **************
> mailbox.c: In function 'antispam_save_begin':
> mailbox.c:138:12: error: 'struct mail_save_context' has no member named
> 'copying'

The "copying" should be changed to "copying_via_save".
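For anyone patching the plugin locally while waiting for an official fix,
one way to keep a single source tree compiling against both versions is a
small compatibility shim in mailbox.c. This is only a sketch:
ANTISPAM_DOVECOT_21 is a hypothetical macro the plugin's own build system
would have to define when building against >= 2.1, and the surrounding
code is elided.

/* hypothetical feature macro, e.g. set by the plugin's configure script */
#ifdef ANTISPAM_DOVECOT_21
#define ANTISPAM_CTX_COPYING(ctx) ((ctx)->copying_via_save)
#else
#define ANTISPAM_CTX_COPYING(ctx) ((ctx)->copying)
#endif
/* ...then mailbox.c tests ANTISPAM_CTX_COPYING(ctx) wherever it used
 * to test ctx->copying directly */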
From lee at standen.id.au Wed Jan 18 16:36:45 2012 From: lee at standen.id.au (Lee Standen) Date: Wed, 18 Jan 2012 22:36:45 +0800 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox Message-ID: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> On 18.01.2012 21:54, Timo Sirainen wrote: > On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote: > >> I've been desperately trying to find some comparative performance >> information about the different mailbox formats supported by Dovecot >> in >> order to make an assessment on which format is right for our >> environment. > > Unfortunately there aren't really any. Everyone who seems to switch > to > sdbox/mdbox usually also change their hardware at the same time, so > there aren't really any before/after metrics. I've of course some > unrealistic synthetic benchmarks, but I don't think they are very > useful. > > So, I would also be very interested in seeing some before/after > graphs > of disk IO, CPU and memory usage of Maildir -> dbox switch in same > hardware. > > Maildir is anyway definitely worse performance then sdbox or mdbox. > mdbox also uses less NFS operations, but I don't know how much faster > (if any) it is with Netapps. We have bought new hardware for this project too, so we might not be able to help out massively on that front... we do have NFS operations monitored though so we should at least be able to compare that metric since the underlying storage operating system is the same. All NetApp hardware runs their Data ONTAP operating system, so the metrics are assured to be the same :) How about this... are there any tools available (that you know of) to capture real live customer POP3/IMAP traffic and replay it against a separate system? That might be a feasible option for doing a like-for-like comparison in our environment? We could probably get something in place to simulate the load if we can do something like that... >> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo >> Frames) >> >> * Postfix will feed new email to Dovecot via LMTP >> >> * Dovecot servers have been split based on their role >> >> - Dovecot LDA Servers (running LMTP protocol) >> >> - Dovecot POP/IMAP servers (running POP/IMAP protocols) > > You're going to run into NFS caching troubles with the above split > setup. I don't recommend it. You will see error messages about index > corruption with it, and with dbox it can cause metadata loss. > http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director That might be the one thing (unfortunately) which prevents us from going with the dbox format. I understand the same issue can actually occur on Dovecot Maildir as well, but because Maildir works without these index files, we were willing to just go with it. I will raise it again, but there has been a lot of push back about introducing a single point of failure, even though this is a perceived one. The biggest challenge I have at the moment if I try to sell the dbox format is providing some kind of data on the expected gains from this. If it's only a 10% reduction in NFS operations for the typical user, then it's probably not worth our while. > >> - LDA & POP/IMAP servers are segmented into geographically split >> groups >> (so no server sees every single mailbox) >> >> - Nginx proxy used to terminate customer connections, connections >> are >> redirected to the appropriate geographic servers > > Can the same mailbox still be accessed via multiple geographic > servers? 
> I've had some plans for doing this kind of access/replication using > dsync.. No, we're using the nginx proxy layer to ensure that if a user in Sydney (for example) tries to access a Perth mailbox, their connection is redirected (by nginx) to the Perth POP/IMAP servers. Postfix configuration is handling the same thing on the LMTP side. The requirement here is for all users to have the same settings regardless of location, but still be able to locate the email servers and data close to the customer. > >> * Apache Lucene indexes will be used to accelerate IMAP search for >> users > > Dovecot's fts-solr or fts-lucene? fts-solr. I've been using Lucene/Solr interchangeably when discussing this project with my peers :) > >> Our closest current live configuration (Qmail SMTP, Courier IMAP, >> Maildir) >> has 600K mailboxes and pushes ~ 35,000 NFS operations per second at >> peak >> >> Some of the things I would like to know: >> >> * Are we likely to see a reduction in IOPS/User by using Maildir >> alone under >> Dovecot? > > If you have webmail type of clients, definitely. For > Outlook/Thunderbird > you should still see improvement, but not necessarily as much. > > You didn't mention POP3. That isn't Dovecot's strong point. Its > performance should be about the same as Courier-POP3, but could be > less > than QMail-POP3. Although if many of your POP3 users keep a lot of > mails > on server it > Our existing systems run with about 21K concurrent IMAP connections at any one point in time, not counting Webmail POP3 runs at about 3600 concurrent connections, but since those are not long lived it's not particularly indicative of customer numbers. Vague recollection is something like 25% IMAP, 55-60% POP3, rest < 20% Webmail. I'd have to go back and check the breakdown again. >> * If someone can give some technical reasoning behind why mdbox does >> less >> IOPS than Maildir? > > Maildir renames files a lot. From new/ -> to cur/ and then every time > message flag changes. That's why sdbox is faster. Why mdbox should be > faster than sdbox is because mdbox puts (or should put) more mail > data > physically closer in disks to make reading it faster. > >> I understand some of the reasons for the mdbox IOPS question, but I >> need >> some more information so we can discuss internally and make a >> decision as to >> whether we're comfortable going with mdbox from day one. We're very >> familiar with Maidlir, and there's just some uneasiness internally >> around >> going to a new mail storage format. > > It's at least safer to first switch to Dovecot+Maildir to make sure > that > any problems you might find aren't related to the mailbox format.. Yep, I'm considering that. The flip side is that it's actually going to be difficult for us to change mail format once we've migrated into this system, but we have an opportunity for (literally) a month long testing phase beginning in Feb/March which will let us test as many possibilities as we can. From tss at iki.fi Wed Jan 18 16:52:58 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 16:52:58 +0200 Subject: [Dovecot] LMTP Logging In-Reply-To: References: Message-ID: <1326898378.11500.54.camel@innu> On Mon, 2012-01-16 at 17:17 -0800, Mark Moseley wrote: > Just had a minor suggestion, with no clue how hard/easy it would be to > implement: > > The %f flag in deliver_log_format seems to pick up the From: header, > instead of the "MAIL FROM:<...>" arg. It'd be handy to have a %F that > shows the "MAIL FROM" arg instead. 
I'm looking at tracking emails > through logs from Exim to Dovecot easily. I know Message-ID can be > used for correlation but it adds some complexity to searching, i.e. I > can't just grep for the sender (as logged by Exim), unless I assume > "MAIL FROM" always == From: Added to v2.1: http://hg.dovecot.org/dovecot-2.1/rev/7ee2cfbcae2e http://hg.dovecot.org/dovecot-2.1/rev/08cc9d2a79e6 From tss at iki.fi Wed Jan 18 16:56:41 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 16:56:41 +0200 Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2) In-Reply-To: <4F13FF00.1050108@miamammausalinux.org> References: <4F13FF00.1050108@miamammausalinux.org> Message-ID: <1326898601.11500.56.camel@innu> On Mon, 2012-01-16 at 11:42 +0100, RaSca wrote: > passdb sql { > args = /etc/dovecot/dovecot-sql.conf > } > userdb passwd { > } > userdb static { > args = uid=5000 gid=5000 home=/mail/mailboxes/%d/%n@%d > allow_all_users=yes > } You're using SQL only for passdb lookup. > plugin { > quota = maildir:/mail/mailboxes/%d/%n@%d The above path probably doesn't do what you intended. It's only the user-visible quota root name. It could just as well be "User quota" or something. > The db connection works, this is /etc/dovecot/dovecot-sql.conf: > > driver = mysql > connect = host= dbname=mail user= password= > default_pass_scheme = CRYPT > password_query = SELECT username, password FROM mailbox WHERE username='%u' > user_query = SELECT username AS user, maildir AS home, > CONCAT('*:storage=', quota , 'B') AS quota_rule FROM mailbox WHERE > username = '%u' AND active = '1' user_query isn't used, because you aren't using userdb sql. From tss at iki.fi Wed Jan 18 17:06:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 17:06:49 +0200 Subject: [Dovecot] v2.1.rc3 released In-Reply-To: <20120116150504.GA28883@shutemov.name> References: <1325878845.17774.38.camel@hurina> <20120116150504.GA28883@shutemov.name> Message-ID: <1326899209.11500.58.camel@innu> On Mon, 2012-01-16 at 17:05 +0200, Kirill A. Shutemov wrote: > ./autogen failed: > > $ ./autogen.sh > libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.in and > libtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree. > libtoolize: Consider adding `-I m4' to ACLOCAL_AMFLAGS in Makefile.am. > src/plugins/fts/Makefile.am:52: `pkglibexecdir' is not a legitimate directory for `SCRIPTS' > Makefile.am:24: `pkglibdir' is not a legitimate directory for `DATA' > autoreconf: automake failed with exit status: 1 > $ automake --version | head -1 > automake (GNU automake) 1.11.2 Looks like automake bug: http://old.nabble.com/Re%3A-Scripts-in-pkglibexecdir--p33070266.html From lee at standen.id.au Wed Jan 18 17:21:33 2012 From: lee at standen.id.au (Lee Standen) Date: Wed, 18 Jan 2012 23:21:33 +0800 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <1326894871.11500.45.camel@innu> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> Message-ID: Out of interest, has the NFS issue been tested on NFS4? My understanding is that NFS4 has a lot of fixes for the locking/caching problems that plague NFS3, and we were planning to use NFS4 from day one. If this hasn't been tested, is there some kind of load simulator that we could run to see if the issue does occur in our environment? 
On 18.01.2012 21:54, Timo Sirainen wrote: > On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote: > >> I've been desperately trying to find some comparative performance >> information about the different mailbox formats supported by Dovecot >> in >> order to make an assessment on which format is right for our >> environment. > > Unfortunately there aren't really any. Everyone who seems to switch > to > sdbox/mdbox usually also change their hardware at the same time, so > there aren't really any before/after metrics. I've of course some > unrealistic synthetic benchmarks, but I don't think they are very > useful. > > So, I would also be very interested in seeing some before/after > graphs > of disk IO, CPU and memory usage of Maildir -> dbox switch in same > hardware. > > Maildir is anyway definitely worse performance then sdbox or mdbox. > mdbox also uses less NFS operations, but I don't know how much faster > (if any) it is with Netapps. > >> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo >> Frames) >> >> * Postfix will feed new email to Dovecot via LMTP >> >> * Dovecot servers have been split based on their role >> >> - Dovecot LDA Servers (running LMTP protocol) >> >> - Dovecot POP/IMAP servers (running POP/IMAP protocols) > > You're going to run into NFS caching troubles with the above split > setup. I don't recommend it. You will see error messages about index > corruption with it, and with dbox it can cause metadata loss. > http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director > >> - LDA & POP/IMAP servers are segmented into geographically split >> groups >> (so no server sees every single mailbox) >> >> - Nginx proxy used to terminate customer connections, connections >> are >> redirected to the appropriate geographic servers > > Can the same mailbox still be accessed via multiple geographic > servers? > I've had some plans for doing this kind of access/replication using > dsync.. > >> * Apache Lucene indexes will be used to accelerate IMAP search for >> users > > Dovecot's fts-solr or fts-lucene? > >> Our closest current live configuration (Qmail SMTP, Courier IMAP, >> Maildir) >> has 600K mailboxes and pushes ~ 35,000 NFS operations per second at >> peak >> >> Some of the things I would like to know: >> >> * Are we likely to see a reduction in IOPS/User by using Maildir >> alone under >> Dovecot? > > If you have webmail type of clients, definitely. For > Outlook/Thunderbird > you should still see improvement, but not necessarily as much. > > You didn't mention POP3. That isn't Dovecot's strong point. Its > performance should be about the same as Courier-POP3, but could be > less > than QMail-POP3. Although if many of your POP3 users keep a lot of > mails > on server it > >> * If someone can give some technical reasoning behind why mdbox does >> less >> IOPS than Maildir? > > Maildir renames files a lot. From new/ -> to cur/ and then every time > message flag changes. That's why sdbox is faster. Why mdbox should be > faster than sdbox is because mdbox puts (or should put) more mail > data > physically closer in disks to make reading it faster. > >> I understand some of the reasons for the mdbox IOPS question, but I >> need >> some more information so we can discuss internally and make a >> decision as to >> whether we're comfortable going with mdbox from day one. We're very >> familiar with Maidlir, and there's just some uneasiness internally >> around >> going to a new mail storage format. 
> > It's at least safer to first switch to Dovecot+Maildir to make sure > that > any problems you might find aren't related to the mailbox format.. From tss at iki.fi Wed Jan 18 17:28:36 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 17:28:36 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> Message-ID: <1326900516.11500.71.camel@innu> On Wed, 2012-01-18 at 22:36 +0800, Lee Standen wrote: > How about this... are there any tools available (that you know of) to > capture real live customer POP3/IMAP traffic and replay it against a > separate system? That might be a feasible option for doing a > like-for-like comparison in our environment? We could probably get > something in place to simulate the load if we can do something like > that... I've thought about that too before, but with IMAP traffic it doesn't work very well. Even if the storages were 100% synchronized at startup, the session states could easily become desynced. For example if client does a NOOP at the same time when two mails are being delivered to the mailbox, serverA might show only one of them while serverB would show two of them because it was executed a tiny bit later. All of the client's future commands could then be affected by this desync. (OK, I wrote the above thinking about a real-time system where you could redirect the client's traffic to two systems, but basically same problems exist for offline replays too. Although it would be easier to fix the replays to handle this.) > > You're going to run into NFS caching troubles with the above split > > setup. I don't recommend it. You will see error messages about index > > corruption with it, and with dbox it can cause metadata loss. > > http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director > > That might be the one thing (unfortunately) which prevents us from > going with the dbox format. I understand the same issue can actually > occur on Dovecot Maildir as well, but because Maildir works without > these index files, we were willing to just go with it. Are you planning on also redirecting POP3/IMAP connections to somewhat randomly to the different servers? I really don't recommend that, even with Maildir.. Some of the errors will be user visible, even if no actual data loss happens. Users may get disconnected, and sometimes might have to clean their client's cache. > I will raise it > again, but there has been a lot of push back about introducing a single > point of failure, even though this is a perceived one. What is a single point of failure there? > > It's at least safer to first switch to Dovecot+Maildir to make sure > > that > > any problems you might find aren't related to the mailbox format.. > > Yep, I'm considering that. The flip side is that it's actually going > to be difficult for us to change mail format once we've migrated into > this system, but we have an opportunity for (literally) a month long > testing phase beginning in Feb/March which will let us test as many > possibilities as we can. The mailbox format switching can be done one user at a time with zero downtime with dsync. 
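In config and command terms that is roughly the following, per the dsync
migration documentation; a sketch that assumes the old mail lives in
maildir:~/Maildir and that mdbox is the target:

  # dovecot.conf: the new format users are being moved to
  mail_location = mdbox:~/mdbox

  # per user: mirror the old maildir into the new mail_location
  dsync -u user@example.com mirror maildir:~/Maildir

The zero-downtime claim rests on dsync merging any changes made while the
sync runs, rather than doing a blind one-shot copy.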
From tss at iki.fi Wed Jan 18 17:34:54 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 17:34:54 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> Message-ID: <1326900894.11500.74.camel@innu> On Wed, 2012-01-18 at 23:21 +0800, Lee Standen wrote: > Out of interest, has the NFS issue been tested on NFS4? My > understanding is that NFS4 has a lot of fixes for the locking/caching > problems that plague NFS3, and we were planning to use NFS4 from day > one. I've tried with Linux NFS4 server+client a few years ago. It seemed to have all the same caching problems as NFS3. > If this hasn't been tested, is there some kind of load simulator that > we could run to see if the issue does occur in our environment? http://imapwiki.org/ImapTest should easily trigger it. Just run it against two servers, both hammering the same mailbox. From tss at iki.fi Wed Jan 18 17:59:39 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 17:59:39 +0200 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): doveadm mailbox list -> Segmentation fault In-Reply-To: <4F12D069.9060102@localhost.localdomain.org> References: <4F12D069.9060102@localhost.localdomain.org> Message-ID: <1326902379.11500.81.camel@innu> On Sun, 2012-01-15 at 14:11 +0100, Pascal Volk wrote: > Core was generated by `doveadm mailbox list -u > jane.roe at example.com /*'. Finally fixed: http://hg.dovecot.org/dovecot-2.1/rev/99ea6da7dc99 From tss at iki.fi Wed Jan 18 18:04:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 18:04:49 +0200 Subject: [Dovecot] v2.x services documentation In-Reply-To: <92A86804-CEEE-4EB6-9EE7-FC8B7905AA2C@swing.be> References: <04D662E7-2A0A-448B-BA21-1E337A400CA6@iki.fi> <92A86804-CEEE-4EB6-9EE7-FC8B7905AA2C@swing.be> Message-ID: <1326902689.11500.82.camel@innu> On Sat, 2012-01-14 at 18:03 +0100, Axel Luttgens wrote: > Up to now, I only had the opportunity to quickly read the wiki page, and have a small question; one may read: > > process_min_avail > Minimum number of processes that always should be available to accept more client connections. For service_limit=1 processes this decreases the latency for handling new connections. For service_limit!=1 processes it could be set to the number of CPU cores on the system to balance the load among them. > > What's that service_limit setting? Thanks, fixed. Was supposed to be service_count. From eugene at raptor.kiev.ua Wed Jan 18 18:19:58 2012 From: eugene at raptor.kiev.ua (Eugene Paskevich) Date: Wed, 18 Jan 2012 18:19:58 +0200 Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1 In-Reply-To: <1326897258.11500.53.camel@innu> References: <4F155670.6010905@gmail.com> <1326897258.11500.53.camel@innu> Message-ID: On Wed, 18 Jan 2012 16:34:18 +0200, Timo Sirainen wrote: > On Tue, 2012-01-17 at 11:07 +0000, interfaSys s?rl wrote: >> Here is what I get when I try to compile the antispam plugin agaisnt >> Dovecot 2.1 >> >> ************** >> mailbox.c: In function 'antispam_save_begin': >> mailbox.c:138:12: error: 'struct mail_save_context' has no member named >> 'copying' > > The "copying" should be changed to "copying_via_save". Thank you, Timo. Would #if DOVECOT_IS_GE(2,1) suffice or do I need anything more specific? 
-- Eugene Paskevich | *==)----------- | Plug me into eugene at raptor.kiev.ua | -----------(==* | The Matrix From tss at iki.fi Wed Jan 18 18:31:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 18:31:49 +0200 Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1 In-Reply-To: References: <4F155670.6010905@gmail.com> <1326897258.11500.53.camel@innu> Message-ID: <1326904309.11500.83.camel@innu> On Wed, 2012-01-18 at 18:19 +0200, Eugene Paskevich wrote: > >> mailbox.c: In function 'antispam_save_begin': > >> mailbox.c:138:12: error: 'struct mail_save_context' has no member named > >> 'copying' > > > > The "copying" should be changed to "copying_via_save". > > Thank you, Timo. > Would #if DOVECOT_IS_GE(2,1) suffice or do I need anything more specific? Where do you expect to find such macro? ;) Hm. Perhaps I should try to add one. From eugene at raptor.kiev.ua Wed Jan 18 18:41:39 2012 From: eugene at raptor.kiev.ua (Eugene Paskevich) Date: Wed, 18 Jan 2012 18:41:39 +0200 Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1 In-Reply-To: <1326904309.11500.83.camel@innu> References: <4F155670.6010905@gmail.com> <1326897258.11500.53.camel@innu> <1326904309.11500.83.camel@innu> Message-ID: On Wed, 18 Jan 2012 18:31:49 +0200, Timo Sirainen wrote: > On Wed, 2012-01-18 at 18:19 +0200, Eugene Paskevich wrote: >> >> mailbox.c: In function 'antispam_save_begin': >> >> mailbox.c:138:12: error: 'struct mail_save_context' has no member >> named >> >> 'copying' >> > >> > The "copying" should be changed to "copying_via_save". >> >> Thank you, Timo. >> Would #if DOVECOT_IS_GE(2,1) suffice or do I need anything more >> specific? > > Where do you expect to find such macro? ;) Hm. Perhaps I should try to > add one. Heh. That's Johannes' package private macro... :) -- Eugene Paskevich | *==)----------- | Plug me into eugene at raptor.kiev.ua | -----------(==* | The Matrix From moseleymark at gmail.com Wed Jan 18 19:17:40 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Wed, 18 Jan 2012 09:17:40 -0800 Subject: [Dovecot] LMTP Logging In-Reply-To: <1326898378.11500.54.camel@innu> References: <1326898378.11500.54.camel@innu> Message-ID: On Wed, Jan 18, 2012 at 6:52 AM, Timo Sirainen wrote: > On Mon, 2012-01-16 at 17:17 -0800, Mark Moseley wrote: >> Just had a minor suggestion, with no clue how hard/easy it would be to >> implement: >> >> The %f flag in deliver_log_format seems to pick up the From: header, >> instead of the "MAIL FROM:<...>" arg. It'd be handy to have a %F that >> shows the "MAIL FROM" arg instead. I'm looking at tracking emails >> through logs from Exim to Dovecot easily. I know Message-ID can be >> used for correlation but it adds some complexity to searching, i.e. I >> can't just grep for the sender (as logged by Exim), unless I assume >> "MAIL FROM" always == From: > > Added to v2.1: http://hg.dovecot.org/dovecot-2.1/rev/7ee2cfbcae2e > http://hg.dovecot.org/dovecot-2.1/rev/08cc9d2a79e6 > > You're awesome, thanks! 
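For anyone wanting to use this once running a build that contains those
changesets, the setting involved is deliver_log_format; a sketch, assuming
the long-form variable name %{from_envelope} that later documentation
lists for the envelope sender (check the linked commits for the exact
short-letter form):

  deliver_log_format = msgid=%m, env_from=%{from_envelope}: %$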
From moseleymark at gmail.com Wed Jan 18 19:54:15 2012
From: moseleymark at gmail.com (Mark Moseley)
Date: Wed, 18 Jan 2012 09:54:15 -0800
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <7c413ccbddc8e25584311c55672a51e5@standen.id.au>
References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au>
Message-ID:

>>> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames)
>>>
>>> * Postfix will feed new email to Dovecot via LMTP
>>>
>>> * Dovecot servers have been split based on their role
>>>
>>> - Dovecot LDA Servers (running LMTP protocol)
>>>
>>> - Dovecot POP/IMAP servers (running POP/IMAP protocols)
>>
>>
>> You're going to run into NFS caching troubles with the above split
>> setup. I don't recommend it. You will see error messages about index
>> corruption with it, and with dbox it can cause metadata loss.
>> http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director
>
>
> That might be the one thing (unfortunately) which prevents us from going
> with the dbox format. I understand the same issue can actually occur on
> Dovecot Maildir as well, but because Maildir works without these index
> files, we were willing to just go with it. I will raise it again, but there
> has been a lot of push back about introducing a single point of failure,
> even though this is a perceived one.

I'm in the middle of working on a Maildir->mdbox migration as well,
and likewise, over NFS (all Netapps but moving to Sun), and likewise
with split LDA and IMAP/POP servers (and both of those served out of
pools). I was hoping doing things like setting "mail_nfs_index = yes"
and "mmap_disable = yes" and "mail_fsync = always/optimized" would
mitigate most of the risks of index corruption, as well as probably
turning indexing off on the LDA side of things--i.e. all the
suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not
the case? Is there anything else (beyond moving to a director-based
architecture) that can mitigate the risk of index corruption? In our
case, incoming IMAP/POP are 'stuck' to servers based on IP persistence
for a given amount of time, but incoming LDA is randomly distributed.

From tss at iki.fi Wed Jan 18 19:58:31 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 19:58:31 +0200
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To:
References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au>
Message-ID: <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi>

On 18.1.2012, at 19.54, Mark Moseley wrote:

> I'm in the middle of working on a Maildir->mdbox migration as well,
> and likewise, over NFS (all Netapps but moving to Sun), and likewise
> with split LDA and IMAP/POP servers (and both of those served out of
> pools). I was hoping doing things like setting "mail_nfs_index = yes"
> and "mmap_disable = yes" and "mail_fsync = always/optimized" would
> mitigate most of the risks of index corruption,

They help, but aren't 100% effective and they also make the performance
worse.

> as well as probably
> turning indexing off on the LDA side of things

You can't turn off indexing with dbox.

> --i.e. all the
> suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not
> the case? Is there anything else (beyond moving to a director-based
> architecture) that can mitigate the risk of index corruption? In our
> case, incoming IMAP/POP are 'stuck' to servers based on IP persistence
> for a given amount of time, but incoming LDA is randomly distributed.

What's the problem with director-based architecture?
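For readers following along, the knobs Mark quotes come from the wiki's
NFS page; spelled out in dovecot.conf form (the values are the ones
commonly suggested there, not a tested recommendation):

  mmap_disable = yes
  mail_fsync = always        # or "optimized"
  mail_nfs_storage = yes     # flush NFS caches around mail file access
  mail_nfs_index = yes       # likewise for index files; both matter only
                             # when several servers touch the same mailbox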
From buchholz at easystreet.net Wed Jan 18 20:25:40 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Wed, 18 Jan 2012 10:25:40 -0800 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <1326889380.11500.16.camel@innu> References: <4F14A7AA.8010507@easystreet.net> <1326889380.11500.16.camel@innu> Message-ID: <4F170EA4.20909@easystreet.net> Timo Sirainen wrote: > On Mon, 2012-01-16 at 14:41 -0800, Don Buchholz wrote: > >> I've been having some problems with IMAP user connections to the Dovecot >> (v2.0.8) server. The following message is being logged. >> >> Jan 16 10:51:36 postal dovecot: master: Warning: >> service(imap-login): process_limit reached, client connections are >> being dropped >> > > Maybe this will help some in future: > http://hg.dovecot.org/dovecot-2.1/rev/a4e61c99c7eb > > The new error message is: > > service(imap-login): process_limit (100) reached, client connections are being dropped > Great idea! Thanks, Timo. - Don From janfrode at tanso.net Wed Jan 18 20:51:38 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 18 Jan 2012 19:51:38 +0100 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> Message-ID: <20120118185137.GA21945@dibs.tanso.net> On Wed, Jan 18, 2012 at 07:58:31PM +0200, Timo Sirainen wrote: > > > --i.e. all the > > suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not > > the case? Is there anything else (beyond moving to a director-based > > architecture) that can mitigate the risk of index corruption? In our > > case, incoming IMAP/POP are 'stuck' to servers based on IP persistence > > for a given amount of time, but incoming LDA is randomly distributed. > > What's the problem with director-based architecture? It hasn't been working reliably for lmtp in v2.0. To quote yourself: ----8<----8<----8<-----8<-----8<-----8<----8<-----8<----8<----8<-- I think the way I originally planned LMTP proxying to work is simply too complex to work reliably, perhaps even if the code was bug-free. So instead of reading+writing DATA at the same time, this patch changes the DATA to be first read into memory or temp file, and then from there read and sent to the LMTP backends: http://hg.dovecot.org/dovecot-2.1/raw-rev/51d87deb5c26 ----8<----8<----8<-----8<-----8<-----8<----8<-----8<----8<----8<-- unfortunately I haven't tested that patch, so I have no idea if it fixed the issues or not... -jf From tss at iki.fi Wed Jan 18 21:03:18 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 21:03:18 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <20120118185137.GA21945@dibs.tanso.net> References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> <20120118185137.GA21945@dibs.tanso.net> Message-ID: <23FFD99C-7D70-40BE-A4F3-FD259FFC62E9@iki.fi> On 18.1.2012, at 20.51, Jan-Frode Myklebust wrote: > On Wed, Jan 18, 2012 at 07:58:31PM +0200, Timo Sirainen wrote: >> >>> --i.e. all the >>> suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not >>> the case? Is there anything else (beyond moving to a director-based >>> architecture) that can mitigate the risk of index corruption? In our >>> case, incoming IMAP/POP are 'stuck' to servers based on IP persistence >>> for a given amount of time, but incoming LDA is randomly distributed. 
>> >> What's the problem with director-based architecture? > > It hasn't been working reliably for lmtp in v2.0. Yes, besides that :) > To quote yourself: > > ----8<----8<----8<-----8<-----8<-----8<----8<-----8<----8<----8<-- > > I think the way I originally planned LMTP proxying to work is simply too > complex to work reliably, perhaps even if the code was bug-free. So > instead of reading+writing DATA at the same time, this patch changes the > DATA to be first read into memory or temp file, and then from there read > and sent to the LMTP backends: > > http://hg.dovecot.org/dovecot-2.1/raw-rev/51d87deb5c26 > > ----8<----8<----8<-----8<-----8<-----8<----8<-----8<----8<----8<-- > > unfortunately I haven't tested that patch, so I have no idea if it > fixed the issues or not... I'm not sure if that patch is useful or not. The important patch to fix it is http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c From moseleymark at gmail.com Wed Jan 18 21:49:59 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Wed, 18 Jan 2012 11:49:59 -0800 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> Message-ID: On Wed, Jan 18, 2012 at 9:58 AM, Timo Sirainen wrote: > On 18.1.2012, at 19.54, Mark Moseley wrote: > >> I'm in the middle of working on a Maildir->mdbox migration as well, >> and likewise, over NFS (all Netapps but moving to Sun), and likewise >> with split LDA and IMAP/POP servers (and both of those served out of >> pools). I was hoping doing things like setting "mail_nfs_index = yes" >> and "mmap_disable = yes" and "mail_fsync = always/optimized" would >> mitigate most of the risks of index corruption, > > They help, but aren't 100% effective and they also make the performance worse. In testing, it seemed very much like the benefits of reducing IOPS by up to a couple orders of magnitude outweighed having to use those settings. Both in scripted testing and just using a mail UI, with the NFS-ish settings, I didn't notice any lag and doing things like checking a good-sized mailbox were at least as quick as Maildir. And I'm hoping that reducing IOPS across the entire set of NFS servers will compound the benefits quite a bit. >> as well as probably >> turning indexing off on the LDA side of things > > You can't turn off indexing with dbox. Ah, too bad. I was hoping I could get away with the LDA not updating the index but just dropping the message into storage/m.# but it'd still be seen on the IMAP/POP side--but hadn't tested that. Guess that's not the case. >> --i.e. all the >> suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not >> the case? Is there anything else (beyond moving to a director-based >> architecture) that can mitigate the risk of index corruption? In our >> case, incoming IMAP/POP are 'stuck' to servers based on IP persistence >> for a given amount of time, but incoming LDA is randomly distributed. > > What's the problem with director-based architecture? Nothing, per se. It's just that migrating to mdbox *and* to a director architecture is quite a bit more added complexity than simply migrating to mdbox alone. Hopefully, I'm not hijacking this thread. This seems pretty pertinent as well to the OP. 
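For the archives: as far as I understand the wiki, the director side we'd
be looking at boils down to a ring of proxy hosts with something like this
(sketch only - the addresses are made up, and a passdb returning proxy=y
is needed as well):

director_servers = 10.0.0.10 10.0.0.11
director_mail_servers = 10.0.1.1-10.0.1.20

service imap-login {
  executable = imap-login director
}
service pop3-login {
  executable = pop3-login director
}
protocol lmtp {
  auth_socket_path = director-userdb
}

so that a given user consistently lands on the same backend for LDA and
IMAP/POP alike.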
From janfrode at tanso.net Wed Jan 18 22:14:37 2012
From: janfrode at tanso.net (Jan-Frode Myklebust)
Date: Wed, 18 Jan 2012 21:14:37 +0100
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <23FFD99C-7D70-40BE-A4F3-FD259FFC62E9@iki.fi>
References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> <20120118185137.GA21945@dibs.tanso.net> <23FFD99C-7D70-40BE-A4F3-FD259FFC62E9@iki.fi>
Message-ID: <20120118201437.GA23070@dibs.tanso.net>

On Wed, Jan 18, 2012 at 09:03:18PM +0200, Timo Sirainen wrote:
> On 18.1.2012, at 20.51, Jan-Frode Myklebust wrote:
>
>>> What's the problem with director-based architecture?
>>
>> It hasn't been working reliably for lmtp in v2.0.
>
> Yes, besides that :)

Besides that it's great!

>> unfortunately I haven't tested that patch, so I have no idea if it
>> fixed the issues or not...
>
> I'm not sure if that patch is useful or not. The important patch to fix it is http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c

So with that one-liner on our directors, you expect lmtp proxying through
director to be better than lmtp to rr-dns towards backend servers? If so,
I guess we should give it another try.

-jf

From tss at iki.fi Wed Jan 18 22:26:31 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 22:26:31 +0200
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <20120118201437.GA23070@dibs.tanso.net>
References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> <20120118185137.GA21945@dibs.tanso.net> <23FFD99C-7D70-40BE-A4F3-FD259FFC62E9@iki.fi> <20120118201437.GA23070@dibs.tanso.net>
Message-ID: <956410A8-290E-408A-B85A-5AD46F5CDB70@iki.fi>

On 18.1.2012, at 22.14, Jan-Frode Myklebust wrote:

>>> unfortunately I haven't tested that patch, so I have no idea if it
>>> fixed the issues or not...
>>
>> I'm not sure if that patch is useful or not. The important patch to fix it is http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c
>
> So with that one-liner on our directors, you expect lmtp proxying through
> director to be better than lmtp to rr-dns towards backend servers? If so,
> I guess we should give it another try.

It should fix the hangs that were common. I'm not sure if it fixes everything without the complexity reduction patch.

From admin at opsys.de Wed Jan 18 22:41:02 2012
From: admin at opsys.de (Markus Fritz)
Date: Wed, 18 Jan 2012 21:41:02 +0100
Subject: [Dovecot] Quota won't work
Message-ID:

I tried to set up quota. I installed the newest dovecot version, patched
it and started it.

dovecot -n:
# 1.2.15: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-5-amd64 x86_64 Debian 6.0.3 ext4
log_timestamp: %Y-%m-%d %H:%M:%S
protocols: imap imaps pop3 pop3s
ssl_listen: 143
ssl_cipher_list: ALL:!LOW:!SSLv2
disable_plaintext_auth: no
login_dir: /var/run/dovecot/login
login_executable(default): /usr/lib/dovecot/imap-login
login_executable(imap): /usr/lib/dovecot/imap-login
login_executable(pop3): /usr/lib/dovecot/pop3-login
mail_privileged_group: mail
mail_location: maildir:/var/vmail/%d/%n/Maildir
mbox_write_locks: fcntl dotlock
mail_executable(default): /usr/lib/dovecot/imap
mail_executable(imap): /usr/lib/dovecot/imap
mail_executable(pop3): /usr/lib/dovecot/pop3
mail_plugins(default): quota imap_quota
mail_plugins(imap): quota imap_quota
mail_plugins(pop3): quota
mail_plugin_dir(default): /usr/lib/dovecot/modules/imap
mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap
mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3
namespace:
  type: private
  inbox: yes
  list: yes
  subscriptions: yes
lda:
  postmaster_address: postmaster at opsys.de
  mail_plugins: sieve quota
  log_path:
auth default:
  mechanisms: plain login
  verbose: yes
  passdb:
    driver: sql
    args: /etc/dovecot/dovecot-sql.conf
  userdb:
    driver: static
    args: uid=5000 gid=5000 home=/var/vmail/%d/%n/Maildir allow_all_users=yes
  socket:
    type: listen
    client:
      path: /var/spool/postfix/private/auth
      mode: 432
      user: postfix
      group: postfix
    master:
      path: /var/run/dovecot/auth-master
      mode: 384
      user: vmail

/etc/dovecot/dovecot-sql.conf:
driver = mysql
connect = host=127.0.0.1 dbname=mailserver user=mailuser password=******
default_pass_scheme = PLAIN-MD5
password_query = SELECT email as user, password FROM virtual_users WHERE email='%u';
user_query = SELECT CONCAT('/var/mail/', maildir) AS home, CONCAT('*:bytes=', quota) AS quota_rule \
  FROM virtual_users WHERE email='%u'

virtual_users has this:
CREATE TABLE IF NOT EXISTS `virtual_users` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `domain_id` int(11) NOT NULL,
  `password` varchar(32) NOT NULL,
  `email` varchar(100) NOT NULL,
  `quota` int(11) NOT NULL DEFAULT '629145600',
  PRIMARY KEY (`id`),
  UNIQUE KEY `email` (`email`),
  KEY `domain_id` (`domain_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Also postfix is installed with this (not the whole cfg):
virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
virtual_mailbox_limit_inbox = no
virtual_mailbox_limit_maps = mysql:/etc/postfix/mysql-quota.cf
virtual_mailbox_limit_override = yes
virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
virtual_maildir_extended = yes
virtual_maildir_limit_message = "The user you are trying to reach is over quota."
virtual_maildir_limit_message_maps = mail:/etc/postfix/mysql-quota.cf
virtual_overquota_bounce = yes

/etc/postfix/mysql-quota.cf:
user = mailuser
password = ******
hosts = 127.0.0.1
dbname = mailserver
query = SELECT quota FROM virtual_users WHERE email='%s'

I changed the quota of my mail account to 40, so 40 bytes should be the
maximum. My account is at a size of 600KB now. I still receive mails,
and they are saved without errors.
/var/log/mail.log says nothing about quota, just normal receive and
store entries.
What to fix?

-- 
Markus Fritz
Administration
opsys.de

From tss at iki.fi Wed Jan 18 23:05:40 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 23:05:40 +0200
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To:
References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi>
Message-ID: <233EA3FE-D978-4A62-AEE7-4E908AE83935@iki.fi>

On 18.1.2012, at 21.49, Mark Moseley wrote:

>> What's the problem with director-based architecture?
>
> Nothing, per se. It's just that migrating to mdbox *and* to a director
> architecture is quite a bit more added complexity than simply
> migrating to mdbox alone.

Yes, I agree it's safer to do one thing at a time. That's why I'd do a switch to director first. :)

From tss at iki.fi Wed Jan 18 23:07:42 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 23:07:42 +0200
Subject: [Dovecot] Quota won't work
In-Reply-To:
References:
Message-ID: <40CE1ECA-D884-4127-862E-A6733B685594@iki.fi>

On 18.1.2012, at 22.41, Markus Fritz wrote:

> passdb:
>   driver: sql
>   args: /etc/dovecot/dovecot-sql.conf
> userdb:
>   driver: static
>   args: uid=5000 gid=5000 home=/var/vmail/%d/%n/Maildir allow_all_users=yes

You use sql as passdb, static as userdb.

> password_query = SELECT email as user, password FROM virtual_users WHERE email='%u';

passdb sql executes password_query.

> user_query = SELECT CONCAT('/var/mail/', maildir) AS home, CONCAT('*:bytes=', quota) AS quota_rule \
>   FROM virtual_users WHERE email='%u'

userdb sql executes user_query. But you're not using userdb sql, you're using userdb static. This query never gets executed.

Also you don't have a plugin { quota } setting.

From Juergen.Obermann at hrz.uni-giessen.de Wed Jan 18 23:40:17 2012
From: Juergen.Obermann at hrz.uni-giessen.de (=?UTF-8?Q?J=C3=BCrgen_Obermann?=)
Date: Wed, 18 Jan 2012 22:40:17 +0100
Subject: [Dovecot] Panic: file mbox-sync.c: line 1348: assertion failed
In-Reply-To: <20120110163207.182538xtgzoxjg8w@webmail.hrz.uni-giessen.de>
References: <20120110163207.182538xtgzoxjg8w@webmail.hrz.uni-giessen.de>
Message-ID: <1460d9f2fc09b7f8f0d607cb5a86e01b@imapproxy.hrz>

On 10.01.2012 16:32, Jürgen Obermann wrote:
>
> I have the following problem with doveadm:
>
> # gdb --args /opt/local/bin/doveadm -v mailbox status -u
> userxy/g029 'messages' "Software-alle/AK-Software-Tagung"
> GNU gdb 5.3
> Copyright 2002 Free Software Foundation, Inc.
> GDB is free software, covered by the GNU General Public License, and
> you are
> welcome to change it and/or distribute copies of it under certain
> conditions.
> Type "show copying" to see the conditions.
> There is absolutely no warranty for GDB. Type "show warranty" for
> details.
> This GDB was configured as "sparc-sun-solaris2.8"...
> (gdb) run
> Starting program: /opt/local/bin/doveadm -v mailbox status -u g029
> messages Software-alle/AK-Software-Tagung
> warning: Lowest section in /lib/libthread.so.1 is .dynamic at
> 00000074
> warning: Lowest section in /lib/libdl.so.1 is .hash at 000000b4
> doveadm(g029): Panic: file mbox-sync.c: line 1348: assertion failed:
> (file_size >= sync_ctx->expunged_space + trailer_size)
> doveadm(g029): Error: Raw backtrace: 0xff1cbc30 -> 0xff319544 ->
> 0xff319fa8 -> 0xff31add8 -> 0xff31b278 -> 0xff2a69b0 -> 0xff2a6bac ->
> 0x16808 -> 0x1b8fc -> 0x16ba0 -> 0x177cc -> 0x17944 -> 0x17a50 ->
> 0x204e8 -> 0x165c8
>
> Program received signal SIGABRT, Aborted.

Hello,

the problem went away after I deleted the dovecot index files for the
mailbox.
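(Presumably running something like

doveadm force-resync -u g029 Software-alle/AK-Software-Tagung

would have rebuilt the indexes as well, without deleting the files by
hand - I did not try that, though.)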
Greetings,
Jürgen Obermann
Hochschulrechenzentrum der Justus-Liebig-Universität Gießen
Heinrich-Buff-Ring 44
Tel. 0641-9913054

From stan at hardwarefreak.com Thu Jan 19 06:39:04 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Wed, 18 Jan 2012 22:39:04 -0600
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <1326894871.11500.45.camel@innu>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu>
Message-ID: <4F179E68.5020408@hardwarefreak.com>

On 1/18/2012 7:54 AM, Timo Sirainen wrote:
> On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote:
>> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames)
>>
>> * Postfix will feed new email to Dovecot via LMTP
>>
>> * Dovecot servers have been split based on their role
>>
>>  - Dovecot LDA Servers (running LMTP protocol)
>>
>>  - Dovecot POP/IMAP servers (running POP/IMAP protocols)
>
> You're going to run into NFS caching troubles with the above split
> setup. I don't recommend it. You will see error messages about index
> corruption with it, and with dbox it can cause metadata loss.
> http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director

Would it be possible to fix this NFS mdbox index corruption issue in
this split scenario by using a dual namespace and disabling indexing on
the INBOX? The goal being no index file collisions between LDA and imap
processes. Maybe something like:

namespace {
  separator = /
  prefix = "#mbox/"
  location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY
  inbox = yes
  hidden = yes
  list = no
}
namespace {
  separator = /
  prefix =
  location = mdbox:~/mdbox
}

Client access to new mail might be a little slower, but if it eliminates
the index corruption issue and allows the split architecture, it may be
a viable option.

-- 
Stan

From ebroch at whitehorsetc.com Thu Jan 19 08:48:29 2012
From: ebroch at whitehorsetc.com (Eric Broch)
Date: Wed, 18 Jan 2012 23:48:29 -0700
Subject: [Dovecot] shared folder files not displaying in thunderbird
Message-ID: <4F17BCBD.3020802@whitehorsetc.com>

Can anyone help me figure out why email in a sub-folder (created using
Thunderbird) of a dovecot namespace will not display in Thunderbird?
...
Hello,

I have dovecot installed with the configuration below.
One of the subfolders created (using the email client) under the
'/home/vpopmail/domains/mydomain.com/shared/projects' share no longer
(it used to) displays the files located in it. There are about 150
folders under the '/home/vpopmail/domains/mydomain.com/shared/projects'
share, all of which display the files located in them; the one mentioned
used to display the contents but no longer does.

What would be the reason that one folder would no longer display
existing files in the email client (Thunderbird) and the other folders
would? And how do I fix this?

I've already tried unsubscribing and resubscribing the folder. This did
not work. Would it now be simply a matter of unsubscribing the folder,
deleting the dovecot files, and resubscribing to the folder?

Eric

# 2.0.11: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-238.19.1.el5 i686 CentOS release 5.7 (Final)
auth_cache_size = 32 M
auth_mechanisms = plain login digest-md5 cram-md5
auth_username_format = %Lu
disable_plaintext_auth = no
first_valid_uid = 89
log_path = /var/log/dovecot.log
login_greeting = Dovecot toaster ready.
namespace {
  inbox = yes
  location =
  prefix = INBOX.
  separator = .
  type = private
}
namespace {
  location = maildir:/home/vpopmail/domains/mydomain.com/shared/projects
  prefix = projects.
  separator = .
  type = public
}
passdb {
  args = cache_key=%u webmail=127.0.0.1
  driver = vpopmail
}
plugin/quota = maildir
protocols = imap
ssl_cert =

Hi,

I want to send mails directly into a public folder. If I send an email
via my local postfix, the mail is handled as a normal private mail:
Dovecot creates a mailbox in the private namespace and does not use the
mailbox in the public one. I hope you can help me with my little
problem. Here is some information about my configuration:

[root at imap1 etc]# ls -la /var/dovecot/imap/public/
insgesamt 16
drwxr-x--- 3 vmail vmail 4096 19. Jan 10:12 .
drwxr-x--- 5 vmail vmail 4096 18. Jan 08:41 ..
-rw-r----- 1 vmail vmail 0 19. Jan 10:11 dovecot-acl-list
-rw-r----- 1 vmail vmail 8 19. Jan 10:12 dovecot-uidvalidity
-r--r--r-- 1 vmail vmail 0 19. Jan 10:12 dovecot-uidvalidity.4f17de84
drwx------ 5 vmail vmail 4096 19. Jan 10:12 .hrztest

and here is my configuration:

# 2.0.9: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-220.2.1.el6.i686 i686 Red Hat Enterprise Linux Server release 6.2 (Santiago)
auth_username_format = %Ln
disable_plaintext_auth = no
login_greeting = Dovecot IMAP der Jade Hochschule.
mail_access_groups = vmail
mail_debug = yes
mail_gid = vmail
mail_plugins = quota acl
mail_uid = vmail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date imapflags notify
mbox_write_locks = fcntl
namespace {
  inbox = yes
  location = maildir:/var/dovecot/imap/%1n/%n
  prefix =
  separator = /
  type = private
}
namespace {
  list = children
  location = maildir:/var/dovecot/imap/public/
  prefix = public/
  separator = /
  subscriptions = no
  type = public
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf
  driver = ldap
}
passdb {
  driver = pam
}
plugin {
  acl = vfile
  acl_shared_dict = file:/var/lib/dovecot/shared-mailboxes
  mail_log_fields = uid box msgid size
  quota = dict:user::file:/var/dovecot/imap/%1n/%n/dovecot-quota
  quota_rule = *:storage=50MB
  quota_rule2 = Trash:storage=+10%
  sieve = /var/dovecot/imap/%1n/%n/.dovecot.sieve
  sieve_dir = /var/dovecot/imap/%1n/%n/sieve
  sieve_extensions = +notify +imapflags
  sieve_quota_max_scripts = 2
}
postmaster_address = postmaster at jade-hs.de
protocols = imap pop3 lmtp sieve
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0660
    user = postfix
  }
}
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
}
ssl_cert =

References: <1BCAD28D-8120-45C9-BAA2-B6597C34545A@apple.com> <09EF3E7A-15A2-45EE-91BD-6EEFD1FD8049@iki.fi>
Message-ID: <1326981545.11500.86.camel@innu>

On Thu, 2012-01-12 at 22:20 +0200, Timo Sirainen wrote:
> On 12.1.2012, at 1.09, Mike Abbott wrote:
>
>> In 2.0.17 you increased LOGIN_MAX_INBUF_SIZE from 1024 to 4096.
>> Should you also have increased MASTER_AUTH_MAX_DATA_SIZE from (1024*2) to (4096*2)?
>> /* This should be kept in sync with LOGIN_MAX_INBUF_SIZE. Multiply it by two
>> to make sure there's space to transfer the command tag */
>
> Well, yes.. Although I'd rather not do that.
>
> 1. Command tag length needs to be restricted to something reasonable, maybe 100 chars, so it won't have to be multiplied by 2 but just added the 100 (+1 for NUL).
>
> 2. Maybe I can change the LOGIN_MAX_INBUF_SIZE back to its original size and change the AUTHENTICATE command handling to read the SASL initial response to a separate buffer.
>
> I'll try doing those next week.
http://hg.dovecot.org/dovecot-2.1/rev/b86f7dd170c6 does this.

From moseleymark at gmail.com Thu Jan 19 19:08:06 2012
From: moseleymark at gmail.com (Mark Moseley)
Date: Thu, 19 Jan 2012 09:08:06 -0800
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <4F179E68.5020408@hardwarefreak.com>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com>
Message-ID:

On Wed, Jan 18, 2012 at 8:39 PM, Stan Hoeppner wrote:
> On 1/18/2012 7:54 AM, Timo Sirainen wrote:
>> On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote:
>
>>> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames)
>>>
>>> * Postfix will feed new email to Dovecot via LMTP
>>>
>>> * Dovecot servers have been split based on their role
>>>
>>>  - Dovecot LDA Servers (running LMTP protocol)
>>>
>>>  - Dovecot POP/IMAP servers (running POP/IMAP protocols)
>>
>> You're going to run into NFS caching troubles with the above split
>> setup. I don't recommend it. You will see error messages about index
>> corruption with it, and with dbox it can cause metadata loss.
>> http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director
>
> Would it be possible to fix this NFS mdbox index corruption issue in
> this split scenario by using a dual namespace and disabling indexing on
> the INBOX? The goal being no index file collisions between LDA and imap
> processes. Maybe something like:
>
> namespace {
>  separator = /
>  prefix = "#mbox/"
>  location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY
>  inbox = yes
>  hidden = yes
>  list = no
> }
> namespace {
>  separator = /
>  prefix =
>  location = mdbox:~/mdbox
> }
>
> Client access to new mail might be a little slower, but if it eliminates
> the index corruption issue and allows the split architecture, it may be
> a viable option.
>
> --
> Stan

It could be that I botched my test up somehow, but when I tested
something similar yesterday (pointing the index at another location on
the LDA), it didn't work. I was sending from the LDA server and
confirmed that the messages made it to storage/m.# but without the
real indexes being updated. When I checked the mailbox via IMAP, it
never seemed to register that there was a message there, so I'm
guessing that dovecot never looks at the storage files but just relies
on the indexes to be correct. That sound right, Timo?

From rob0 at gmx.co.uk Thu Jan 19 19:37:15 2012
From: rob0 at gmx.co.uk (/dev/rob0)
Date: Thu, 19 Jan 2012 11:37:15 -0600
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F14BF4B.5060804@wildgooses.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F04FAA9.3020908@localhost.localdomain.org> <4F14BF4B.5060804@wildgooses.com>
Message-ID: <20120119173715.GD14195@harrier.slackbuilds.org>

On Tue, Jan 17, 2012 at 12:22:35AM +0000, Ed W wrote:
> Note I personally believe there are valid reasons to store
> plaintext passwords - this seems to cause huge criticism due to
> the ensuing disaster which can happen if the database is pinched,
> but it does allow for enhanced security in the password exchange,
> so ultimately it depends on where your biggest risk lies...

Exactly. In any security decision, consider the threat model first.
There are too many kneejerk "secure" ideas in circulation.
-- http://rob0.nodns4.us/ -- system administration and consulting Offlist GMX mail is seen only if "/dev/rob0" is in the Subject: From tss at iki.fi Thu Jan 19 21:18:00 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 19 Jan 2012 21:18:00 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <4F179E68.5020408@hardwarefreak.com> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> Message-ID: <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> On 19.1.2012, at 6.39, Stan Hoeppner wrote: >> You're going to run into NFS caching troubles with the above split >> setup. I don't recommend it. You will see error messages about index >> corruption with it, and with dbox it can cause metadata loss. >> http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director > > Would it be possible to fix this NFS mdbox index corruption issue in > this split scenario by using a dual namespace and disabling indexing on > the INBOX? The goal being no index file collisions between LDA and imap > processes. Maybe something like: > > namespace { > separator = / > prefix = "#mbox/" > location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY > inbox = yes > hidden = yes > list = no > } > namespace { > separator = / > prefix = > location = mdbox:~/mdbox > } > > Client access to new mail might be a little slower, but if it eliminates > the index corruption issue and allows the split architecture, it may be > a viable option. That assumes that mails are only being delivered to INBOX (i.e. no Sieve or +mailbox addressing). I suppose you could do that if you can live with that limitation. Slightly better for performance would be to not actually keep INBOX mails in mbox format but use snarf plugin to move them to mdbox. And of course the above still requires that for imap/pop3 access the user is redirected to the same server every time. I don't really see it helping much. From tss at iki.fi Thu Jan 19 21:21:20 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 19 Jan 2012 21:21:20 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> Message-ID: <2DF8A9C6-EE59-4557-A1AE-4E4D2BC91C93@iki.fi> On 19.1.2012, at 19.08, Mark Moseley wrote: >> namespace { >> separator = / >> prefix = "#mbox/" >> location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY >> inbox = yes >> hidden = yes >> list = no >> } >> >> Client access to new mail might be a little slower, but if it eliminates >> the index corruption issue and allows the split architecture, it may be >> a viable option. >> >> -- >> Stan > > It could be that I botched my test up somehow, but when I tested > something similar yesterday (pointing the index at another location on > the LDA), it didn't work. Note that Stan used mbox format for INBOX, not mdbox. > I was sending from the LDA server and > confirmed that the messages made it to storage/m.# but without the > real indexes being updated. When I checked the mailbox via IMAP, it > never seemed to register that there was a message there, so I'm > guessing that dovecot never looks at the storage files but just relies > on the indexes to be correct. That sound right, Timo? Correct. dbox absolutely relies on index files always being up to date. 
In some error situations it can figure out that it should do an index rebuild and then it finds any missing mails, but in normal situations it doesn't even try, because that would unnecessarily waste disk IO. (And there's of course doveadm force-resync to force it.)

From tss at iki.fi Thu Jan 19 21:25:38 2012
From: tss at iki.fi (Timo Sirainen)
Date: Thu, 19 Jan 2012 21:25:38 +0200
Subject: [Dovecot] shared folder files not displaying in thunderbird
In-Reply-To: <4F16D52F.2040907@whitehorsetc.com>
References: <4F16D52F.2040907@whitehorsetc.com>
Message-ID: <69E3CE17-A92B-48A4-8A56-F16EE6450898@iki.fi>

On 18.1.2012, at 16.20, Eric Broch wrote:

> I have dovecot installed with the configuration below.
> One of the subfolders created (using the email client) under the
> '/home/vpopmail/domains/mydomain.com/shared/projects' share no longer
> (it used to) displays the files located in it. There are about 150
> folders under the '/home/vpopmail/domains/mydomain.com/shared/projects'
> share, all of which display the files located in them; the one mentioned
> used to display the contents but no longer does.
>
> What would be the reason that one folder would no longer display
> existing files in the email client (Thunderbird) and the other folders
> would? And how do I fix this?

So the folder itself exists, but it just appears empty? Have you tried with another IMAP client? Have you checked if the files are actually still there in the maildir?

You can check if this is a server problem or a client problem by running:

doveadm fetch -u user at domain uid mailbox project.missing.sub.folder all

If the output is empty, then Dovecot doesn't see any mails in there (check if there are any files in the maildir). If it outputs something, then the client's local cache is broken and you need to tell the client to do a resync.

> Would it now be simply a matter of unsubscribing the folder, deleting
> the dovecot files, and resubscribing to the folder?

Subscriptions won't matter. Deleting Dovecot's files may emulate the client's cache flush because it changes IMAP UIDVALIDITY.

From tss at iki.fi Thu Jan 19 21:31:57 2012
From: tss at iki.fi (Timo Sirainen)
Date: Thu, 19 Jan 2012 21:31:57 +0200
Subject: [Dovecot] Problems sending email direct into publich folders
In-Reply-To:
References:
Message-ID: <0D641C8A-B7E5-464F-9BFC-3A256ED4C615@iki.fi>

On 19.1.2012, at 14.02, Bohlken, Henning wrote:

> I want to send mails directly into a public folder. If I send an email via my local postfix, the mail is handled as a normal private mail: Dovecot creates a mailbox in the private namespace and does not use the mailbox in the public one.

Depends on how you want to do this.. For example all mails intended to be put to public namespace could be sent to a "publicuser" named user, which has write permissions to the public namespace. Then you'll simply create a sieve script for the publicuser which redirects the mails to the wanted folder (e.g. fileinto "public/hrztest").

From ebroch at whitehorsetc.com Thu Jan 19 23:03:58 2012
From: ebroch at whitehorsetc.com (Eric Broch)
Date: Thu, 19 Jan 2012 14:03:58 -0700
Subject: [Dovecot] shared folder files not displaying in thunderbird
In-Reply-To: <69E3CE17-A92B-48A4-8A56-F16EE6450898@iki.fi>
References: <4F16D52F.2040907@whitehorsetc.com> <69E3CE17-A92B-48A4-8A56-F16EE6450898@iki.fi>
Message-ID: <4F18853E.5020003@whitehorsetc.com>

Timo,

> So the folder itself exists, but it just appears empty?

Yes.

> Have you tried with another IMAP client?
Yes, both Outlook and Thunderbird > Have you checked if the files are actually still there in the maildir? I've done a list (ls -la) of the directory where the files reside (path.to.share.sub.dir/cur). They exist. > You can check if this is a server problem or a client problem by running: doveadm fetch -u user at domain uid mailbox project.missing.sub.folder all I did this per your instructions and there is no output. So, email exists in the share, and it does not show up in Thunderbird, Outlook, or using doveadm. Eric From tss at iki.fi Thu Jan 19 23:29:34 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 19 Jan 2012 23:29:34 +0200 Subject: [Dovecot] shared folder files not displaying in thunderbird In-Reply-To: <4F18853E.5020003@whitehorsetc.com> References: <4F16D52F.2040907@whitehorsetc.com> <69E3CE17-A92B-48A4-8A56-F16EE6450898@iki.fi> <4F18853E.5020003@whitehorsetc.com> Message-ID: <489C9E80-1E22-4C18-BC08-2F869592CFD6@iki.fi> On 19.1.2012, at 23.03, Eric Broch wrote: >> Have you checked if the files are actually still there in the maildir? > I've done a list (ls -la) of the directory where the files reside > (path.to.share.sub.dir/cur). They exist. >> You can check if this is a server problem or a client problem by running: doveadm fetch -u user at domain uid mailbox project.missing.sub.folder all > I did this per your instructions and there is no output. Try "touch path.to/cur" and the doveadm fetch again. Does it help? If not, there's some kind of a mismatch between what you think is happening in Dovecot and what is happening in filesystem. I'd like to know the exact full path and the mailbox name then. (Or you could run doveadm through strace and see if it's accessing the intended directory.) From stan at hardwarefreak.com Fri Jan 20 01:51:06 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Thu, 19 Jan 2012 17:51:06 -0600 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> Message-ID: <4F18AC6A.4050508@hardwarefreak.com> On 1/19/2012 1:18 PM, Timo Sirainen wrote: > On 19.1.2012, at 6.39, Stan Hoeppner wrote: > >>> You're going to run into NFS caching troubles with the above split >>> setup. I don't recommend it. You will see error messages about index >>> corruption with it, and with dbox it can cause metadata loss. >>> http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director >> >> Would it be possible to fix this NFS mdbox index corruption issue in >> this split scenario by using a dual namespace and disabling indexing on >> the INBOX? The goal being no index file collisions between LDA and imap >> processes. Maybe something like: >> >> namespace { >> separator = / >> prefix = "#mbox/" >> location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY >> inbox = yes >> hidden = yes >> list = no >> } >> namespace { >> separator = / >> prefix = >> location = mdbox:~/mdbox >> } >> >> Client access to new mail might be a little slower, but if it eliminates >> the index corruption issue and allows the split architecture, it may be >> a viable option. > > That assumes that mails are only being delivered to INBOX (i.e. no Sieve or +mailbox addressing). I suppose you could do that if you can live with that limitation. 
Slightly better for performance would be to not actually keep INBOX mails in mbox format but use snarf plugin to move them to mdbox.
>
> And of course the above still requires that for imap/pop3 access the user is redirected to the same server every time. I don't really see it helping much.

I spent a decent amount of time last night researching the NFS cache
issue. It seems there is no way to completely disable NFS client
caching (in lieu of rewriting the code oneself--a daunting task), which
would seem to be the real solution to the mdbox index corruption problem.

So I went looking for alternatives and came up with the idea above.
Obviously it's far from an optimal solution and introduces some
limitations, but I thought it was worth tossing out for discussion.

Timo, it seems that when you designed mdbox you didn't have NFS based
clusters in mind. Do you consider mdbox simply not suitable for such an
NFS cluster deployment? If one has no choice but an NFS cluster
architecture, what Dovecot mailbox format do you recommend? Stick with
maildir?

In this case the OP has Netapp storage. Netapp units support both NFS
exports and iSCSI LUNs. If the OP could utilize iSCSI instead of
NFS, switching to GFS2 or OCFS, do you see these cluster filesystems as
preferable for mdbox?

-- 
Stan

From tss at iki.fi Fri Jan 20 02:13:26 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 20 Jan 2012 02:13:26 +0200
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <4F18AC6A.4050508@hardwarefreak.com>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com>
Message-ID:

On 20.1.2012, at 1.51, Stan Hoeppner wrote:

> I spent a decent amount of time last night researching the NFS cache
> issue. It seems there is no way to completely disable NFS client
> caching (in lieu of rewriting the code oneself--a daunting task), which
> would seem to be the real solution to the mdbox index corruption problem.
>
> So I went looking for alternatives and came up with the idea above.
> Obviously it's far from an optimal solution and introduces some
> limitations, but I thought it was worth tossing out for discussion.

I spent months looking into NFS related issues. I read through Linux and FreeBSD kernel source codes to figure out if there's something I could do to avoid the problems I see. I sent some patches to try to improve things, which of course didn't get accepted (some alternative ways might have been, but it would have required much more work from my part). The mail_nfs_* settings are the result of what I found out. They don't fully work, so I gave up.

> Timo, it seems that when you designed mdbox you didn't have NFS based
> clusters in mind. Do you consider mdbox simply not suitable for such an
> NFS cluster deployment? If one has no choice but an NFS cluster
> architecture, what Dovecot mailbox format do you recommend? Stick with
> maildir?

In the typical random-access NFS setup I don't consider any of Dovecot's formats suitable. Not maildir, not dbox. Perhaps in future I can redesign everything in a way that just happens to work well with all kinds of NFS setups, but I don't really hold a lot of hope for that. It seems that either you'll get bad performance (I'm not really interested in making Dovecot do that) or you'll use such a setup where you get good performance by avoiding the NFS problems.

There are several huge Dovecot+NFS setups. They use director. It works well enough (and with the recent fixes, I'd hope perfectly).

> In this case the OP has Netapp storage. Netapp units support both NFS
> exports and iSCSI LUNs. If the OP could utilize iSCSI instead of
> NFS, switching to GFS2 or OCFS, do you see these cluster filesystems as
> preferable for mdbox?

I don't have personal experience with cluster filesystems in recent years (other than glusterfs, which had some problems, but most(/all?) were fixed already or are available from their commercial support..). Based on what I've heard, I'm guessing they work better than random-access-NFS, but even if there are no actual corruption problems, it sounds like their performance isn't very good.

From noel.butler at ausics.net Fri Jan 20 03:18:16 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Fri, 20 Jan 2012 11:18:16 +1000
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To:
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com>
Message-ID: <1327022296.9133.3.camel@tardis>

On Fri, 2012-01-20 at 02:13 +0200, Timo Sirainen wrote:
> There are several huge Dovecot+NFS setups. They use director. It works well enough (and with the recent fixes, I'd hope perfectly).

Not to mention other huge NFS setups that don't use director, and also have no problems.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 490 bytes
Desc: This is a digitally signed message part
URL:

From stan at hardwarefreak.com Fri Jan 20 04:27:59 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Thu, 19 Jan 2012 20:27:59 -0600
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To:
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com>
Message-ID: <4F18D12F.2050809@hardwarefreak.com>

On 1/19/2012 6:13 PM, Timo Sirainen wrote:
> On 20.1.2012, at 1.51, Stan Hoeppner wrote:
>
>> I spent a decent amount of time last night researching the NFS cache
>> issue. It seems there is no way to completely disable NFS client
>> caching (in lieu of rewriting the code oneself--a daunting task), which
>> would seem to be the real solution to the mdbox index corruption problem.
>>
>> So I went looking for alternatives and came up with the idea above.
>> Obviously it's far from an optimal solution and introduces some
>> limitations, but I thought it was worth tossing out for discussion.
>
> I spent months looking into NFS related issues. I read through Linux and FreeBSD kernel source codes to figure out if there's something I could do to avoid the problems I see. I sent some patches to try to improve things, which of course didn't get accepted (some alternative ways might have been, but it would have required much more work from my part). The mail_nfs_* settings are the result of what I found out. They don't fully work, so I gave up.

Yeah, I recall some of your posts from that time, and your frustration.
If an NFS config option existed to simply turn off the NFS client
caching, would that resolve most/all of the remaining issues? Or is the
problem more complex than just the file caching? I ask as it would seem
creating such a Boolean NFS config option should be simple to implement,
if the devs could be convinced of the need for it.

>> Timo, it seems that when you designed mdbox you didn't have NFS based
>> clusters in mind. Do you consider mdbox simply not suitable for such an
>> NFS cluster deployment? If one has no choice but an NFS cluster
>> architecture, what Dovecot mailbox format do you recommend? Stick with
>> maildir?
>
> In the typical random-access NFS setup I don't consider any of Dovecot's formats suitable. Not maildir, not dbox. Perhaps in future I can redesign everything in a way that just happens to work well with all kinds of NFS setups, but I don't really hold a lot of hope for that. It seems that either you'll get bad performance (I'm not really interested in making Dovecot do that) or you'll use such a setup where you get good performance by avoiding the NFS problems.
>
> There are several huge Dovecot+NFS setups. They use director. It works well enough (and with the recent fixes, I'd hope perfectly).

Are any of these huge setups using mdbox? Or does it make a difference?
I.e. Indexes are indexes whether they be maildir or mdbox. Would
Director alone allow the OP to avoid the cache corruption issues
discussed in this thread? Or would there still be problems due to the
split LDA setup?

>> In this case the OP has Netapp storage. Netapp units support both NFS
>> exports and iSCSI LUNs. If the OP could utilize iSCSI instead of
>> NFS, switching to GFS2 or OCFS, do you see these cluster filesystems as
>> preferable for mdbox?
>
> I don't have personal experience with cluster filesystems in recent years (other than glusterfs, which had some problems, but most(/all?) were fixed already or are available from their commercial support..). Based on what I've heard, I'm guessing they work better than random-access-NFS, but even if there are no actual corruption problems, it sounds like their performance isn't very good.

So would an ideal long term solution to indexes in a cluster (NFS or
clusterFS) environment be something like Dovecot's own index metadata
broker daemon/lock manager that controls access to the files/indexes?
Either a distributed token based architecture, or maybe something
'simple' such as a master node which all others send index updates to,
with the master performing the actual writes to the files, similar to a
database architecture? The former likely being more difficult to
implement, the latter having potential scalability and SPOF issues. Or
is the percentage of Dovecot cluster deployments so small that it's
difficult to justify the development investment for such a thing?

Thanks, Timo.

-- 
Stan

From robert at schetterer.org Fri Jan 20 09:43:01 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Fri, 20 Jan 2012 08:43:01 +0100
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To:
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com>
Message-ID: <4F191B05.9020409@schetterer.org>

On 20.01.2012 01:13, Timo Sirainen wrote:
> On 20.1.2012, at 1.51, Stan Hoeppner wrote:
>
>> I spent a decent amount of time last night researching the NFS cache
>> issue. It seems there is no way to completely disable NFS client
>> caching (in lieu of rewriting the code oneself--a daunting task), which
>> would seem to be the real solution to the mdbox index corruption problem.
>>
>> So I went looking for alternatives and came up with the idea above.
>> Obviously it's far from an optimal solution and introduces some
>> limitations, but I thought it was worth tossing out for discussion.
>
> I spent months looking into NFS related issues. I read through Linux and FreeBSD kernel source codes to figure out if there's something I could do to avoid the problems I see. I sent some patches to try to improve things, which of course didn't get accepted (some alternative ways might have been, but it would have required much more work from my part). The mail_nfs_* settings are the result of what I found out. They don't fully work, so I gave up.
>
>> Timo, it seems that when you designed mdbox you didn't have NFS based
>> clusters in mind. Do you consider mdbox simply not suitable for such an
>> NFS cluster deployment? If one has no choice but an NFS cluster
>> architecture, what Dovecot mailbox format do you recommend? Stick with
>> maildir?
>
> In the typical random-access NFS setup I don't consider any of Dovecot's formats suitable. Not maildir, not dbox. Perhaps in future I can redesign everything in a way that just happens to work well with all kinds of NFS setups, but I don't really hold a lot of hope for that. It seems that either you'll get bad performance (I'm not really interested in making Dovecot do that) or you'll use such a setup where you get good performance by avoiding the NFS problems.
>
> There are several huge Dovecot+NFS setups. They use director. It works well enough (and with the recent fixes, I'd hope perfectly).
>
>> In this case the OP has Netapp storage. Netapp units support both NFS
>> exports and iSCSI LUNs. If the OP could utilize iSCSI instead of
>> NFS, switching to GFS2 or OCFS, do you see these cluster filesystems as
>> preferable for mdbox?
>
> I don't have personal experience with cluster filesystems in recent years (other than glusterfs, which had some problems, but most(/all?) were fixed already or are available from their commercial support..). Based on what I've heard, I'm guessing they work better than random-access-NFS, but even if there are no actual corruption problems, it sounds like their performance isn't very good.

For info: I have 3500 users behind keepalived load balancers with DRBD
and OCFS2 on two Lucid servers. They are hit heavily by POP3, with
Maildir on Dovecot 2. In the beginning I had some performance problems,
but they were mostly related to the RAID controllers' IO, so IMAP was
very slow. Fixing those RAID problems gives good IMAP performance now
(besides some Dovecot and kernel tuneups). Anyway, I would rethink this
whole setup before going up to more users. I guess mixing load
balancers and directors is no problem. Maildir seems to be slow by
design in terms of IO, so mdbox might be better, and above all I would
investigate DRBD more and compare GFS, OCFS and other cluster
filesystems before e.g. switching to iSCSI. I also think it should be
possible to design partitioning with LDAP or SQL, i.e. to split up
heavy and big mailboxes into separate storage partitions etc. Am I
right here, Timo? Anyway, I would like to test some cross-hosting-site
setup with e.g. glusterfs, lustre etc. to get more knowledge as the
basis of a multi-redundant mail system.

-- 
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria

From ewald.lists at fun.de Fri Jan 20 14:35:39 2012
From: ewald.lists at fun.de (Ewald Dieterich)
Date: Fri, 20 Jan 2012 13:35:39 +0100
Subject: [Dovecot] Notify plugin: segmentation fault
Message-ID: <4F195F9B.3030202@fun.de>

I'm trying to develop a plugin that uses the hooks provided by the
notify plugin.
The notify plugin segfaults if you don't set the mailbox_rename hook. I
attached a patch to notify-plugin.c from Dovecot 2.0.16 that should fix
this.

Ewald
-------------- next part --------------
A non-text attachment was scrubbed...
Name: notify-plugin.c.patch
Type: text/x-diff
Size: 530 bytes
Desc: not available
URL:

From harm at vevida.nl Fri Jan 20 00:30:12 2012
From: harm at vevida.nl (Harm Weites)
Date: Thu, 19 Jan 2012 23:30:12 +0100
Subject: [Dovecot] LMTP ignoring tcpwrappers
Message-ID: <1327012212.2003.32.camel@manbearpig.lan.kantoor.vevida.net>

Hello,

we want to use dovecot LMTP for efficient mail delivery from our MX
servers (running postfix 2.8) to our storage servers (dovecot 2.0.17).
However, the one problem we see is the lack of access control when using
LMTP. It appears that every client in our network who has access to the
storage machines can drop a message in a Maildir of any user on that
storage server. To prevent this behaviour it would be nice to use
libwrap, just as it can be used for POP3/IMAP protocols. This, however,
seems to be impossible using the configuration as mentioned on the
dovecot wiki:

login_access_sockets = tcpwrap

service tcpwrap {
  unix_listener login/tcpwrap {
    group = $default_login_user
    mode = 0600
    user = $default_login_user
  }
}

This seems to imply it only works for a login, and LMTP does not use
that. The above works perfectly when trying to block access to IMAP or
POP3 in /etc/hosts.deny, though a setting for LMTP is simply ignored.

Is there a configuration setting needed for this to work for LMTP, or is
it simply not possible (yet) and does libwrap support for LMTP require
a patch?

Any help is appreciated.

Regards,
Harm

From simon.brereton at buongiorno.com Fri Jan 20 18:06:45 2012
From: simon.brereton at buongiorno.com (Simon Brereton)
Date: Fri, 20 Jan 2012 11:06:45 -0500
Subject: [Dovecot] mail_max_userip_connections exceeded.
Message-ID:

Hi

I'm using Dovecot version 1:1.2.15-7 installed on Debian Squeeze via
apt-get. I have this error in the logs.

/var/log/mail.log.1:2490:Jan 19 12:02:55 mail dovecot: imap-login:
Maximum number of connections from user+IP exceeded
(mail_max_userip_connections): user=, method=PLAIN,
rip=127.0.0.1, secured

I never changed this from the default 10. When I googled this error
there was a thread on this list from May 2011 that indicated one would
need one connection per user per subscribed folder. However, I know
that user doesn't have 10 folders, let alone 10 subscribed folders! I
can increase it, but it's not going to scale well. And there are
people on this list with 1000x as many users as I have - so how do they
deal with that?

127.0.0.1 is obviously webmail (IMP5).

So, how/why am I seeing this, and should I be concerned?

Simon

From jesus.navarro at bvox.net Fri Jan 20 18:24:41 2012
From: jesus.navarro at bvox.net (=?utf-8?q?Jes=C3=BAs_M=2E?= Navarro)
Date: Fri, 20 Jan 2012 17:24:41 +0100
Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command
Message-ID: <201201201724.41631.jesus.navarro@bvox.net>

Hi:

This is my first message to this list, so pleased to meet you all.

Using dovecot 2.0.17 from packages at xi.rename-it.nl on a Debian
"Squeeze" i686. Mail storage is a local ext3 partition (I attached the
output of dovecot -n to this message).

I'm having problems on a maildir due to dovecot returning an UID 0 to an
UID THREAD REFS command:

in <== TAG5 UID THREAD REFS us-ascii SINCE 18-Jul-2011
out <== * THREAD (0)(51 52)(53)(54 55 56)(57)(58)(59 60)(61)
TAG5 OK Thread completed.
The issuer is an atmail webmail client that, after the previous output,
tries an UID FETCH 0, which fails with a "TAG6 BAD Error in IMAP command
UID FETCH: Invalid uidset" message.

I think that, as per a previous answer from Timo Sirainen*1, this should
be considered a dovecot bug, am I right? Anyway, what should I try in
order to find out why exactly this is happening?

TIA

*1 http://www.dovecot.org/list/dovecot/2011-November/061992.html
-------------- next part --------------
# 2.0.17 (687949948a83): /etc/dovecot/dovecot.conf
# OS: Linux 2.6.29-xs5.5.0.15 i686 Debian 6.0.3 ext3
auth_cache_negative_ttl = 10 mins
auth_cache_size = 10 M
auth_debug = yes
auth_debug_passwords = yes
auth_mechanisms = plain login digest-md5 cram-md5
auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@:
auth_verbose = yes
disable_plaintext_auth = no
mail_gid = vmail
mail_location = maildir:/var/vmail/%d/%n
mail_plugins = " notify xmpp_pubsub fts fts_squat zlib"
mail_privileged_group = mail
mail_uid = vmail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
namespace {
  inbox = yes
  location =
  prefix =
  separator = /
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf
  driver = sql
}
plugin {
  enotify_xmpp_jid = dovecot at openfire/%l
  enotify_xmpp_password = [EDITED]
  enotify_xmpp_server = [EDITED]
  enotify_xmpp_use_tls = no
  fts = squat
  fts_squat = partial=4 full=10
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size vsize flags
  mail_log_group_events = no
  sieve = ~/.dovecot.sieve
  sieve_after = /var/lib/dovecot.sieve/after.d/
  sieve_before = /var/lib/dovecot.sieve/before.d/
  sieve_dir = ~/sieve
  sieve_global_path = /var/lib/dovecot.sieve/default.sieve
  xmpp_pubsub_events = delete undelete expunge copy mailbox_delete mailbox_rename
  xmpp_pubsub_fields = uid box msgid size vsize flags
}
protocols = " imap lmtp sieve pop3"
service auth {
  unix_listener auth-userdb {
    group = vmail
    mode = 0600
    user = vmail
  }
}
service imap-login {
  service_count = 0
}
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
  inet_listener sieve_deprecated {
    port = 2000
  }
}
ssl_cert =

References: <20120113224607.GS4844@bender.csupomona.edu>
Message-ID: <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi>

On 14.1.2012, at 1.19, Mark Moseley wrote:
>>> Also another idea to avoid them in the first place:
>>>
>>> service auth-worker {
>>>   idle_kill = 20
>>> }
>>
>> Ah, set the auth-worker timeout to less than the mysql timeout to
>> prevent a stale mysql connection from ever being used. I'll try that,
>> thanks.
>
> I gave that a try. Sometimes it seems to kill off the auth-worker but
> not till after a minute or so (with idle_kill = 20). Other times, the
> worker stays around for more like 5 minutes (I gave up watching),
> despite being idle -- and I'm the only person connecting to it, so
> it's definitely idle. Does auth-worker perhaps only wake up every so
> often to check its idle status?

This is fixed in v2.1 hg. The default idle_kill of 60 seconds seems to
have gotten rid of the "MySQL server has gone away" errors completely.
So I guess the problem was that during some peak times a ton of auth
worker processes were created, but afterwards they weren't used until
the next peak happened, and then they failed.
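The idea, as Mark notes above, is simply to keep the auth worker's idle
lifetime below MySQL's wait_timeout (8 hours by default), so a stale
connection never gets reused. If you'd rather be explicit than rely on
the new default, something like this should do (sketch, value to taste):

service auth-worker {
  idle_kill = 60
}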
http://hg.dovecot.org/dovecot-2.1/rev/3963862a4086 http://hg.dovecot.org/dovecot-2.1/rev/58556a90259f From tss at iki.fi Fri Jan 20 19:17:24 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 19:17:24 +0200 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <20120114001912.GZ4844@bender.csupomona.edu> References: <4F108834.60709@schetterer.org> <20120114001912.GZ4844@bender.csupomona.edu> Message-ID: <2B3DAEEA-9281-4E5B-BB90-4FCE9C61C9E4@iki.fi> On 14.1.2012, at 2.19, Paul B. Henson wrote: > On Fri, Jan 13, 2012 at 11:38:28AM -0800, Robert Schetterer wrote: > >> by the way , if you use sql for auth have you tried auth caching ? >> >> http://wiki.dovecot.org/Authentication/Caching > > That page says you can send a USR2 signal to the auth process for cache > stats? That doesn't seem to work. OTOH, that page is for version 1, not > 2; is there some other way to generate cache stats in version 2? Works for me. Are you maybe sending it to the wrong auth process (auth worker instead of master)? From tss at iki.fi Fri Jan 20 21:14:07 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:14:07 +0200 Subject: [Dovecot] Notify plugin: segmentation fault In-Reply-To: <4F195F9B.3030202@fun.de> References: <4F195F9B.3030202@fun.de> Message-ID: <4F19BCFF.904@iki.fi> On 01/20/2012 02:35 PM, Ewald Dieterich wrote: > I'm trying to develop a plugin that uses the hooks provided by the > notify plugin. The notify plugin segfaults if you don't set the > mailbox_rename hook. I attached a patch to notify-plugin.c from > Dovecot 2.0.16 that should fix this. Fixed, thanks. From tss at iki.fi Fri Jan 20 21:16:01 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:16:01 +0200 Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command In-Reply-To: <201201201724.41631.jesus.navarro@bvox.net> References: <201201201724.41631.jesus.navarro@bvox.net> Message-ID: <4F19BD71.9000603@iki.fi> On 01/20/2012 06:24 PM, Jesús M. Navarro wrote: > I'm having problems on a maildir due to dovecot returning an UID 0 to an UID > THREAD REFS command: > > I think that, as per a previous answer from Timo Sirainen*1, this should be > considered a dovecot's bug, am I right? Anyway, what should I try to find why > is this exactly happening? Yes, it's a bug. > *1 http://www.dovecot.org/list/dovecot/2011-November/061992.html Same question as in that mail: Could you instead send me such a mailbox where you can reproduce this problem? Probably sending dovecot.index, dovecot.index.log and dovecot.index.thread files would be enough. None of those contain any sensitive information. From tss at iki.fi Fri Jan 20 21:19:57 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:19:57 +0200 Subject: [Dovecot] mail_max_userip_connections exceeded. In-Reply-To: References: Message-ID: <4F19BE5D.20603@iki.fi> On 01/20/2012 06:06 PM, Simon Brereton wrote: > I have this error in the logs. > /var/log/mail.log.1:2490:Jan 19 12:02:55 mail dovecot: imap-login: > Maximum number of connections from user+IP exceeded > (mail_max_userip_connections): user=, method=PLAIN, > rip=127.0.0.1, secured > > I never changed this from the default 10. When I googled this error > there was a thread on this list from May 2011 that indicated one would > need one connection per user per subscribed folder. However, I know > that user doesn't have 10 folders, let alone 10 subscribed folders! I > can increase it, but it's not going to scale well.
And there are > people on this list with many 1000x more users than I have - so how do they > deal with that? > > 127.0.0.1 is obviously webmail (IMP5). > > So, how/why am I seeing this, and should I be concerned? Well, it really does look like IMP is using more than 10 connections at the same time. Or perhaps some of the existing connections are just hanging for some reason after IMP already discarded them, such as maybe a very long-running SEARCH command was started and IMP then gave up. You could look at the process list (with verbose_proctitle=yes) and check if the user has other processes hanging at the time when this error is logged. From tss at iki.fi Fri Jan 20 21:34:07 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:34:07 +0200 Subject: [Dovecot] LMTP ignoring tcpwrappers In-Reply-To: <1327012212.2003.32.camel@manbearpig.lan.kantoor.vevida.net> References: <1327012212.2003.32.camel@manbearpig.lan.kantoor.vevida.net> Message-ID: On 20.1.2012, at 0.30, Harm Weites wrote: > we want to use dovecot LMTP for efficient mail delivery from our MX > servers (running postfix 2.8) to our storage servers (dovecot 2.0.17). > However, the one problem we see is the lack of access control when using > LMTP. It appears that every client in our network who has access to the > storage machines can drop a message in a Maildir of any user on that > storage server. Is it a real problem? Can't they just as easily drop messages to other users' maildirs simply by sending the mail via SMTP? > To prevent this behaviour it would be nice to use > libwrap, just as it can be used for POP3/IMAP protocols. > This, however, seems to be impossible using the configuration as > mentioned on the dovecot wiki: > > login_access_sockets = tcpwrap > > This seems to imply it only works for a login, and LMTP does not use > that. The above works perfectly when trying to block access to IMAP or > POP3 in /etc/hosts.deny, though a setting for LMTP is simply ignored. Right. I'm not sure if I'd even want to add such a feature to LMTP. It doesn't really feel like it belongs there. > Is there a configuration setting needed for this to work for LMTP, or is > it simply not possible (yet) and does libwrap support for LMTP require > a patch? Not possible in Dovecot currently. You could use firewall rules. From tss at iki.fi Fri Jan 20 21:44:19 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:44:19 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <4F18D12F.2050809@hardwarefreak.com> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com> <4F18D12F.2050809@hardwarefreak.com> Message-ID: On 20.1.2012, at 4.27, Stan Hoeppner wrote: >> I spent months looking into NFS related issues. I read through Linux and FreeBSD kernel source codes to figure out if there's something I could do to avoid the problems I see. I sent some patches to try to improve things, which of course didn't get accepted (some alternative ways might have been, but it would have required much more work from my part). The mail_nfs_* settings are the result of what I found out. They don't fully work, so I gave up. > > Yeah, I recall some of your posts from that time, and your frustration. > If an NFS config option existed to simply turn off the NFS client > caching, would that resolve most/all of the remaining issues?
Or is the > problem more complex than just the file caching? I ask as it would seem > creating such a Boolean NFS config option should be simple to implement. > If the devs could be convinced of the need for it. It would work, but the performance would suck. >> There are several huge Dovecot+NFS setups. They use director. It works well enough (and with the recent fixes, I'd hope perfectly). > > Are any of these huge setups using mdbox? Or does it make a difference? I think they're all Maildirs currently, but it shouldn't make a difference. The index files are the ones most easily corrupted, so if they work then everything else should work just as well. In those director setups there have been no index corruption errors. > I.e. Indexes are indexes whether they be maildir or mdbox. Would > Director alone allow the OP to avoid the cache corruption issues > discussed in this thread? Or would there still be problems due to the > split LDA setup? By using LMTP proxying with director there wouldn't be any problems. Or using director for IMAP/POP3 and not using dovecot-lda for mail deliveries would work too. >>> In this case the OP has Netapp storage. Netapp units support both NFS >>> exports as well as iSCSI LUNs. If the OP could utilize iSCSI instead of >>> NFS, switching to GFS2 or OCFS, do you see these cluster filesystem as >>> preferable for mdbox? >> >> I don't have personal experience with cluster filesystems in recent years (other than glusterfs, which had some problems, but most(/all?) were fixed already or are available from their commercial support..). Based on what I've heard, I'm guessing they work better than random-access-NFS, but even if there are no actual corruption problems, it sounds like their performance isn't very good. > > So would an ideal long term solution to indexes in a cluster (NFS or > clusterFS) environment be something like Dovecot's own index metadata > broker daemon/lock manager that controls access to the files/indexes? > Either a distributed token based architecture, or maybe something > 'simple' such as a master node which all others send index updates to > with the master performing the actual writes to the files, similar to a > database architecture? The former likely being more difficult to > implement, the latter having potential scalability and SPOF issues. > > Or is the percentage of Dovecot cluster deployments so small that it's > difficult to justify the development investment for such a thing? I'm not sure if such daemons would be of much help. For best performance the user's mail access should be redirected to the same server in any case, and doing that solves all the other problems as well. I've a few other clustering plans besides a regular NFS based setup, but all of them rely on user normally being redirected to the same server (exception: split brain operation when mails are replicated to multiple data centers). 
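For reference, a minimal sketch of the director front end this exchange keeps coming back to. The block layout follows the 2.0-era director documentation, and every address below is a placeholder rather than a value from this thread:

# proxies run both the director service and the login processes
director_servers = 10.0.0.1 10.0.0.2
director_mail_servers = 10.0.1.1-10.0.1.20

service director {
  unix_listener login/director {
    mode = 0666
  }
  fifo_listener login/proxy-notify {
    mode = 0666
  }
  inet_listener {
    port = 9090
  }
}
service imap-login {
  executable = imap-login director
}
service pop3-login {
  executable = pop3-login director
}
protocol lmtp {
  # route LMTP deliveries through the same ring, so deliveries and
  # IMAP/POP3 sessions for a user land on the same backend
  auth_socket_path = director-userdb
}

The backends stay plain Dovecot servers; the consistent user-to-backend mapping is what avoids the NFS index corruption discussed above.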
From tss at iki.fi Fri Jan 20 21:48:00 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:48:00 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <4F191B05.9020409@schetterer.org> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com> <4F191B05.9020409@schetterer.org> Message-ID: <4623523A-742E-4C32-82A0-0918F8B2DFE4@iki.fi> On 20.1.2012, at 9.43, Robert Schetterer wrote: > i.e i think it should be possible to design partitioning with ldap or sql > to i.e split up heavy and big mailboxes in separate storage partitions etc > am i right here Timo ? You can use per-user home or mail_location that points to different storages. If you want only some folders in separate storages, you could use symlinks, but deleting such a folder probably wouldn't delete the mails (or at least not all files). From tss at iki.fi Fri Jan 20 21:58:01 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:58:01 +0200 Subject: [Dovecot] Clients show .subscriptions folder In-Reply-To: References: Message-ID: <816AB6CB-989A-4D87-8FC0-80E8BE880539@iki.fi> On 10.1.2012, at 18.34, Mark Sapiro wrote: > Since upgrading from dovecot-2.1.rc1 to dovecot-2.1.rc3, some clients > are showing a .subscriptions file in the user's mbox path as a folder. Fixed: http://hg.dovecot.org/dovecot-2.1/rev/958ef86e7f5b From tss at iki.fi Fri Jan 20 23:06:57 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 23:06:57 +0200 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> References: <20120113224607.GS4844@bender.csupomona.edu> <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> Message-ID: <9E57D55C-5F19-4291-A2E7-BC06678B2F79@iki.fi> On 20.1.2012, at 19.16, Timo Sirainen wrote: > This is fixed in v2.1 hg. The default idle_kill of 60 seconds seems to have gotten rid of the "MySQL server has gone away" errors completely. So I guess the problem was that during some peak times a ton of auth worker processes were created, but afterwards they weren't used until the next peak happened, and then they failed. Hmh. Still doesn't work 100%: auth-worker(28788): Error: mysql: Query failed, retrying: MySQL server has gone away (idled for 181 secs) auth-worker(7413): Error: mysql: Query failed, retrying: MySQL server has gone away (idled for 298 secs) I'm not really sure why it's not killing itself after 60 seconds of idling. Probably related to how mysql code tracks idle time and how idle_kill tracks it.. Anyway, those errors are much more rare now. From henson at acm.org Sat Jan 21 02:00:51 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 20 Jan 2012 16:00:51 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <9E57D55C-5F19-4291-A2E7-BC06678B2F79@iki.fi> References: <20120113224607.GS4844@bender.csupomona.edu> <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> <9E57D55C-5F19-4291-A2E7-BC06678B2F79@iki.fi> Message-ID: <4F1A0033.8060202@acm.org> On 1/20/2012 1:06 PM, Timo Sirainen wrote: > Hmh. Still doesn't work 100%: > > auth-worker(28788): Error: mysql: Query failed, retrying: MySQL > server has gone away (idled for 181 secs) auth-worker(7413): Error: > mysql: Query failed, retrying: MySQL server has gone away (idled for > 298 secs) > > I'm not really sure why it's not killing itself after 60 seconds of > idling.
Probably related to how mysql code tracks idle time and how > idle_kill tracks it.. Anyway, those errors are much more rare now. The mysql server starts counting idle time from the last network communication with the client. So presumably if the auth worker gets marked as not idle by anything not involving interaction with the mysql server, they could get out of sync. Before you posted a potential fix to the idle timeout, I was looking at other possible ways to resolve the issue. Currently, an authentication request is tried exactly twice -- one initial try, and one retry. Looking at driver-sqlpool.c: if (result->failed_try_retry && !request->retried) { Currently, retried is a boolean. What if retried were an integer instead, and a new configuration variable allowed you to specify how many times an authentication attempt should be retried? The default could be 2, which would result in exactly the same behavior. But then you could set it to 3 or 4 to prevent a request from hitting a timed out connection twice and failing completely. Ideally, a better fix would be for the client not to consider a "MySQL server has gone away" return as a failure, but instead immediately reconnect and try again without marking it as a retry. However, from reviewing the code, that would be a much more difficult and invasive change. Changing the existing retried variable to an integer count rather than a boolean is pretty simple. -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From jtam.home at gmail.com Sat Jan 21 02:48:54 2012 From: jtam.home at gmail.com (Joseph Tam) Date: Fri, 20 Jan 2012 16:48:54 -0800 (PST) Subject: [Dovecot] mail_max_userip_connections exceeded. In-Reply-To: References: Message-ID: Simon Brereton writes: > /var/log/mail.log.1:2490:Jan 19 12:02:55 mail dovecot: imap-login: > Maximum number of connections from user+IP exceeded > (mail_max_userip_connections): user=, method=PLAIN, > rip=127.0.0.1, secured > > I never changed this from the default 10. When I googled this error > there was a thread on this list from May 2011 that indicated one would > need one connection per user per subscribed folder. However, I know > that user doesn't have 10 folders, let alone 10 subscribed folders! I > can increase it, but it's not going to scale well. And there are > people on this list with many 1000x more users than I have - so how do they > deal with that? > > 127.0.0.1 is obviously webmail (IMP5). IMAP proxy or lack of proxy? IMAP proxy could be a problem if the user had opened more than 10 (unique) mailboxes. The proxy would keep this connection open until a timeout, and after some time, could accumulate more connections than your limit. The lack of proxy could solve your problem if for some reason your webmail software is not closing the IMAP connection properly (I assume IMP does a connect/authenticate/IMAP command/logout for every webmail operation). Every connection (even to the same mailbox) would open up a new connection. The proxy software will recognize the reconnection and funnel it through its cached connection. You can lsof the user's IMAP processes (or troll through /proc/{imap-process} or what you have) to figure out which mailboxes it has opened. On my system, file descriptors 9 and 11 give you the names of the index files that indicate which mailboxes are being accessed.
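A sketch of that kind of inspection, assuming Linux-style /proc and imap processes running under the mailbox owner's uid ("jdoe" and the fd numbers are placeholders, not from this thread):

# list the open dovecot index files of each imap process for one user
for pid in $(pgrep -u jdoe imap); do
    ls -l /proc/"$pid"/fd 2>/dev/null | grep dovecot.index
done
# or the same via lsof (-a ANDs the user and command-name filters)
lsof -a -u jdoe -c imap | grep dovecot.index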
Joseph Tam From mark at msapiro.net Sat Jan 21 03:02:37 2012 From: mark at msapiro.net (Mark Sapiro) Date: Fri, 20 Jan 2012 17:02:37 -0800 Subject: [Dovecot] Clients show .subscriptions folder In-Reply-To: <816AB6CB-989A-4D87-8FC0-80E8BE880539@iki.fi> Message-ID: Timo Sirainen wrote: >On 10.1.2012, at 18.34, Mark Sapiro wrote: > >> Since upgrading from dovecot-2.1.rc1 to dovecot-2.1.rc3, some clients >> are showing a .subscriptions file in the user's mbox path as a folder. > >Fixed: http://hg.dovecot.org/dovecot-2.1/rev/958ef86e7f5b Thanks Timo. I've installed the above and it seems fine. -- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan From dovecot at knutejohnson.com Sat Jan 21 03:04:46 2012 From: dovecot at knutejohnson.com (Knute Johnson) Date: Fri, 20 Jan 2012 17:04:46 -0800 Subject: [Dovecot] mail_max_userip_connections exceeded. In-Reply-To: References: Message-ID: <4F1A0F2E.9020907@knutejohnson.com> On 1/20/2012 4:48 PM, Joseph Tam wrote: > Simon Brereton writes: > >> /var/log/mail.log.1:2490:Jan 19 12:02:55 mail dovecot: imap-login: >> Maximum number of connections from user+IP exceeded >> (mail_max_userip_connections): user=, method=PLAIN, >> rip=127.0.0.1, secured >> >> I never changed this from the default 10. When I googled this error >> there was a thread on this list from May 2011 that indicated one would >> need one connection per user per subscribed folder. However, I know >> that user doesn't have 10 folders, let alone 10 subscribed folders! I >> can increase it, but it's not going to scale well. And there are >> people on this list with many 1000x more users than I have - so how do they >> deal with that? >> >> 127.0.0.1 is obviously webmail (IMP5). > > IMAP proxy or lack of proxy? > > IMAP proxy could be a problem if the user had opened more than 10 (unique) > mailboxes. The proxy would keep this connection open until a timeout, and > after some time, could accumulate more connections than your limit. > > The lack of proxy could solve your problem if for some reason your webmail > software is not closing the IMAP connection properly (I assume IMP does a > connect/authenticate/IMAP command/logout for every webmail operation). > Every connection (even to the same mailbox) would open up a new connection. > The proxy software will recognize the reconnection and funnel it through > its cached connection. > > You can lsof the user's IMAP processes (or troll through > /proc/{imap-process} or what you have) to figure out which mailboxes it > has opened. On my system, file descriptors 9 and 11 give you the names > of the index files that indicate which mailboxes are being accessed. > > Joseph Tam I'm not sure that I saw the beginning of this thread but I got the same error. I traced it to the fact that my desktop and my phone email programs were both trying to access my imap from the same local network. I changed it to 20 and I haven't seen any more problems. I don't know if that would be a problem on a really heavily used server or not. -- Knute Johnson From henson at acm.org Sat Jan 21 03:34:41 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 20 Jan 2012 17:34:41 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <2B3DAEEA-9281-4E5B-BB90-4FCE9C61C9E4@iki.fi> References: <4F108834.60709@schetterer.org> <20120114001912.GZ4844@bender.csupomona.edu> <2B3DAEEA-9281-4E5B-BB90-4FCE9C61C9E4@iki.fi> Message-ID: <4F1A1631.2000704@acm.org> On 1/20/2012 9:17 AM, Timo Sirainen wrote: > Works for me.
Are you maybe sending it to the wrong auth process (auth worker instead of master)? I had tried sending it to both; but the underlying problem turned out to be that the updated config hadn't actually been deployed yet 8-/ oops. Once I fixed that, sending the signal did generate the log output. Evidently nothing is printed out in the case where the authentication caching isn't enabled; maybe you should make it print out something like "Hey idiot, caching isn't turned on" ;). Thanks... -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From user+dovecot at localhost.localdomain.org Sat Jan 21 03:46:47 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Sat, 21 Jan 2012 02:46:47 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): doveadm mailbox list withholds child mailboxes In-Reply-To: <4F0F8815.8070609@localhost.localdomain.org> References: <4F0F8815.8070609@localhost.localdomain.org> Message-ID: <4F1A1907.4070906@localhost.localdomain.org> On 01/13/2012 02:25 AM Pascal Volk wrote: > doveadm mailbox list -u user at example.com doesn't show child mailboxes. Looks like http://hg.dovecot.org/dovecot-2.1/rev/54e74090fb42 fixed the problem. Thanks Regards, Pascal -- The trapper recommends today: defaced.1202102 at localdomain.org From henson at acm.org Sat Jan 21 04:36:56 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 20 Jan 2012 18:36:56 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> References: <20120113224607.GS4844@bender.csupomona.edu> <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> Message-ID: <20120121023656.GO4207@bender.csupomona.edu> On Fri, Jan 20, 2012 at 09:16:57AM -0800, Timo Sirainen wrote: > This is fixed in v2.1 hg. The default idle_kill of 60 seconds seems to > have gotten rid of the "MySQL server has gone away" errors completely. > So I guess the problem was that during some peak times a ton of auth > worker processes were created, but afterwards they weren't used until > the next peak happened, and then they failed. > > http://hg.dovecot.org/dovecot-2.1/rev/3963862a4086 > http://hg.dovecot.org/dovecot-2.1/rev/58556a90259f Hmm, I tried to apply this to 2.0.17, and that didn't really work out. Before I spend too much time trying to hand port the changes, do you know offhand if they simply won't apply to 2.0.17 due to other changes made since then? It looks like 2.1 might be out soon, I guess maybe I should just wait for that. Thanks... -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From admin at opsys.de Sat Jan 21 20:39:00 2012 From: admin at opsys.de (Markus Fritz) Date: Sat, 21 Jan 2012 19:39:00 +0100 Subject: [Dovecot] Sieve temporary script folder Message-ID: Hello, I got the issue that sieve wants to write its tmp files to /etc/dovecot/. But I want sieve to write to a folder where it has write permissions. I created a script to put spam into the 'Spam' folder and put it in /etc/dovecot/.dovecot.sieve. When receiving a mail, sieve wants to create a tmp file like /etc/dovecot/.dovecot.sieve.12033. How do I change the tmp folder sieve uses?
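The usual explanation (an assumption here, not confirmed in this thread) is that the temp file is Pigeonhole's compiled binary being written next to the script, so you can either precompile the script or keep it in a directory the mail user can write to:

# precompile as root; delivery then uses the existing compiled binary
# and no temp file needs to be created under /etc/dovecot/
sievec /etc/dovecot/.dovecot.sieve

# alternatively, move the script somewhere writable (path is an example):
#   sieve_global_path = /var/lib/dovecot/sieve/default.sieve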
From mikedvct at makuch.org Sun Jan 22 15:55:02 2012 From: mikedvct at makuch.org (Michael Makuch) Date: Sun, 22 Jan 2012 07:55:02 -0600 Subject: [Dovecot] where is subscribed list stored? Message-ID: <4F1C1536.1000407@makuch.org> I'm using $ /usr/sbin/dovecot --version 2.0.15 on $ cat /etc/fedora-release Fedora release 14 (Laughlin) and version 8 of Thunderbird. I use dovecot locally for internal-only access to my email archives, of which I have many gigs. Over time I end up subscribing to a couple dozen different IMAP email folders. Problem is that periodically my list of subscribed folders gets zapped to none, and I have to go and re-subscribe to a dozen or two folders again. Anyone seen this happen? It looks like the list of subscribed folders is here ~/Mail/.subscriptions and I can see in my daily backup that it reflects what appears in TBird. What might be zapping it? I use multiple email clients simultaneously on different hosts. (IOW I leave them open) Is this a problem? Does dovecot manage that in some way? Or is that my problem? I don't think this is the problem since this only occurs like a few times per year. If it were the problem I'd expect it to occur much more frequently. Thanks for any clues Mike From me at junc.org Sun Jan 22 16:10:09 2012 From: me at junc.org (Benny Pedersen) Date: Sun, 22 Jan 2012 15:10:09 +0100 Subject: [Dovecot] where is subscribed list stored? In-Reply-To: <4F1C1536.1000407@makuch.org> References: <4F1C1536.1000407@makuch.org> Message-ID: On Sun, 22 Jan 2012 07:55:02 -0600, Michael Makuch wrote: > $ cat /etc/fedora-release > Fedora release 14 (Laughlin) > > and version 8 of Thunderbird. can you use thunderbird 9 ? does the account work with eg roundcube webmail ? my own question is, is it a dovecot problem ? do you modify files outside of imap protocol ? if so you asked for it :-) From jk at jkart.de Sun Jan 22 16:22:24 2012 From: jk at jkart.de (Jim Knuth) Date: Sun, 22 Jan 2012 15:22:24 +0100 Subject: [Dovecot] where is subscribed list stored? In-Reply-To: References: <4F1C1536.1000407@makuch.org> Message-ID: <4F1C1BA0.5060305@jkart.de> On 22.01.12 15:10, Benny Pedersen wrote: > can you use thunderbird 9 ? > > does the account work with eg roundcube webmail ? I've TB9 AND Roundcube. No problems with Dovecot 2.0.17 here > > my own question is, is it a dovecot problem ? > > do you modify files outside of imap protocol ? > > if so you asked for it :-) -- Mit freundlichen Grüßen, with kind regards, Jim Knuth --------- Truthfulness and politics seldom live under one roof. (Marie Antoinette) From stan at hardwarefreak.com Sun Jan 22 22:58:03 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Sun, 22 Jan 2012 14:58:03 -0600 Subject: [Dovecot] where is subscribed list stored? In-Reply-To: <4F1C1536.1000407@makuch.org> References: <4F1C1536.1000407@makuch.org> Message-ID: <4F1C785B.8020304@hardwarefreak.com> On 1/22/2012 7:55 AM, Michael Makuch wrote: > Anyone seen this happen? It looks like the list of subscribed folders is > here ~/Mail/.subscriptions and I can see in my daily backup that it > reflects what appears in TBird. What might be zapping it? I use multiple > email clients simultaneously on different hosts. (IOW I leave them open) > Is this a problem? Does dovecot manage that in some way? Or is that my > problem? I don't think this is the problem since this only occurs like a > few times per year. If it were the problem I'd expect it to occur much > more frequently.
What do your Dovecot logs and TB Activity Manager tell you, if anything? How about logging on the other MUAs? You are a human being, and are thus limited to physical interaction with a single host at any point in time. How are you "using" multiple MUAs on multiple hosts simultaneously? Can you describe this workflow? I'm guessing you're performing some kind of automated tasks with each MUA, and that is likely the root of the problem. Please describe these automated tasks. -- Stan From jesus.navarro at bvox.net Mon Jan 23 14:55:13 2012 From: jesus.navarro at bvox.net (Jesús M. Navarro) Date: Mon, 23 Jan 2012 13:55:13 +0100 Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command In-Reply-To: <4F19BD71.9000603@iki.fi> References: <201201201724.41631.jesus.navarro@bvox.net> <4F19BD71.9000603@iki.fi> Message-ID: <201201231355.15051.jesus.navarro@bvox.net> Hi again, Timo: On Friday, 20 January 2012 20:16:01, Timo Sirainen wrote: > On 01/20/2012 06:24 PM, Jesús M. Navarro wrote: > > I'm having problems on a maildir due to dovecot returning an UID 0 to an > > UID THREAD REFS command: [...] > Could you instead send me such a mailbox where you can reproduce this > problem? Probably sending dovecot.index, dovecot.index.log and > dovecot.index.thread files would be enough. None of those contain any > sensitive information. Thank you very much. I'm sending to your personal address a whole maildir that reproduces the bug (it's very short) to avoid having it published in the mail archives. From l.chelchowski at slupsk.eurocar.pl Mon Jan 23 15:58:22 2012 From: l.chelchowski at slupsk.eurocar.pl (l.chelchowski) Date: Mon, 23 Jan 2012 14:58:22 +0100 Subject: [Dovecot] Quota-warning and setresgid In-Reply-To: References: Message-ID: Anyone? On 2012-01-10 10:34, l.chelchowski wrote: > Hi! > > Please help me with this.
> The problem exists when quota-warning is executing: > > LOG: > Jan 10 10:15:06 lmtp(85973): Debug: none: root=, index=, control=, > inbox=, alt= > Jan 10 10:15:06 lmtp(85973): Info: Connect from local > Jan 10 10:15:06 lmtp(85973): Debug: Loading modules from directory: > /usr/local/lib/dovecot > Jan 10 10:15:06 lmtp(85973): Debug: Module loaded: > /usr/local/lib/dovecot/lib10_quota_plugin.so > Jan 10 10:15:06 lmtp(85973): Debug: Module loaded: > /usr/local/lib/dovecot/lib90_sieve_plugin.so > Jan 10 10:15:06 lmtp(85973): Debug: auth input: tester at domain.eu > home=/home/vmail/domain.eu/tester/ > mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > uid=101 gid=12 quota_rule=*:storage=2097 acl_groups= > Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: > mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: > plugin/quota_rule=*:storage=2097 > Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: > plugin/acl_groups= > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Effective > uid=101, gid=12, home=/home/vmail/domain.eu/tester/ > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota root: > name=user backend=dict args=:proxy::quotadict > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: > root=user mailbox=* bytes=2147328 messages=0 > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: > root=user mailbox=Trash bytes=+429465 (20%) messages=0 > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: > root=user mailbox=SPAM bytes=+429465 (20%) messages=0 > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: > bytes=1717862 (80%) messages=0 reverse=no command=quota-warning 80 > tester at domain.eu > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: > bytes=1932595 (90%) messages=0 reverse=no command=quota-warning 90 > tester at domain.eu > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: > bytes=2039961 (95%) messages=0 reverse=no command=quota-warning 95 > tester at domain.eu > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: dict quota: > user=tester at domain.eu, uri=proxy::quotadict, noenforcing=0 > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : > type=private, prefix=, sep=/, inbox=yes, hidden=no, list=yes, > subscriptions=yes > location=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: maildir++: > root=/home/vmail/domain.eu/tester, > index=/var/mail/vmail/domain.eu/tester at domain.eu/index/public, control=, > inbox=/home/vmail/domain.eu/tester, alt= > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : > type=public, prefix=Public/, sep=/, inbox=no, hidden=no, list=children, > subscriptions=yes > location=maildir:/home/vmail/public/:CONTROL=/var/mail/vmail/domain.eu/tester/control/public:INDEX=/var/mail/vmail/domain.eu/tester/index/public:LAYOUT=fs > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: fs: > root=/home/vmail/public, > index=/var/mail/vmail/domain.eu/tester/index/public, > control=/var/mail/vmail/domain.eu/tester/control/public, inbox=, alt= > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : > type=shared, prefix=Shared/%u/, sep=/, inbox=no, hidden=no, > list=children, 
subscriptions=no > location=maildir:%h/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/shared/%u > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: shared: > root=/var/run/dovecot, index=, control=, inbox=, alt= > ... > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: quota: Executing > warning: quota-warning 95 tester at domain.eu > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Info: > bLUfAJoBDE/VTwEA9hAjDg: sieve: msgid=<4F0C0180.3040704 at domain.eu>: > stored mail into mailbox 'INBOX' > Jan 10 10:15:06 lmtp(85973): Info: Disconnect from local: Client quit > (in reset) > Jan 10 10:15:06 lda: Debug: Loading modules from directory: > /usr/local/lib/dovecot > Jan 10 10:15:06 lda: Debug: Module loaded: > /usr/local/lib/dovecot/lib01_acl_plugin.so > Jan 10 10:15:06 lda: Debug: Module loaded: > /usr/local/lib/dovecot/lib10_quota_plugin.so > Jan 10 10:15:06 lda: Debug: Module loaded: > /usr/local/lib/dovecot/lib90_sieve_plugin.so > Jan 10 10:15:06 lda: Debug: auth input: tester at domain.eu > home=/home/vmail/domain.eu/tester/ > mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > uid=101 gid=12 quota_rule=*:storage=2097 acl_groups= > Jan 10 10:15:06 lda: Debug: Added userdb setting: > mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > Jan 10 10:15:06 lda: Debug: Added userdb setting: > plugin/quota_rule=*:storage=2097 > Jan 10 10:15:06 lda: Debug: Added userdb setting: plugin/acl_groups= > Jan 10 10:15:06 lda(tester at domain.eu): Fatal: > setresgid(12(mail),12(mail),101(vmail)) failed with euid=101(vmail): > Operation not permitted > Jan 10 10:15:06 master: Error: service(quota-warning): child 85974 > returned error 75 > > dovecot -n > # 2.0.16: /usr/local/etc/dovecot/dovecot.conf > # OS: FreeBSD 8.2-RELEASE-p3 amd64 > auth_master_user_separator = * > auth_mechanisms = plain login cram-md5 > auth_username_format = %Lu > dict { > quotadict = mysql:/usr/local/etc/dovecot/dovecot-dict-sql.conf > } > disable_plaintext_auth = no > first_valid_gid = 12 > first_valid_uid = 101 > log_path = /var/log/dovecot.log > mail_debug = yes > mail_gid = vmail > mail_plugins = " quota acl" > mail_privileged_group = vmail > mail_uid = vmail > managesieve_notify_capability = mailto > managesieve_sieve_capability = fileinto reject envelope > encoded-character vacation subaddress comparator-i;ascii-numeric > relational regex imap4flags copy include variables body enotify > environment mailbox date > namespace { > inbox = yes > location = > prefix = > separator = / > type = private > } > namespace { > list = children > location = > maildir:/home/vmail/public/:CONTROL=/var/mail/vmail/%d/%n/control/public:INDEX=/var/mail/vmail/%d/%n/index/public:LAYOUT=fs > prefix = Public/ > separator = / > subscriptions = yes > type = public > } > namespace { > list = children > location = maildir:%%h/:INDEX=/var/mail/vmail/%d/%u/index/shared/%%u > prefix = Shared/%%u/ > separator = / > subscriptions = no > type = shared > } > passdb { > args = /usr/local/etc/dovecot/dovecot-sql.conf > driver = sql > } > passdb { > args = /usr/local/etc/dovecot/passwd.masterusers > driver = passwd-file > master = yes > pass = yes > } > plugin { > acl = vfile:/usr/local/etc/dovecot/acls > acl_shared_dict = > file:/usr/local/etc/dovecot/shared/shared-mailboxes.db > autocreate = Trash > autocreate2 = Junk > autocreate3 = Sent > autocreate4 = Drafts > autocreate5 = Archives > autosubscribe = Trash > 
autosubscribe2 = Junk > autosubscribe3 = Sent > autosubscribe4 = Drafts > autosubscribe5 = Public/Poczta > autosubscribe6 = Archives > fts = squat > fts_squat = partial=4 full=10 > quota = dict:user::proxy::quotadict > quota_rule2 = Trash:storage=+20%% > quota_rule3 = SPAM:storage=+20%% > quota_warning = storage=80%% quota-warning 80 %u > quota_warning2 = storage=90%% quota-warning 90 %u > quota_warning3 = storage=95%% quota-warning 95 %u > sieve = ~/.dovecot.sieve > sieve_before = /usr/local/etc/dovecot/sieve/default.sieve > sieve_dir = ~/sieve > sieve_global_dir = /usr/local/etc/dovecot/sieve > sieve_global_path = /usr/local/etc/dovecot/sieve/default.sieve > } > protocols = imap pop3 sieve lmtp > service auth { > unix_listener /var/spool/postfix/private/auth { > group = mail > mode = 0660 > user = postfix > } > unix_listener auth-userdb { > group = mail > mode = 0660 > user = vmail > } > } > service dict { > unix_listener dict { > mode = 0600 > user = vmail > } > } > service imap { > executable = imap postlogin > } > service lmtp { > unix_listener /var/spool/postfix/private/dovecot-lmtp { > group = postfix > mode = 0660 > user = postfix > } > } > service managesieve { > drop_priv_before_exec = yes > } > service pop3 { > drop_priv_before_exec = yes > } > service postlogin { > executable = script-login rawlog > } > service quota-warning { > executable = script /usr/local/bin/quota-warning.sh > unix_listener quota-warning { > user = vmail > } > user = vmail > } > ssl = no > userdb { > args = /usr/local/etc/dovecot/dovecot-sql.conf > driver = sql > } > verbose_proctitle = yes > protocol imap { > imap_client_workarounds = delay-newmail tb-extra-mailbox-sep > mail_plugins = " acl imap_acl autocreate fts fts_squat quota > imap_quota" > } > protocol lmtp { > mail_plugins = quota sieve > } > protocol pop3 { > pop3_client_workarounds = outlook-no-nuls oe-ns-eoh > pop3_uidl_format = %08Xu%08Xv > } > protocol lda { > deliver_log_format = msgid=%m: %$ > mail_plugins = sieve acl quota > postmaster_address = postmaster at domain.eu > sendmail_path = /usr/sbin/sendmail > } -- Regards, Łukasz From a.othman at cairosource.com Mon Jan 23 16:30:32 2012 From: a.othman at cairosource.com (Amira Othman) Date: Mon, 23 Jan 2012 16:30:32 +0200 Subject: [Dovecot] change smtp port Message-ID: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> Hi all I am using postfix 2.8 with dovecot-1.2.17-0_116.el5 on a CentOS 5.7 server. When I changed the smtp port from 25 to 587 in the postfix configuration my mail server stopped receiving emails. It seems strange and I don't understand why this happens; can anyone help me? Regards From giles at coochey.net Mon Jan 23 16:33:27 2012 From: giles at coochey.net (Giles Coochey) Date: Mon, 23 Jan 2012 14:33:27 +0000 Subject: [Dovecot] change smtp port In-Reply-To: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> Message-ID: On 2012-01-23 14:30, Amira Othman wrote: > Hi all > > I am using postfix 2.8 with dovecot-1.2.17-0_116.el5 on a CentOS 5.7 > server. > When I changed the smtp port from 25 to 587 in the postfix configuration my > mail > server stopped receiving emails. It seems strange and I don't > understand why this happens; can anyone help me? > > > > Regards If this SMTP server is your MX record, then you need to use port 25. Only use the 587 port for authenticated submissions from your own users for outgoing email. -- Message sent via my webmail account.
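A sketch of what that split looks like in postfix's master.cf; the service fields are standard postfix syntax, but treat the exact override options as an assumption to adapt to your setup:

# port 25 stays open for MX traffic; 587 is the authenticated
# submission service for your own users
smtp       inet  n  -  n  -  -  smtpd
submission inet  n  -  n  -  -  smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject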
From Ralf.Hildebrandt at charite.de Mon Jan 23 16:33:43 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Mon, 23 Jan 2012 15:33:43 +0100 Subject: [Dovecot] change smtp port In-Reply-To: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> Message-ID: <20120123143343.GI29761@charite.de> * Amira Othman : > Hi all > > I am using postfix 2.8 with dovecot-1.2.17-0_116.el5 on a CentOS 5.7 server. > When I changed the smtp port from 25 to 587 in the postfix configuration my mail > server stopped receiving emails. That's normal. > It seems strange and I don't understand why this happens; can > anyone help me? Mail from other systems comes in via port 25. Once you change the port, nobody can send mail to your server. Easy, no? -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From a.othman at cairosource.com Mon Jan 23 16:38:06 2012 From: a.othman at cairosource.com (Amira Othman) Date: Mon, 23 Jan 2012 16:38:06 +0200 Subject: [Dovecot] change smtp port In-Reply-To: <20120123143343.GI29761@charite.de> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> Message-ID: <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> And there is no way to receive incoming emails other than on port 25? > Hi all > > I am using postfix 2.8 with dovecot-1.2.17-0_116.el5 on a CentOS 5.7 server. > When I changed the smtp port from 25 to 587 in the postfix configuration my mail > server stopped receiving emails. That's normal. > It seems strange and I don't understand why this happens; can > anyone help me? Mail from other systems comes in via port 25. Once you change the port, nobody can send mail to your server. Easy, no? -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From giles at coochey.net Mon Jan 23 16:41:52 2012 From: giles at coochey.net (Giles Coochey) Date: Mon, 23 Jan 2012 14:41:52 +0000 Subject: [Dovecot] change smtp port In-Reply-To: <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> Message-ID: On 2012-01-23 14:38, Amira Othman wrote: > And there is no way to receive incoming emails other than on port 25? > > No. http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol From CMarcus at Media-Brokers.com Mon Jan 23 16:50:09 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Mon, 23 Jan 2012 09:50:09 -0500 Subject: [Dovecot] change smtp port In-Reply-To: References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> Message-ID: <4F1D73A1.2010504@Media-Brokers.com> On 2012-01-23 9:41 AM, Giles Coochey wrote: > On 2012-01-23 14:38, Amira Othman wrote: >> And there is no way to receive incoming emails other than on port 25? > No. > http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol Well, not precisely correct...
You *could* use a router that does port translation (translates incoming port 25 connections to port 587), but that would be extremely ugly and kludgy and I certainly don't recommend it. Amira - what you need to do is re-enable port 25, and then enable the submission service (port 587) at the same time (just uncomment the relevant lines in master.cf), and require your users to use the submission port for relaying their mail. -- Best regards, Charles From giles at coochey.net Mon Jan 23 17:01:57 2012 From: giles at coochey.net (Giles Coochey) Date: Mon, 23 Jan 2012 15:01:57 +0000 Subject: [Dovecot] change smtp port In-Reply-To: <4F1D73A1.2010504@Media-Brokers.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> <4F1D73A1.2010504@Media-Brokers.com> Message-ID: On 2012-01-23 14:50, Charles Marcus wrote: > On 2012-01-23 9:41 AM, Giles Coochey wrote: >> On 2012-01-23 14:38, Amira Othman wrote: >>> And there is no way to receive incoming emails other than on port 25? > >> No. >> http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol > > Well, not precisely correct... > Now true, you can do anything you like internally, but if you want to listen and speak with the rest of the Internet, you should be RFC compliant. RFC 821 Connection Establishment The SMTP transmission channel is a TCP connection established between the sender process port U and the receiver process port L. This single full duplex connection is used as the transmission channel. This protocol is assigned the service port 25 (31 octal), that is L=25. RFC 5321 4.5.4.2. Receiving Strategy The SMTP server SHOULD attempt to keep a pending listen on the SMTP port (specified by IANA as port 25) at all times. This requires the support of multiple incoming TCP connections for SMTP. Some limit MAY be imposed, but servers that cannot handle more than one SMTP transaction at a time are not in conformance with the intent of this specification. As discussed above, when the SMTP server receives mail from a particular host address, it could activate its own SMTP queuing mechanisms to retry any mail pending for that host address. From rasca at miamammausalinux.org Mon Jan 23 17:04:17 2012 From: rasca at miamammausalinux.org (RaSca) Date: Mon, 23 Jan 2012 16:04:17 +0100 Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2) SOLVED In-Reply-To: <1326898601.11500.56.camel@innu> References: <4F13FF00.1050108@miamammausalinux.org> <1326898601.11500.56.camel@innu> Message-ID: <4F1D76F1.9070106@miamammausalinux.org> On Wed, 18 Jan 2012 15:56:41 CET, Timo Sirainen wrote: [...] > You're using SQL only for passdb lookup. [...] > user_query isn't used, because you aren't using userdb sql. Hi Timo, thank you, I confirm everything you wrote. To help anyone with the same problem: when using virtual profiles in mysql, both a passdb sql block (needed to verify authentication) and a userdb sql block (needed to look up the user information) must be declared. For every value that has no user-specific override, a global value can be declared in the plugin area (and there must also be a "quota = maildir:User quota" declaration).
In the end, with this configuration the quota plugin works (the sql file remains the same I first posted): protocols = imap pop3 disable_plaintext_auth = no log_timestamp = "%Y-%m-%d %H:%M:%S " mail_location = maildir:/mail/mailboxes/%d/%u mail_privileged_group = mail #mail_debug = yes #auth_debug = yes mail_nfs_storage = yes mmap_disable=yes fsync_disable=no mail_nfs_index = yes protocol imap { mail_plugins = quota imap_quota } protocol pop3 { pop3_uidl_format = %08Xu%08Xv mail_plugins = quota } protocol managesieve { } protocol lda { auth_socket_path = /var/run/dovecot/auth-master postmaster_address = postmaster@ mail_plugins = sieve quota quota_full_tempfail = no } auth default { mechanisms = plain userdb sql { args = /etc/dovecot/dovecot-sql.conf } passdb sql { args = /etc/dovecot/dovecot-sql.conf } user = root socket listen { master { path = /var/run/dovecot/auth-master mode = 0600 user = vmail } client { path = /var/spool/postfix/private/auth mode = 0660 user = postfix group = postfix } } } plugin { quota = maildir:User quota quota2 = fs:Disk quota quota_rule = *:storage=1G quota_warning = storage=95%% /mail/scripts/quota-warning.sh 95 quota_warning2 = storage=80%% /mail/scripts/quota-warning.sh 80 sieve_global_path = /mail/sieve/globalsieverc } -- RaSca Mia Mamma Usa Linux: Niente ? impossibile da capire, se lo spieghi bene! rasca at miamammausalinux.org http://www.miamammausalinux.org From noeldude at gmail.com Mon Jan 23 18:14:11 2012 From: noeldude at gmail.com (Noel) Date: Mon, 23 Jan 2012 10:14:11 -0600 Subject: [Dovecot] change smtp port In-Reply-To: <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> Message-ID: <4F1D8753.9040900@gmail.com> On 1/23/2012 8:38 AM, Amira Othman wrote: > And there is no way to receive incoming emails not on port 25 ? > You can't randomly change the port you receive mail on because external MTAs have no way to find what port you're using. They will *always* use port 25 and nothing else. If your problem is that your Internet Service Provider is blocking port 25, you can contact them. Some ISPs will unblock port 25 on request, or might even have an online form you can fill out. If you can't get help from the ISP, you need a remailer service -- some outside proxy that accepts the mail for you and forwards connections to some different port on your computer. I don't know of any free services that do this; dyndns and others offer this for a fee, sometimes combined with spam/virus filtering. -- Noel Jones From moseleymark at gmail.com Mon Jan 23 21:13:56 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 23 Jan 2012 11:13:56 -0800 Subject: [Dovecot] Director questions Message-ID: In playing with dovecot director, a couple of things came up, one related to the other: 1) Is there an effective maximum of directors that shouldn't be exceeded? That is, even if technically possible, that I shouldn't go over? Since we're 100% NFS, we've scaled servers horizontally quite a bit. At this point, we've got servers operating as MTAs, servers doing IMAP/POP directly, and servers separately doing IMAP/POP as webmail backends. Works just dandy for our existing setup. But to director-ize all of them, I'm looking at a director ring of maybe 75-85 servers, which is a bit unnerving, since I don't know if the ring will be able to keep up. Is there a scale where it'll bog down? 
2) If it is too big, is there any way that I might be missing to use remote directors? It looks as if directors have to live locally on the same box as the proxy. For my MTAs, where they're not customer-facing, I'm much less worried about the latency it'd introduce. Likewise with my webmail servers, the extra latency would probably be trivial compared to the rest of the request--but then again, might not. But for direct IMAP, the latency would likely be more noticeable. So ideally I'd be able to make my IMAP servers (well, the frontside of the proxy, that is) be the director pool, while leaving my MTAs to talk to the director remotely, and possibly my webmail servers remote too. Is that a remote possibility? From tss at iki.fi Mon Jan 23 21:37:02 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 23 Jan 2012 21:37:02 +0200 Subject: [Dovecot] Director questions In-Reply-To: References: Message-ID: On 23.1.2012, at 21.13, Mark Moseley wrote: > In playing with dovecot director, a couple of things came up, one > related to the other: > > 1) Is there an effective maximum of directors that shouldn't be > exceeded? That is, even if technically possible, that I shouldn't go > over? There's no definite number, but each director adds some extra traffic to network and sometimes extra latency to lookups. So you should have only as many as you need. > Since we're 100% NFS, we've scaled servers horizontally quite a > bit. At this point, we've got servers operating as MTAs, servers doing > IMAP/POP directly, and servers separately doing IMAP/POP as webmail > backends. Works just dandy for our existing setup. But to director-ize > all of them, I'm looking at a director ring of maybe 75-85 servers, > which is a bit unnerving, since I don't know if the ring will be able > to keep up. Is there a scale where it'll bog down? That's definitely too many directors. So far the largest installation I know of has 4 directors. Another one will maybe have 6-10 to handle 2Gbps traffic. > 2) If it is too big, is there any way that I might be missing, to use > remote directors? It looks as if directors have to live locally on the > same box as the proxy. For my MTAs, where they're not customer-facing, > I'm much less worried about the latency it'd introduce. Likewise with > my webmail servers, the extra latency would probably be trivial > compared to the rest of the request--but then again, might not. But > for direct IMAP, the latency would likely be more noticeable. So ideally I'd > be able to make my IMAP servers (well, the frontside of the proxy, > that is) be the director pool, while leaving my MTAs to talk to the > director remotely, and possibly my webmail servers remote too. Is that > a remote possibility? I guess that could be a possibility, but .. Why do you need so many proxies at all? Couldn't all of your traffic go through just a few dedicated proxy/director servers? From harm at vevida.nl Mon Jan 23 22:52:34 2012 From: harm at vevida.nl (Harm Weites) Date: Mon, 23 Jan 2012 21:52:34 +0100 Subject: [Dovecot] LMTP ignoring tcpwrappers In-Reply-To: References: <1327012212.2003.32.camel@manbearpig.lan.kantoor.vevida.net> Message-ID: <1327351954.1940.15.camel@manbearpig> On Fri, 20-01-2012 at 21:34 [+0200], Timo Sirainen wrote: > On 20.1.2012, at 0.30, Harm Weites wrote: > > > we want to use dovecot LMTP for efficient mail delivery from our MX > > servers (running postfix 2.8) to our storage servers (dovecot 2.0.17). > > However, the one problem we see is the lack of access control when using > > LMTP.
It appears that every client in our network who has access to the > > storage machines can drop a message in a Maildir of any user on that > > storage server. > > Is it a real problem? Can't they just as easily drop messages to other users' maildirs simply by sending the mail via SMTP? > This is true; in that case, though, messages are not passing our content scanners, which is something we do not want. Hence the thought of configuring tcpwrappers, as can be done with the other two protocols, to only allow access to LMTP from our MX servers. > > To prevent this behaviour it would be nice to use > > libwrap, just as it can be used for POP3/IMAP protocols. > > This, however, seems to be impossible using the configuration as > > mentioned on the dovecot wiki: > > > > login_access_sockets = tcpwrap > > > > This seems to imply it only works for a login, and LMTP does not use > > that. The above works perfectly when trying to block access to IMAP or > > POP3 in /etc/hosts.deny, though a setting for LMTP is simply ignored. > > Right. I'm not sure if I'd even want to add such a feature to LMTP. It doesn't really feel like it belongs there. > Would you rather implement something completely different to cater for access control, or just leave things as they are now? > > Is there a configuration setting needed for this to work for LMTP, or is > > it simply not possible (yet) and does libwrap support for LMTP require > > a patch? > > Not possible in Dovecot currently. You could use firewall rules. Yes indeed, using some firewall rules and perhaps an extra vlan sounds ok, though I would like to use something a little less low-level. From moseleymark at gmail.com Mon Jan 23 23:44:26 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 23 Jan 2012 13:44:26 -0800 Subject: [Dovecot] Director questions In-Reply-To: References: Message-ID: On Mon, Jan 23, 2012 at 11:37 AM, Timo Sirainen wrote: > On 23.1.2012, at 21.13, Mark Moseley wrote: > >> In playing with dovecot director, a couple of things came up, one >> related to the other: >> >> 1) Is there an effective maximum of directors that shouldn't be >> exceeded? That is, even if technically possible, that I shouldn't go >> over? > > There's no definite number, but each director adds some extra traffic to network and sometimes extra latency to lookups. So you should have only as many as you need. Ok. >> Since we're 100% NFS, we've scaled servers horizontally quite a >> bit. At this point, we've got servers operating as MTAs, servers doing >> IMAP/POP directly, and servers separately doing IMAP/POP as webmail >> backends. Works just dandy for our existing setup. But to director-ize >> all of them, I'm looking at a director ring of maybe 75-85 servers, >> which is a bit unnerving, since I don't know if the ring will be able >> to keep up. Is there a scale where it'll bog down? > > That's definitely too many directors. So far the largest installation I know of has 4 directors. Another one will maybe have 6-10 to handle 2Gbps traffic. Ok >> 2) If it is too big, is there any way that I might be missing, to use >> remote directors? It looks as if directors have to live locally on the >> same box as the proxy. For my MTAs, where they're not customer-facing, >> I'm much less worried about the latency it'd introduce. Likewise with >> my webmail servers, the extra latency would probably be trivial >> compared to the rest of the request--but then again, might not. But >> for direct IMAP, the latency would likely be more noticeable. So ideally I'd
So ideally I'd >> be able to make my IMAP servers (well, the frontside of the proxy, >> that is) be the director pool, while leaving my MTAs to talk to the >> director remotely, and possibly my webmail servers remote too. Is that >> a remote possibility? > > I guess that could be a possibility, but .. Why do you need so many proxies at all? Couldn't all of your traffic go through just a few dedicated proxy/director servers? I'm probably conceptualizing it wrongly. In our system, since it's NFS, we have everything pooled. For a given mailbox, any number of MTA (Exim) boxes could actually do the delivery, any number of IMAP servers can do IMAP for that mailbox, and any number of webmail servers could do IMAP too for that mailbox. So our horizontal scaling, server-wise, is just adding more servers to pools. This is on the order of a few million mailboxes, per datacenter. It's less messy than it probably sounds :) I was assuming that at any spot where a server touched the actual mailbox, it would need to instead proxy to a set of backend servers. Is that accurate or way off? If it is accurate, it sounds like we'd need to shuffle things up a bit. From janfrode at tanso.net Mon Jan 23 23:48:00 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 23 Jan 2012 22:48:00 +0100 Subject: [Dovecot] make imap search less verbose Message-ID: <20120123214800.GA3112@dibs.tanso.net> We have an imap-client (SOGo) that doesn't handle this status output while searching: * OK Searched 76% of the mailbox, ETA 0:50 Is there any way to disable this output on the dovecot-side? -jf From tss at iki.fi Mon Jan 23 23:56:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 23 Jan 2012 23:56:49 +0200 Subject: [Dovecot] make imap search less verbose In-Reply-To: <20120123214800.GA3112@dibs.tanso.net> References: <20120123214800.GA3112@dibs.tanso.net> Message-ID: On 23.1.2012, at 23.48, Jan-Frode Myklebust wrote: > We have an imap-client (SOGo) that doesn't handle this status output while > searching: > > * OK Searched 76% of the mailbox, ETA 0:50 > > Is there any way to disable this output on the dovecot-side? No way to disable it without modifying code. I think SOGo should fix it anyway.. From janfrode at tanso.net Tue Jan 24 00:19:05 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 23 Jan 2012 23:19:05 +0100 Subject: [Dovecot] make imap search less verbose In-Reply-To: References: <20120123214800.GA3112@dibs.tanso.net> Message-ID: <20120123221905.GA3717@dibs.tanso.net> On Mon, Jan 23, 2012 at 11:56:49PM +0200, Timo Sirainen wrote: > > No way to disable it without modifying code. I think SOGo should fix it anyway.. > Ok, thanks. SOGo will get fixed. I was just looking for a quick workaround while we wait for updated sogo. -jf From tss at iki.fi Tue Jan 24 01:19:47 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 24 Jan 2012 01:19:47 +0200 Subject: [Dovecot] make imap search less verbose In-Reply-To: <20120123221905.GA3717@dibs.tanso.net> References: <20120123214800.GA3112@dibs.tanso.net> <20120123221905.GA3717@dibs.tanso.net> Message-ID: <6F0CE9DA-1344-4299-AC6C-616B22F54609@iki.fi> On 24.1.2012, at 0.19, Jan-Frode Myklebust wrote: > On Mon, Jan 23, 2012 at 11:56:49PM +0200, Timo Sirainen wrote: >> >> No way to disable it without modifying code. I think SOGo should fix it anyway.. >> > > Ok, thanks. SOGo will get fixed. I was just looking for a quick > workaround while we wait for updated sogo. 
With Dovecot you can do:

diff -r 759e879c4c42 src/lib-storage/index/index-search.c
--- a/src/lib-storage/index/index-search.c	Fri Jan 20 18:59:16 2012 +0200
+++ b/src/lib-storage/index/index-search.c	Tue Jan 24 01:19:18 2012 +0200
@@ -1200,9 +1200,9 @@
 		text = t_strdup_printf("Searched %d%% of the mailbox, "
 				       "ETA %d:%02d", (int)percentage,
 				       secs/60, secs%60);
-		box->storage->callbacks.
+		/*box->storage->callbacks.
 			notify_ok(box, text,
-				  box->storage->callback_context);
+				  box->storage->callback_context);*/
 	} T_END;
 }
 ctx->last_notify = ioloop_timeval;

From tss at iki.fi Tue Jan 24 03:58:23 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 24 Jan 2012 03:58:23 +0200 Subject: [Dovecot] dbox + SIS + zlib fixed Message-ID:

I think a few people have complained about this combination being somewhat broken, resulting in bogus "cached message size wrong" errors sometimes. This fixes it: http://hg.dovecot.org/dovecot-2.0/rev/9b2931607063

From lists at necoro.eu Tue Jan 24 11:22:48 2012 From: lists at necoro.eu (=?ISO-8859-15?Q?Ren=E9_Neumann?=) Date: Tue, 24 Jan 2012 10:22:48 +0100 Subject: [Dovecot] Capabilities of imapc Message-ID: <4F1E7868.2060102@necoro.eu>

Hi *, I can't find any decent information about the capabilities of imapc in the planned future dovecot releases. As I think about using imapc, I'll just give the two use-cases I see for me. Will this be possible with imapc? 1) One (or more) folders in a mailbox which are proxied? 2) Proxy a whole mailbox _and use the folders in it as shared folders_. That means account X on Server 1 (the dovecot box) is proxied via imapc to Server 2 (some other server). The folders of this account on Server 1 are then shared with account Y. When account Y uses these folders they are always up-to-date (so no action of account X is required). The second use-case is just some (ugly) workaround in case the first one is not possible. Thanks, René

From tss at iki.fi Tue Jan 24 11:31:27 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 24 Jan 2012 11:31:27 +0200 Subject: [Dovecot] Capabilities of imapc In-Reply-To: <4F1E7868.2060102@necoro.eu> References: <4F1E7868.2060102@necoro.eu> Message-ID: <89981E22-65F4-415A-995C-E460093BE21B@iki.fi>

On 24.1.2012, at 11.22, René Neumann wrote: > I can't find any decent information about the capabilities of imapc in > the planned future dovecot releases. Mainly it's about adding support for more IMAP commands (e.g. SEARCH), so that it doesn't necessarily have to be used as a rather dummy storage. (Although it always has to be possible to be used as a dummy storage, like it is now.) > As I think about using imapc, I'll just give the two use-cases I see for > me. Will this be possible with imapc? > > 1) One (or more) folders in a mailbox which are proxied? Currently because imapc_* settings are global, you can't have more than one imapc destination. This will be fixed at some point. Otherwise this works the same way as other storage backends: You create namespace(s) for the folders you want to proxy. > 2) Proxy a whole mailbox _and use the folders in it as shared folders_. > That means account X on Server 1 (the dovecot box) is proxied via imapc > to Server 2 (some other server). The folders of this account on Server 1 > are then shared with account Y. When account Y uses these folders they > are always up-to-date (so no action of account X is required). This should be possible, yes.
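To make use-case 1 concrete, a proxied namespace would be shaped roughly like the sketch below (an illustrative, untested sketch for v2.1, not from the thread: the host, prefix and cache path are invented, and since the imapc_* settings are global, only one remote destination is possible, as noted above):

# global imapc settings: where the remote folders live (example values)
imapc_host = imap.remote.example.com
imapc_port = 993
imapc_ssl = imaps
imapc_user = %u
imapc_password = secret

# a namespace whose folders are backed by the remote server
namespace {
  prefix = Remote.
  separator = .
  location = imapc:~/imapc-cache
  list = yes
  subscriptions = no
}

With something along these lines, folders under Remote. would be read from the other server while the rest of the account stays local.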
From lists at necoro.eu Tue Jan 24 12:15:48 2012 From: lists at necoro.eu (=?ISO-8859-1?Q?Ren=E9_Neumann?=) Date: Tue, 24 Jan 2012 11:15:48 +0100 Subject: [Dovecot] Capabilities of imapc In-Reply-To: <89981E22-65F4-415A-995C-E460093BE21B@iki.fi> References: <4F1E7868.2060102@necoro.eu> <89981E22-65F4-415A-995C-E460093BE21B@iki.fi> Message-ID: <4F1E84D4.20102@necoro.eu>

On 24.01.2012 10:31, Timo Sirainen wrote: >> As I think about using imapc, I'll just give the two use-cases I see for >> me. Will this be possible with imapc? >> >> 1) One (or more) folders in a mailbox which are proxied? > > Currently because imapc_* settings are global, you can't have more than one imapc destination. This will be fixed at some point. Otherwise this works the same way as other storage backends: You create namespace(s) for the folders you want to proxy. Ah - this sounds good. I'll try as soon as dovecot-2.1 is released (because 2.0.17 does not include imapc, right?) Thanks, René

From CMarcus at Media-Brokers.com Tue Jan 24 13:23:14 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 24 Jan 2012 06:23:14 -0500 Subject: [Dovecot] change smtp port In-Reply-To: <4F1D8753.9040900@gmail.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> Message-ID: <4F1E94A2.6050409@Media-Brokers.com>

On 2012-01-23 11:14 AM, Noel wrote: > If your problem is that your Internet Service Provider is blocking > port 25, you can contact them. Some ISPs will unblock port 25 on > request, or might even have an online form you can fill out. The OP specifically said that *he* had changed the port from 25 to 587... obviously he doesn't understand how smtp works... -- Best regards, Charles

From joshua at hybrid.pl Tue Jan 24 13:51:28 2012 From: joshua at hybrid.pl (Jacek Osiecki) Date: Tue, 24 Jan 2012 12:51:28 +0100 (CET) Subject: [Dovecot] change smtp port In-Reply-To: <4F1E94A2.6050409@Media-Brokers.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> Message-ID:

On Tue, 24 Jan 2012, Charles Marcus wrote: > On 2012-01-23 11:14 AM, Noel wrote: >> If your problem is that your Internet Service Provider is blocking >> port 25, you can contact them. Some ISPs will unblock port 25 on >> request, or might even have an online form you can fill out. > The OP specifically said that *he* had changed the port from 25 to 587... > obviously he doesn't understand how smtp works... Most probably he wanted to enable his users to send emails via his mail server using port 587, because some may have blocked access to port 25. Proper solution is to open additionally port 587 and require users to authenticate in order to send mails through the server. If it is too complicated in postfix, admin can simply map port 587 to 25 - most probably that would work well. Best regards, -- Jacek Osiecki joshua at ceti.pl GG:3828944 I don't want something I need. I want something I want.
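For reference, a minimal Postfix master.cf submission entry along the lines Jacek describes might look like the following (an illustrative sketch, not from the thread; it assumes SASL is handed off to Dovecot over the conventional private/auth socket, which has to be defined on the Dovecot side):

# master.cf: authenticated submission on port 587
submission inet n       -       n       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_sasl_type=dovecot
  -o smtpd_sasl_path=private/auth
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject

This keeps port 25 for normal MX traffic and puts authenticated submission on 587, instead of mapping 587 onto 25.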
From CMarcus at Media-Brokers.com Tue Jan 24 14:18:46 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 24 Jan 2012 07:18:46 -0500 Subject: [Dovecot] change smtp port In-Reply-To: References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> Message-ID: <4F1EA1A6.2080007@Media-Brokers.com>

On 2012-01-24 6:51 AM, Jacek Osiecki wrote: > On Tue, 24 Jan 2012, Charles Marcus wrote: >> On 2012-01-23 11:14 AM, Noel wrote: >>> If your problem is that your Internet Service Provider is blocking >>> port 25, you can contact them. Some ISPs will unblock port 25 on >>> request, or might even have an online form you can fill out. >> The OP specifically said that *he* had changed the port from 25 to >> 587... obviously he doesn't understand how smtp works... > Most probably he wanted to enable his users to send emails via his mail > server using port 587, because some may have blocked access to port 25.

Which obviously means he has not even a basic understanding of how smtp works. > Proper solution is to open additionally port 587 and require users to > authenticate in order to send mails through the server. If it is too > complicated in postfix, Which is precisely why I (and a few others) gave him those instructions... > admin can simply map port 587 to 25 - most probably that would work well. Of course it will work... but it is most definitely *not* recommended, and not only that, will totally defeat achieving the goal of using the submission port (because *all* port 587 traffic would be routed to port 25)... I only mentioned that this could be done in answer to someone who said it couldn't... -- Best regards, Charles From noeldude at gmail.com Tue Jan 24 15:39:43 2012 From: noeldude at gmail.com (Noel) Date: Tue, 24 Jan 2012 07:39:43 -0600 Subject: [Dovecot] change smtp port In-Reply-To: <4F1E94A2.6050409@Media-Brokers.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> Message-ID: <4F1EB49F.4090300@gmail.com> On 1/24/2012 5:23 AM, Charles Marcus wrote: > On 2012-01-23 11:14 AM, Noel wrote: >> If your problem is that your Internet Service Provider is blocking >> port 25, you can contact them. Some ISPs will unblock port 25 on >> request, or might even have an online form you can fill out. > > The OP specifically said that *he* had changed the port from 25 to > 587... ... because port 25 didn't work. > obviously he doesn't understand how smtp works... > and we can assume he's here to learn, not to get flamed. Anyway, this is OT for dovecot. Over and out. -- Noel Jones From devurandom at gmx.net Tue Jan 24 16:43:22 2012 From: devurandom at gmx.net (Dennis Schridde) Date: Tue, 24 Jan 2012 15:43:22 +0100 Subject: [Dovecot] Trying to get metadata plugin working In-Reply-To: <201201161651.46232.thomas@koch.ro> References: <201201161651.46232.thomas@koch.ro> Message-ID: <2007528.Wh0gVP3DHS@samson> Hi Thomas and List! Am Montag, 16. Januar 2012, 16:51:45 schrieb Thomas Koch: > dict: Error: file dict commit: file_dotlock_open(~/Maildir/shared-metadata) > failed: No such file or directory The dovecot-metadata is still a work in progress, despite my earlier message reading differently. I assumed because Akonadi began to work (my telnet tests were already successful since a while), that the dovecot plugin would also work, but noticed later that everything was a coincidence. Anyway, my config is: plugin { metadata_dict = proxy::metadata } dict { metadata = file:/var/lib/dovecot/shared-metadata } This appears to work for me - I think the key is the proxy::. --Dennis -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. 
URL:

From CMarcus at Media-Brokers.com Tue Jan 24 16:58:29 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 24 Jan 2012 09:58:29 -0500 Subject: [Dovecot] change smtp port In-Reply-To: <001801ccda96$f843e350$e8cba9f0$@othman@cairosource.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> <4F1EA1A6.2080007@Media-Brokers.com> <001801ccda96$f843e350$e8cba9f0$@othman@cairosource.com> Message-ID: <4F1EC715.8020700@Media-Brokers.com>

On 2012-01-24 7:51 AM, Amira Othman wrote: > Thanks for the reply. > > The problem is that the ISP's port 25 is for some reason not stable and > refuses connections several times, so I tried to change the port to 587 > instead of 25 to keep sending emails. And I thought that I could stop > using port 25 as it's not always working from the ISP As I said, you obviously do not understand how smtp works. This is made obvious by your questions, and failure to understand that port 25 is *the* port for receiving email on the public internet. Period. If your main problem with port 25 is *sending* (relaying outbound) mails, then you will need to take this up with your ISP. If they are unable or unwilling to address the problem, one option would be to set up your system to relay through some other smtp relay service on the internet using port 587 as you apparently read somewhere, but you don't do this by changing the main smtpd daemon to port 587, because as you discovered, you won't be able to receive *any* emails like this. That said, I fail to see any relevance to dovecot in this thread... -- Best regards, Charles

From CMarcus at Media-Brokers.com Tue Jan 24 17:07:04 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 24 Jan 2012 10:07:04 -0500 Subject: [Dovecot] change smtp port In-Reply-To: <4F1EB49F.4090300@gmail.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> <4F1EB49F.4090300@gmail.com> Message-ID: <4F1EC918.2060003@Media-Brokers.com>

On 2012-01-24 8:39 AM, Noel wrote: > On 1/24/2012 5:23 AM, Charles Marcus wrote: >> The OP specifically said that *he* had changed the port from 25 to >> 587... > ... because port 25 didn't work. For *sending*... And his complaint was that changing the port for the main smtpd process caused him to not be able to *receive* email... >> obviously he doesn't understand how smtp works... > and we can assume he's here to learn, not to get flamed. What!? Please point out how simply pointing out the obvious - that someone doesn't understand something - is the same as *flaming* them... Please... > Anyway, this is OT for dovecot. Over and out. Agreed on that one... nip/tuck

From divizio at exentrica.it Tue Jan 24 17:58:34 2012 From: divizio at exentrica.it (Luca Di Vizio) Date: Tue, 24 Jan 2012 16:58:34 +0100 Subject: [Dovecot] [PATCH] autoconf small fix Message-ID:

Hi Timo, the attached patch seems to solve a warning from autoconf: libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.in and libtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree. Best regards, Luca -------------- next part -------------- A non-text attachment was scrubbed...
Name: autoconf.patch Type: text/x-patch Size: 279 bytes Desc: not available URL: From support at palatineweb.com Tue Jan 24 18:35:10 2012 From: support at palatineweb.com (Palatine Web Support) Date: Tue, 24 Jan 2012 16:35:10 +0000 Subject: [Dovecot] =?utf-8?q?Imap_Quota_Exceeded_-_But_Still_Receiving_Ema?= =?utf-8?q?ils=3F?= Message-ID: Hello I am trying to setup dovecot maildir quota, but even though it seems to be working fine, I am still receiving emails into my inbox even though I have exceeded my quota. Here is my dovecot config: plugin { quota = maildir:User Quota quota_rule2 = Trash:storage=+100M } And my SQL config file for Dovecot (dovecot-sql.conf): user_query = SELECT '/var/vmail/%d/%n' as home, 'maildir:/var/vmail/%d/%n' as mail, 150 AS uid, 8 AS gid, CONCAT('*:storage=', quota) AS quota_rule FROM mailbox WHERE username = '%u' AND active = '1' CONCAT('*:storage=', quota) AS quota_rule quota_rule = *:storage=3M So it picks up my set quota of 3MB but dovecot is not rejecting emails if I am over my quota. Can anyone help? Thanks. Carl From lists at wildgooses.com Wed Jan 25 00:06:55 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 24 Jan 2012 22:06:55 +0000 Subject: [Dovecot] Password auth scheme question with mysql Message-ID: <4F1F2B7F.3070005@wildgooses.com> Hi, I have a current auth database using mysql with a "password" column in plain text. The config has "default_pass_scheme = PLAIN" specified In preparation for a more adaptable system I changed a password entry from "asdf" to "{PLAIN}asdf", but now auth fails. Works fine if I change it back to just "asdf". (I don't believe it's a caching problem) What might I be missing? I was under the impression that the password column can include a {scheme} prefix to indicate the password scheme (presumably this also means a password cannot start with a "{"?). Is this still true when using mysql and default_pass_scheme ? Thanks for any hints? Ed W From lists at wildgooses.com Wed Jan 25 00:51:31 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 24 Jan 2012 22:51:31 +0000 Subject: [Dovecot] Password auth scheme question with mysql In-Reply-To: <4F1F2B7F.3070005@wildgooses.com> References: <4F1F2B7F.3070005@wildgooses.com> Message-ID: <4F1F35F3.9070303@wildgooses.com> On 24/01/2012 22:06, Ed W wrote: > Hi, I have a current auth database using mysql with a "password" > column in plain text. The config has "default_pass_scheme = PLAIN" > specified > > In preparation for a more adaptable system I changed a password entry > from "asdf" to "{PLAIN}asdf", but now auth fails. Works fine if I > change it back to just "asdf". (I don't believe it's a caching problem) > > What might I be missing? I was under the impression that the password > column can include a {scheme} prefix to indicate the password scheme > (presumably this also means a password cannot start with a "{"?). Is > this still true when using mysql and default_pass_scheme ? 
Hmm, so I try: # doveadm pw -p asdf -s sha256 {SHA256}8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts= I enter this hash into my database column, then enabling debug logging I see this in the logs: Jan 24 22:40:44 mail1 dovecot: auth: Debug: cache(demo at mailasail.com,1.2.24.129): SHA256({PLAIN}asdf) != '8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts=' Jan 24 22:40:44 mail1 dovecot: auth-worker: Debug: sql(demo at blah.com,1.2.24.129): query: SELECT NULLIF(mail_host, '1.2.24.129') as proxy, NULLIF(mail_host, '1.2.24.129') as host, email as user, password, password as pass, home userdb_home, concat(home, '/', maildir) as userdb_mail, 200 as userdb_uid, 200 as userdb_gid FROM users WHERE email = if('blah.com'<>'','demo at blah.com','demo at blah.com@mailasail.com') and flag_active=1 Jan 24 22:40:44 mail1 dovecot: auth-worker: sql(demo at blah.com,1.2.24.129): Password mismatch (given password: {PLAIN}asdf) Jan 24 22:40:44 mail1 dovecot: auth-worker: Error: md5_verify(demo at mailasail.com): Not a valid MD5-CRYPT or PLAIN-MD5 password Jan 24 22:40:44 mail1 dovecot: auth-worker: Error: ssha256_verify(demo at mailasail.com): SSHA256 password too short Jan 24 22:40:44 mail1 dovecot: auth-worker: Error: ssha512_verify(demo at mailasail.com): SSHA512 password too short Jan 24 22:40:44 mail1 dovecot: auth-worker: Warning: Invalid OTP data in passdb Jan 24 22:40:44 mail1 dovecot: auth-worker: Warning: Invalid OTP data in passdb Jan 24 22:40:44 mail1 dovecot: auth-worker: Debug: sql(demo at blah.com,1.2.24.129): SHA256({PLAIN}asdf) != '8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts=' Forgot to say. this is with dovecot 2.0.17 Thanks for any pointers Ed W From lists at wildgooses.com Wed Jan 25 01:09:53 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 24 Jan 2012 23:09:53 +0000 Subject: [Dovecot] Password auth scheme question with mysql In-Reply-To: <4F1F35F3.9070303@wildgooses.com> References: <4F1F2B7F.3070005@wildgooses.com> <4F1F35F3.9070303@wildgooses.com> Message-ID: <4F1F3A41.8020206@wildgooses.com> On 24/01/2012 22:51, Ed W wrote: > Hmm, so I try: > > # doveadm pw -p asdf -s sha256 > {SHA256}8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts= > > I enter this hash into my database column, then enabling debug logging > I see this in the logs: > .. > Jan 24 22:40:44 mail1 dovecot: auth-worker: Debug: > sql(demo at blah.com,1.2.24.129): SHA256({PLAIN}asdf) != > '8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts=' Gah. Ok, so I discovered the "doveadm auth" command: # doveadm auth -x service=pop3 demo asdf passdb: demo auth succeeded extra fields: user=demo at blah.com proxy host=1.2.24.129 pass={SHA256}8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts= So why do I get an auth failed and the log files I showed in my last email when I use "telnet localhost 110" and then the commands: user demo pass asdf Help please...? Ed W From lists at wildgooses.com Wed Jan 25 02:03:35 2012 From: lists at wildgooses.com (Ed W) Date: Wed, 25 Jan 2012 00:03:35 +0000 Subject: [Dovecot] Password auth scheme question with mysql In-Reply-To: <4F1F2B7F.3070005@wildgooses.com> References: <4F1F2B7F.3070005@wildgooses.com> Message-ID: <4F1F46D7.7050600@wildgooses.com> On 24/01/2012 22:06, Ed W wrote: > Hi, I have a current auth database using mysql with a "password" > column in plain text. The config has "default_pass_scheme = PLAIN" > specified > > In preparation for a more adaptable system I changed a password entry > from "asdf" to "{PLAIN}asdf", but now auth fails. Works fine if I > change it back to just "asdf". 
(I don't believe it's a caching problem) > > What might I be missing? I was under the impression that the password > column can include a {scheme} prefix to indicate the password scheme > (presumably this also means a password cannot start with a "{"?). Is > this still true when using mysql and default_pass_scheme ? Bahh. Partly figured this out now - sorry for the noise - looks like a config error on my side: I have traced this to my proxy setup, which appears not to work as expected. Basically all works fine when I test against the main server IP, but fails when I test "localhost", since that triggers me to be proxied to the main IP address (same machine, just using the external IP). The error seems to be that I set the "pass" variable in my password_query to set the master password for the upstream proxied-to server. I can't actually remember now why this was required, but it was necessary to allow the proxy to work correctly in the past. I guess this assumption needs revisiting now, since it can't be used if the plain password isn't in the database... For interest, here is my auth setup:

password_query = SELECT NULLIF(mail_host, '%l') as proxy, NULLIF(mail_host, '%l') as host, \
  email as user, password, \
  password as pass, \
  home userdb_home, concat(home, '/', maildir) as userdb_mail, \
  1234 as userdb_uid, 1234 as userdb_gid \
  FROM users \
  WHERE email = if('%d'<>'','%u','%u at mailasail.com') and flag_active=1

"mail_host" in this case holds the IP of the machine holding the user's mailbox (hence it's easy to push mailboxes to a specific machine and the users get proxied to it). Sorry for the noise Ed W

From jd.beaubien at gmail.com Wed Jan 25 05:22:10 2012 From: jd.beaubien at gmail.com (Jean-Daniel Beaubien) Date: Tue, 24 Jan 2012 22:22:10 -0500 Subject: [Dovecot] Persistence of UIDs Message-ID:

Hi everyone, I have a question concerning UIDs. How persistent are they? I am thinking about building some form of webmail specialized for some specific business purpose and I am thinking of building a sort of cache in a DB by storing the email addr, date, subject and UID for quick lookups and search of correspondence. I am doing this because I am having issues with multiple people searching thru email folders that have 100k+ emails (which is another problem in itself, searches don't seem to scale well when a folder goes above 60k emails). So to come back to my question, can I store the UIDs and reuse those UIDs later on to obtain the body of the email??? Or can the UIDs change on the server and they will not be valid anymore? My setup is: - dovecot 1.x (will migrate to 2.x soon) - maildir - everything stored on an intel 320 SSD (index and maildir folder) Thanks, -JD

From slusarz at curecanti.org Wed Jan 25 07:27:02 2012 From: slusarz at curecanti.org (Michael M Slusarz) Date: Tue, 24 Jan 2012 22:27:02 -0700 Subject: [Dovecot] Persistence of UIDs In-Reply-To: References: Message-ID: <20120124222702.Horde.UpAiY4F5lbhPH5KmSLoWegA@bigworm.curecanti.org>

Quoting Jean-Daniel Beaubien : > I have a question concerning UIDs. How persistent are they? [snip] > So to come back to my question, can I store the UIDs and reuse those UIDs > later on to obtain the body of the email??? Or can the UIDs change on the > server and they will not be valid anymore? You really need to read RFC 3501 (http://tools.ietf.org/html/rfc3501), specifically section 2.3.1.1. Short answer: UIDs will almost always be persistent, but you always need to check UIDVALIDITY in the tiny chance that they may be invalidated.
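In IMAP terms the check is just comparing the UIDVALIDITY reported at SELECT time against the value stored alongside the cached UIDs (an illustrative exchange with made-up values, not from the thread):

a SELECT INBOX
* OK [UIDVALIDITY 1325376000] UIDs valid
* OK [UIDNEXT 123457] Predicted next UID
a OK [READ-WRITE] Select completed.

If the stored UIDVALIDITY matches, the cached UIDs may be reused (e.g. UID FETCH 123456 BODY[]); if it differs, every cached UID for that mailbox must be discarded and re-fetched.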
michael

From dmiller at amfes.com Wed Jan 25 07:38:47 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Tue, 24 Jan 2012 21:38:47 -0800 Subject: [Dovecot] Imap Quota Exceeded - But Still Receiving Emails? In-Reply-To: References: Message-ID:

On 1/24/2012 8:35 AM, Palatine Web Support wrote: > > Here is my dovecot config: > > plugin { > quota = maildir:User Quota > quota_rule2 = Trash:storage=+100M > } [..] > > So it picks up my set quota of 3MB but dovecot is not rejecting emails > if I am over my quota. > > Can anyone help? > Is the quota plugin being loaded? What is the output of: doveconf | grep -B 2 plug -- Daniel

From dovecot at bravenec.eu Wed Jan 25 09:05:47 2012 From: dovecot at bravenec.eu (Petr Bravenec) Date: Wed, 25 Jan 2012 08:05:47 +0100 Subject: [Dovecot] Dovecot antispam plugin got an empty message Message-ID: <201201250805.47430.dovecot@bravenec.eu>

A few weeks ago I upgraded dovecot from 1.2 to 2.0.16 and the antispam plugin to 2.0_pre20101222. Since the upgrade I'm not able to move messages to my Junk folder. In the maillog I have found this message: dspam[25060]: empty message (no data received) The message is copied from my INBOX to the Junk folder, but dspam got an empty message and sent an error return code. So the move operation is not successful and the original message in INBOX was not deleted. Dspam was not trained (it got an empty message). Looking at the source code of dspam and the antispam plugin, I suspect that dovecot is not sending any content to the plugin. Can you help me, please? Petr Bravenec -------------- next part --------------

# 2.0.16: /etc/dovecot/dovecot.conf
# OS: Linux 3.1.6-gentoo x86_64 Gentoo Base System release 2.0.3 ext4
auth_mechanisms = plain login
base_dir = /var/run/dovecot/
dict {
  acl = pgsql:/etc/dovecot/dovecot-acl.conf
}
disable_plaintext_auth = no
first_valid_gid = 98
first_valid_uid = 98
last_valid_gid = 98
last_valid_uid = 98
listen = *, [::]
mail_location = maildir:/home/dovecot/%u/maildir
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
namespace {
  inbox = yes
  location =
  prefix =
  separator = .
  type = private
}
namespace {
  inbox = no
  list = children
  location = maildir:/home/dovecot/%%n/maildir:INDEX=/home/dovecot/%n/shared/%%n
  prefix = Ostatni.%%n.
  separator = .
  subscriptions = no
  type = shared
}
namespace {
  inbox = no
  list = children
  location = maildir:/home/dovecot/Sdilene/maildir:INDEX=/home/dovecot/%n/public
  prefix = Sdilene.
  separator = .
  subscriptions = no
  type = public
}
passdb {
  args = session=yes
  driver = pam
}
plugin {
  acl = vfile
  acl_shared_dict = proxy::acl
  antispam_backend = dspam
  antispam_dspam_args = --user;%u;--source=error
  antispam_dspam_binary = /usr/bin/dspam
  antispam_dspam_notspam = --class=innocent
  antispam_dspam_result_header = X-DSPAM-Result
  antispam_dspam_spam = --class=spam
  antispam_mail_tmpdir = /tmp
  antispam_signature = X-DSPAM-Signature
  antispam_signature_missing = move
  antispam_spam = Junk
  antispam_trash = Trash
  antispam_unsure =
  sieve = /home/dovecot/%u/sieve.default
  sieve_before = /etc/dovecot/sieve/dspam.sieve
  sieve_dir = /home/dovecot/%u/sieve
}
protocols = imap sieve
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0660
    user = postfix
  }
  unix_listener auth-master {
    group = vmails
    mode = 0660
    user = dspam
  }
  unix_listener auth-userdb {
    group = vmails
    mode = 0660
    user = dspam
  }
  user = root
}
service dict {
  unix_listener dict {
    group = vmails
    mode = 0660
    user = dspam
  }
}
ssl_cert =

Hi, I am using dovecot 2.0.16, and assigned a global procmailrc (/etc/procmailrc) which delivers mails to the user's home directory in maildir format. Also I assigned quota to the user through the setquota (edquota) command. If the quota is exceeded, then the user's mail is stored in /var/spool/mail/user. After increasing the quota, how can these mails be delivered to the user's home dir in maildir format automatically? Thanks & Regards, Arun Kumar Gupta -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean.

From tss at iki.fi Wed Jan 25 14:45:31 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 25 Jan 2012 14:45:31 +0200 Subject: [Dovecot] Persistence of UIDs In-Reply-To: References: Message-ID: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi>

On 25.1.2012, at 5.22, Jean-Daniel Beaubien wrote: > I have a question concerning UIDs. How persistent are they? With Dovecot persistent enough. But as Michael said, check UIDVALIDITY. > I am thinking about building some form of webmail specialized for some > specific business purpose and I am thinking of building a sort of cache in > a DB by storing the email addr, date, subject and UID for quick lookups and > search of correspondence. Dovecot should already have such cache. If there are problems with that, I think it would be better to fix it on Dovecot's side rather than adding a second cache. > I am doing this because I am having issues with multiple people searching > thru email folders that have 100k+ emails (which is another problem in > itself, searches don't seem to scale well when a folder goes above 60k > emails). Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.)

From jd.beaubien at gmail.com Wed Jan 25 15:34:59 2012 From: jd.beaubien at gmail.com (Jean-Daniel Beaubien) Date: Wed, 25 Jan 2012 08:34:59 -0500 Subject: [Dovecot] Persistence of UIDs In-Reply-To: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> Message-ID:

On Wed, Jan 25, 2012 at 7:45 AM, Timo Sirainen wrote: > On 25.1.2012, at 5.22, Jean-Daniel Beaubien wrote: > > > I have a question concerning UIDs. How persistent are they? > > With Dovecot persistent enough. But as Michael said, check UIDVALIDITY. > > > I am thinking about building some form of webmail specialized for some > > specific business purpose and I am thinking of building a sort of cache > in > > a DB by storing the email addr, date, subject and UID for quick lookups > and > > search of correspondence.
> > Dovecot should already have such cache. If there are problems with that, I > think it would be better to fix it on Dovecot's side rather than adding a > second cache. > Very true. Have there been many search/index improvements since 1.0.9? I read thru the release notes but nothing jumped out at me. > > > I am doing this because I am having issues with multiple people searching > > thru email folders that have 100k+ emails (which is another problem in > > itself, searches don't seem to scale well when a folder goes above 60k > > emails). > > Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.) > I was under the impression that lucene was for full-text search. I'm just doing simple from/to field searches. I will get a few numbers together about folder_size --> search time and I will post them tonight. -jd

From CMarcus at Media-Brokers.com Wed Jan 25 15:40:18 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Wed, 25 Jan 2012 08:40:18 -0500 Subject: [Dovecot] Persistence of UIDs In-Reply-To: References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> Message-ID: <4F200642.4020008@Media-Brokers.com>

On 2012-01-25 8:34 AM, Jean-Daniel Beaubien wrote: > On Wed, Jan 25, 2012 at 7:45 AM, Timo Sirainen wrote: >> Dovecot should already have such cache. If there are problems with that, I >> think it would be better to fix it on Dovecot's side rather than adding a >> second cache. > Very true. Have there been many search/index improvements since 1.0.9? I > read thru the release notes but nothing jumped out at me. Seriously?? 1.0.9 is *very* old, and even no longer really supported. Upgrade. Really. It isn't that hard. There is zero reason to stay on an unsupported version. >>> I am doing this because I am having issues with multiple people searching >>> thru email folders that have 100k+ emails (which is another problem in >>> itself, searches don't seem to scale well when a folder goes above 60k >>> emails). >> Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.) > I was under the impression that lucene was for full-text search. I'm just > doing simple from/to field searches. > > I will get a few numbers together about folder_size --> search time and I > will post them tonight. Don't waste your time testing such an old and unsupported version; I'm sure Timo has no interest in any such numbers - *unless* you are planning on doing said tests on *both* the 1.0.9 version *and* the latest 2.0.x or 2.1 build and provide a *comparison* - *that* may be interesting... -- Best regards, Charles

From tss at iki.fi Wed Jan 25 15:47:28 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 25 Jan 2012 15:47:28 +0200 Subject: [Dovecot] Persistence of UIDs In-Reply-To: References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> Message-ID: <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi>

On 25.1.2012, at 15.34, Jean-Daniel Beaubien wrote: >>> I am thinking about building some form of webmail specialized for some >>> specific business purpose and I am thinking of building a sort of cache >> in >>> a DB by storing the email addr, date, subject and UID for quick lookups >> and >>> search of correspondence. >> >> Dovecot should already have such cache. If there are problems with that, I >> think it would be better to fix it on Dovecot's side rather than adding a >> second cache. >> > > Very true. Have there been many search/index improvements since 1.0.9? I > read thru the release notes but nothing jumped out at me. Disk I/O usage is the same probably, CPU usage is less in newer versions.
>>> I am doing this because I am having issues with multiple people searching >> thru email folders that have 100k+ emails (which is another problem in >> itself, searches don't seem to scale well when a folder goes above 60k >> emails). >> >> Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.) >> > > I was under the impression that lucene was for full-text search. I'm just > doing simple from/to field searches. In v2.1 from/to fields are also searched via FTS.

From Juergen.Obermann at hrz.uni-giessen.de Wed Jan 25 16:43:11 2012 From: Juergen.Obermann at hrz.uni-giessen.de (=?UTF-8?Q?J=C3=BCrgen_Obermann?=) Date: Wed, 25 Jan 2012 15:43:11 +0100 Subject: [Dovecot] problem compiling imaptest under solaris Message-ID: <89f61bff49f4c5343be06dd45459b14a@imapproxy.hrz>

Hello, today I tried to compile imaptest under solaris 10 with the studio 11 compiler and got the following error:

gmake[2]: Entering directory `/net/fileserv/export/sunsrc/src/imaptest-20111119/src'
source='client.c' object='client.o' libtool=no \
DEPDIR=.deps depmode=none /bin/bash ../depcomp \
cc -DHAVE_CONFIG_H -I. -I. -I.. -I/opt/local/include/dovecot -I/usr/local/include -fast -xarch=v8plusa -I/usr/sfw/include -c client.c
"/opt/local/include/dovecot/imap-util.h", line 6: warning: useless declaration
"client-state.h", line 6: warning: useless declaration
"client.c", line 655: operand cannot have void type: op "=="
"client.c", line 655: operands have incompatible types: const void "==" int
cc: acomp failed for client.c

what can I do? Thanks for any help, Jürgen -- Jürgen Obermann Hochschulrechenzentrum der Justus-Liebig-Universität Gießen Heinrich-Buff-Ring 44 Tel. 0641-9913054

From tom at whyscream.net Wed Jan 25 18:19:18 2012 From: tom at whyscream.net (Tom Hendrikx) Date: Wed, 25 Jan 2012 17:19:18 +0100 Subject: [Dovecot] Dovecot antispam plugin got an empty message In-Reply-To: <201201250805.47430.dovecot@bravenec.eu> References: <201201250805.47430.dovecot@bravenec.eu> Message-ID: <4F202B86.9000102@whyscream.net>

On 25-01-12 08:05, Petr Bravenec wrote: > A few weeks ago I upgraded dovecot from 1.2 to 2.0.16 and the antispam plugin to > 2.0_pre20101222. Since the upgrade I'm not able to move messages to my Junk > folder. In the maillog I have found this message: > > dspam[25060]: empty message (no data received) > Gentoo has included the antispam plugin from Johannes historically, but added the fork by Eugene to support upgrades to dovecot 2.0. What is not really made clear by the gentoo ebuild is that the forked plugin needs a slightly different config.
I use the config below with dovecot 2.0.17 and a git checkout for dovecot-antispam:

===8<========
plugin {
  antispam_signature = X-DSPAM-Signature
  antispam_signature_missing = move
  antispam_spam_pattern_ignorecase = Junk;Junk.*
  antispam_trash_pattern_ignorecase = Trash;Deleted Items;Deleted Messages

  # Backend specific
  antispam_backend = dspam
  antispam_dspam_binary = /usr/bin/dspamc
  antispam_dspam_args = --user;%u;--deliver=;--source=error;--signature=%%s
  antispam_dspam_spam = --class=spam
  antispam_dspam_notspam = --class=innocent
  #antispam_dspam_result_header = X-DSPAM-Result
}

-- Regards, Tom

From CMarcus at Media-Brokers.com Wed Jan 25 18:42:39 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Wed, 25 Jan 2012 11:42:39 -0500 Subject: [Dovecot] move mails from spool to users home dir (maildir format) automatically In-Reply-To: References: Message-ID: <4F2030FF.1080304@Media-Brokers.com>

On 2012-01-25 3:19 AM, Arun Gupta wrote: > I am using dovecot 2.0.16, and assigned a global procmailrc > (/etc/procmailrc) which delivers mails to the user's home directory in > maildir format. Also I assigned quota to the user through the setquota > (edquota) command. If the quota is exceeded, then the user's mail is > stored in /var/spool/mail/user. After increasing the quota, how can > these mails be delivered to the user's home dir in maildir format > automatically? Best practice is to reject mail for users over quota (as long as you do this during the smtp transaction... otherwise, what's the point? They can still fill up your server)... -- Best regards, Charles

From dmiller at amfes.com Wed Jan 25 18:55:09 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Wed, 25 Jan 2012 08:55:09 -0800 Subject: [Dovecot] Imap Quota Exceeded - But Still Receiving Emails? In-Reply-To: <58f41e2e84d4befd5b09a1cb913e57b4@palatineweb.com> References: <4F1F9567.1030804@amfes.com> <58f41e2e84d4befd5b09a1cb913e57b4@palatineweb.com> Message-ID:

On 1/25/2012 1:39 AM, Palatine Web Support wrote: > On 2012-01-25 05:38, Daniel L. Miller wrote: >> On 1/24/2012 8:35 AM, Palatine Web Support wrote: >>> >>> Here is my dovecot config: >>> >>> plugin { >>> quota = maildir:User Quota >>> quota_rule2 = Trash:storage=+100M >>> } >> [..] >>> >>> So it picks up my set quota of 3MB but dovecot is not rejecting >>> emails if I am over my quota. >>> >>> Can anyone help? >>> >> Is the quota plugin being loaded? What is the output of: >> >> doveconf | grep -B 2 plug > > Hi Daniel > > I tried the command and it returned the command was not found. I have > installed: > > apt-get install dovecot-common > apt-get install dovecot-dev > apt-get install dovecot-imapd > > > Which package does the binary doveconf come from? You need to make sure to reply to the list - not just to me. If you don't have doveconf... what version of Dovecot are you using? -- Daniel

From dmiller at amfes.com Wed Jan 25 19:01:30 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Wed, 25 Jan 2012 09:01:30 -0800 Subject: [Dovecot] Imap Quota Exceeded - But Still Receiving Emails? In-Reply-To: <747f97172fd71affd2ee5b5ebcc5d16c@palatineweb.com> References: <4F1F9567.1030804@amfes.com> <747f97172fd71affd2ee5b5ebcc5d16c@palatineweb.com> Message-ID:

On 1/25/2012 2:01 AM, Palatine Web Support wrote: > On 2012-01-25 05:38, Daniel L. Miller wrote: >> On 1/24/2012 8:35 AM, Palatine Web Support wrote: >>> >>> Here is my dovecot config: >>> >>> plugin { >>> quota = maildir:User Quota >>> quota_rule2 = Trash:storage=+100M >>> } >> [..]
>>> >>> So it picks up my set quota of 3MB but dovecot is not rejecting >>> emails if I am over my quota. >>> >>> Can anyone help? >>> >> Is the quota plugin being loaded? What is the output of: >> >> doveconf | grep -B 2 plug > > The modules are being loaded. From the log file with debugging enabled: > > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Loading modules from > directory: /usr/lib/dovecot/modules/imap > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Module loaded: > /usr/lib/dovecot/modules/imap/lib10_quota_plugin.so > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Module loaded: > /usr/lib/dovecot/modules/imap/lib11_imap_quota_plugin.so > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Effective uid=150, > gid=8, home=/var/vmail/xxx.com/support > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota root: name=User > Quota backend=dirsize args= > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota rule: root=User > Quota mailbox=* bytes=3145728 messages=0 > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota rule: root=User > Quota mailbox=Trash bytes=104857600 messages=0 > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): maildir: > data=/var/vmail/xxx.com/support > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): maildir++: > root=/var/vmail/xxx.com/support, index=, control=, > inbox=/var/vmail/xxx.com/support > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Namespace : Using > permissions from /var/vmail/xxx.com/support: mode=0700 gid=-1 > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Disconnected: Logged > out bytes=82/573 > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Disconnected: Logged > out bytes=269/8243 > I don't know if it makes any difference, but in your config file, try changing: plugin { quota = maildir:User Quota to plugin { quota = maildir:User quota (lowercase the "quota") -- Daniel From tss at iki.fi Thu Jan 26 01:03:58 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 01:03:58 +0200 Subject: [Dovecot] v2.1.rc5 released Message-ID: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc5.tar.gz http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc5.tar.gz.sig I'm still lagging behind reading emails. v2.1.0 will be released after I've finished that. RC5 is already stable and used in production, but I want to make sure that I haven't missed anything important that was reported previously. Most of the recent fixed bugs existed also in v2.0 series. Changes since rc3: * Temporary authentication failures sent to IMAP/POP3 clients now includes the server's hostname and timestamp. This makes it easier to find the error message from logs. + auth: Implemented support for Postfix's "TCP map" sockets for user existence lookups. + auth: Idling auth worker processes are now stopped. This reduces error messages about MySQL disconnections. - director: With >2 directors ring syncing might have stalled during director connect/disconnect, causing logins to fail. - LMTP client/proxy: Fixed potential hanging when sending (big) mails - Compressed mails with external attachments (dbox + SIS + zlib) failed sometimes with bogus "cached message size wrong" errors. (I skipped rc4 release, because I accidentally tagged it too early in hg.) 
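The new Postfix "TCP map" support mentioned in the changelog above can be wired up roughly like this (an untested sketch under assumptions: the listener name and port are arbitrary, and whether virtual_mailbox_maps is the right place for the lookup should be verified against the wiki before relying on it):

# dovecot.conf: expose the auth process on a TCP port for Postfix lookups
service auth {
  inet_listener tcpmap {
    port = 12345
  }
}

# postfix main.cf: ask Dovecot whether each recipient user exists
virtual_mailbox_maps = tcp:127.0.0.1:12345

With something like this, Postfix can reject mail for unknown users during the SMTP transaction instead of accepting and later bouncing it.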
From tss at iki.fi Thu Jan 26 01:15:31 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 01:15:31 +0200 Subject: [Dovecot] FOSDEM Message-ID: <91D95FB6-D651-4A82-BC16-241F4DDAEF78@iki.fi>

I'll be at FOSDEM giving a small lightning talk about Dovecot: http://fosdem.org/2012/schedule/event/dovecot I'll also be around at FOSDEM the whole time, so if you're there and want to talk to me about anything, send me an email at some point. Poll to dovecot-news list people: Do you want to see this kind of news about my upcoming talks sent to the list? Probably happens a few times/year. A simple "yes" or "no" reply to this mail privately to me is enough.

From petr at bravenec.eu Wed Jan 25 23:17:38 2012 From: petr at bravenec.eu (Petr Bravenec) Date: Wed, 25 Jan 2012 22:17:38 +0100 Subject: [Dovecot] Dovecot antispam plugin got an empty message In-Reply-To: <4F202B86.9000102@whyscream.net> References: <201201250805.47430.dovecot@bravenec.eu> <4F202B86.9000102@whyscream.net> Message-ID: <7860878.6BHtT8IiNC@hrabos>

Thank you, I have reconfigured my dovecot on gentoo and it now looks like it works properly. Regards, Petr Bravenec

On Wednesday 25 January 2012 17:19:18 Tom Hendrikx wrote: > On 25-01-12 08:05, Petr Bravenec wrote: > > A few weeks ago I upgraded dovecot from 1.2 to 2.0.16 and the antispam plugin > > to 2.0_pre20101222. Since the upgrade I'm not able to move messages to > > my Junk folder. In the maillog I have found this message: > > > > dspam[25060]: empty message (no data received) > > Gentoo has included the antispam plugin from Johannes historically, but > added the fork by Eugene to support upgrades to dovecot 2.0. What is not > really made clear by the gentoo ebuild is that the forked plugin needs a > slightly different config. > > I use the config below with dovecot 2.0.17 and a git checkout for > dovecot-antispam: > > ===8<========
> plugin {
>   antispam_signature = X-DSPAM-Signature
>   antispam_signature_missing = move
>   antispam_spam_pattern_ignorecase = Junk;Junk.*
>   antispam_trash_pattern_ignorecase = Trash;Deleted Items;Deleted Messages
>
>   # Backend specific
>   antispam_backend = dspam
>   antispam_dspam_binary = /usr/bin/dspamc
>   antispam_dspam_args = --user;%u;--deliver=;--source=error;--signature=%%s
>   antispam_dspam_spam = --class=spam
>   antispam_dspam_notspam = --class=innocent
>   #antispam_dspam_result_header = X-DSPAM-Result
> }
>
> --
> Regards,
> Tom

From user+dovecot at localhost.localdomain.org Thu Jan 26 01:24:50 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Thu, 26 Jan 2012 00:24:50 +0100 Subject: [Dovecot] FOSDEM In-Reply-To: <91D95FB6-D651-4A82-BC16-241F4DDAEF78@iki.fi> References: <91D95FB6-D651-4A82-BC16-241F4DDAEF78@iki.fi> Message-ID: <4F208F42.4020007@localhost.localdomain.org>

On 01/26/2012 12:15 AM Timo Sirainen wrote: > I'll be at FOSDEM giving a small lightning talk about Dovecot: http://fosdem.org/2012/schedule/event/dovecot > > I'll also be around at FOSDEM the whole time, so if you're there and want to talk to me about anything, send me an email at some point. I'll be there too. > Poll to dovecot-news list people: Do you want to see this kind of news about my upcoming talks sent to the list? Probably happens a few times/year. A simple "yes" or "no" reply to this mail privately to me is enough. yes Regards, Pascal -- The trapper recommends today: f007ba11.1202600 at localdomain.org

From dmiller at amfes.com Thu Jan 26 01:37:16 2012 From: dmiller at amfes.com (Daniel L.
Miller) Date: Wed, 25 Jan 2012 15:37:16 -0800 Subject: [Dovecot] Crash on mail folder delete Message-ID: Attempting to delete a folder from within the trash folder using Thunderbird. I see the following in the log: Jan 25 15:36:22 bubba dovecot: imap(dmiller at amfes.com): Panic: file mailbox-list-fs.c: line 156 (fs_list_get_path): assertion failed: (mailbox_list_is_valid_pattern(_list, name)) Jan 25 15:36:22 bubba dovecot: imap(dmiller at amfes.com): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x3efba) [0x7f5fe9f86fba] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3f006) [0x7f5fe9f87006] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x17f5a) [0x7f5fe9f5ff5a] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(+0x47287) [0x7f5fea214287] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6c71) [0x7f5fe8b9cc71] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6d47) [0x7f5fe8b9cd47] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(acl_mailbox_allocated+0x9e) [0x7f5fe8ba061e] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(hook_mailbox_allocated+0x62) [0x7f5fea2085b2] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(mailbox_alloc+0xb2) [0x7f5fea2073d2] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](cmd_delete+0x72) [0x409922] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](command_exec+0x3d) [0x4109ad] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40f97e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40fa5d] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_handle_input+0x135) [0x40fc85] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_input+0x5f) [0x4105af] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7f5fe9f93406] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f5fe9f9448f] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f5fe9f933a8] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f5fe9f803b3] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](main+0x301) [0x418a61] -> /lib/libc.so.6(__libc_start_main+0xfe) [0x7f5fe9be3d8e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x4083f9] Jan 25 15:36:23 bubba dovecot: imap(dmiller at amfes.com): Panic: file mailbox-list-fs.c: line 156 (fs_list_get_path): assertion failed: (mailbox_list_is_valid_pattern(_list, name)) Jan 25 15:36:23 bubba dovecot: imap(dmiller at amfes.com): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x3efba) [0x7f33673dafba] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3f006) [0x7f33673db006] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x17f5a) [0x7f33673b3f5a] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(+0x47287) [0x7f3367668287] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6c71) [0x7f3365ff0c71] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6d47) [0x7f3365ff0d47] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(acl_mailbox_allocated+0x9e) [0x7f3365ff461e] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(hook_mailbox_allocated+0x62) [0x7f336765c5b2] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(mailbox_alloc+0xb2) [0x7f336765b3d2] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](cmd_delete+0x72) [0x409922] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](command_exec+0x3d) [0x4109ad] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40f97e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40fa5d] -> dovecot/imap [dmiller at amfes.com 
192.168.0.91 delete](client_handle_input+0x135) [0x40fc85] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_input+0x5f) [0x4105af] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7f33673e7406] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f33673e848f] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f33673e73a8] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f33673d43b3] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](main+0x301) [0x418a61] -> /lib/libc.so.6(__libc_start_main+0xfe) [0x7f3367037d8e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x4083f9] Jan 25 15:36:23 bubba dovecot: imap(dmiller at amfes.com): Fatal: master: service(imap): child 6074 killed with signal 6 (core dumps disabled) Jan 25 15:36:23 bubba dovecot: imap(dmiller at amfes.com): Fatal: master: service(imap): child 6589 killed with signal 6 (core dumps disabled) -- Daniel From doctor at doctor.nl2k.ab.ca Thu Jan 26 01:39:30 2012 From: doctor at doctor.nl2k.ab.ca (The Doctor) Date: Wed, 25 Jan 2012 16:39:30 -0700 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> Message-ID: <20120125233930.GA17183@doctor.nl2k.ab.ca> On Thu, Jan 26, 2012 at 01:03:58AM +0200, Timo Sirainen wrote: > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc5.tar.gz > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc5.tar.gz.sig > > I'm still lagging behind reading emails. v2.1.0 will be released after I've finished that. RC5 is already stable and used in production, but I want to make sure that I haven't missed anything important that was reported previously. Most of the recent fixed bugs existed also in v2.0 series. > > Changes since rc3: > > * Temporary authentication failures sent to IMAP/POP3 clients > now includes the server's hostname and timestamp. This makes it > easier to find the error message from logs. > > + auth: Implemented support for Postfix's "TCP map" sockets for > user existence lookups. > + auth: Idling auth worker processes are now stopped. This reduces > error messages about MySQL disconnections. > - director: With >2 directors ring syncing might have stalled during > director connect/disconnect, causing logins to fail. > - LMTP client/proxy: Fixed potential hanging when sending (big) mails > - Compressed mails with external attachments (dbox + SIS + zlib) failed > sometimes with bogus "cached message size wrong" errors. > > (I skipped rc4 release, because I accidentally tagged it too early in hg.) All right, can you get configure to detect --as-needed flag for ld? This is show stopping for me. -- Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca God, Queen and country! Never Satan President Republic! Beware AntiChrist rising! https://www.fullyfollow.me/rootnl2k Birthdate : 29 Jan 1969 Croydon, Surrey, UK From tss at iki.fi Thu Jan 26 01:42:11 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 01:42:11 +0200 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: <20120125233930.GA17183@doctor.nl2k.ab.ca> References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> Message-ID: On 26.1.2012, at 1.39, The Doctor wrote: > All right, can you get configure to detect --as-needed flag for ld? > > This is show stopping for me. It should only be used with GNU ld. What ld and OS do you use? 
configure --without-gnu-ld probably works also? From tss at iki.fi Thu Jan 26 01:42:46 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 01:42:46 +0200 Subject: [Dovecot] Crash on mail folder delete In-Reply-To: References: Message-ID: <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> On 26.1.2012, at 1.37, Daniel L. Miller wrote: > Attempting to delete a folder from within the trash folder using Thunderbird. I see the following in the log: Dovecot version? From dmiller at amfes.com Thu Jan 26 01:43:26 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Wed, 25 Jan 2012 15:43:26 -0800 Subject: [Dovecot] Crash on mail folder delete In-Reply-To: <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> References: <4F20922C.60206@amfes.com> <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> Message-ID: On 1/25/2012 3:42 PM, Timo Sirainen wrote: > On 26.1.2012, at 1.37, Daniel L. Miller wrote: > >> Attempting to delete a folder from within the trash folder using Thunderbird. I see the following in the log: > Dovecot version? > 2.1.rc3. I'm compiling rc5 now... -- Daniel From doctor at doctor.nl2k.ab.ca Thu Jan 26 02:01:26 2012 From: doctor at doctor.nl2k.ab.ca (The Doctor) Date: Wed, 25 Jan 2012 17:01:26 -0700 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> Message-ID: <20120126000126.GA19765@doctor.nl2k.ab.ca> On Thu, Jan 26, 2012 at 01:42:11AM +0200, Timo Sirainen wrote: > On 26.1.2012, at 1.39, The Doctor wrote: > > > All right, can you get configure to detect --as-needed flag for ld? > > > > This is show stopping for me. > > It should only be used with GNU ld. What ld and OS do you use? configure --without-gnu-ld probably works also? My /usr/bin/ld GNU ld version 2.13.1 Copyright 2002 Free Software Foundation, Inc. This program is free software; you may redistribute it under the terms of the GNU General Public License. This program has absolutely no warranty. on BSD/OS 4.3.1 -- Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca God, Queen and country! Never Satan President Republic! Beware AntiChrist rising! https://www.fullyfollow.me/rootnl2k Birthdate : 29 Jan 1969 Croydon, Surrey, UK From dmiller at amfes.com Thu Jan 26 02:04:08 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Wed, 25 Jan 2012 16:04:08 -0800 Subject: [Dovecot] Crash on mail folder delete In-Reply-To: <4F20939E.4010903@amfes.com> References: <4F20922C.60206@amfes.com> <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> <4F20939E.4010903@amfes.com> Message-ID: On 1/25/2012 3:43 PM, Daniel L. Miller wrote: > On 1/25/2012 3:42 PM, Timo Sirainen wrote: >> On 26.1.2012, at 1.37, Daniel L. Miller wrote: >> >>> Attempting to delete a folder from within the trash folder using >>> Thunderbird. I see the following in the log: >> Dovecot version? >> > 2.1.rc3. I'm compiling rc5 now... > Error still there on rc5. 
Jan 25 16:03:47 bubba dovecot: imap(dmiller at amfes.com): Panic: file mailbox-list-fs.c: line 156 (fs_list_get_path): assertion failed: (mailbox_list_is_valid_pattern(_list, name)) Jan 25 16:03:47 bubba dovecot: imap(dmiller at amfes.com): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x3f1ba) [0x7f7c3f0331ba] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3f206) [0x7f7c3f033206] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x1804a) [0x7f7c3f00c04a] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(+0x47317) [0x7f7c3f2c0317] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6c71) [0x7f7c3dc48c71] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6d47) [0x7f7c3dc48d47] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(acl_mailbox_allocated+0x9e) [0x7f7c3dc4c61e] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(hook_mailbox_allocated+0x62) [0x7f7c3f2b4662] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(mailbox_alloc+0xb2) [0x7f7c3f2b3482] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](cmd_delete+0x72) [0x409972] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](command_exec+0x3d) [0x4109fd] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40f9ce] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40faad] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_handle_input+0x135) [0x40fcd5] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_input+0x5f) [0x4105ff] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7f7c3f03f5d6] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f7c3f04065f] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f7c3f03f578] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f7c3f02c593] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](main+0x2a5) [0x418a55] -> /lib/libc.so.6(__libc_start_main+0xfe) [0x7f7c3ec8fd8e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x408449] Jan 25 16:03:48 bubba dovecot: imap(dmiller at amfes.com): Panic: file mailbox-list-fs.c: line 156 (fs_list_get_path): assertion failed: (mailbox_list_is_valid_pattern(_list, name)) Jan 25 16:03:48 bubba dovecot: imap(dmiller at amfes.com): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x3f1ba) [0x7f9e52e211ba] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3f206) [0x7f9e52e21206] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x1804a) [0x7f9e52dfa04a] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(+0x47317) [0x7f9e530ae317] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6c71) [0x7f9e51a36c71] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6d47) [0x7f9e51a36d47] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(acl_mailbox_allocated+0x9e) [0x7f9e51a3a61e] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(hook_mailbox_allocated+0x62) [0x7f9e530a2662] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(mailbox_alloc+0xb2) [0x7f9e530a1482] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](cmd_delete+0x72) [0x409972] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](command_exec+0x3d) [0x4109fd] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40f9ce] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40faad] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_handle_input+0x135) [0x40fcd5] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_input+0x5f) [0x4105ff] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) 
[0x7f9e52e2d5d6] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f9e52e2e65f] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f9e52e2d578] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f9e52e1a593] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](main+0x2a5) [0x418a55] -> /lib/libc.so.6(__libc_start_main+0xfe) [0x7f9e52a7dd8e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x408449]
Jan 25 16:03:48 bubba dovecot: imap(dmiller at amfes.com): Fatal: master: service(imap): child 3300 killed with signal 6 (core dumps disabled)
Jan 25 16:03:48 bubba dovecot: imap(dmiller at amfes.com): Fatal: master: service(imap): child 3267 killed with signal 6 (core dumps disabled)

--
Daniel

From jd.beaubien at gmail.com Thu Jan 26 03:40:16 2012
From: jd.beaubien at gmail.com (Jean-Daniel Beaubien)
Date: Wed, 25 Jan 2012 20:40:16 -0500
Subject: [Dovecot] Persistence of UIDs
In-Reply-To: <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi>
References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi>
Message-ID: 

On Wed, Jan 25, 2012 at 8:47 AM, Timo Sirainen wrote:
> On 25.1.2012, at 15.34, Jean-Daniel Beaubien wrote:
>>>> I am thinking about building some form of webmail specialized for some
>>>> specific business purpose and I am thinking of building a sort of cache in
>>>> a DB by storing the email addr, date, subject and UID for quick lookups and
>>>> searches of correspondence.
>>>
>>> Dovecot should already have such a cache. If there are problems with that, I
>>> think it would be better to fix it on Dovecot's side rather than adding a
>>> second cache.
>>
>> Very true. Have there been many search/index improvements since 1.0.9? I
>> read thru the release notes but nothing jumped out at me.
>
> Disk I/O usage is probably the same; CPU usage is less in newer versions.
>
>>>> I am doing this because I am having issues with multiple people searching
>>>> thru email folders that have 100k+ emails (which is another problem in
>>>> itself; searches don't seem to scale well when a folder goes above 60k
>>>> emails).
>>>
>>> Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.)
>>
>> I was under the impression that lucene was for full-text search. I'm just
>> doing simple from/to field searches.
>
> In v2.1 from/to fields are also searched via FTS.

Ok, I managed to compile 2.1 rc5 on an old ubuntu 8.04 without any issue. However, the config file is giving me a bit of a hard time; I'll figure this part out tomorrow.

I'd just like to confirm that there is no risk to the actual mail data if something is badly configured when I start dovecot 2.1. I am managing this old server in my spare time for a friend, so I don't want to lose 2 million+ emails and have to deal with those consequences :)

From gedalya at gedalya.net Thu Jan 26 06:31:20 2012
From: gedalya at gedalya.net (Gedalya)
Date: Wed, 25 Jan 2012 23:31:20 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
Message-ID: <4F20D718.9010805@gedalya.net>

Hello all,

I'm facing the need to migrate from a proprietary IMAP server to Dovecot. The migration must be as smooth and transparent as possible.

The mailbox format I would want to use is Maildir++.

The storage format used by the current server is unknown, and I don't look forward to trying to reverse-engineer it. This leaves me with the option of reading the mailboxes using IMAP.
There are tools like offlineimap or mbsync, and they do store the UID and UIDVALIDITY info. The last piece of the puzzle is a process to properly create the dovecot-uidlist and dovecot-uidvalidity files. So far I wasn't able to find anything on this. Are there any tips? Are there any tools available to do this job, or part of it?

In either case I need this done, and I'll have to create whatever I can't find available. If there isn't anything out there that I'm simply not yet aware of, then I'm looking at creating something like an offlineimap post-processing routine.

Any help would be much appreciated.

Gedalya

From arung at cdac.in Thu Jan 26 07:13:07 2012
From: arung at cdac.in (Arun Gupta)
Date: Thu, 26 Jan 2012 10:43:07 +0530 (IST)
Subject: [Dovecot] dovecot Digest, Vol 105, Issue 57
In-Reply-To: 
References: 
Message-ID: 

Dear Sir,

Thanks for your reply, and I do agree with your point about rejecting mail for users over quota. But I would rather not do it that way if possible: without rejecting mails, is it possible to deliver the mails held in the spool to the user's home directory automatically once the quota has been increased? Kindly provide a solution; I will be highly obliged to all of you.

--
Thanks & Regards,

Arun Kumar Gupta

> format) automatically
> Message-ID: 
> Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII
>
> Hi,
>
> I am using dovecot 2.0.16, and have assigned a global procmailrc
> (/etc/procmailrc) which delivers mails to the user's home directory in
> maildir format. I also assigned quota to users through the setquota
> (edquota) command. If the quota is exceeded, the user's mail is stored in
> /var/spool/mail/user. After increasing the quota, how can these mails be
> delivered to the user's home dir in maildir format automatically?
>
> Thanks & Regards,
>
> Arun Kumar Gupta

> Best practice is to reject mail for users over quota (as long as you do
> this during the smtp transaction... Otherwise, what's the point? (they can
> still fill up your server)...
>
> -- Best regards, Charles

--
This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean.

From mark.zealey at webfusion.com Thu Jan 26 12:14:57 2012
From: mark.zealey at webfusion.com (Mark Zealey)
Date: Thu, 26 Jan 2012 10:14:57 +0000
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
Message-ID: <4F2127A1.2010302@webfusion.com>

Hi there,

I'm using dovecot 2.0.16 with a mysql user database. From time to time, when we have a big influx of messages (perhaps more than 30 concurrent rcpt to:<> sessions at the same time, so no auth-workers free?) or when we have a transient issue connecting to the database server, we see the message:

Jan 25 16:38:23 mailbox dovecot: auth-worker: sql(foo at bar.com,1.2.3.4): Unknown user

and the lmtp process returns:

550 5.1.1 User doesn't exist: foo at bar.com

This would be correct for a permanent error where the user doesn't exist in our database; however, it seems to be doing this on transient errors too. Is this an issue with the code or perhaps some setting I have missed?
Thanks, Mark From CMarcus at Media-Brokers.com Thu Jan 26 14:03:56 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Thu, 26 Jan 2012 07:03:56 -0500 Subject: [Dovecot] Persistence of UIDs In-Reply-To: References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi> Message-ID: <4F21412C.9060105@Media-Brokers.com> On 2012-01-25 8:40 PM, Jean-Daniel Beaubien wrote: > I'd just like to confirm that there is no risk to the actual mail data is > ever something is badly configured when I start dovecot 2.1. I am managing > this old server on my spare time for a friend, so I don't want to loose > 2million+ emails and have to deal with those consequences:) There are *always* risks associated with things like this... maybe the chance is low, but no guarantees... As always, it is *your* responsibility to *backup* *first*... -- Best regards, Charles From CMarcus at Media-Brokers.com Thu Jan 26 14:06:44 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Thu, 26 Jan 2012 07:06:44 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F20D718.9010805@gedalya.net> References: <4F20D718.9010805@gedalya.net> Message-ID: <4F2141D4.806@Media-Brokers.com> On 2012-01-25 11:31 PM, Gedalya wrote: > This leaves me with the option of reading the mailboxes using IMAP. > There are tools like offlineimap or mbsync, Not familiar with those, but I think imapsync will do what you want? http://imapsync.lamiral.info/ I do see that it references those two though... -- Best regards, Charles From support at palatineweb.com Thu Jan 26 14:09:09 2012 From: support at palatineweb.com (Palatine Web Support) Date: Thu, 26 Jan 2012 12:09:09 +0000 Subject: [Dovecot] =?utf-8?q?Imap_Quota_Exceeded_-_But_Still_Receiving_Ema?= =?utf-8?q?ils=3F?= In-Reply-To: References: <4F1F9567.1030804@amfes.com> <747f97172fd71affd2ee5b5ebcc5d16c@palatineweb.com> Message-ID: <32bb69a587decb1d09d618792dc1ed8d@palatineweb.com> On 2012-01-25 17:01, Daniel L. Miller wrote: > On 1/25/2012 2:01 AM, Palatine Web Support wrote: >> On 2012-01-25 05:38, Daniel L. Miller wrote: >>> On 1/24/2012 8:35 AM, Palatine Web Support wrote: >>>> >>>> Here is my dovecot config: >>>> >>>> plugin { >>>> quota = maildir:User Quota >>>> quota_rule2 = Trash:storage=+100M >>>> } >>> [..] >>>> >>>> So it picks up my set quota of 3MB but dovecot is not rejecting >>>> emails if I am over my quota. >>>> >>>> Can anyone help? >>>> >>> Is the quota plugin being loaded? What is the output of: >>> >>> doveconf | grep -B 2 plug >> >> The modules are being loaded. 
From the log file with debugging >> enabled: >> >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Loading modules >> from directory: /usr/lib/dovecot/modules/imap >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Module loaded: >> /usr/lib/dovecot/modules/imap/lib10_quota_plugin.so >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Module loaded: >> /usr/lib/dovecot/modules/imap/lib11_imap_quota_plugin.so >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Effective uid=150, >> gid=8, home=/var/vmail/xxx.com/support >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota root: >> name=User Quota backend=dirsize args= >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota rule: >> root=User Quota mailbox=* bytes=3145728 messages=0 >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota rule: >> root=User Quota mailbox=Trash bytes=104857600 messages=0 >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): maildir: >> data=/var/vmail/xxx.com/support >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): maildir++: >> root=/var/vmail/xxx.com/support, index=, control=, >> inbox=/var/vmail/xxx.com/support >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Namespace : Using >> permissions from /var/vmail/xxx.com/support: mode=0700 gid=-1 >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Disconnected: >> Logged out bytes=82/573 >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Disconnected: >> Logged out bytes=269/8243 >> > > I don't know if it makes any difference, but in your config file, try > changing: > plugin { > quota = maildir:User Quota > > to > > plugin { > quota = maildir:User quota > > (lowercase the "quota") The quota is working fine now. The problem was I had my transport agent set to virtual when it should have been set to dovecot. Thanks. From tss at iki.fi Thu Jan 26 14:21:32 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 14:21:32 +0200 Subject: [Dovecot] Persistence of UIDs In-Reply-To: <4F21412C.9060105@Media-Brokers.com> References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi> <4F21412C.9060105@Media-Brokers.com> Message-ID: <055C0680-BAF5-4617-918D-E12C09266006@iki.fi> On 26.1.2012, at 14.03, Charles Marcus wrote: > On 2012-01-25 8:40 PM, Jean-Daniel Beaubien wrote: >> I'd just like to confirm that there is no risk to the actual mail data is >> ever something is badly configured when I start dovecot 2.1. I am managing >> this old server on my spare time for a friend, so I don't want to loose >> 2million+ emails and have to deal with those consequences:) > > There are *always* risks associated with things like this... maybe the chance is low, but no guarantees... Risks of some trouble, yes .. but you have to be highly creative if you want to accidentally lose any mails. I can't think of any way to do that without explicitly deleting files from filesystem or via IMAP/POP3 client. From tss at iki.fi Thu Jan 26 14:27:15 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 14:27:15 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F20D718.9010805@gedalya.net> References: <4F20D718.9010805@gedalya.net> Message-ID: On 26.1.2012, at 6.31, Gedalya wrote: > I'm facing the need to migrate from a proprietary IMAP server to Dovecot. The migration must be as smooth and transparent as possible. > > The mailbox format I would want to use is Maildir++. 
> The storage format used by the current server is unknown, and I don't look forward to trying to reverse-engineer it. This leaves me with the option of reading the mailboxes using IMAP. There are tools like offlineimap or mbsync, and they do store the UID and UIDVALIDITY info. The last piece of the puzzle is a process to properly create the dovecot-uidlist and dovecot-uidvalidity files. So far I wasn't able to find anything on this. Are there any tips? Are there any tools available to do this job, or part of it?

Get Dovecot v2.1 and configure it to work. Then for migration add to dovecot.conf:

imapc_host = imap.example.com
imapc_port = 993
imapc_ssl = imaps
imapc_ssl_ca_dir = /etc/ssl/certs
mail_prefetch_count = 50

And do the migration one user at a time:

doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc:

From tss at iki.fi Thu Jan 26 14:31:43 2012
From: tss at iki.fi (Timo Sirainen)
Date: Thu, 26 Jan 2012 14:31:43 +0200
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
In-Reply-To: <4F2127A1.2010302@webfusion.com>
References: <4F2127A1.2010302@webfusion.com>
Message-ID: <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi>

On 26.1.2012, at 12.14, Mark Zealey wrote:

> I'm using dovecot 2.0.16 with a mysql user database. From time to time when we have a big influx of messages (perhaps more than 30 concurrent rcpt to:<> sessions at the same time so no auth-workers free?) or when we have a transient issue connecting to the database server, we see the message:
>
> Jan 25 16:38:23 mailbox dovecot: auth-worker: sql(foo at bar.com,1.2.3.4): Unknown user

This happens only when the SQL query doesn't return any rows, but does return success.

> and the lmtp process returns:
>
> 550 5.1.1 User doesn't exist: foo at bar.com
>
> This would be correct for a permanent error where the user doesn't exist in our database, however it seems to be doing this on transient errors too. Is this an issue with the code or perhaps some setting I have missed?

The problem is that temporary errors are returning "unknown user". Can you reproduce this somehow? Like if you stop MySQL, does it always return that "Unknown user"?

From ar-dovecotlist at acrconsulting.co.uk Thu Jan 26 14:38:17 2012
From: ar-dovecotlist at acrconsulting.co.uk (Andrew Richards)
Date: 26 Jan 2012 12:38:17 +0000
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F20D718.9010805@gedalya.net>
References: <4F20D718.9010805@gedalya.net>
Message-ID: <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk>

On Thursday 26 January 2012 04:31:20 Gedalya wrote:
> I'm facing the need to migrate from a proprietary IMAP server to
> Dovecot. The migration must be as smooth and transparent as possible.

Ignoring the migration of individual mailboxes addressed in other replies, I trust you've met Perdition - very useful for this sort of situation, http://horms.net/projects/perdition/ - to provide an IMAP "server" (actually a proxy) that knows where the real mailboxes are located, and directs connections accordingly. That way you can switch users over one-by-one as you migrate them, which is helpful for testing a few mailboxes first without affecting the bulk of users' mailboxes at all.

cheers,

Andrew.

From gedalya at gedalya.net Thu Jan 26 15:11:32 2012
From: gedalya at gedalya.net (Gedalya)
Date: Thu, 26 Jan 2012 08:11:32 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F2141D4.806@Media-Brokers.com>
References: <4F20D718.9010805@gedalya.net> <4F2141D4.806@Media-Brokers.com>
Message-ID: <4F215104.4000409@gedalya.net>

On 01/26/2012 07:06 AM, Charles Marcus wrote:
> On 2012-01-25 11:31 PM, Gedalya wrote:
>> This leaves me with the option of reading the mailboxes using IMAP.
>> There are tools like offlineimap or mbsync,
>
> Not familiar with those, but I think imapsync will do what you want?
>
> http://imapsync.lamiral.info/
>
> I do see that it references those two though...

As I understand, there is no way an IMAP-to-IMAP process can preserve UIDs, since new UIDs are assigned for every message by the target server.
Also, imapsync found 0 messages in all mailboxes on my evil to-be-eliminated server, something I didn't bother troubleshooting much.
Timo's idea sounds interesting, time to look into 2.1!

From gedalya at gedalya.net Thu Jan 26 15:18:32 2012
From: gedalya at gedalya.net (Gedalya)
Date: Thu, 26 Jan 2012 08:18:32 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk>
References: <4F20D718.9010805@gedalya.net> <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk>
Message-ID: <4F2152A8.2040302@gedalya.net>

On 01/26/2012 07:38 AM, Andrew Richards wrote:
> On Thursday 26 January 2012 04:31:20 Gedalya wrote:
>> I'm facing the need to migrate from a proprietary IMAP server to
>> Dovecot. The migration must be as smooth and transparent as possible.
> Ignoring the migration of individual mailboxes addressed in other replies, I
> trust you've met Perdition - very useful for this sort of situation,
> http://horms.net/projects/perdition/ - to provide an IMAP "server" (actually
> a proxy) that knows where the real mailboxes are located, and directs
> connections accordingly. That way you can switch users over one-by-one as
> you migrate them, which is helpful for testing a few mailboxes first without
> affecting the bulk of users' mailboxes at all.
>
> cheers,
>
> Andrew.

Sounds very cool. I already have dovecot set up as a proxy, working, and it should allow me to forcefully disconnect users and lock them out while they are being migrated; then, once they are done, they'll be served locally rather than proxied. My main problem is that most connections are simply coming directly to the old server, using the deprecated hostname. I need all clients to use the right hostnames, or clog up this new server with redirectors and proxies for all the junk done on the old server... bummer.

What I might want to look into is actually setting up a proxy like this but on the evil (windows) server - to get *him* to pass on those requests he shouldn't be handling.

From CMarcus at Media-Brokers.com Thu Jan 26 16:11:27 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 26 Jan 2012 09:11:27 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
-- Best regards, Charles From Mark.Zealey at webfusion.com Thu Jan 26 16:37:49 2012 From: Mark.Zealey at webfusion.com (Mark Zealey) Date: Thu, 26 Jan 2012 14:37:49 +0000 Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection In-Reply-To: <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi> References: <4F2127A1.2010302@webfusion.com>, <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi> Message-ID: I've tried reproducing by having long running auth queries in the sql and KILLing them on the server, restarting the mysql service, and setting max auth workers to 1 and running 2 sessions at the same time (with long-running auth queries), but to no effect. There must be something else going on here; I saw it in particular when exim on our frontend servers had queued a large number of messages and suddenly released them all at once hence the auth-worker hypothesis although the log messages do not support this. I'll try to see if I can trigger this manually although we have been doing some massively parallel testing previously and not seen this. Mark ________________________________________ From: Timo Sirainen [tss at iki.fi] Sent: 26 January 2012 12:31 To: Mark Zealey Cc: dovecot at dovecot.org Subject: Re: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection On 26.1.2012, at 12.14, Mark Zealey wrote: > I'm using dovecot 2.0.16 with a mysql user database. From time to time when we have a big influx of messages (perhaps more than 30 concurrent rcpt to:<> sessions at the same time so no auth-workers free?) or when we have a transient issue connecting to the database server, we see the message: > > Jan 25 16:38:23 mailbox dovecot: auth-worker: sql(foo at bar.com,1.2.3.4): Unknown user This happens only when the SQL query doesn't return any rows, but does return success. > and the lmtp process returns: > > 550 5.1.1 User doesn't exist: foo at bar.com > > This would be correct for a permanent error where the user doesn't exist in our database, however it seems to be doing this on transient errors too. Is this an issue with the code or perhaps some setting I have missed? The problem is that temporary errors are returning "unknown user". Can you reproduce this somehow? Like if you stop MySQL it always returns that "Unknown user"? From lists at wildgooses.com Thu Jan 26 18:02:28 2012 From: lists at wildgooses.com (Ed W) Date: Thu, 26 Jan 2012 16:02:28 +0000 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F2152A8.2040302@gedalya.net> References: <4F20D718.9010805@gedalya.net> <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk> <4F2152A8.2040302@gedalya.net> Message-ID: <4F217914.1050501@wildgooses.com> Hi > Sounds very cool. I already have dovecot set up as a proxy, working, > and it should allow me to forcefully disconnect users and lock them > out while they are being migrated and then once they are done they'll > be served locally rather than proxied. My main problem is that most > connections are simply coming directly to the old server, using the > deprecated hostname. I need all clients to use the right hostnames, or > clog up this new server with redirectors and proxies for all the junk > done on the old server.. bummer. Why not put the old server IP to redirect to the new machine, then give the old machine some new temp IP in order to proxy back to it? That way you can do the proxying on the dovecot machine, which as you already established is working ok? 
Good luck

Ed W

From lists at wildgooses.com Thu Jan 26 18:06:24 2012
From: lists at wildgooses.com (Ed W)
Date: Thu, 26 Jan 2012 16:06:24 +0000
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
In-Reply-To: 
References: <4F2127A1.2010302@webfusion.com>, <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi>
Message-ID: <4F217A00.8090504@wildgooses.com>

On 26/01/2012 14:37, Mark Zealey wrote:
> I've tried reproducing by having long-running auth queries in the SQL and KILLing them on the server, restarting the mysql service, and setting max auth workers to 1 and running 2 sessions at the same time (with long-running auth queries), but to no effect. There must be something else going on here; I saw it in particular when exim on our frontend servers had queued a large number of messages and suddenly released them all at once - hence the auth-worker hypothesis - although the log messages do not support this. I'll try to see if I can trigger this manually, although we have been doing some massively parallel testing previously and not seen this.

Could it be a *timeout* rather than a lack of worker processes? The theory would be that disk starvation causes other processes to take a long time to respond; hence the worker is *alive*, but doesn't return a response quickly enough, which in turn causes the "unknown user" message?

You could try a different disk IO scheduler, or ionice, to control the effect of these big bursts of disk activity on other processes? (Most MTA programs such as postfix and qmail do a lot of fsyncs - this will cause a lot of IO activity and could easily starve other processes on the same box?)

Good luck

Ed W

From gedalya at gedalya.net Thu Jan 26 18:30:53 2012
From: gedalya at gedalya.net (Gedalya)
Date: Thu, 26 Jan 2012 11:30:53 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F217914.1050501@wildgooses.com>
References: <4F20D718.9010805@gedalya.net> <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk> <4F2152A8.2040302@gedalya.net> <4F217914.1050501@wildgooses.com>
Message-ID: <4F217FBD.6070908@gedalya.net>

On 01/26/2012 11:02 AM, Ed W wrote:
> Hi
>
>> Sounds very cool. I already have dovecot set up as a proxy, working,
>> and it should allow me to forcefully disconnect users and lock them
>> out while they are being migrated; then, once they are done, they'll
>> be served locally rather than proxied. My main problem is that most
>> connections are simply coming directly to the old server, using the
>> deprecated hostname. I need all clients to use the right hostnames,
>> or clog up this new server with redirectors and proxies for all the
>> junk done on the old server... bummer.
>
> Why not point the old server's IP at the new machine, then give the old
> machine some new temp IP in order to proxy back to it? That way you can
> do the proxying on the dovecot machine, which as you already established
> is working ok?
>
> Good luck
>
> Ed W

Yeap, that's what I'm going to do, except that I would have to proxy more than just IMAP and POP - it's a one-does-it-all kind of machine: accepting mail delivered from the outside, relaying outgoing mail, doing webmail, and doing all these things very poorly... I have the choice of forcing all users to change to the new, dedicated servers doing these things, or reimplementing / proxying all of this on my new dovecot server, which I so desperately want to keep neat and tidy...
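[For illustration: the lock - migrate - unlock flow described above can be scripted around doveadm. This is only a minimal sketch; the users table, its locked/migrated columns, and the assumption that the passdb proxies while migrated=0 are all hypothetical, not Gedalya's actual schema:]

#!/bin/sh
# migrate-user.sh USER PASSWORD  (hypothetical helper, not a real tool)
user="$1"; pass="$2"

# 1. Lock the account so the proxy rejects new logins (assumed schema).
mysql email -e "UPDATE users SET locked=1 WHERE userid='$user'"

# 2. Drop any live sessions (doveadm kick relies on the anvil service).
doveadm kick "$user"

# 3. Pull the mailbox from the old server, preserving UIDs.
doveadm -o imapc_user="$user" -o imapc_password="$pass" backup -u "$user" -R imapc:

# 4. Flag the account as migrated so the passdb stops proxying, then unlock it.
mysql email -e "UPDATE users SET locked=0, migrated=1 WHERE userid='$user'"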
From tss at iki.fi Thu Jan 26 18:50:06 2012
From: tss at iki.fi (Timo Sirainen)
Date: Thu, 26 Jan 2012 18:50:06 +0200
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
In-Reply-To: <4F217A00.8090504@wildgooses.com>
References: <4F2127A1.2010302@webfusion.com>, <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi> <4F217A00.8090504@wildgooses.com>
Message-ID: <85410DCB-B5A8-44F3-A942-031C5E4C932C@iki.fi>

On 26.1.2012, at 18.06, Ed W wrote:

> Could it be a *timeout* rather than a lack of worker processes?

The message in the log was "Unknown user". The only reason this happens is if the MySQL library's query functions returned success without any rows. No timeouts, crashes, or anything else can give that error message. So I'd say the problem is either in the MySQL library or the MySQL server.

Try if the attached patch gives any crashes. If it does, it means that the mysql library returned mysql_errno()=0 (success) even though it should have returned a failure. Or you could even change it to only:

i_assert(result->result != NULL);

if you're not using MySQL for anything other than auth. The other possibility is if in driver_mysql_result_next_row() the mysql_fetch_row() returns NULL, but also there I'm checking mysql_errno().

-------------- next part --------------
A non-text attachment was scrubbed...
Name: diff
Type: application/octet-stream
Size: 435 bytes
Desc: not available
URL: 
-------------- next part --------------

From lists at wildgooses.com Thu Jan 26 22:08:18 2012
From: lists at wildgooses.com (Ed W)
Date: Thu, 26 Jan 2012 20:08:18 +0000
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F217FBD.6070908@gedalya.net>
References: <4F20D718.9010805@gedalya.net> <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk> <4F2152A8.2040302@gedalya.net> <4F217914.1050501@wildgooses.com> <4F217FBD.6070908@gedalya.net>
Message-ID: <4F21B2B2.6030505@wildgooses.com>

Hi

> Yeap, that's what I'm going to do, except that I would have to proxy
> more than just IMAP and POP - it's a one-does-it-all kind of machine:
> accepting mail delivered from the outside, relaying outgoing mail,
> doing webmail, and doing all these things very poorly... I have the
> choice of forcing all users to change to the new, dedicated servers
> doing these things, or reimplementing / proxying all of this on my new
> dovecot server, which I so desperately want to keep neat and tidy...

In that case I would suggest perhaps that the IP is taken over by a dedicated firewall box (running the OS of your choice). The firewall could then be used to port-forward the services to the individual machines responsible for each service. This would give you the benefit that you could easily move other services off/around. We are clearly off topic for dovecot...

Plenty of good firewall options. If you want small, compact and low power, then you can pick up a bunch of Intel-compatible boards around the low couple hundred £s mark fairly easily. Run your favourite distro and firewall on them. If you hadn't seen them before, I quite like Lanner for appliances, eg: http://www.lannerinc.com/x86_Network_Appliances/x86_Desktop_Appliances

For example, if you added a small appliance running linux which owns that IP, then you could add intrusion detection, bounce the web traffic to the windows box (or even just certain URLs - other URLs could go to some hypothetical linux box, etc), port-forward the mail to the new dovecot box, etc, etc. Incremental price would be surprisingly low, but lots of extra flexibility?
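[For illustration: the port-forwarding described above amounts to a few DNAT rules on such a box. A minimal sketch with made-up addresses only - 192.0.2.10 as the old public IP, 10.0.0.2 as the new dovecot box, 10.0.0.3 as the old server; none of these are the actual hosts discussed here:]

# On the firewall appliance that now owns 192.0.2.10 (example addresses only)
iptables -t nat -A PREROUTING -d 192.0.2.10 -p tcp --dport 143 -j DNAT --to-destination 10.0.0.2:143   # IMAP -> new dovecot box
iptables -t nat -A PREROUTING -d 192.0.2.10 -p tcp --dport 110 -j DNAT --to-destination 10.0.0.2:110   # POP3 -> new dovecot box
iptables -t nat -A PREROUTING -d 192.0.2.10 -p tcp --dport 80 -j DNAT --to-destination 10.0.0.3:80     # webmail stays on the old server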
Just a thought.

Good luck

Ed W

From stan at hardwarefreak.com Thu Jan 26 22:51:02 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Thu, 26 Jan 2012 14:51:02 -0600
Subject: [Dovecot] v2.1.rc5 released
In-Reply-To: <20120126000126.GA19765@doctor.nl2k.ab.ca>
References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> <20120126000126.GA19765@doctor.nl2k.ab.ca>
Message-ID: <4F21BCB6.6030908@hardwarefreak.com>

On 1/25/2012 6:01 PM, The Doctor wrote:
> BSD/OS 4.3.1

A defunct/dead operating system, last released in 2003, support withdrawn in 2004. BSDI went belly up. Wind River acquired and then killed BSD/OS. You're using a dead, 9-year-old OS that hasn't seen official updates for 8 years. Do you think it's fair to ask application developers to support the oddities of your one-of-a-kind, ancient patchwork of a platform?

We've had this discussion before, and I don't believe you ever provided a sane rationale for continuing to use an OS that's been officially dead for 8 years. What is the reason you are unable or unwilling to migrate to a newer and supported no-cost BSD variant, or Linux distro? You're trying to run bleeding-edge Dovecot, compiling it from source, on an 8-year-old platform...

--
Stan

From Mark.Zealey at webfusion.com Thu Jan 26 23:35:24 2012
From: Mark.Zealey at webfusion.com (Mark Zealey)
Date: Thu, 26 Jan 2012 21:35:24 +0000
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
In-Reply-To: <420B5E34BFEE9646B7198438F9978AE223E4CB48@mail01.internal.webfusion.com>
References: <4F2127A1.2010302@webfusion.com>, <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi>, <420B5E34BFEE9646B7198438F9978AE223E4CB48@mail01.internal.webfusion.com>
Message-ID: 

Hi Timo, thanks for the patch; I have now analyzed network dumps & discovered that the cause is actually our frontend mail servers, not dovecot - we were delivering to the wrong lmtp port, which we then use in the mysql query, hence getting empty records. Sorry about this!

Mark

________________________________________
From: Mark Zealey
Sent: 26 January 2012 14:37
To: Timo Sirainen
Cc: dovecot at dovecot.org
Subject: RE: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection

I've tried reproducing by having long-running auth queries in the SQL and KILLing them on the server, restarting the mysql service, and setting max auth workers to 1 and running 2 sessions at the same time (with long-running auth queries), but to no effect. There must be something else going on here; I saw it in particular when exim on our frontend servers had queued a large number of messages and suddenly released them all at once - hence the auth-worker hypothesis - although the log messages do not support this. I'll try to see if I can trigger this manually, although we have been doing some massively parallel testing previously and not seen this.

Mark

________________________________________
From: Timo Sirainen [tss at iki.fi]
Sent: 26 January 2012 12:31
To: Mark Zealey
Cc: dovecot at dovecot.org
Subject: Re: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection

On 26.1.2012, at 12.14, Mark Zealey wrote:

> I'm using dovecot 2.0.16 with a mysql user database. From time to time when we have a big influx of messages (perhaps more than 30 concurrent rcpt to:<> sessions at the same time so no auth-workers free?)
or when we have a transient issue connecting to the database server, we see the message: > > Jan 25 16:38:23 mailbox dovecot: auth-worker: sql(foo at bar.com,1.2.3.4): Unknown user This happens only when the SQL query doesn't return any rows, but does return success. > and the lmtp process returns: > > 550 5.1.1 User doesn't exist: foo at bar.com > > This would be correct for a permanent error where the user doesn't exist in our database, however it seems to be doing this on transient errors too. Is this an issue with the code or perhaps some setting I have missed? The problem is that temporary errors are returning "unknown user". Can you reproduce this somehow? Like if you stop MySQL it always returns that "Unknown user"? From gedalya at gedalya.net Fri Jan 27 01:42:05 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 18:42:05 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: References: <4F20D718.9010805@gedalya.net> Message-ID: <4F21E4CD.3070001@gedalya.net> On 01/26/2012 07:27 AM, Timo Sirainen wrote: > On 26.1.2012, at 6.31, Gedalya wrote: > >> I'm facing the need to migrate from a proprietary IMAP server to Dovecot. The migration must be as smooth and transparent as possible. >> >> The mailbox format I would want to use is Maildir++. >> >> The storage format used by the current server is unknown, and I don't look forward to trying to reverse-engineer it. This leaves me with the option of reading the mailboxes using IMAP. There are tools like offlineimap or mbsync, and they do store the UID and UIDVALIDITY info. The last piece of the puzzle is a process to properly create the dovecot-uidlist and dovecot-uidvalidity files. So far I wasn't able to find anything on this. Are there any tips? Are there any tools available to do this job, or part of it? > Get Dovecot v2.1 and configure it to work. Then for migration add to dovecot.conf: > > imapc_host = imap.example.com > imapc_port = 993 > imapc_ssl = imaps > imapc_ssl_ca_dir = /etc/ssl/certs > mail_prefetch_count = 50 > > And do the migration one user at a time: > > doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc: > Still working on it on my side, but for now: # doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: Segmentation fault syslog: Jan 26 18:34:29 imap01 kernel: [ 9055.766548] doveadm[8015]: segfault at 4 ip b7765752 sp bff90600 error 4 in libdovecot-storage.so.0.0.0[b769a000+ff000] Jan 26 18:34:53 imap01 kernel: [ 9078.883024] doveadm[8046]: segfault at 4 ip b7828752 sp bf964450 error 4 in libdovecot-storage.so.0.0.0[b775d000+ff000] (I tried twice) Also, I happen to have no idea what I'm doing, but still, segfault.. This is a debian testing "wheezy" machine I put up to do the initial playing around, i386, using Dovecot prebuilt binary packages from http://xi.rename-it.nl/debian/pool/testing-auto/dovecot-2.1/ From tss at iki.fi Fri Jan 27 01:46:16 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 27 Jan 2012 01:46:16 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? 
In-Reply-To: <4F21E4CD.3070001@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> Message-ID: On 27.1.2012, at 1.42, Gedalya wrote: >> doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc: >> > Still working on it on my side, but for now: > > # doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: > Segmentation fault gdb backtrace would be helpful. You should be able to get that by running (as root): gdb --args doveadm ... bt full (assuming you haven't changed base_dir, otherwise it might fail) From gedalya at gedalya.net Fri Jan 27 02:00:44 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 19:00:44 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> Message-ID: <4F21E92C.4090509@gedalya.net> On 01/26/2012 06:46 PM, Timo Sirainen wrote: > On 27.1.2012, at 1.42, Gedalya wrote: > >>> doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc: >>> >> Still working on it on my side, but for now: >> >> # doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >> Segmentation fault > gdb backtrace would be helpful. You should be able to get that by running (as root): > > gdb --args doveadm ... > bt full > > (assuming you haven't changed base_dir, otherwise it might fail) > Does this help? GNU gdb (GDB) 7.3-debian Copyright (C) 2011 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "i486-linux-gnu". For bug reporting instructions, please see: ... Reading symbols from /usr/bin/doveadm...Reading symbols from /usr/lib/debug/usr/bin/doveadm...done. done. (gdb) run Starting program: /usr/bin/doveadm -o imapc_user=jedi at example.com -o imapc_password=**** backup -u jedi at example.com -R imapc: [Thread debugging using libthread_db enabled] Program received signal SIGSEGV, Segmentation fault. mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 213 mailbox-log.c: No such file or directory. in mailbox-log.c (gdb) bt full #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 No locals. #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 iter = 0x80cbd90 #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, worker=0x80c3138) at dsync-worker-local.c:316 log = iter = 0x8 rec = #3 dsync_worker_get_mailbox_log (worker=0x80c3138) at dsync-worker-local.c:386 ns = 0x80a5f90 ret = #4 0x0807032f in dsync_worker_get_mailbox_log (worker=0x80c3138) at dsync-worker-local.c:372 No locals. #5 local_worker_mailbox_iter_init (_worker=0x80c3138) at dsync-worker-local.c:410 worker = 0x80c3138 iter = 0x80b6920 patterns = {0x8076124 "*", 0x0} #6 0x08065a2f in dsync_brain_mailbox_list_init (brain=0x80b68e8, worker=0x80c3138) at dsync-brain.c:141 list = 0x80c5940 pool = 0x80c5930 #7 0x0806680f in dsync_brain_sync (brain=0x80b68e8) at dsync-brain.c:827 No locals. #8 dsync_brain_sync (brain=0x80b68e8) at dsync-brain.c:813 No locals. 
#9 0x08067038 in dsync_brain_sync_all (brain=0x80b68e8) at dsync-brain.c:895 old_state = DSYNC_STATE_GET_MAILBOXES __FUNCTION__ = "dsync_brain_sync_all" #10 0x08064cfd in cmd_dsync_run (_ctx=0x8098ec0, user=0x80a9e98) at doveadm-dsync.c:237 ctx = 0x8098ec0 worker1 = 0x80c3138 worker2 = 0x80aedb8 workertmp = brain = 0x80b68e8 #11 0x0805371e in doveadm_mail_next_user (error_r=0xbffffa1c, ctx=0x8098ec0, input=) at doveadm-mail.c:221 ret = #12 doveadm_mail_next_user (ctx=0x8098ec0, input=, error_r=0xbffffa1c) at doveadm-mail.c:187 error = ret = #13 0x08053b2e in doveadm_mail_single_user (ctx=0x8098ec0, input=0xbffffa6c) at doveadm-mail.c:242 ---Type to continue, or q to quit--- error = 0x0 ret = __FUNCTION__ = "doveadm_mail_single_user" #14 0x08053f58 in doveadm_mail_cmd (cmd=0x8096f60, argc=, argv=0x80901e4) at doveadm-mail.c:425 input = {module = 0x0, service = 0x8076b3a "doveadm", username = 0x8090242 "jedi at example.com", local_ip = {family = 0, u = { ip6 = {__in6_u = {__u6_addr8 = '\000' , __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, ip4 = {s_addr = 0}}}, remote_ip = {family = 0, u = {ip6 = {__in6_u = {__u6_addr8 = '\000' , __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, ip4 = {s_addr = 0}}}, local_port = 0, remote_port = 0, userdb_fields = 0x0, flags_override_add = 0, flags_override_remove = 0, no_userdb_lookup = 0} ctx = 0x8098ec0 getopt_args = wildcard_user = 0x0 c = #15 0x080543d9 in doveadm_mail_try_run (cmd_name=0x8090238 "backup", argc=5, argv=0x80901d4) at doveadm-mail.c:482 cmd__foreach_end = 0x8096f9c cmd = 0x8096f60 cmd_name_len = 6 __FUNCTION__ = "doveadm_mail_try_run" #16 0x08053347 in main (argc=5, argv=0x80901d4) at doveadm.c:352 cmd_name = i = quick_init = false c = From tss at iki.fi Fri Jan 27 02:06:22 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 27 Jan 2012 02:06:22 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21E92C.4090509@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> Message-ID: On 27.1.2012, at 2.00, Gedalya wrote: >>> # doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >>> Segmentation fault >> gdb backtrace would be helpful. You should be able to get that by running (as root): >> > 213 mailbox-log.c: No such file or directory. > in mailbox-log.c > (gdb) bt full > #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 > No locals. > #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 > iter = 0x80cbd90 > #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, worker=0x80c3138) at dsync-worker-local.c:316 Ah, right, dsync really wants index files. Of course it shouldn't crash, I'll fix that, but you should be able to work around it: rm -rf /tmp/imapc doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc:/tmp/imapc From gedalya at gedalya.net Fri Jan 27 02:17:42 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 19:17:42 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? 
In-Reply-To: References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> Message-ID: <4F21ED26.6020908@gedalya.net> On 01/26/2012 07:06 PM, Timo Sirainen wrote: > On 27.1.2012, at 2.00, Gedalya wrote: > >>>> # doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >>>> Segmentation fault >>> gdb backtrace would be helpful. You should be able to get that by running (as root): >>> >> 213 mailbox-log.c: No such file or directory. >> in mailbox-log.c >> (gdb) bt full >> #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 >> No locals. >> #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 >> iter = 0x80cbd90 >> #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, worker=0x80c3138) at dsync-worker-local.c:316 > Ah, right, dsync really wants index files. Of course it shouldn't crash, I'll fix that, but you should be able to work around it: > > rm -rf /tmp/imapc > doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc:/tmp/imapc > # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** backup -u jedi at example.com -R imapc:/tmp/imapc dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS cannot access mailbox Drafts dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying to run backup in wrong direction. Source is empty and destination is not. To be clear, I am trying to pull all the mailboxes from the old server on to this dovecot server, which has no mailboxes populated yet. It looks like this command would be pushing the messages from here to the imapc_host rather than pulling? From gedalya at gedalya.net Fri Jan 27 02:33:46 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 19:33:46 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21ED26.6020908@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> Message-ID: <4F21F0EA.5090700@gedalya.net> On 01/26/2012 07:17 PM, Gedalya wrote: > On 01/26/2012 07:06 PM, Timo Sirainen wrote: >> On 27.1.2012, at 2.00, Gedalya wrote: >> >>>>> # doveadm -o imapc_user=gedalya at thisdomain.com -o >>>>> imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >>>>> Segmentation fault >>>> gdb backtrace would be helpful. You should be able to get that by >>>> running (as root): >>>> >>> 213 mailbox-log.c: No such file or directory. >>> in mailbox-log.c >>> (gdb) bt full >>> #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 >>> No locals. >>> #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 >>> iter = 0x80cbd90 >>> #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, >>> worker=0x80c3138) at dsync-worker-local.c:316 >> Ah, right, dsync really wants index files. Of course it shouldn't >> crash, I'll fix that, but you should be able to work around it: >> >> rm -rf /tmp/imapc >> doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R >> imapc:/tmp/imapc >> > # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** > backup -u jedi at example.com -R imapc:/tmp/imapc > dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS > cannot access mailbox Drafts > dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying > to run backup in wrong direction. Source is empty and destination is not. 
> > To be clear, I am trying to pull all the mailboxes from the old server > on to this dovecot server, which has no mailboxes populated yet. It > looks like this command would be pushing the messages from here to the > imapc_host rather than pulling? > This got me somewhere... # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=2 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=3 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=4 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=5 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=6 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=7 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=8 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=9 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=10 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=11 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=12 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=13 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=14 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=15 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=16 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=17 failed: Message GUID not available in this server (guid) Should I / how can I disable this message GUID thing? From gedalya at gedalya.net Fri Jan 27 02:44:01 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 19:44:01 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21ED26.6020908@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> Message-ID: <4F21F351.3090907@gedalya.net> On 01/26/2012 07:17 PM, Gedalya wrote: > On 01/26/2012 07:06 PM, Timo Sirainen wrote: >> On 27.1.2012, at 2.00, Gedalya wrote: >> >>>>> # doveadm -o imapc_user=gedalya at thisdomain.com -o >>>>> imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >>>>> Segmentation fault >>>> gdb backtrace would be helpful. You should be able to get that by >>>> running (as root): >>>> >>> 213 mailbox-log.c: No such file or directory. >>> in mailbox-log.c >>> (gdb) bt full >>> #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 >>> No locals. 
>>> #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 >>> iter = 0x80cbd90 >>> #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, >>> worker=0x80c3138) at dsync-worker-local.c:316 >> Ah, right, dsync really wants index files. Of course it shouldn't >> crash, I'll fix that, but you should be able to work around it: >> >> rm -rf /tmp/imapc >> doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R >> imapc:/tmp/imapc >> > # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** > backup -u jedi at example.com -R imapc:/tmp/imapc > dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS > cannot access mailbox Drafts > dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying > to run backup in wrong direction. Source is empty and destination is not. > > To be clear, I am trying to pull all the mailboxes from the old server > on to this dovecot server, which has no mailboxes populated yet. It > looks like this command would be pushing the messages from here to the > imapc_host rather than pulling? > Sorry, my bad. That was a malfunction on the old IMAP server - that mailbox is inaccessible. Tried with another account: doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** backup -u jedi1 at example.com -R imapc:/tmp/imapc dsync(jedi1 at example.com): Error: msg guid lookup failed: Message GUID not available in this server dsync(jedi1 at example.com): Error: msg guid lookup failed: Message GUID not available in this server dsync(jedi1 at example.com): Panic: file dsync-brain.c: line 901 (dsync_brain_sync_all): assertion failed: (brain->state != old_state) dsync(jedi1 at example.com): Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x3e98a) [0xb756a98a] -> /usr/lib/dovecot/libdovecot.so.0(default_fatal_handler+0x41) [0xb756aa91] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0xb753f66b] -> doveadm() [0x8067095] -> doveadm() [0x8064cfd] -> doveadm() [0x805371e] -> doveadm(doveadm_mail_single_user+0x5e) [0x8053b2e] -> doveadm() [0x8053f58] -> doveadm(doveadm_mail_try_run+0x139) [0x80543d9] -> doveadm(main+0x3a7) [0x8053347] -> /lib/i386-linux-gnu/i686/cmov/libc.so.6(__libc_start_main+0xe6) [0xb73e8e46] -> doveadm() [0x8053519] Aborted So there :D From tss at iki.fi Fri Jan 27 02:45:45 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 27 Jan 2012 02:45:45 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21F0EA.5090700@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> <4F21F0EA.5090700@gedalya.net> Message-ID: <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi> On 27.1.2012, at 2.33, Gedalya wrote: >> # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** backup -u jedi at example.com -R imapc:/tmp/imapc >> dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS cannot access mailbox Drafts Apparently your server doesn't like sending STATUS command to Drafts mailbox and returns a failure. This isn't very nice from it. >> dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying to run backup in wrong direction. Source is empty and destination is not. The -R parameter reversed the direction. It possibly fails because of the STATUS error. Or maybe some other problem, I'd need to look into it. You could try giving "-m INBOX" parameter to see if it works for one mailbox. > This got me somewhere... 
> > # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all > doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid) Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61 But doveadm import doesn't preserve UIDs. From gedalya at gedalya.net Fri Jan 27 02:57:39 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 19:57:39 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> <4F21F0EA.5090700@gedalya.net> <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi> Message-ID: <4F21F683.3080200@gedalya.net> On 01/26/2012 07:45 PM, Timo Sirainen wrote: > On 27.1.2012, at 2.33, Gedalya wrote: > >>> # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** backup -u jedi at example.com -R imapc:/tmp/imapc >>> dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS cannot access mailbox Drafts > Apparently your server doesn't like sending STATUS command to Drafts mailbox and returns a failure. > This particular account is broken - I'm pretty sure it doesn't do this for other accounts. >>> dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying to run backup in wrong direction. Source is empty and destination is not. > The -R parameter reversed the direction. It possibly fails because of the STATUS error. Or maybe some other problem, I'd need to look into it. You could try giving "-m INBOX" parameter to see if it works for one mailbox. Must be that broken account. >> This got me somewhere... >> >> # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all >> doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid) > Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61 > > But doveadm import doesn't preserve UIDs. OK - I got a different error from running doveadm backup on a non-broken account - see my other email :) From tss at iki.fi Fri Jan 27 03:00:15 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 27 Jan 2012 03:00:15 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> <4F21F0EA.5090700@gedalya.net> <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi> <4F21F683.3080200@gedalya.net> Message-ID: <4F21F7E5.1020606@gedalya.net> On 01/26/2012 08:00 PM, Timo Sirainen wrote: > On 27.1.2012, at 2.57, Gedalya wrote: > >>>> # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all >>>> doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid) >>> Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61 >>> >>> But doveadm import doesn't preserve UIDs. >> OK - I got a different error from running doveadm backup on a non-broken account - see my other email :) > The GUID error is the same. The crash is probably the result of it. Try if upgrading fixes it. > OK. Thank you very very much for everything so far. I'm going to wait for the changes to pop up in the prebuilt binary repository - I assume it's a matter of hours? For now I need to go eat something :-) and get back to this later, I'll post the results at that time. From gedalya at gedalya.net Fri Jan 27 06:16:42 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 23:16:42 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> <4F21F0EA.5090700@gedalya.net> <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi> <4F21F683.3080200@gedalya.net> Message-ID: <4F22252A.4070204@gedalya.net> On 01/26/2012 08:00 PM, Timo Sirainen wrote: > On 27.1.2012, at 2.57, Gedalya wrote: > >>>> # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all >>>> doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid) >>> Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61 >>> >>> But doveadm import doesn't preserve UIDs. >> OK - I got a different error from running doveadm backup on a non-broken account - see my other email :) > The GUID error is the same. The crash is probably the result of it. Try if upgrading fixes it. > Yeap. Worked impeccably (doveadm backup)!! Pretty fast, too. Very impressed! I'll have to do some very thorough testing with various clients etc, will post interesting findings if any come up. From alexis.lelion at gmail.com Fri Jan 27 12:59:02 2012 From: alexis.lelion at gmail.com (Alexis Lelion) Date: Fri, 27 Jan 2012 11:59:02 +0100 Subject: [Dovecot] LMTP : Can't handle mixed proxy/non-proxy destinations Message-ID: Hello, In my current setup, I uses two mailservers to handle the users connections, and my emails are stored on a distant server using NFS (maildir architecture) Dovecot is both my IMAP server and the delivery agent (LMTP via postfix) To avoid indexing issues related to NFS, proxying is enabled both on IMAP and LMTP. 
But when a mail is sent to users that are shared between the servers, I got the subject mentionned error in the logs : Jan 25 09:05:12 mail01 postfix/lmtp[23934]: A92709300DB: to=< user_on_mail02 at domain.com>, relay=mail01.domain.com[private/dovecot-lmtp], delay=0.07, delays=0.01/0/0/0.06, dsn=4.3.0, status=deferred (host mail01.domain.com[private/dovecot-lmtp] said: 451 4.3.0 < user_on_mail02 at domain.com> Can't handle mixed proxy/non-proxy destinations (in reply to RCPT TO command)) >From what I saw, the mail is then put in the queue, and wait until the next time Postifx will browse the queue. The mail will then be correctly delivered on "mail02". However, the "queue_run_delay" postfix parameter is set to 900, which means that the mail will be delivered with a lag of 15 minutes. I was wondering if there was another way of handling this, for example by triggering an immediate queue lookup from postfix or forwarding a copy of the mail to the other server. Note that the postfix "queue_run_delay" was increased to 15min on purpose, so I cannot change that. I'm using dovecot 2.0.15 on Debian Squeeze, kernel 2.6.32-5-amd64. Thanks, Alexis From clube33-mail at yahoo.com Fri Jan 27 14:32:17 2012 From: clube33-mail at yahoo.com (Gustavo) Date: Fri, 27 Jan 2012 04:32:17 -0800 (PST) Subject: [Dovecot] Problem with Postfix + Dovecot + MySQL + Squirrelmail Message-ID: <1327667537.79787.YahooMailNeo@web65309.mail.ac2.yahoo.com> Dear friends, I try configure a webmail on my server using Postfix + Dovecot + MySQL + Squirrelmail. My system is a Debian6 and dovecot version is: #dovecot --version 1.2.15 But, when I try to access an account on squirrel I recieve this message: ?ERROR Error connecting to IMAP server: localhost. 111 : Connection refused? Looking for a problem I foud this: #service dovecot start Starting IMAP/POP3 mail server: dovecotLast died with error (see error log for more information): Auth process died too early - shutting down If you have trouble with authentication failures, enable auth_debug setting. See http://wiki.dovecot.org/WhyDoesItNotWork This message goes away after the first successful login. . And the status of doveco is: #service dovecot status dovecot is not running ... failed! The other services seems to be OK: #service postfix status postfix is running. # service mysql status /usr/bin/mysqladmin ?Ver 8.42 Distrib 5.1.49, for debian-linux-gnu on x86_64 Copyright 2000-2008 MySQL AB, 2008 Sun Microsystems, Inc. This software comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to modify and redistribute it under the GPL license Server version5.1.49-3 Protocol version10 ConnectionLocalhost via UNIX socket UNIX socket/var/run/mysqld/mysqld.sock Uptime:32 days 14 hours 23 min 39 sec Threads: 1 ?Questions: 6743 ?Slow queries: 0 ?Opens: 385 ?Flush tables: 1 ?Open tables: 47 ?Queries per second avg: 0.2. Looking at dovecot.conf I found some incosistences: On dovecot.conf: protocol lda { sendmail_path = /usr/lib/sendmail auth_socket_path = /var/run/dovecot/auth-master } socket listen { master { path = /var/run/dovecot/auth-master mode = 0600 user = vmail group = mail } client { path = /var/run/dovecot/auth-client mode = 0660 user = vmail group = mail } } But in the system I don1t found this files!!! /var/run/dovecot# ls total 20K drwxr-xr-x 3 root root ???4.0K Jan 27 11:35 . drwxr-xr-x 8 root root ???4.0K Jan 27 09:33 .. 
srw------- 1 root root 0 Jan 27 11:35 auth-worker.26163 srwxrwxrwx 1 root root 0 Jan 27 11:35 dict-server lrwxrwxrwx 1 root root 25 Jan 27 11:35 dovecot.conf -> /etc/dovecot/dovecot.conf drwxr-x--- 2 root dovecot 4.0K Jan 27 11:35 login -rw------- 1 root root 43 Jan 27 11:35 master-fatal.lastlog -rw------- 1 root root 6 Jan 27 11:35 master.pid /var/run/dovecot# ls login/ total 12K drwxr-x--- 2 root dovecot 4.0K Jan 27 11:35 . drwxr-xr-x 3 root root 4.0K Jan 27 11:35 .. srw-rw---- 1 root dovecot 0 Jan 27 11:35 default -rw-r--r-- 2 root root 230 Jan 23 19:12 ssl-parameters.dat I think maybe that is the problem. Does anyone know how I can fix that? Or what the real problem is? Thanks for any help! -- Gustavo From odhiambo at gmail.com Fri Jan 27 17:28:50 2012 From: odhiambo at gmail.com (Odhiambo Washington) Date: Fri, 27 Jan 2012 18:28:50 +0300 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: <4F21BCB6.6030908@hardwarefreak.com> References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> <20120126000126.GA19765@doctor.nl2k.ab.ca> <4F21BCB6.6030908@hardwarefreak.com> Message-ID: On Thu, Jan 26, 2012 at 23:51, Stan Hoeppner wrote: > On 1/25/2012 6:01 PM, The Doctor wrote: > > BSD/OS 4.3.1 > > A defunct/dead operating system, last released in 2003, support > withdrawn in 2004. BSDI went belly up. Wind River acquired and then > killed BSD/OS. You're using a dead, 9 year old OS, that hasn't seen > official updates for 8 years. > > Do you think it's fair to ask application developers to support the > oddities of your one-of-a-kind, ancient, patchwork of a platform? > > We've had this discussion before. And I don't believe you ever provided > a sane rationale for continuing to use an OS that's been officially dead > for 8 years. What is the reason you are unable or unwilling to migrate > to a newer and supported no-cost BSD variant, or Linux distro? > > You're trying to run bleeding edge Dovecot, compiling it from source, on > an 8 year old platform... > > Maybe "The Doctor" has no idea how to migrate. I see no other sane reason to continue running that OS. -- Best regards, Odhiambo WASHINGTON, Nairobi,KE +254733744121/+254722743223 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ I can't hear you -- I'm using the scrambler. Please consider the environment before printing this email. -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 652 bytes Desc: not available URL: From mcazzador at gmail.com Fri Jan 27 18:48:31 2012 From: mcazzador at gmail.com (Matteo Cazzador) Date: Fri, 27 Jan 2012 17:48:31 +0100 Subject: [Dovecot] dovecot imap cluster Message-ID: Hello, I'm using Postfix as my SMTP server, and I need to choose an IMAP server with a special feature. I have a customer with 3 different geographic locations. Every location has a mail server for the same domain (example.com). If user1 at example.com receives mail from outside, the mail goes to every location's server. My problem now: is it possible to synchronize the state (message flags) of user1's IMAP folder mails on every location's mail server? For example, if user1 reads a mail on server 1, is it possible to change the flag of the same mail file on server 2 and server 3? Is it possible to use dsync for it? I need something like an IMAP cluster. Or a post-processing action when a mail is read over IMAP. I can't use a distributed file system. Thanks a lot -- Respect the environment: if you don't need to, don't print this mail. ****************************************** Ing. Matteo Cazzador Email: mcazzador at gmail.com ******************************************
From info at simonecaruso.com Fri Jan 27 20:11:59 2012 From: info at simonecaruso.com (Simone Caruso) Date: Fri, 27 Jan 2012 19:11:59 +0100 Subject: [Dovecot] dovecot imap cluster In-Reply-To: References: Message-ID: <4F22E8EF.7070609@simonecaruso.com> On 27/01/2012 17:48, Matteo Cazzador wrote: > Hello, I'm using Postfix as my SMTP server, and I need to choose an IMAP > server with a special feature. > > I have a customer with 3 different geographic locations. > > Every location has a mail server for the same domain (example.com). > > If user1 at example.com receives mail from outside, the mail goes to > every location's server. > > My problem now: is it possible to synchronize the state (message flags) > of user1's IMAP folder mails on every location's mail server? > > For example, if user1 reads a mail on server 1, is it possible to change > the flag of the same mail file on server 2 and server 3? > > Is it possible to use dsync for it? > > I need something like an IMAP cluster. > > Or a post-processing action when a mail is read over IMAP. > > I can't use a distributed file system. > > Thanks a lot > Synchronize your storage with DRBD (or an async replica like rsync) and use Dovecot director for connection persistence. -- Simone Caruso From me at junc.org Fri Jan 27 22:53:02 2012 From: me at junc.org (Benny Pedersen) Date: Fri, 27 Jan 2012 21:53:02 +0100 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: <4F21BCB6.6030908@hardwarefreak.com> References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> <20120126000126.GA19765@doctor.nl2k.ab.ca> <4F21BCB6.6030908@hardwarefreak.com> Message-ID: On Thu, 26 Jan 2012 14:51:02 -0600, Stan Hoeppner wrote: > You're trying to run bleeding edge Dovecot, compiling it from source, > on > an 8 year old platform... I remember FreeBSD 4.9 installed from 2 1440kB floppy disks. Why is upgrading without reinstalling so hard? Gentoo/Funtoo keeps 'emerge world' going forever, and Portage exists on FreeBSD. From tss at iki.fi Fri Jan 27 22:57:05 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 27 Jan 2012 22:57:05 +0200 Subject: [Dovecot] dovecot imap cluster In-Reply-To: <4F22E8EF.7070609@simonecaruso.com> References: <4F22E8EF.7070609@simonecaruso.com> Message-ID: <3797A713-4DA9-4AEC-A155-006E3574BB6C@iki.fi> On 27.1.2012, at 20.11, Simone Caruso wrote: >> I have a customer with 3 different geographic locations. >> >> Every location has a mail server for the same domain (example.com). >> >> If user1 at example.com receives mail from outside, the mail goes to >> every location's server. >> >> My problem now: is it possible to synchronize the state (message flags) >> of user1's IMAP folder mails on every location's mail server? > > Synchronize your storage with DRBD (or an async replica like rsync) and use Dovecot > director for connection persistence. There are a couple of problems with DRBD and most (all?) other filesystem based solutions when doing multi-master replication across wide geographic locations: 1. Multi-master requires synchronous replication -> latency may be very high -> performance probably is bad enough that the system is unusable. 2.
Network outages are still common -> you can't handle split brain situations in filesystem level without either a) loss of availability (everyone's email down) or b) data loss/corruption (what do you do when multiple sites have modified the same file?) With dsync-based replication it's possible to avoid both of these problems, because application-level replication can intelligently handle situations where asynchronous replication results in data conflicts. (This kind of conflict resolution is also what I hope to do with some nosql database in future when Dovecot supports them.) I've been working on dsync-based easy-to-use replication recently, and it's almost in a condition where I'm going to start using it myself (maybe this weekend). From doctor at doctor.nl2k.ab.ca Fri Jan 27 23:03:11 2012 From: doctor at doctor.nl2k.ab.ca (The Doctor) Date: Fri, 27 Jan 2012 14:03:11 -0700 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> <20120126000126.GA19765@doctor.nl2k.ab.ca> <4F21BCB6.6030908@hardwarefreak.com> Message-ID: <20120127210310.GA2218@doctor.nl2k.ab.ca> On Fri, Jan 27, 2012 at 09:53:02PM +0100, Benny Pedersen wrote: > On Thu, 26 Jan 2012 14:51:02 -0600, Stan Hoeppner wrote: > >> You're trying to run bleeding edge Dovecot, compiling it from source, on >> an 8 year old platform... > > I remember FreeBSD 4.9 installed from 2 1440kB floppy disks. Why is > upgrading without reinstalling so hard? > > Gentoo/Funtoo keeps 'emerge world' going forever, and Portage exists on FreeBSD I got 2.1rc to work on this old workhorse; it's just that the --as-needed flag needs to be edited out of 21 files. It might be easier if configure looked up which version of ld you have, as it may not need the --as-needed flag. -- Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca God, Queen and country! Never Satan President Republic! Beware AntiChrist rising! https://www.fullyfollow.me/rootnl2k Birthdate : 29 Jan 1969 Croydon, Surrey, UK From me at junc.org Fri Jan 27 23:31:55 2012 From: me at junc.org (Benny Pedersen) Date: Fri, 27 Jan 2012 22:31:55 +0100 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: <20120127210310.GA2218@doctor.nl2k.ab.ca> References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> <20120126000126.GA19765@doctor.nl2k.ab.ca> <4F21BCB6.6030908@hardwarefreak.com> <20120127210310.GA2218@doctor.nl2k.ab.ca> Message-ID: <70d5df2910c161d844f6dbb7aa8fef8c@junc.org> On Fri, 27 Jan 2012 14:03:11 -0700, The Doctor wrote: > It might be easier if configure looked up > which version of ld you have, as it may not need the --as-needed > flag. Reply sent privately. Keep up the good work on FreeBSD :=) From me at junc.org Sat Jan 28 00:05:57 2012 From: me at junc.org (Benny Pedersen) Date: Fri, 27 Jan 2012 23:05:57 +0100 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F20D718.9010805@gedalya.net> References: <4F20D718.9010805@gedalya.net> Message-ID: <2989c8bf4cccf90002e99389385c97d8@junc.org> On Wed, 25 Jan 2012 23:31:20 -0500, Gedalya wrote: > I'm facing the need to migrate from a proprietary IMAP server to > Dovecot. The migration must be as smooth and transparent as possible.
Set up Dovecot and make it listen on 127.0.0.2 only, and modify your current server to listen only on 127.0.0.1, so you can now have 2 IMAP servers running at the same time. The next step is here: http://www.howtoforge.com/how-to-migrate-mailboxes-between-imap-servers-with-imapsync When all accounts are transferred, stop the old server and make Dovecot listen on any IP. Done - it worked for me when I changed from Courier-IMAP to Dovecot. From gedalya at gedalya.net Sat Jan 28 00:35:40 2012 From: gedalya at gedalya.net (Gedalya) Date: Fri, 27 Jan 2012 17:35:40 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> <4F21F0EA.5090700@gedalya.net> <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi> <4F21F683.3080200@gedalya.net> Message-ID: <4F2326BC.608@gedalya.net> On 01/26/2012 08:00 PM, Timo Sirainen wrote: > On 27.1.2012, at 2.57, Gedalya wrote: > >>>> # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all >>>> doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid) >>> Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61 >>> >>> But doveadm import doesn't preserve UIDs. >> OK - I got a different error from running doveadm backup on a non-broken account - see my other email :) > The GUID error is the same. The crash is probably the result of it. Try if upgrading fixes it. > This is what I ended up doing. I have the production machine acting as a dovecot imap server, and as a proxy for accounts not yet migrated. Running dovecot 2.0.15, with directly attached 6 TB of storage. Per Timo's instructions, I set up a quick VM running debian wheezy and the latest dovecot 2.1, copied the config from the production server with tiny modifications, and connected it to the same mysql user database. I gave this machine the same hostname as the production machine, just so that the maildir filenames end up looking neat. I don't know if this has anything more than psychological value :-) I mounted the storage from the production machine (sshfs surprisingly didn't seem slower than NFS) and set up dovecot 2.1 to find the mailboxes under there, then things like doveadm -o imapc_user=jedi1 at example.com -o imapc_password=****** backup -u jedi1 at example.com -R imapc:/tmp/imapc started doing the job. No output, no problems. So far the only glitch I noticed: I have dovecot autocreate a Spam folder, and when Windows Live Mail, which had been reading a proxied account, starts up after the account was migrated and is served by dovecot, it doesn't find the Spam folder until I click "Download all folders". We have thousands of mailboxes being read from every conceivable client, so there will be more tiny issues like this. Can't wait to test a blackberry. Other than that, things work as intended - UID and UIDVALIDITY seem to be preserved, and the clients don't seem to notice the migration or react to it in any way. What's left is to wrap a proper process around this to lock the mailbox - essentially, put the right things in the database at the beginning and at the end of the process. Looks beautiful. From kyle.lafkoff at cpanel.net Sat Jan 28 00:57:15 2012 From: kyle.lafkoff at cpanel.net (Kyle Lafkoff) Date: Fri, 27 Jan 2012 16:57:15 -0600 Subject: [Dovecot] Test suite? Message-ID: <5319F037-A973-45EE-9129-93489C026619@cpanel.net> Hi, I am building an RPM for dovecot.
Is there a test suite available I could use during the build to verify proper functionality? Thanks! Kyle From tss at iki.fi Sat Jan 28 01:15:53 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 01:15:53 +0200 Subject: [Dovecot] Test suite? In-Reply-To: <5319F037-A973-45EE-9129-93489C026619@cpanel.net> References: <5319F037-A973-45EE-9129-93489C026619@cpanel.net> Message-ID: <37E8F766-8456-49D2-8360-DB70288E7A8A@iki.fi> On 28.1.2012, at 0.57, Kyle Lafkoff wrote: > I am building a RPM for dovecot. Is there a test suite available I could use during the build to verify proper functionality? Thanks! It would be nice to have a proper finished test suite testing all kinds of functionality. Unfortunately I haven't had time to write such a thing, and no one's tried to help creating one. There is "make check" that you can run, which goes through some unit tests, but it's not very useful in catching bugs. There is also imaptest tool (http://imapwiki.org/ImapTest), which is very useful in catching bugs. I've been planning on creating a comprehensive test suite by creating Dovecot-specific scripts for imaptest and running them against many different Dovecot configurations (mbox/maildir/sdbox/mdbox formats each against different kinds of namespaces, as well as many other tests). That plan has existed several years now, but unfortunately only in my head. Perhaps soon I can hire someone else to do that via my company. :) From stan at hardwarefreak.com Sat Jan 28 02:23:35 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Fri, 27 Jan 2012 18:23:35 -0600 Subject: [Dovecot] dovecot imap cluster In-Reply-To: <3797A713-4DA9-4AEC-A155-006E3574BB6C@iki.fi> References: <4F22E8EF.7070609@simonecaruso.com> <3797A713-4DA9-4AEC-A155-006E3574BB6C@iki.fi> Message-ID: <4F234007.9030907@hardwarefreak.com> On 1/27/2012 2:57 PM, Timo Sirainen wrote: > On 27.1.2012, at 20.11, Simone Caruso wrote: > >>> I have a customer with 3 different geographic locations. >>> >>> Every locations have a mail server for the same domain (example.com). >>> >>> If user1 at example.com receive mail form external this mail going on >>> every locations server. >>> >>> I've a problem now, is it possible to syncronize the state (mail flag) >>> of user1 imap folder mails on every mail locations server? >> >> Syncronize your storage with DRBD, (or async replica like rsync) and use dovecot >> director for connection persistence. > > > There are a couple of problems with DRBD and most (all?) other filesystem based solutions when doing multi-master replication across wide geographic locations: > > 1. Multi-master requires synchronous replication -> latency may be very high -> performance probably is bad enough that the system is unusable. > > 2. Network outages are still common -> you can't handle split brain situations in filesystem level without either a) loss of availability (everyone's email down) or b) data loss/corruption (what do you do when multiple sites have modified the same file?) > > With dsync-based replication it's possible to avoid both of these problems, because application-level replication can intelligently handle situations where asynchronous replication results in data conflicts. (This kind of conflict resolution is also what I hope to do with some nosql database in future when Dovecot supports them.) I've been working on dsync-based easy-to-use replication recently, and it's almost in a condition where I'm going to start using it myself (maybe this weekend). 
Can you provide a basic diagram/high level description of how this dsync replication would be configured to work over a 2 node wide area network? Are we looking at something like periodic scripts, something more automatic, a replication daemon? -- Stan From tss at iki.fi Sat Jan 28 02:32:15 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 02:32:15 +0200 Subject: [Dovecot] dovecot imap cluster In-Reply-To: <4F234007.9030907@hardwarefreak.com> References: <4F22E8EF.7070609@simonecaruso.com> <3797A713-4DA9-4AEC-A155-006E3574BB6C@iki.fi> <4F234007.9030907@hardwarefreak.com> Message-ID: On 28.1.2012, at 2.23, Stan Hoeppner wrote: >> With dsync-based replication it's possible to avoid both of these problems, because application-level replication can intelligently handle situations where asynchronous replication results in data conflicts. (This kind of conflict resolution is also what I hope to do with some nosql database in future when Dovecot supports them.) I've been working on dsync-based easy-to-use replication recently, and it's almost in a condition where I'm going to start using it myself (maybe this weekend). > > Can you provide a basic diagram/high level description of how this dsync > replication would be configured to work over a 2 node wide area network? I'll write a description at some point.. It's anyway meant to be more scalable than just 2 nodes, so the idea is to have userdb lookup return the 2 (or more) replicas. > Are we looking at something like periodic scripts, something more > automatic, a replication daemon? It's a replication daemon that basically calls "doveadm sync" when needed (via doveadm server connection). Initially it's not as optimal from a performance point of view as it could be, but it should get better. :) From dmiller at amfes.com Sat Jan 28 09:15:33 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Fri, 27 Jan 2012 23:15:33 -0800 Subject: [Dovecot] Crash on mail folder delete In-Reply-To: <4F209878.5040505@amfes.com> References: <4F20922C.60206@amfes.com> <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> <4F20939E.4010903@amfes.com> <4F209878.5040505@amfes.com> Message-ID: On 1/25/2012 4:04 PM, Daniel L. Miller wrote: > On 1/25/2012 3:43 PM, Daniel L. Miller wrote: >> On 1/25/2012 3:42 PM, Timo Sirainen wrote: >>> On 26.1.2012, at 1.37, Daniel L. Miller wrote: >>> >>>> Attempting to delete a folder from within the trash folder using >>>> Thunderbird. I see the following in the log: >>> Dovecot version? >>> >> 2.1.rc3. I'm compiling rc5 now... >> > Error still there on rc5. > Can I do anything to help find this? Folders are still shown in Trash - unable to delete. -- Daniel From user+dovecot at localhost.localdomain.org Sat Jan 28 17:34:16 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Sat, 28 Jan 2012 16:34:16 +0100 Subject: [Dovecot] v2.1.rc5 (85a9b5236b6c) Error: lmtp client: DNS lookup of $FQDN failed: connect(dns-client) failed: No such file or directory Message-ID: <4F241578.2090904@localhost.localdomain.org> When the Sieve plugin tries to send a vacation message or redirect a message to another address it fails.
dovecot: lmtp(6412, user at example.com): Error: lmtp client: DNS lookup of orange.example.com failed: connect(dns-client) failed: No such file or directory dovecot: lmtp(6412, user at example.com): Error: dAIOClYSJE8MGQAAhQ0vrQ: sieve: msgid=<4F241255.2060900 at example.com>: failed to redirect message to (refer to server log for more information) But the dns-client sockets are created when Dovecot starts up: # find /usr/local/var/run/dovecot -name dns-client -exec ls -l {} + srw-rw-rw- 1 root staff 0 Jan 28 16:15 /usr/local/var/run/dovecot/dns-client srw-rw-rw- 1 root root 0 Jan 28 16:15 /usr/local/var/run/dovecot/login/dns-client Hum, is it Dovecot or Pigeonhole (dovecot-2.1-pigeonhole 1600:b2a456e15ed5)? Regards, Pascal -- The trapper recommends today: c01dcofe.1202816 at localdomain.org From adrian.minta at gmail.com Sat Jan 28 17:48:53 2012 From: adrian.minta at gmail.com (Adrian Minta) Date: Sat, 28 Jan 2012 17:48:53 +0200 Subject: [Dovecot] XFS Developer Takes Shots At Btrfs, EXT4 Message-ID: <4F2418E5.2020107@gmail.com> Nice article about XFS improvements: http://tinyurl.com/7pvr9ju From jd.beaubien at gmail.com Sat Jan 28 17:59:39 2012 From: jd.beaubien at gmail.com (Jean-Daniel Beaubien) Date: Sat, 28 Jan 2012 10:59:39 -0500 Subject: [Dovecot] maildir vs mdbox Message-ID: Hi, I am planning on running a test between maildir and mdbox to see which is a better fit for my use case. And I'm just looking for general advice/recommendation. I will post any results I obtain here. Important question: I have multiple users hitting the same email account at the same time. Can this be a problem with mdbox? (either via thunderbird or with custom webmail apps). I remember having huge issues with mbox a decade ago because of this. Maildir fixed this... will mdbox reintroduce this problem? This is a very important point for me. Here is my use case: - Ubuntu server (any specific recommendations on FS to use?) - Standard PC hardware (core i5 or i7, few gigs of ram, hdds at first, probably ssd afterwards, nothing very fancy) - Serving only a handful of email accounts, but some of the accounts have over 3 million emails in them (with individual mail folders having 100k+ emails) - Will use latest dovecot (2.1 when it comes out) - fts-lucene or fts-solr? -jd From tss at iki.fi Sat Jan 28 18:05:48 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 18:05:48 +0200 Subject: [Dovecot] maildir vs mdbox In-Reply-To: References: Message-ID: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi> On 28.1.2012, at 17.59, Jean-Daniel Beaubien wrote: > I am planning on running a test between maildir and mdbox to see which is > a better fit for my use case. And I'm just looking for general > advice/recommendation. I will post any results I obtain here. Maildir is good for reliability, since it's just about impossible to corrupt, and even in case of filesystem corruption it's easier to recover than other formats. mdbox is good if you want the best performance. > Important question: I have multiple users hitting the same email account at > the same time. Can this be a problem with mdbox? No problem. > - Serving only a handful of email accounts, but some of the accounts have > over 3 million emails in them (with individual mail folders having 100k+ > emails) Maildir gets slow with that many mails in one folder. > - fts-lucene or fts-solr? fts-lucene uses the latest CLucene version, which is a little old. With fts-solr you can use the latest Solr/Lucene.
So as long as you don't mind setting up a Solr instance it should be better. The good thing about fts-lucene is that you can simply enable it and it works without any external servers. From jd.beaubien at gmail.com Sat Jan 28 18:13:59 2012 From: jd.beaubien at gmail.com (Jean-Daniel Beaubien) Date: Sat, 28 Jan 2012 11:13:59 -0500 Subject: [Dovecot] maildir vs mdbox In-Reply-To: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi> References: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi> Message-ID: Wow, incredible response time :) I have 1 more question which I forgot to put in the initial post. Considering my use case (small number of accounts but a lot of emails per account, and I should add that they are mostly small emails, most under 5k, a lot under 30k), what mdbox settings would you recommend I start testing with (mdbox_rotate_size and mdbox_rotate_interval)? -JD On Sat, Jan 28, 2012 at 11:05 AM, Timo Sirainen wrote: > On 28.1.2012, at 17.59, Jean-Daniel Beaubien wrote: > > > I am planning on running a test between maildir and mdbox to see which > is > > a better fit for my use case. And I'm just looking for general > > advice/recommendation. I will post any results I obtain here. > > Maildir is good for reliability, since it's just about impossible to > corrupt, and even in case of filesystem corruption it's easier to recover > than other formats. mdbox is good if you want the best performance. > > > Important question: I have multiple users hitting the same email account > at > > the same time. Can this be a problem with mdbox? > > No problem. > > > - Serving only a handful of email accounts, but some of the accounts > have > > over 3 million emails in them (with individual mail folders having 100k+ > > emails) > > Maildir gets slow with that many mails in one folder. > > > - fts-lucene or fts-solr? > > > fts-lucene uses the latest CLucene version, which is a little old. With > fts-solr you can use the latest Solr/Lucene. So as long as you don't mind > setting up a Solr instance it should be better. The good thing about > fts-lucene is that you can simply enable it and it works without any > external servers. From tss at iki.fi Sat Jan 28 18:37:19 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 18:37:19 +0200 Subject: [Dovecot] maildir vs mdbox In-Reply-To: References: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi> Message-ID: <548DDD91-D0F1-49F7-8E8D-3EA03DF72397@iki.fi> On 28.1.2012, at 18.13, Jean-Daniel Beaubien wrote: > Considering my use case (small number of accounts but a lot of emails per > account, and I should add that they are mostly small emails, most under 5k, > a lot under 30k), what mdbox settings would you recommend I start testing with > (mdbox_rotate_size and mdbox_rotate_interval)? mdbox_rotate_interval is useful only if you want smaller incremental backups (so files that are backed up no longer change unless messages are deleted). Its default is 0 (I just fixed example-config, which showed it as 1day). I don't really know about mdbox_rotate_size. It would be nice if someone were to test different values over a longer period and report how it affects disk IO.
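For readers following along, here is a minimal sketch of the two settings being discussed, assuming an mdbox mail_location; the values shown are illustrative only, not recommendations from the thread (the shipped default for mdbox_rotate_size in this era is 2M, and mdbox_rotate_interval defaults to 0 as noted above):

# conf.d/10-mail.conf (sketch)
mail_location = mdbox:~/mdbox

# Start a new m.* storage file once the current one passes this size.
mdbox_rotate_size = 20M

# 0 = rotate on size only. Something like 1d makes storage files stop
# changing after a day, which keeps incremental backups smaller.
mdbox_rotate_interval = 0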
From jd.beaubien at gmail.com Sat Jan 28 19:02:46 2012 From: jd.beaubien at gmail.com (Jean-Daniel Beaubien) Date: Sat, 28 Jan 2012 12:02:46 -0500 Subject: [Dovecot] maildir vs mdbox In-Reply-To: <548DDD91-D0F1-49F7-8E8D-3EA03DF72397@iki.fi> References: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi> <548DDD91-D0F1-49F7-8E8D-3EA03DF72397@iki.fi> Message-ID: On Sat, Jan 28, 2012 at 11:37 AM, Timo Sirainen wrote: > On 28.1.2012, at 18.13, Jean-Daniel Beaubien wrote: > > > Considering my use case (small number of accounts but a lot of emails per > > account, and I should add that they are mostly small emails, most under > 5k, > > a lot under 30k), what mdbox settings would you recommend I start testing > with > > (mdbox_rotate_size and mdbox_rotate_interval)? > > mdbox_rotate_interval is useful only if you want smaller incremental > backups (so files that are backed up no longer change unless messages are > deleted). Its default is 0 (I just fixed example-config, which showed it as > 1day). > > To be honest, the smaller incremental backup part is interesting. That, along with auto-gzip of the mdbox files, is very interesting for me. > I don't really know about mdbox_rotate_size. It would be nice if someone > were to test different values over a longer period and report how it affects > disk IO. > I was thinking of doing a test with 20MB and 80MB, then looking at the results and going from there. Btw, when I migrate my emails from Maildir to mdbox, dsync should take into account the rotate_size parameter. If I want to change the rotate_size parameter, I simply edit the config file, change the parameter (erase the mdbox folder?) and re-run dsync. Is that correct? From tss at iki.fi Sat Jan 28 19:27:53 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:27:53 +0200 Subject: [Dovecot] Crash on mail folder delete In-Reply-To: References: <4F20922C.60206@amfes.com> <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> <4F20939E.4010903@amfes.com> <4F209878.5040505@amfes.com> Message-ID: On 28.1.2012, at 9.15, Daniel L. Miller wrote: > Can I do anything to help find this? Folders are still shown in Trash - unable to delete. gdb backtrace would be helpful: http://dovecot.org/bugreport.html and doveconf -n and the folder name. From tss at iki.fi Sat Jan 28 19:29:18 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:29:18 +0200 Subject: [Dovecot] v2.1.rc5 (85a9b5236b6c) Error: lmtp client: DNS lookup of $FQDN failed: connect(dns-client) failed: No such file or directory In-Reply-To: <4F241578.2090904@localhost.localdomain.org> References: <4F241578.2090904@localhost.localdomain.org> Message-ID: <64779967-206F-4F44-8F01-32810EE0795A@iki.fi> On 28.1.2012, at 17.34, Pascal Volk wrote: > When the Sieve plugin tries to send a vacation message or redirect > a message to another address it fails. > > dovecot: lmtp(6412, user at example.com): Error: lmtp client: DNS lookup of orange.example.com failed: connect(dns-client) failed: No such file or directory Fixed: http://hg.dovecot.org/dovecot-2.1/rev/bc2eea348f55 http://hg.dovecot.org/dovecot-2.1/rev/32318f1588d4 The same problem exists in v2.0 also, but I didn't bother to fix it there. A workaround is to use IP instead of host in submission_host.
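A hedged sketch of that workaround, assuming a local MTA accepting submissions on 127.0.0.1 port 587 (the address and port are assumptions, not from the thread); with an IP literal, the delivery process never has to consult the dns-client socket:

# dovecot.conf (sketch)
# Used by Sieve redirect/vacation to hand outgoing mail to an MTA.
submission_host = 127.0.0.1:587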
From tss at iki.fi Sat Jan 28 19:30:05 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:30:05 +0200 Subject: [Dovecot] v2.1.rc5 (85a9b5236b6c) Error: lmtp client: DNS lookup of $FQDN failed: connect(dns-client) failed: No such file or directory In-Reply-To: <64779967-206F-4F44-8F01-32810EE0795A@iki.fi> References: <4F241578.2090904@localhost.localdomain.org> <64779967-206F-4F44-8F01-32810EE0795A@iki.fi> Message-ID: <36FD5315-1EB7-4F70-AB4E-9E6C1D535747@iki.fi> On 28.1.2012, at 19.29, Timo Sirainen wrote: > On 28.1.2012, at 17.34, Pascal Volk wrote: > >> When the Sieve plugin tries to send a vacation message or redirect >> a message to another address it fails. >> >> dovecot: lmtp(6412, user at example.com): Error: lmtp client: DNS lookup of orange.example.com failed: connect(dns-client) failed: No such file or directory > > Fixed: http://hg.dovecot.org/dovecot-2.1/rev/bc2eea348f55 http://hg.dovecot.org/dovecot-2.1/rev/32318f1588d4 > > The same problem exists in v2.0 also, but I didn't bother to fix it there. A workaround is to use IP instead of host in submission_host. Oh, clarification: With LMTP it just happens to work with v2.0, but with dovecot-lda it doesn't work. From tss at iki.fi Sat Jan 28 19:32:48 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:32:48 +0200 Subject: [Dovecot] LMTP : Can't handle mixed proxy/non-proxy destinations In-Reply-To: References: Message-ID: <33BD52FA-1FE0-46D5-A1E8-9A54C406BE64@iki.fi> On 27.1.2012, at 12.59, Alexis Lelion wrote: > Jan 25 09:05:12 mail01 postfix/lmtp[23934]: A92709300DB: to=< > user_on_mail02 at domain.com>, relay=mail01.domain.com[private/dovecot-lmtp], > delay=0.07, delays=0.01/0/0/0.06, dsn=4.3.0, status=deferred (host > mail01.domain.com[private/dovecot-lmtp] said: 451 4.3.0 < > user_on_mail02 at domain.com> Can't handle mixed proxy/non-proxy destinations > (in reply to RCPT TO command)) > > I was wondering if there was another way of handling this, for example > by triggering an immediate queue lookup from postfix or forwarding a > copy of the mail to the other server. Note that the postfix > "queue_run_delay" was increased to 15min on purpose, so I cannot change > that. It would be possible to change the code to support mixed destinations, but it's probably not a simple change and I have other things to do.. Maybe you could work around it so that LMTP always proxies the mails, to localhost as well, but to a different port which doesn't do proxying at all. From tss at iki.fi Sat Jan 28 19:45:13 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:45:13 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21E92C.4090509@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> Message-ID: <3F3C09E9-1E8F-4243-BC39-BAEA38AF5300@iki.fi> On 27.1.2012, at 2.00, Gedalya wrote: > Starting program: /usr/bin/doveadm -o imapc_user=jedi at example.com -o imapc_password=**** backup -u jedi at example.com -R imapc: > > Program received signal SIGSEGV, Segmentation fault. > mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 > 213 mailbox-log.c: No such file or directory. 
> in mailbox-log.c This crash is now fixed, so there's no need to give /tmp/imapc path anymore: http://hg.dovecot.org/dovecot-2.1/rev/7b94d1c8a6e7 From tss at iki.fi Sat Jan 28 19:51:08 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:51:08 +0200 Subject: [Dovecot] Problem with Postfix + Dovecot + MySQL + Squirrelmail In-Reply-To: <1327667537.79787.YahooMailNeo@web65309.mail.ac2.yahoo.com> References: <1327667537.79787.YahooMailNeo@web65309.mail.ac2.yahoo.com> Message-ID: <8C57281B-2C18-4C19-9F80-57BDF77D83B4@iki.fi> On 27.1.2012, at 14.32, Gustavo wrote: > #service dovecot start > Starting IMAP/POP3 mail server: dovecot Last died with error (see error log for more information): Auth process died too early - shutting down No need to keep guessing the problem. "See error log for more information" like it says. http://wiki.dovecot.org/Logging From tss at iki.fi Sat Jan 28 19:55:09 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:55:09 +0200 Subject: [Dovecot] problem compiling imaptest under solaris In-Reply-To: <89f61bff49f4c5343be06dd45459b14a@imapproxy.hrz> References: <89f61bff49f4c5343be06dd45459b14a@imapproxy.hrz> Message-ID: <3A621688-A7AE-4C08-96EA-D9668ECA02D1@iki.fi> On 25.1.2012, at 16.43, Jürgen Obermann wrote: > today I tried to compile imaptest under solaris 10 with studio 11 compiler and got the following error: > > gmake[2]: Entering directory `/net/fileserv/export/sunsrc/src/imaptest-20111119/src' > source='client.c' object='client.o' libtool=no \ > DEPDIR=.deps depmode=none /bin/bash ../depcomp \ > cc -DHAVE_CONFIG_H -I. -I. -I.. -I/opt/local/include/dovecot -I/usr/local/include -fast -xarch=v8plusa -I/usr/sfw/include -c client.c > "/opt/local/include/dovecot/imap-util.h", line 6: warning: useless declaration > "client-state.h", line 6: warning: useless declaration > "client.c", line 655: operand cannot have void type: op "==" > "client.c", line 655: operands have incompatible types: > const void "==" int > cc: acomp failed for client.c http://hg.dovecot.org/imaptest/rev/7e490e59f1ee should fix it? From tss at iki.fi Sat Jan 28 19:57:29 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:57:29 +0200 Subject: [Dovecot] Password auth scheme question with mysql In-Reply-To: <4F1F46D7.7050600@wildgooses.com> References: <4F1F2B7F.3070005@wildgooses.com> <4F1F46D7.7050600@wildgooses.com> Message-ID: <143E640C-EE04-4B5B-B5A5-991AF3C2D567@iki.fi> On 25.1.2012, at 2.03, Ed W wrote: > The error seems to be that I set the "pass" variable in my password_query to set the master password for the upstream proxied-to server. I can't actually remember now why this was required, but it was necessary to allow the proxy to work correctly in the past. I guess this assumption needs revisiting now since it can't be used if the plain password isn't in the database... I'm not sure if I understand correctly, but if you need the user's plaintext password it's in the %w variable (assuming plaintext authentication).
So a common configuration is to use: '%w' AS pass From tss at iki.fi Sat Jan 28 19:58:55 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:58:55 +0200 Subject: [Dovecot] [PATCH] autoconf small fix In-Reply-To: References: Message-ID: <1BE2A6DE-DC86-4BC4-BFBC-E58A57361368@iki.fi> On 24.1.2012, at 17.58, Luca Di Vizio wrote: > the attached patch seems to solve a warning from autoconf: > > libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.in and > libtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree. I have considered it before, but I remember at one point there was some reason why I didn't want to do it. I just can't remember the reason anymore, maybe there isn't any.. But I don't really understand why libtoolize keeps complaining about that, since it works just fine as it is. From tss at iki.fi Sat Jan 28 20:06:01 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 20:06:01 +0200 Subject: [Dovecot] Quota-warning and setresgid In-Reply-To: References: Message-ID: <480D0593-2405-42B5-8EA9-9A66CD8F3B97@iki.fi> On 10.1.2012, at 11.34, l.chelchowski wrote: > Jan 10 10:15:06 lda: Debug: auth input: tester at domain.eu home=/home/vmail/domain.eu/tester/ mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public uid=101 gid=12 quota_rule=*:storage=2097 acl_groups= Note that userdb lookup returns gid=12(mail) > Jan 10 10:15:06 lda(tester at domain.eu): Fatal: setresgid(12(mail),12(mail),101(vmail)) failed with euid=101(vmail): Operation not permitted But you're running it with gid=101(vmail). > mail_gid = vmail > mail_privileged_group = vmail > mail_uid = vmail Here you're also using gid=101(vmail). (The mail_privileged_group=vmail is a useless setting BTW) > userdb { > args = /usr/local/etc/dovecot/dovecot-sql.conf > driver = sql > } My guess for the best fix: Change the user_query not to return uid or gid fields at all. From tss at iki.fi Sat Jan 28 20:23:12 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 20:23:12 +0200 Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command In-Reply-To: <201201231355.15051.jesus.navarro@bvox.net> References: <201201201724.41631.jesus.navarro@bvox.net> <4F19BD71.9000603@iki.fi> <201201231355.15051.jesus.navarro@bvox.net> Message-ID: <30046BB5-6E1C-41E5-9B04-787F568DE604@iki.fi> On 23.1.2012, at 14.55, Jesús M. Navarro wrote: >>> I'm having problems on a maildir due to dovecot returning an UID 0 to an >>> UID THREAD REFS command: > > I'm sending to your personal address a whole maildir that reproduces the bug > (it's very short) to avoid having it published in the mail archives. Thanks, I finally looked at this. The problem happens only when the THREADing isn't done for all messages. I thought this would have been a much more complex bug. Fixed: http://hg.dovecot.org/dovecot-2.0/rev/57498cad6ab9 From tss at iki.fi Sat Jan 28 20:29:36 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 20:29:36 +0200 Subject: [Dovecot] where is subscribed list stored? In-Reply-To: <4F1C1536.1000407@makuch.org> References: <4F1C1536.1000407@makuch.org> Message-ID: <544CBFAD-1A55-422A-9292-2D876E65AE53@iki.fi> On 22.1.2012, at 15.55, Michael Makuch wrote: > I use dovecot locally for internal only access to my email archives, of which I have many gigs of email archives. Over time I end up subscribing to a couple dozen different IMAP email folders.
Problem is that periodically my list of subscribed folders get zapped to none, and I have to go and re-subscribe to a dozen or two folders again. > > Anyone seen this happen? It looks like the list of subscribed folders is here ~/Mail/.subscriptions and I can see in my daily backup that it reflects what appears in TBird. What might be zapping it? I use multiple email clients simultaneously on different hosts. (IOW I leave them open) Is this a problem? Does dovecot manage that in some way? Or is that my problem? I don't think this is the problem since this only occurs like a few times per year. If it were the problem I'd expect it to occur much more frequently. No idea, but you could prevent it by making sure that it can't change the subscriptions: mail_location = mbox:~/Mail:CONTROL=~/mail-subscriptions mkdir ~/mail-subscriptions mv ~/Mail/.subscriptions ~/mail-subscriptions chmod 0500 ~/mail-subscriptions I thought Dovecot would also log an error if client tried to change subscriptions, but looks like it doesn't. It only returns failure to client: a unsubscribe INBOX a NO [NOPERM] No permission to modify subscriptions From tss at iki.fi Sat Jan 28 22:07:02 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 22:07:02 +0200 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi> On 13.1.2012, at 20.29, Mark Moseley wrote: > If there are multiple hosts, it seems like the most robust thing to do > would be to exhaust the existing connections and if none of those > succeed, then start a new connection to one of them. It will probably > result in much more convoluted logic but it'd probably match better > what people expect from a retry. Done: http://hg.dovecot.org/dovecot-2.0/rev/4e7676b890f1 From tss at iki.fi Sat Jan 28 22:24:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 22:24:49 +0200 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F158473.1000901@orlitzky.com> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> Message-ID: <2D5E0681-DF1F-4798-83BF-54648B2DAFB4@iki.fi> On 17.1.2012, at 16.23, Michael Orlitzky wrote: > First of all, feature request: > > doveconf -d > show the default value of all settings Done: http://hg.dovecot.org/dovecot-2.1/rev/41cb0217b7c3 From tss at iki.fi Sat Jan 28 22:42:21 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 22:42:21 +0200 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb):dsync umlaut problems In-Reply-To: <4F0FA0A7.10909@localhost.localdomain.org> References: <4F0FA0A7.10909@localhost.localdomain.org> Message-ID: <7D563028-0149-4A06-A7DF-9A3F7B84F805@iki.fi> On 13.1.2012, at 5.10, Pascal Volk wrote: > All umlauts in mailbox names are lost after converting mbox/Maildir > mailboxes to mdbox. Looks like it was a generic problem in v2.1 dsync. 
Fixed: http://hg.dovecot.org/dovecot-2.1/rev/ef6f3b7f6038 From tss at iki.fi Sat Jan 28 22:44:45 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 22:44:45 +0200 Subject: [Dovecot] moving mail out of alt storage In-Reply-To: References: <87sjnya3z5.fsf@algae.riseup.net> <1316077133.12936.18.camel@hurina> <87obylafsw.fsf_-_@algae.riseup.net> Message-ID: On 12.1.2012, at 20.32, Mark Moseley wrote: >>> On Wed, 2011-09-14 at 23:17 -0400, Micah Anderson wrote: >>>> I moved some mail into the alt storage: >>>> >>>> doveadm altmove -u johnd at example.com seen savedbefore 1w >>>> >>>> and now I want to move it back to the regular INBOX, but I can't see how >>>> I can do that with either 'altmove' or 'mailbox move'. >>> >>> Is this sdbox or mdbox? With sdbox you could simply "mv" the files. Or >>> apply patch: http://hg.dovecot.org/dovecot-2.0/rev/1910c76a6cc9 >> >> This is mdbox, which is why I am not sure how to operate because I am >> used to individual files as is with maildir. >> >> micah >> > > I'm curious about this too. Is moving the m.# file out of the ALT > path's storage/ directory into the non-ALT storage/ directory > sufficient? Or will that cause odd issues? You can manually move m.* files to alt storage and back. Just make sure that the same file isn't being simultaneously modified by Dovecot or you'll corrupt it. From tss at iki.fi Sat Jan 28 23:04:24 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 23:04:24 +0200 Subject: [Dovecot] dovecot 2.0.15 - purge errors In-Reply-To: <87hb00run6.fsf@alfa.kjonca> References: <87hb00run6.fsf@alfa.kjonca> Message-ID: <88D79565-2FEC-4B69-88F3-FC6F6AAB435A@iki.fi> On 13.1.2012, at 8.20, Kamil Jońca wrote: > Dovecot 2.0.15, debian package, have I lost some mails? How can I check > what is in the *.broken file? You can look at the .broken file with a text editor, for example :) > --8<---------------cut here---------------start------------->8--- > $doveadm -v purge > doveadm(kjonca): Error: Corrupted dbox file /home/kjonca/Mail/0/storage/m.6469 (around offset=291530): purging found mismatched offsets (291500 vs 299692, 60/215) 299692 - 291500 = 8192 = output stream's buffering size. I guess what happened is that sometime earlier Dovecot crashed while it was saving a message, but it had managed to write 8192 bytes. Now purging notices the extra 8192 bytes and wonders what to do about them, so it starts index rebuild, which probably adds it as a new message to mailbox. In future this check probably should be done before appending the next message to mdbox, so it's noticed earlier and it probably should delete the message instead of adding a partially saved message to mailbox. > doveadm(kjonca): Error: Corrupted dbox file /home/kjonca/Mail/0/storage/m.6469 (around offset=599914): metadata header has bad magic value This is about the same error as above. So, in short: Nothing to worry about. Although you could look into why the earlier saving crashed in the first place.
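For anyone who hits the same kind of mdbox corruption, here is a hedged sketch of the manual recovery steps implied above; the username and path are taken from Kamil's report, and the exact doveadm syntax may vary between 2.0.x releases:

# Force an index/storage rebuild for the affected user, then retry:
doveadm force-resync -u kjonca INBOX
doveadm -v purge -u kjonca

# Data the rebuild could not place cleanly is left in *.broken files,
# which are plain dbox data and can be read with any text editor:
less /home/kjonca/Mail/0/storage/*.broken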
From robert at schetterer.org Sat Jan 28 23:06:01 2012 From: robert at schetterer.org (Robert Schetterer) Date: Sat, 28 Jan 2012 22:06:01 +0100 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi> References: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi> Message-ID: <4F246339.708@schetterer.org> Am 28.01.2012 21:07, schrieb Timo Sirainen: > On 13.1.2012, at 20.29, Mark Moseley wrote: > >> If there are multiple hosts, it seems like the most robust thing to do >> would be to exhaust the existing connections and if none of those >> succeed, then start a new connection to one of them. It will probably >> result in much more convoluted logic but it'd probably match better >> what people expect from a retry. > > Done: http://hg.dovecot.org/dovecot-2.0/rev/4e7676b890f1 > Hi Timo, doc/example-config/dovecot-sql.conf.ext from hg has something like # Database connection string. This is driver-specific setting. # HA / round-robin load-balancing is supported by giving multiple host # settings, like: host=sql1.host.org host=sql2.host.org but I don't find it in http://wiki2.dovecot.org/AuthDatabase/SQL -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From tss at iki.fi Sat Jan 28 23:47:56 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 23:47:56 +0200 Subject: [Dovecot] dovecot 2.0.15 - purge errors In-Reply-To: <88D79565-2FEC-4B69-88F3-FC6F6AAB435A@iki.fi> References: <87hb00run6.fsf@alfa.kjonca> <88D79565-2FEC-4B69-88F3-FC6F6AAB435A@iki.fi> Message-ID: <3F7BA98D-9295-4823-80E5-A647FBD71D68@iki.fi> On 28.1.2012, at 23.04, Timo Sirainen wrote: > 299692 - 291500 = 8192 = output stream's buffering size. I guess what happened is that sometime earlier Dovecot crashed while it was saving a message, but it had managed to write 8192 bytes. Now purging notices the extra 8192 bytes and wonders what to do about them, so it starts index rebuild, which probably adds it as a new message to mailbox. > > In future this check probably should be done before appending the next message to mdbox, so it's noticed earlier Done: http://hg.dovecot.org/dovecot-2.1/rev/bde005e302e0 > and it probably should delete the message instead of adding a partially saved message to mailbox. Not done. Safer to not delete any data. From tss at iki.fi Sat Jan 28 23:54:17 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 23:54:17 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <4F0DC747.4070505@gmail.com> References: <4F06D5D9.20001@gmail.com> <4F06DFF5.40707@hardwarefreak.com> <4F06F0E7.904@gmail.com> <4F0DC747.4070505@gmail.com> Message-ID: On 11.1.2012, at 19.30, Adrian Minta wrote: > Hello, > > I tested with "mail_location = whatever-you-have-now:INDEX=MEMORY" and it seems to help, but in the meantime I found another option, completely undocumented, that seems to do exactly what I wanted: > protocol lda { > mailbox_list_index_disable = yes > > } > > Does anyone know exactly what "mailbox_list_index_disable" does and if it is still available in the 2.0 and 2.1 branches? mailbox_list_index_disable does absolutely nothing in v2.0, and it defaults to "no" in v2.1 also. It's about a different kind of index.
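A hedged sketch of the approach that did help earlier in this thread - keeping index updates in memory for deliveries only, so dovecot-lda never writes on-disk indexes; the maildir path is an assumed example, so substitute the real mail_location:

# dovecot.conf (sketch)
protocol lda {
  # Deliveries parse what they need in memory; the on-disk
  # indexes are left to the IMAP processes.
  mail_location = maildir:~/Maildir:INDEX=MEMORY
}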
From tss at iki.fi Sun Jan 29 00:00:27 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:00:27 +0200 Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH In-Reply-To: <20120111155746.BD7BDDA030B2B@bmail06.one.com> References: <20120111155746.BD7BDDA030B2B@bmail06.one.com> Message-ID: On 11.1.2012, at 17.57, Anders wrote: > Sorry, apparently I was a bit too fast there. ADDTO and REMOVEFROM should not > be sent by a client, but I think that a client can send CONTEXT as a hint to > the server, see > > http://tools.ietf.org/html/rfc5267#section-4.2 Yes, that was a bug. Thanks, fixed: http://hg.dovecot.org/dovecot-2.0/rev/fd16e200f0f7 From tss at iki.fi Sun Jan 29 00:04:16 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:04:16 +0200 Subject: [Dovecot] sieve under lmtp using wrong homedir ? In-Reply-To: References: Message-ID: On 11.1.2012, at 17.35, Frank Post wrote: > All is working well except lmtp. Sieve scripts are correctly saved under > /var/vmail/test.com/test/sieve, but under lmtp sieve will use > /var/vmail//testuser/ > Uid testuser has mail=test at test.com configured in ldap. > > As i could see in the debug logs, there is a difference between the auth > "master out" lines, but why ? .. > Jan 11 14:39:53 auth: Debug: master in: USER 1 testuser > service=lmtp lip=10.234.201.9 rip=10.234.201.4 This means that Dovecot LMTP got: RCPT TO: instead of: RCPT TO: You probably should fix your userdb lookup so that that would return "unknown user" instead of accepting it. But the real problem is anyway in your MTA setup. From tss at iki.fi Sun Jan 29 00:17:44 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:17:44 +0200 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <4F246339.708@schetterer.org> References: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi> <4F246339.708@schetterer.org> Message-ID: <9B43B5C1-8375-43E9-8CA3-722F601846A2@iki.fi> On 28.1.2012, at 23.06, Robert Schetterer wrote: > doc/example-config/dovecot-sql.conf.ext > from hg > has something like > > # Database connection string. This is driver-specific setting. > # HA / round-robin load-balancing is supported by giving multiple host > # settings, like: host=sql1.host.org host=sql2.host.org > > but i dont find it in > http://wiki2.dovecot.org/AuthDatabase/SQL I added something about it there. From tss at iki.fi Sun Jan 29 00:20:06 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:20:06 +0200 Subject: [Dovecot] maildir vs mdbox In-Reply-To: References: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi> <548DDD91-D0F1-49F7-8E8D-3EA03DF72397@iki.fi> Message-ID: On 28.1.2012, at 19.02, Jean-Daniel Beaubien wrote: > Btw, when I migrate my emails from Maildir to mdbox, dsync should take into > account the rotate_size parameter. If I want to change the rotate_size > parameter, I simply edit the config file, change the parameter (erase the > mdbox folder?) and re-run dsync. Is that correct? Yes. You can also give -o mdbox_rotate_size=X parameter to dsync to override the config. The mdbox_rotate_size is started to be used immediately, so if you increase it Dovecot may start appending new mails to old files. The existing files aren't immediately shrunk, but during purge when writing new files the files can become smaller. 
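A one-off migration run that overrides the configured rotate size would then look something like this sketch (the size and the user are placeholders):

    dsync -o mdbox_rotate_size=16M -u johnd@example.com backup mdbox:~/mdbox

As Timo notes above, the overridden value takes effect immediately for newly written files, while already-written m.* files only shrink later as purge rewrites them.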
From tss at iki.fi Sun Jan 29 00:26:01 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:26:01 +0200 Subject: [Dovecot] compressed mboxes very slow In-Reply-To: <8739blw6gl.fsf@alfa.kjonca> References: <87iptnoans.fsf@alfa.kjonca> <8739blw6gl.fsf@alfa.kjonca> Message-ID: <0C550F94-3CAE-4B0E-9E95-B6E1A708DBA0@iki.fi> I wonder if this patch helps here: http://hg.dovecot.org/dovecot-2.0/rev/9b2931607063 At least I can't now see any slowness with either v2.1 or the latest v2.0. But I don't know if I would have slowness with older versions either.. From robert at schetterer.org Sun Jan 29 00:27:22 2012 From: robert at schetterer.org (Robert Schetterer) Date: Sat, 28 Jan 2012 23:27:22 +0100 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <9B43B5C1-8375-43E9-8CA3-722F601846A2@iki.fi> References: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi> <4F246339.708@schetterer.org> <9B43B5C1-8375-43E9-8CA3-722F601846A2@iki.fi> Message-ID: <4F24764A.1080207@schetterer.org> Am 28.01.2012 23:17, schrieb Timo Sirainen: > On 28.1.2012, at 23.06, Robert Schetterer wrote: > >> doc/example-config/dovecot-sql.conf.ext >> from hg >> has something like >> >> # Database connection string. This is driver-specific setting. >> # HA / round-robin load-balancing is supported by giving multiple host >> # settings, like: host=sql1.host.org host=sql2.host.org >> >> but i dont find it in >> http://wiki2.dovecot.org/AuthDatabase/SQL > > I added something about it there. > cool thanks ! -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From tss at iki.fi Sun Jan 29 00:36:25 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:36:25 +0200 Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well? In-Reply-To: <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi> References: <20120109074057.GC22506@charite.de> <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi> Message-ID: On 9.1.2012, at 16.57, Timo Sirainen wrote: >> After that, the SAVEDON date for all mails was reset to today: > > Yeah. The "save date" is stored only in index. And index rebuild drops all those fields. I guess this could/should be fixed in index rebuild. Fixed: http://hg.dovecot.org/dovecot-2.0/rev/c30ea8aec902 From tss at iki.fi Sun Jan 29 00:38:54 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:38:54 +0200 Subject: [Dovecot] Attribute Cache flush errors on FreeBSD 8.2 In-Reply-To: <4F079021.4090001@kernick.org> References: <4F079021.4090001@kernick.org> Message-ID: On 7.1.2012, at 2.21, Phil Kernick wrote: > I'm running dovecot 2.0.16 on FreeBSD 8.2 with the mail spool and indexes on an NFS server. > > Lines like the following keep appearing in syslog for access to each mailbox: > > Error: nfs_flush_attr_cache_fd_locked: fchown(/home/philk/Mail/Deleted) failed: Bad file descriptor I've given up on trying to make mail_nfs_* settings work. If you have only one Dovecot server, you don't need these settings at all. If you have more than one Dovecot server, use director (and then you also don't need these settings). 
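Since director keeps being the answer for multi-server NFS setups, here is a minimal sketch of the proxy-side configuration, abridged from the wiki's director example; the IP addresses are placeholders, and pop3-login would be extended the same way as imap-login:

    director_servers = 10.0.0.1 10.0.0.2
    director_mail_servers = 10.1.0.1-10.1.0.10
    service imap-login {
      executable = imap-login director
    }
    service director {
      fifo_listener login/proxy-notify {
        mode = 0666
      }
      unix_listener login/director {
        mode = 0666
      }
      inet_listener {
        port = 9090
      }
    }

The point of director is that each user is always routed to the same backend, so only one server's attribute caches are ever live for a given user's files and the mail_nfs_* flushing becomes unnecessary.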
From tss at iki.fi Sun Jan 29 00:40:53 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:40:53 +0200 Subject: [Dovecot] 2.1.rc1 (056934abd2ef): virtual plugin mailbox search pattern In-Reply-To: <4EF4BB6C.3050902@gmx.de> References: <4EF4BB6C.3050902@gmx.de> Message-ID: <1F065FD5-11B7-44C0-A4CB-96B346801986@iki.fi> On 23.12.2011, at 19.33, e-frog wrote: > For testing propose I created the following folders with each containing one unread message > > INBOX, INBOX/level1 and INBOX/level1/level2 .. > Result: virtual/unread shows only 1 unseen message. Further tests showed it's the one from INBOX. The mails from the deeper levels are not found. What mailbox format are you using? Maybe I fixed this with http://hg.dovecot.org/dovecot-2.1/rev/54e74090fb42 From ronald at rmacd.com Sun Jan 29 01:16:19 2012 From: ronald at rmacd.com (Ronald MacDonald) Date: Sat, 28 Jan 2012 18:16:19 -0500 Subject: [Dovecot] Migration to multi-dbox and SiS Message-ID: Dear list, A huge thank-you first of all for all the work that's gone into Dovecot itself. I'm rebuilding a mail server next week and so, taking the rare opportunity to re-consider all the options I've had running over the past couple of years. Around the time of the last re-build (2010), there had been some discussion on single instance storage, which was quite new on Dovecot around then. I chickened out of setting it up though. Now with it having been in the wild for a couple of years, I wonder, how have people found SiS to behave? Additionally, though there was talk of the prospect of it being merged with 2.x am I right in thinking it's not yet in the main project? Couldn't find any 2.x changelogs that could confirm this. With best wishes, Ronald. From tss at iki.fi Sun Jan 29 01:53:59 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 01:53:59 +0200 Subject: [Dovecot] Migration to multi-dbox and SiS In-Reply-To: References: Message-ID: <7E4C5ED4-BE84-4638-8D2E-51D25FF88EB5@iki.fi> On 29.1.2012, at 1.16, Ronald MacDonald wrote: > Around the time of the last re-build (2010), there had been some discussion on single instance storage, which was quite new on Dovecot around then. I chickened out of setting it up though. Now with it having been in the wild for a couple of years, I wonder, how have people found SiS to behave? Additionally, though there was talk of the prospect of it being merged with 2.x am I right in thinking it's not yet in the main project? Couldn't find any 2.x changelogs that could confirm this. It's in v2.0 and used by at least a few installations. Apparently it works quite well. As long as you have a pretty typical setup it should work fine. It gets more complex if you want to spread the data across multiple mount points. Backups may also be more difficult, since filesystem snapshots are pretty much the only 100% safe way to do them. BTW. SIS, not SiS ("Instance", not "in") From stan at hardwarefreak.com Sun Jan 29 02:25:50 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Sat, 28 Jan 2012 18:25:50 -0600 Subject: [Dovecot] XFS Developer Takes Shots At Btrfs, EXT4 In-Reply-To: <4F2418E5.2020107@gmail.com> References: <4F2418E5.2020107@gmail.com> Message-ID: <4F24920E.6080500@hardwarefreak.com> On 1/28/2012 9:48 AM, Adrian Minta wrote: > Nice article about XFS improvements: > http://tinyurl.com/7pvr9ju The "article" is strictly a badly written summary of the video. But, the video was great. 
Until watching this I'd never seen Dave in a photo or video, though I correspond with him regularly on the XFS list. Nice to finally put a face and voice to a name. One of many reasons the summary is badly written is the use of present tense when referring to XFS deficiencies, specifically the part about EXT4 being 20-50x faster with some metadata operations. The author writes as if this was the current state of affairs right up to Dave's recent presentation. The author misread or misinterpreted the slides or Dave's speech, and apparently has no personal knowledge of Linux filesystem development. This 20-50x EXT4 advantage disappeared in 2009, almost 3 years ago. I've mentioned many of these "new" improvements on this list over the past 2-3 years. They're not "new". We have an "author" writing about something he knows nothing about, and making lots of mistakes in his summary. This seems to be a trend with Phoronix. They are decided desktop-only oriented. Thus when they attempt to write about the big stuff they fail badly. And the title? Juvenile attempt to draw readers. Pretty pathetic. The "article" was all about Dave's presentation. Dave's 50 minute presentation took 2 "shots" of 10-15 seconds each at EXT4 and BTRFS. A better title would have been simply something like "XFS dev details improvements at Linux.Conf.Au 2012." -- Stan From moseleymark at gmail.com Sun Jan 29 06:04:44 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Sat, 28 Jan 2012 20:04:44 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi> References: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi> Message-ID: On Sat, Jan 28, 2012 at 12:07 PM, Timo Sirainen wrote: > On 13.1.2012, at 20.29, Mark Moseley wrote: > >> If there are multiple hosts, it seems like the most robust thing to do >> would be to exhaust the existing connections and if none of those >> succeed, then start a new connection to one of them. It will probably >> result in much more convoluted logic but it'd probably match better >> what people expect from a retry. > > Done: http://hg.dovecot.org/dovecot-2.0/rev/4e7676b890f1 > Excellent, thanks! From e-frog at gmx.de Sun Jan 29 10:33:01 2012 From: e-frog at gmx.de (e-frog) Date: Sun, 29 Jan 2012 09:33:01 +0100 Subject: [Dovecot] 2.1.rc1 (056934abd2ef): virtual plugin mailbox search pattern In-Reply-To: <1F065FD5-11B7-44C0-A4CB-96B346801986@iki.fi> References: <4EF4BB6C.3050902@gmx.de> <1F065FD5-11B7-44C0-A4CB-96B346801986@iki.fi> Message-ID: <4F25043D.7000501@gmx.de> On 28.01.2012 23:40, wrote Timo Sirainen: > On 23.12.2011, at 19.33, e-frog wrote: > >> For testing propose I created the following folders with each containing one unread message >> >> INBOX, INBOX/level1 and INBOX/level1/level2 > .. >> Result: virtual/unread shows only 1 unseen message. Further tests showed it's the one from INBOX. The mails from the deeper levels are not found. > > What mailbox format are you using? mdbox > Maybe I fixed this with http://hg.dovecot.org/dovecot-2.1/rev/54e74090fb42 Just tested and yes it works with the above mentioned patch. Thanks a lot Timo! 
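For anyone wanting to reproduce e-frog's test: the virtual/unread folder comes from the virtual plugin and is defined roughly like this (locations are placeholders; the two lines of the dovecot-virtual file mean "look in all folders" and "match unseen messages"):

    mail_plugins = virtual
    namespace {
      prefix = virtual/
      location = virtual:~/virtual
    }

    # ~/virtual/unread/dovecot-virtual
    *
    unseen

With the fix Timo references, the * pattern also finds mails in nested folders such as INBOX/level1/level2.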
From adrian.minta at gmail.com Sun Jan 29 12:08:44 2012 From: adrian.minta at gmail.com (Adrian Minta) Date: Sun, 29 Jan 2012 12:08:44 +0200 Subject: [Dovecot] XFS Developer Takes Shots At Btrfs, EXT4 In-Reply-To: <4F24920E.6080500@hardwarefreak.com> References: <4F2418E5.2020107@gmail.com> <4F24920E.6080500@hardwarefreak.com> Message-ID: <4F251AAC.4050803@gmail.com> On 01/29/12 02:25, Stan Hoeppner wrote: > The "article" is strictly a badly written summary of the video. But, > the video was great. Until watching this I'd never seen Dave in a photo > or video, though I correspond with him regularly on the XFS list. Nice > to finally put a face and voice to a name. Yes, the video is very nice. From CMarcus at Media-Brokers.com Sun Jan 29 20:00:49 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 29 Jan 2012 13:00:49 -0500 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <2D5E0681-DF1F-4798-83BF-54648B2DAFB4@iki.fi> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <2D5E0681-DF1F-4798-83BF-54648B2DAFB4@iki.fi> Message-ID: <4F258951.20006@Media-Brokers.com> On 2012-01-28 3:24 PM, Timo Sirainen wrote: > On 17.1.2012, at 16.23, Michael Orlitzky wrote: > >> First of all, feature request: >> >> doveconf -d >> show the default value of all settings > > Done: http://hg.dovecot.org/dovecot-2.1/rev/41cb0217b7c3 Awesome, thanks Timo! This makes it much easier to make sure that you aren't specifying anything which would be the same as the default, which minimizes doveconf -n 'noise'... -- Best regards, Charles From user+dovecot at localhost.localdomain.org Mon Jan 30 01:36:24 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Mon, 30 Jan 2012 00:36:24 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): Panic: file ostream.c: line 173 (o_stream_sendv): assertion failed: (stream->stream_errno != 0) In-Reply-To: <4F10AA71.6030901@localhost.localdomain.org> References: <4F10AA71.6030901@localhost.localdomain.org> Message-ID: <4F25D7F8.7060609@localhost.localdomain.org> Looks like http://hg.dovecot.org/dovecot-2.1/rev/3c0bd1fd035b has solved the problem. -- The trapper recommends today: fabaceae.1203000 at localdomain.org From bryder at wetafx.co.nz Mon Jan 30 02:05:58 2012 From: bryder at wetafx.co.nz (Bill Ryder) Date: Mon, 30 Jan 2012 13:05:58 +1300 Subject: [Dovecot] A namespace error on 2.1rc5 Message-ID: <4F25DEE6.7020402@wetafx.co.nz> Hello all, I'm not sure if this is a bug. It's probably just an upgrade note. In summary I had no namespace section in my 2.0.17 config. When trying out 2.1rc5 no user could login because of a namespace error. 2.1rc5 adds a default namespace clause which broke my logins (It was noted in the changelog) I seemed to fix it by just putting this in the config file: namespace inbox { inbox = yes } Long story: I've been recently testing dovecot against cyrus to decide where we should go for our next mail server(s) I loaded up the mail server with mail delivered via postfix all on dovecot 2.0.15 (I've since moved to 2.0.17) I have three dovecot directors, two backends on the same NFS mail store. With dovecot 2.0.xx the tester works fine (it't just a script which logins in and emulates thunderbird when a user is idle - without using IDLE so the client asks for mail every few minutes). When I moved to 2.1rc5 I got namespace errors and the user can not login. 
The server said: dovecot-error.log-20120128:Jan 27 13:37:59 imap(ethab01): Error: user ethab01: Initialization failed: namespace configuration error: inbox=yes namespace missing The client says: * BYE Internal error occurred. Refer to server log for more information. The session looks like 0.000000 192.168.121.37 -> 192.168.121.2 TCP 33213 > imap [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=1056649457 TSER=0 WS=5 0.000036 192.168.121.2 -> 192.168.121.37 TCP imap > 33213 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 TSV=3264407631 TSER=1056649457 WS=7 0.000187 192.168.121.37 -> 192.168.121.2 TCP 33213 > imap [ACK] Seq=1 Ack=1 Win=5856 Len=0 TSV=1056649458 TSER=3264407631 0.006338 192.168.121.2 -> 192.168.121.37 IMAP Response: * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN] Dovecot ready. 0.006889 192.168.121.37 -> 192.168.121.2 TCP 33213 > imap [ACK] Seq=1 Ack=124 Win=5856 Len=0 TSV=1056649465 TSER=3264407637 0.006973 192.168.121.37 -> 192.168.121.2 IMAP Request: I ID ("x-originating-ip" "192.168.114.249" "x-originating-port" "49403" "x-connected-ip" "192.168.121.37" "x-connected-port" "143") 0.006980 192.168.121.2 -> 192.168.121.37 TCP imap > 33213 [ACK] Seq=124 Ack=178 Win=6912 Len=0 TSV=3264407638 TSER=1056649465 0.007086 192.168.121.2 -> 192.168.121.37 IMAP Response: * ID NIL 0.018471 192.168.121.2 -> 192.168.121.37 IMAP Response: * BYE Internal error occurred. Refer to server log for more information. (interestingly the tshark output strips out the user name and password which is convenient but which may mean there's not enough information?) I rolled back to 2.0.17 and it was fine again. It's the same config files for both, same maildirs etc etc. All I did was change the dovecot version from 2.0.17 to 2.1rc5 However I see from the changelog that 2.1rc5 added a default namespace inbox: diff doveconf-n.2.0.17 doveconf-n.2.1-rc5 1c1 < # 2.0.17 (684381041dc4+): /etc/dovecot/dovecot.conf --- > # 2.1.rc5: /etc/dovecot/dovecot.conf 20a21,39 > namespace inbox { > location = > mailbox Drafts { > special_use = \Drafts > } > mailbox Junk { > special_use = \Junk > } > mailbox Sent { > special_use = \Sent > } > mailbox "Sent Messages" { > special_use = \Sent > } > mailbox Trash { > special_use = \Trash > } > prefix = > } We had this section commented out in 2.0.x so there was no namespace inbox anywhere. 
============== doveconf -n for 2.0.17 (for the backends)

# 2.0.17 (684381041dc4+): /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-131.6.1.el6.x86_64 x86_64 Scientific Linux release 6.1 (Carbon) nfs
auth_mechanisms = plain login
auth_username_format = %n
auth_verbose = yes
debug_log_path = /var/log/dovecot/dovecot-debug.log
disable_plaintext_auth = no
first_valid_uid = 200
info_log_path = /var/log/dovecot/dovecot-info.log
log_path = /var/log/dovecot/dovecot-error.log
mail_debug = yes
mail_fsync = always
mail_gid = vmail
mail_location = maildir:/vol/dt_mailstore1/spool/%n:INDEX=/var/indexes/%n
mail_nfs_storage = yes
mail_plugins = " fts fts_solr mail_log notify quota"
mail_uid = vmail
maildir_very_dirty_syncs = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
passdb {
  driver = pam
}
plugin {
  autocreate = Trash
  autocreate2 = Drafts
  autocreate3 = Sent
  autocreate4 = Templates
  autosubscribe = Trash
  autosubscribe2 = Drafts
  autosubscribe3 = Sent
  autosubscribe4 = Templates
  fts = solr
  fts_solr = break-imap-search debug url=http://dovecot-solr1.wetafx.co.nz:8080/solr/
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
}
protocols = imap pop3 lmtp sieve
service auth {
  unix_listener auth-userdb {
    group = vmail
    user = vmail
  }
}
service lmtp {
  inet_listener lmtp {
    address = 192.168.121.2 127.0.0.1
    port = 24
  }
  process_min_avail = 20
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0660
    user = postfix
  }
}
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
  inet_listener sieve_deprecated {
    port = 2000
  }
}
ssl_cert = <

hi folks, hi timo, hi master of "Fu". I just migrated my emails from Maildir to mbox format. I did it because reading speed was a problem with my webmail, and I wanted to optimize that. Does my current config work for me? sincerely -- http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xC2626742 gpg --keyserver pgp.mit.edu --recv-key C2626742 http://urlshort.eu fakessh @ http://gplus.to/sshfake http://gplus.to/sshswilting http://gplus.to/john.swilting https://lists.fakessh.eu/mailman/ This list is moderated by me, but all applications will be accepted provided they receive a note of presentation -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part URL: From gedalya at gedalya.net Mon Jan 30 05:29:51 2012 From: gedalya at gedalya.net (Gedalya) Date: Sun, 29 Jan 2012 22:29:51 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: References: <4F20D718.9010805@gedalya.net> Message-ID: <4F260EAF.4090408@gedalya.net> On 01/26/2012 07:27 AM, Timo Sirainen wrote: > On 26.1.2012, at 6.31, Gedalya wrote: >> I'm facing the need to migrate from a proprietary IMAP server to Dovecot. The migration must be as smooth and transparent as possible. >> >> The mailbox format I would want to use is Maildir++. >> >> The storage format used by the current server is unknown, and I don't look forward to trying to reverse-engineer it. This leaves me with the option of reading the mailboxes using IMAP. There are tools like offlineimap or mbsync, and they do store the UID and UIDVALIDITY info.
The last piece of the puzzle is a process to properly create the dovecot-uidlist and dovecot-uidvalidity files. So far I wasn't able to find anything on this. Are there any tips? Are there any tools available to do this job, or part of it? > Get Dovecot v2.1 and configure it to work. Then for migration add to dovecot.conf: > > imapc_host = imap.example.com > imapc_port = 993 > imapc_ssl = imaps > imapc_ssl_ca_dir = /etc/ssl/certs > mail_prefetch_count = 50 > > And do the migration one user at a time: > > doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc: > Now, to the issue of POP3. The old system uses the message filename for UIDL, but we need to migrate via IMAP in order to preserve IMAP info and UIDs (which have nothing to do with the POP3 UIDL in this case). So I've just finished writing a script to insert X-UIDL headers, and pop3_reuse_xuidl is doing the job. Question: Since the system currently serves in excess of 10 pop3 connections per second, would there be any performance gain from using pop3_save_uidl? Would it be faster or slower to fetch the UIDL list from the uidlist rather than look up the X-UIDL in the index? Just wondering. Also, what order does dovecot return the UIDLs in? From tss at iki.fi Mon Jan 30 08:31:39 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 30 Jan 2012 08:31:39 +0200 Subject: [Dovecot] Mountpoints Message-ID: <563DC292-C26B-42FD-9E0D-119A5ECC451B@iki.fi> I've been thinking about mountpoints recently. There have been a few problems related to them: - If dbox mails and indexes are in different filesystems, and index fs isn't mounted and mailbox is accessed -> Dovecot rebuilds indexes from scratch, which changes UIDVALIDITY, which causes client to redownload mails. All mails will also show up as unread. Once index fs gets mounted again, the UIDVALIDITY changes again and client again redownloads mails. What should happen instead is that Dovecot simply refuses to rebuild indexes when the index fs isn't mounted. This isn't as critical for mbox/maildir, but probably a good idea to do there as well. - If dbox's alternative storage isn't mounted and a mail from there is tried to be accessed -> Dovecot rebuilds indexes and sees that all mails in alt path are gone, so Dovecot also deletes them from indexes as well. Once alt fs is mounted again, the mails in there won't come back without manual index rebuild and then they have also lost flags and have updated UIDs causing clients to redownload them. So again what should happen is that Dovecot won't rebuild indexes while alt fs isn't mounted. - For dsync-based replication I need to keep a state of each mountpoint (online, offline, failover) to determine how to access user's mails. So in the first two cases the main problem is: How does Dovecot know where a mountpoint begins? If the mountpoint is actually mounted there is no problem, because there are functions to find it (e.g. from /etc/mtab). So how to find a mountpoint that should exist, but doesn't? In some OSes Dovecot could maybe read and parse /etc/fstab, but that doesn't exist in all OSes, and do all installations even have all of the filesystems listed there anyway? (They could be in some startup script.) So, I was thinking about adding doveadm commands to explicitly tell Dovecot about the mountpoints that it needs to care about. When no mountpoints are defined Dovecot would behave as it does now. 
doveadm mount add|remove - add/remove mountpoint doveadm mount state [ []] - get/set state of mountpoint (used by replication) - if path isn't given list states of all mountpoints List of mountpoints is kept in /var/lib/dovecot/mounts. But because the dovecot directory is only accessible to root (and probably too much trouble to change that), there's another list in /var/run/dovecot/mounts. This one also contains the states of the mounts. When Dovecot starts up and can't find the mounts from rundir, it creates it from vardir's mounts. When mail processes notice that a directory is missing, it usually autocreates it. With mountpoints enabled, Dovecot first finds the root mountpoint for the directory. The mount root is stat()ed and its parent is stat()ed. If their device numbers equal, the filesystem is unmounted currently, and Dovecot fails instead of creating a new directory. Similar logic is used to avoid doing a dbox rebuild if its alt dir is currently in unmounted filesystem. The main problem I see with all this is how to make sysadmins remember to use these commands when they add/remove mountpoints?.. Perhaps the additions could be automatic at startup. Whenever Dovecot sees a new mountpoint, it's added. If an old mountpoint doesn't exist at startup a warning is logged about it. Of course many of the mountpoints aren't intended for mail storage. They could be hidden from the "mount state" list by setting their state to "ignore". Dovecot could also skip some of the common known mountpoints, such as where type is proc/tmpfs/sysfs. Thoughts? From tss at iki.fi Mon Jan 30 08:34:08 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 30 Jan 2012 08:34:08 +0200 Subject: [Dovecot] Mountpoints In-Reply-To: <563DC292-C26B-42FD-9E0D-119A5ECC451B@iki.fi> References: <563DC292-C26B-42FD-9E0D-119A5ECC451B@iki.fi> Message-ID: <8CA38742-F346-4925-82D7-B282E6B284FF@iki.fi> On 30.1.2012, at 8.31, Timo Sirainen wrote: > The main problem I see with all this is how to make sysadmins remember to use these commands when they add/remove mountpoints?.. Perhaps the additions could be automatic at startup. Whenever Dovecot sees a new mountpoint, it's added. If an old mountpoint doesn't exist at startup a warning is logged about it. Of course many of the mountpoints aren't intended for mail storage. They could be hidden from the "mount state" list by setting their state to "ignore". Dovecot could also skip some of the common known mountpoints, such as where type is proc/tmpfs/sysfs. I wonder how automounts would work with this.. Probably rather randomly.. From Juergen.Obermann at hrz.uni-giessen.de Mon Jan 30 09:57:22 2012 From: Juergen.Obermann at hrz.uni-giessen.de (=?UTF-8?Q?J=C3=BCrgen_Obermann?=) Date: Mon, 30 Jan 2012 08:57:22 +0100 Subject: [Dovecot] problem compiling imaptest under solaris In-Reply-To: <3A621688-A7AE-4C08-96EA-D9668ECA02D1@iki.fi> References: <89f61bff49f4c5343be06dd45459b14a@imapproxy.hrz> <3A621688-A7AE-4C08-96EA-D9668ECA02D1@iki.fi> Message-ID: <1dfabd64651b84755e914d4510ba1310@imapproxy.hrz> Am 28.01.2012 18:55, schrieb Timo Sirainen: > On 25.1.2012, at 16.43, J?rgen Obermann wrote: > >> today I tried to compile imaptest under solaris 10 with studio 11 >> compiler and got the following error: >> >> gmake[2]: Entering directory >> `/net/fileserv/export/sunsrc/src/imaptest-20111119/src' >> source='client.c' object='client.o' libtool=no \ >> DEPDIR=.deps depmode=none /bin/bash ../depcomp \ >> cc -DHAVE_CONFIG_H -I. -I. -I.. 
-I/opt/local/include/dovecot >> -I/usr/local/include -fast -xarch=v8plusa -I/usr/sfw/include -c >> client.c >> "/opt/local/include/dovecot/imap-util.h", line 6: warning: useless >> declaration >> "client-state.h", line 6: warning: useless declaration >> "client.c", line 655: operand cannot have void type: op "==" >> "client.c", line 655: operands have incompatible types: >> const void "==" int >> cc: acomp failed for client.c > > http://hg.dovecot.org/imaptest/rev/7e490e59f1ee should fix it? Yes it does. Thank you, J?rgen Obermann From f.bonnet at esiee.fr Mon Jan 30 10:37:59 2012 From: f.bonnet at esiee.fr (Frank Bonnet) Date: Mon, 30 Jan 2012 09:37:59 +0100 Subject: [Dovecot] converting from mbox to maildir ? Message-ID: <4F2656E7.8060501@esiee.fr> Hello We are planning to convert our mailhub ( freebsd 7.4 ) from mbox format to maildir format. I've read the documentation and performed some tests on another machine it is a bit long ... I would like some feedback from guys who did this operation and need some advice on what to convert first ? - first convert INBOX then convert IMAP folders ? - first convert IMAP folders then convert INBOX ? the machine use real users thru openldap ( pam_ldap + nss_ldap ) another problem is disk space. The users's email data takes about 2 Terabytes of data and I cannot duplicate as I only have 3 Tb on the raid array of the server. My idea is to use one of our NFS netapp filer during the convertion to throw the result of the convertion on an NFS mounted directory. Anyone did this before ? If yes I would be greatly interrested by their experience Thank you From alexis.lelion at gmail.com Mon Jan 30 11:24:02 2012 From: alexis.lelion at gmail.com (Alexis Lelion) Date: Mon, 30 Jan 2012 10:24:02 +0100 Subject: [Dovecot] LMTP : Can't handle mixed proxy/non-proxy destinations In-Reply-To: <33BD52FA-1FE0-46D5-A1E8-9A54C406BE64@iki.fi> References: <33BD52FA-1FE0-46D5-A1E8-9A54C406BE64@iki.fi> Message-ID: On 1/28/12, Timo Sirainen wrote: > On 27.1.2012, at 12.59, Alexis Lelion wrote: > >> Jan 25 09:05:12 mail01 postfix/lmtp[23934]: A92709300DB: to=< >> user_on_mail02 at domain.com>, relay=mail01.domain.com[private/dovecot-lmtp], >> delay=0.07, delays=0.01/0/0/0.06, dsn=4.3.0, status=deferred (host >> mail01.domain.com[private/dovecot-lmtp] said: 451 4.3.0 < >> user_on_mail02 at domain.com> Can't handle mixed proxy/non-proxy destinations >> (in reply to RCPT TO command)) >> >> I was wondering if there was another way of handling this, for example >> by triggering an immediate queue lookup from postfix or forwarding a >> copy of the mail to the other server. Note that the postfix >> "queue_run_delay" was increased to 15min on purpose, so I cannot change >> that. > > It would be possible to change the code to support mixed destinations, but > it's probably not a simple change and I have other things to do.. Yes I understand, this is a quite specific request, and not that impacting actually. But it would be cool if you could keep this request somewhere in your queue :-) > > Maybe you could work around it so that LMTP always proxies the mails, to > localhost as well, but to a different port which doesn't do proxying at all. Actually this was my first try, but I had proxying loops because unlike for IMAP, the LMTP server doesn't seem to support 'proxy_maybe' option yet, does it? 
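A rough sketch of the workaround Timo suggested earlier in this thread (always proxy LMTP, with a second listener that never proxies). This is untested, the port numbers are made up, and it assumes the passdb query can key on the local port (%a) so the two listeners get different answers:

    lmtp_proxy = yes
    service lmtp {
      inet_listener lmtp {
        port = 24      # postfix delivers here; lookups return proxy fields
      }
      inet_listener lmtp-direct {
        port = 7026    # proxy target; lookups return nothing, delivery is local
      }
    }

    # dovecot-sql.conf.ext: hand out proxy fields only for lookups on port 24
    password_query = SELECT NULL AS password, 'Y' AS nopassword, \
      host, 7026 AS port, 'Y' AS proxy \
      FROM email WHERE userid = '%n' AND domain = '%d' AND '%a' = '24'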
> > From jesus.navarro at bvox.net Mon Jan 30 12:48:43 2012 From: jesus.navarro at bvox.net (=?iso-8859-1?q?Jes=FAs_M=2E?= Navarro) Date: Mon, 30 Jan 2012 11:48:43 +0100 Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command In-Reply-To: <30046BB5-6E1C-41E5-9B04-787F568DE604@iki.fi> References: <201201201724.41631.jesus.navarro@bvox.net> <201201231355.15051.jesus.navarro@bvox.net> <30046BB5-6E1C-41E5-9B04-787F568DE604@iki.fi> Message-ID: <201201301148.43979.jesus.navarro@bvox.net> Hi Timo: On S?bado, 28 de Enero de 2012 19:23:12 Timo Sirainen escribi?: > On 23.1.2012, at 14.55, Jes?s M. Navarro wrote: > >>> I'm having problems on a maildir due to dovecot returning an UID 0 to > >>> an > > > >>> UID THREAD REFS command: > > I'm sending to your personal address a whole maildir that reproduces the > > bug (it's very short) to avoid having it published in the mail archives. > > Thanks, I finally looked at this. The problem happens only when the > THREADing isn't done for all messages. I thought this would have been a > much more complex bug. Fixed: > http://hg.dovecot.org/dovecot-2.0/rev/57498cad6ab9 Thank you very much. Do you have a expected date for new packages covering this issue to be published at xi.rename-it.nl? From mark.zealey at webfusion.com Mon Jan 30 15:32:33 2012 From: mark.zealey at webfusion.com (Mark Zealey) Date: Mon, 30 Jan 2012 15:32:33 +0200 Subject: [Dovecot] Director to keep redirecting users to the same server even after all sessions closed? Message-ID: <4F269BF1.8010607@webfusion.com> Hi there, Just wondering how easy it would be to make the director continue to send a user to the same server (assuming it's still in the pool) for say 90 seconds after they have last been active (ie lmtp or pop/imap)? Basically we are working in quite a heavily cached environment so it takes perhaps 60-90 seconds for our imap servers to properly flush to our network storage meaning if the user got put on a different server in that time we would see some issues. Presently we have fixed proxying, but I'd really like to use the director if possible to allow us to more easily add & remove servers. Mark From tss at iki.fi Mon Jan 30 15:58:37 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 30 Jan 2012 15:58:37 +0200 Subject: [Dovecot] Director to keep redirecting users to the same server even after all sessions closed? In-Reply-To: <4F269BF1.8010607@webfusion.com> References: <4F269BF1.8010607@webfusion.com> Message-ID: On 30.1.2012, at 15.32, Mark Zealey wrote: > Just wondering how easy it would be to make the director continue to send a user to the same server (assuming it's still in the pool) for say 90 seconds after they have last been active (ie lmtp or pop/imap)? Basically we are working in quite a heavily cached environment so it takes perhaps 60-90 seconds for our imap servers to properly flush to our network storage meaning if the user got put on a different server in that time we would see some issues. Presently we have fixed proxying, but I'd really like to use the director if possible to allow us to more easily add & remove servers. Already done, and enabled by default: # How long to redirect users to a specific server after it no longer has # any connections. #director_user_expire = 15 min I added this mainly to make sure that all attribute caches have timed out. 
From Mark.Zealey at webfusion.com Mon Jan 30 16:07:16 2012 From: Mark.Zealey at webfusion.com (Mark Zealey) Date: Mon, 30 Jan 2012 14:07:16 +0000 Subject: [Dovecot] Director to keep redirecting users to the same server even after all sessions closed? In-Reply-To: References: <4F269BF1.8010607@webfusion.com>, Message-ID: Brilliant; I had read the director page in the wiki but didn't see it there & a search of the wiki text doesn't show up the option - perhaps you could add it or is there another place to see a list of director options? Mark ________________________________________ From: Timo Sirainen [tss at iki.fi] Sent: 30 January 2012 13:58 To: Mark Zealey Cc: dovecot at dovecot.org Subject: Re: [Dovecot] Director to keep redirecting users to the same server even after all sessions closed? On 30.1.2012, at 15.32, Mark Zealey wrote: > Just wondering how easy it would be to make the director continue to send a user to the same server (assuming it's still in the pool) for say 90 seconds after they have last been active (ie lmtp or pop/imap)? Basically we are working in quite a heavily cached environment so it takes perhaps 60-90 seconds for our imap servers to properly flush to our network storage meaning if the user got put on a different server in that time we would see some issues. Presently we have fixed proxying, but I'd really like to use the director if possible to allow us to more easily add & remove servers. Already done, and enabled by default: # How long to redirect users to a specific server after it no longer has # any connections. #director_user_expire = 15 min I added this mainly to make sure that all attribute caches have timed out. From f.bonnet at esiee.fr Mon Jan 30 19:29:04 2012 From: f.bonnet at esiee.fr (Frank Bonnet) Date: Mon, 30 Jan 2012 18:29:04 +0100 Subject: [Dovecot] INBOX and IMAP forlders on differents machines ? Message-ID: <4F26D360.303@esiee.fr> Hello In MBOX format would it be possible with dovecot 2 to have two machines one containing the INBOX and the other containing IMAP folders. Of course this need a frontend but would it be possible ? thanks From tss at iki.fi Mon Jan 30 22:03:50 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 30 Jan 2012 22:03:50 +0200 Subject: [Dovecot] INBOX and IMAP forlders on differents machines ? In-Reply-To: <4F26D360.303@esiee.fr> References: <4F26D360.303@esiee.fr> Message-ID: <7D7B1E45-9ED4-4E34-BF1C-EE14671F15AD@iki.fi> On 30.1.2012, at 19.29, Frank Bonnet wrote: > In MBOX format would it be possible with dovecot 2 to have two machines > one containing the INBOX and the other containing IMAP folders. > > Of course this need a frontend but would it be possible ? With v2.1 I guess you could in theory do this with imapc backend. From jtam.home at gmail.com Tue Jan 31 02:03:45 2012 From: jtam.home at gmail.com (Joseph Tam) Date: Mon, 30 Jan 2012 16:03:45 -0800 (PST) Subject: [Dovecot] Mountpoints In-Reply-To: References: Message-ID: On Mon, 30 Jan 2012, dovecot-request at dovecot.org wrote: > So, I was thinking about adding doveadm commands to explicitly tell > Dovecot about the mountpoints that it needs to care about. When no > mountpoints are defined Dovecot would behave as it does now. Maybe I don't understand the subtlety of your question, but are you trying to disambiguate between a mounted filesytem and a failed mount that presents the underlying filesystem (which looks like an uninitilized index directory)? 
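To sketch what that could look like (v2.1 only, untested; hostnames and the local cache path are placeholders): INBOX stays a local mbox while a second namespace is backed by the other machine over IMAP. The imapc_user/imapc_password settings for the backend connection are omitted here, and whether this behaves sanely in practice is exactly the "in theory" part of Timo's answer:

    imapc_host = folders.example.com
    namespace {
      inbox = yes
      prefix =
      location = mbox:~/mail:INBOX=/var/mail/%u
    }
    namespace {
      prefix = Remote/
      location = imapc:~/imapc-cache
    }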
Couldn't you write some cookie file "/mount/.../dovecot-data-root/.dovemount", whose existence will tell you whether the FS is mounted without trying to find the mount root. Oh, but then again if you have per-user mounts, that's going to get messy. Joseph Tam From deepa.malleeswaran at gmail.com Mon Jan 30 19:12:00 2012 From: deepa.malleeswaran at gmail.com (Deepa Malleeswaran) Date: Mon, 30 Jan 2012 12:12:00 -0500 Subject: [Dovecot] Help required Message-ID: Hi I use dovecot on CentOS. It was installed and configured by some other person who doesn't work here anymore. I am trying to renew ssl. But the command works fine and restarts the dovecot. But the license shows the same old expiry. Can you please help me with the same. When I type in dovecot --version, I get command not found. Please guide me! Regards, -- Deepa Malleeswaran From tss at iki.fi Tue Jan 31 02:42:33 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 31 Jan 2012 02:42:33 +0200 Subject: [Dovecot] Mountpoints In-Reply-To: References: Message-ID: On 31.1.2012, at 2.03, Joseph Tam wrote: > On Mon, 30 Jan 2012, dovecot-request at dovecot.org wrote: > >> So, I was thinking about adding doveadm commands to explicitly tell >> Dovecot about the mountpoints that it needs to care about. When no >> mountpoints are defined Dovecot would behave as it does now. > > Maybe I don't understand the subtlety of your question, but are you > trying to disambiguate between a mounted filesytem and a failed mount > that presents the underlying filesystem (which looks like an uninitilized > index directory)? Yes. A mounted filesystem where a directory doesn't exist vs. accidentally unmounted filesystem. > Couldn't you write some cookie file "/mount/.../dovecot-data-root/.dovemount", > whose existence will tell you whether the FS is mounted without trying to > find the mount root. This would require that existing installations create such a file or start failing after upgrade. Or that it's made optional and most people wouldn't use this functionality at all.. And I'm sure many people with a single filesystem wouldn't be all that happy creating /.dovemount or /home/.dovemount or such files. From moseleymark at gmail.com Tue Jan 31 03:24:12 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 30 Jan 2012 17:24:12 -0800 Subject: [Dovecot] moving mail out of alt storage In-Reply-To: References: <87sjnya3z5.fsf@algae.riseup.net> <1316077133.12936.18.camel@hurina> <87obylafsw.fsf_-_@algae.riseup.net> Message-ID: On Sat, Jan 28, 2012 at 12:44 PM, Timo Sirainen wrote: > On 12.1.2012, at 20.32, Mark Moseley wrote: > >>>> On Wed, 2011-09-14 at 23:17 -0400, Micah Anderson wrote: >>>>> I moved some mail into the alt storage: >>>>> >>>>> doveadm altmove -u johnd at example.com seen savedbefore 1w >>>>> >>>>> and now I want to move it back to the regular INBOX, but I can't see how >>>>> I can do that with either 'altmove' or 'mailbox move'. >>>> >>>> Is this sdbox or mdbox? With sdbox you could simply "mv" the files. Or >>>> apply patch: http://hg.dovecot.org/dovecot-2.0/rev/1910c76a6cc9 >>> >>> This is mdbox, which is why I am not sure how to operate because I am >>> used to individual files as is with maildir. >>> >>> micah >>> >> >> I'm curious about this too. Is moving the m.# file out of the ALT >> path's storage/ directory into the non-ALT storage/ directory >> sufficient? Or will that cause odd issues? > > You can manually move m.* files to alt storage and back. 
Just make sure that the same file isn't being simultaneously modified by Dovecot or you'll corrupt it. > Cool, good to know. Thanks! From stan at hardwarefreak.com Tue Jan 31 12:30:46 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Tue, 31 Jan 2012 04:30:46 -0600 Subject: [Dovecot] Help required In-Reply-To: References: Message-ID: <4F27C2D6.5070508@hardwarefreak.com> On 1/30/2012 11:12 AM, Deepa Malleeswaran wrote: > I use dovecot on CentOS. It was installed and configured by some other > person who doesn't work here anymore. I am trying to renew ssl. But the > command works fine and restarts the dovecot. But the license shows the same > old expiry. Can you please help me with the same. Please be much more specific. We need details. Log entries of errors would be very useful as well. > When I type in dovecot --version, I get command not found. Please guide me! That's strange. Are you sure you're on the right machine? What version of CentOS? -- Stan From nmilas at noa.gr Tue Jan 31 14:07:29 2012 From: nmilas at noa.gr (Nikolaos Milas) Date: Tue, 31 Jan 2012 14:07:29 +0200 Subject: [Dovecot] Renaming user account / mailbox Message-ID: <4F27D981.7060304@noa.gr> Hello, I am running dovecot-2.0.13-1_128.el5 x86_64 RPM on CentOS 5.7. All accounts are virtual, hosted on LDAP Server. We are using Maildir mailboxes. The question: What is the process to rename an existing account/mailbox? I would like to rename userx with email: userx at example.com to ux at example.com with a mailbox of ux (currently: userx) Of course the idea is that new mail will continue to be delivered to the same mailbox, although it has been renamed. How can I achieve it? Would it be enough (after changing the associated data in the associated LDAP entry) to simply rename the virtual user directory name, e.g. from /home/vmail/userx to /home/vmail/ux ? Thanks in advance, Nick From ath at b-one.net Tue Jan 31 14:36:13 2012 From: ath at b-one.net (Anders) Date: Tue, 31 Jan 2012 13:36:13 +0100 Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH Message-ID: <20120131123613.49B53A7952BCD@bmail02.one.com> Hi, My colleague just pointed me to the recent fix of this issue, thanks! From la at iki.fi Tue Jan 31 17:48:39 2012 From: la at iki.fi (Lauri Alanko) Date: Tue, 31 Jan 2012 17:48:39 +0200 Subject: [Dovecot] force-resync fails to recover all messages in mdbox Message-ID: <20120131174839.13512v46jc7ur23b.lealanko@webmail.helsinki.fi> To my understanding, when using mdbox, doveadm force-resync should be able to recover all the messages from the storage files alone, though of course losing all metadata except the initial delivery folder. However, this does not seem to be the case. For me, force-resync creates only partial indices that lose messages. The message contents are of course still in the storage files, but dovecot just doesn't seem to be aware of some of them after recreating the indices. Here is an example. 
I created a test mdbox by syncing a mailing list folder from a mbox location:

$ dsync -m haskell-cafe backup mdbox:~/dbox

Then I switched the location to the new mdbox:

$ /usr/sbin/dovecot -n
# 2.0.15: /etc/dovecot/dovecot.conf
# OS: Linux 3.2.0-0.bpo.1-amd64 x86_64 Debian wheezy/sid
mail_fsync = never
mail_location = mdbox:~/dbox
mail_plugins = zlib
passdb {
  driver = pam
}
plugin {
  sieve = ~/etc/sieve/dovecot.sieve
  sieve_dir = ~/etc/sieve
  zlib_save = bz2
  zlib_save_level = 9
}
protocols = " imap"
ssl_cert = <

From tss at iki.fi Tue Jan 31 2012 From: tss at iki.fi (Timo Sirainen) Subject: Re: [Dovecot] force-resync fails to recover all messages in mdbox References: <20120131174839.13512v46jc7ur23b.lealanko@webmail.helsinki.fi> Message-ID: <38EB3A30-DFD5-484B-852B-327BDA5E936E@iki.fi>

On 31.1.2012, at 17.48, Lauri Alanko wrote:

> $ doveadm search all | wc
> 93236 186472 3625098
..
> Then I removed all the indices and rebuilt them:
>
> $ doveadm search all | wc
> 43864 87728 1699590
>
> Somehow dovecot lost over half of the messages!

There may be a bug, and I just yesterday noticed something weird in the rebuilding code. I'll have to look into that. But anyway, "search all" isn't the proper way to test this. Try instead with:

doveadm fetch guid all | sort | uniq | wc

When you removed indexes Dovecot no longer knew about copies of messages.

From la at iki.fi Tue Jan 31 18:34:45 2012 From: la at iki.fi (Lauri Alanko) Date: Tue, 31 Jan 2012 18:34:45 +0200 Subject: [Dovecot] force-resync fails to recover all messages in mdbox In-Reply-To: <38EB3A30-DFD5-484B-852B-327BDA5E936E@iki.fi> References: <20120131174839.13512v46jc7ur23b.lealanko@webmail.helsinki.fi> <38EB3A30-DFD5-484B-852B-327BDA5E936E@iki.fi> Message-ID: <20120131183445.545717eennh24eg5.lealanko@webmail.helsinki.fi>

Quoting "Timo Sirainen" :

> Try instead with:
>
> doveadm fetch guid all | sort | uniq | wc
>
> When you removed indexes Dovecot no longer knew about copies of messages.

Well, well, well. This is interesting. Back with the indices created by dsync:

$ doveadm fetch guid all | grep guid: | sort | uniq -c | sort -n | tail
17 guid: 1b28b22d4b2ee2885b5b81221c41201d
17 guid: 730c692395661dd62f82088804b85652
17 guid: 865e1537fddba6698e010d0b9dbddd02
17 guid: d271b6ba8af0e7fa39c16ea8ed13abcf
17 guid: d2cd391e837cf51cc85991bde814dc54
17 guid: ebce8373da6ffb134b58aca7906d61f1
18 guid: 1222b6c222ecb53fdbbec407400cba36
18 guid: 65695586efc69adc2d7294216ea88e55
19 guid: 4288f61ebbdcd44870c670439a97693b
20 guid: 080ec72aa49e2a01c8e249fe127605f6

This would explain why rebuilding the indices reduced the number of messages.
However, those guid assignments seem really weird, because:

$ doveadm fetch hdr guid 080ec72aa49e2a01c8e249fe127605f6 | grep -i '^Message-ID: '
Message-ID: <4B1ACA53.7040503 at rkit.pp.ru>
Message-ID: <29bf512f0912051251u74d246afxafdfb9e5ea24342c at mail.gmail.com>
Message-ID: <5e0214850912051300r3ebd0e44n61a4d6e020c94f4c at mail.gmail.com>
Message-ID: <4B1ACD40.3040507 at btinternet.com>
Message-Id: <200912052220.00317.daniel.is.fischer at web.de>
Message-Id: <200912052225.28597.daniel.is.fischer at web.de>
Message-ID: <20091205212848.GA23711 at seas.upenn.edu>
Message-Id: <200912051336.13792.hgolden at socal.rr.com>
Message-Id: <200912052243.03144.daniel.is.fischer at web.de>
Message-Id: <0B59A706-8C41-47B9-A858-5ACE297581E1 at cs.uu.nl>
Message-ID: <20091205215707.GA6161 at protagoras.phil.berkeley.edu>
Message-ID: <471726.55822.qm at web113106.mail.gq1.yahoo.com>
Message-ID: <4B1AD7FB.8050704 at btinternet.com>
Message-ID: <5fdc56d70912051400h663a25a9w4f9b2e065a5b395e at mail.gmail.com>
Message-Id: <1B613EE3-B4F8-4F6E-8A36-74BACF0C86FC at yandex.ru>
Message-ID: <4B1ADA0E.5070207 at btinternet.com>
Message-Id: <36C40624-B050-4A8C-8CAF-F15D84467180 at phys.washington.edu>
Message-ID:
Message-id:
Message-ID: <29bf512f0912051423safd7842ka39c8b8b6dee1ac0 at mail.gmail.com>

So all these completely unrelated messages have somehow received the same guid? And that guid is stored even in the storage files themselves so they cannot be cleaned up even with force-resync? Something is _seriously_ wrong.

The complexity and opaqueness of the mdbox format is worrisome. It would ease my mind quite a bit if there were a simple tool that would just dump out the plain message contents that are stored inside the storage files, without involving any of dovecot's index machinery. Then I would at least know that whatever happens, as long as the storage files stay intact, I can always migrate my mails into some other format.

Lauri
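On the wish for an index-independent dump tool: doveadm dump should be close, if I remember right. It parses a file directly and prints what it finds without going through the map index (the path is a placeholder):

    doveadm dump ~/dbox/storage/m.1

That prints the offsets, sizes and metadata of each message in the storage file, so even with broken indexes you can at least see what an m.* file really contains.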
From janfrode at tanso.net Sun Jan 1 21:59:07 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Sun, 1 Jan 2012 20:59:07 +0100 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name Message-ID: <20120101195907.GA21500@dibs.tanso.net>

I'm in the process of running our first dsync backup of all users (from maildir to mdbox on a remote server), and one problem I'm hitting is that dsync will work fine on the first run for some users, and then reliably fail whenever I try a new run:

$ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net
$ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net
dsync-remote(janfrode at example.net): Error: Can't delete mailbox directory INBOX/a: Mailbox has children, delete them first

The problem here seems to be that this user has a maildir named ".a.b". On the backup side I see this as "a/b/". So dsync doesn't quite seem to agree with itself for how to handle folders with dot in the name.

-jf
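One thing worth testing for the dots-in-maildir-names problem above: the listescape plugin, which escapes characters that would otherwise collide with the hierarchy separator (the escape character shown is the default):

    mail_plugins = listescape
    plugin {
      listescape_char = "\\"
    }

Whether dsync then maps ".a.b" the same way on both ends is untested here, so treat this as a hint rather than a confirmed fix.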
-jf From rnabioullin at gmail.com Mon Jan 2 03:32:38 2012 From: rnabioullin at gmail.com (Ruslan Nabioullin) Date: Sun, 01 Jan 2012 20:32:38 -0500 Subject: [Dovecot] Multiple Maildirs per Virtual User Message-ID: <4F010936.7080107@gmail.com> How would it be possible to configure dovecot (2.0.16) in such a way that it would serve several maildirs (e.g., INBOX, INBOX.Drafts, INBOX.Sent, forum_email, [Gmail].Trash, etc.) per virtual user? I am only able to specify a single maildir, but I want all maildirs in /home/my-username/mail/account1/ to be served. e.g., /etc/dovecot/passwd: my-username_account1:{PLAIN}password:my-username:my-group::::userdb_mail=maildir:/home/my-username/mail/account1/INBOX Thanks in advance, Ruslan -- Ruslan Nabioullin rnabioullin at gmail.com -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 900 bytes Desc: OpenPGP digital signature URL: From Juergen.Obermann at hrz.uni-giessen.de Mon Jan 2 16:33:07 2012 From: Juergen.Obermann at hrz.uni-giessen.de (=?UTF-8?Q?J=C3=BCrgen_Obermann?=) Date: Mon, 02 Jan 2012 15:33:07 +0100 Subject: [Dovecot] error bad file number with compressed mbox files Message-ID: <77e69f67dbffe67a6205ed1de7d2d0df@imapproxy.hrz> Hello, can dsync convert from compressed mbox to compressed mdbox format? When I use compressed mbox files, either with gzip or with bzip2, I can read the mails as usual, but I find the following errors in dovecot's log file: imap(userxy): Error: nfs_flush_fcntl: fcntl(/home/hrz/userxy/Mail/mymbox.gz, F_RDLCK) failed: Bad file number imap(userxy): Error: nfs_flush_fcntl: fcntl(/home/hrz/userxy/Mail/mymbox.bz2, F_RDLCK) failed: Bad file number These errors also appear when I use dsync to convert the compressed mbox to mdbox format on a second dovecot server: /opt/local/bin/dsync -v -u userxy backup mdbox:/sanpool/mail/home/hrz/userxy/mdbox dsync(userxy): Error: nfs_flush_fcntl: fcntl(/home/hrz/userxy/Mail/mymbox.gz, F_RDLCK) failed: Bad file number But now dovecot does not find the mails in the folder mymbox.gz on the second dovecot server in mdbox format! The relevant part of the dovecot configuration is: # 2.0.16: /opt/local/etc/dovecot/dovecot.conf # OS: SunOS 5.10 sun4v mail_fsync = always mail_location = mbox:~/Mail:INBOX=/var/mail/%u mail_nfs_index = yes mail_nfs_storage = yes mail_plugins = mail_log notify zlib mmap_disable = yes Thank you, -- Jürgen Obermann Hochschulrechenzentrum der Justus-Liebig-Universität Gießen Heinrich-Buff-Ring 44 Tel.
0641-9913054 From CMarcus at Media-Brokers.com Mon Jan 2 16:51:00 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Mon, 02 Jan 2012 09:51:00 -0500 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <20120101195907.GA21500@dibs.tanso.net> References: <20120101195907.GA21500@dibs.tanso.net> Message-ID: <4F01C454.8030701@Media-Brokers.com> On 2012-01-01 2:59 PM, Jan-Frode Myklebust wrote: > I'm in the processes of running our first dsync backup of all users > (from maildir to mdbox on remote server), and one problem I'm hitting > that dsync will work fine on first run for some users, and then > reliably fail whenever I try a new run: > > $ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net > $ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net > dsync-remote(janfrode at example.net): Error: Can't delete mailbox directory INBOX/a: Mailbox has children, delete them first > > The problem here seems to be that this user has a maildir named > ".a.b". On the backup side I see this as "a/b/". > > So dsync doesn't quite seem to agree with itself for how to handle > folders with dot in the name. dovecot -n output? What are you using for the namespace hierarchy separator? http://wiki2.dovecot.org/Namespaces -- Best regards, Charles From janfrode at tanso.net Mon Jan 2 17:11:00 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 2 Jan 2012 16:11:00 +0100 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <4F01C454.8030701@Media-Brokers.com> References: <20120101195907.GA21500@dibs.tanso.net> <4F01C454.8030701@Media-Brokers.com> Message-ID: <20120102151059.GA10419@dibs.tanso.net> On Mon, Jan 02, 2012 at 09:51:00AM -0500, Charles Marcus wrote: > > dovecot -n output? What are you using for the namespace hierarchy separator? I have the folder format's default separator (maildir "."), but dovecot still creates directories named ".a.b". On the receiving dsync server: ===================================================================== $ dovecot -n # 2.0.14: /etc/dovecot/dovecot.conf mail_location = mdbox:~/mdbox mail_plugins = zlib mdbox_rotate_size = 5 M passdb { driver = static } plugin { zlib_save = gz zlib_save_level = 9 } protocols = service auth-worker { user = $default_internal_user } service auth { unix_listener auth-userdb { mode = 0600 user = mailbackup } } ssl = no userdb { args = home=/srv/mailbackup/%256Hu/%d/%n driver = static } On the POP/IMAP server: ===================================================================== $ doveconf -n # 2.0.14: /etc/dovecot/dovecot.conf auth_cache_size = 100 M auth_verbose = yes auth_verbose_passwords = sha1 disable_plaintext_auth = no login_trusted_networks = 192.168.0.0/16 mail_gid = 3000 mail_location = maildir:~/:INDEX=/indexes/%1u/%1.1u/%u mail_plugins = quota zlib mail_uid = 3000 maildir_stat_dirs = yes maildir_very_dirty_syncs = yes managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date mmap_disable = yes namespace { inbox = yes location = prefix = INBOX.
type = private } passdb { args = /etc/dovecot/dovecot-ldap.conf.ext driver = ldap } plugin { quota = maildir:UserQuota sieve = /sieve/%1u/%1.1u/%u/.dovecot.sieve sieve_dir = /sieve/%1u/%1.1u/%u sieve_max_script_size = 1M zlib_save = gz zlib_save_level = 6 } postmaster_address = postmaster at example.net protocols = imap pop3 lmtp sieve service auth-worker { user = $default_internal_user } service auth { client_limit = 4521 unix_listener auth-userdb { group = mode = 0600 user = atmail } } service imap-login { inet_listener imap { address = * port = 143 } process_min_avail = 4 service_count = 0 vsz_limit = 1 G } service imap-postlogin { executable = script-login /usr/local/sbin/imap-postlogin.sh } service imap { executable = imap imap-postlogin process_limit = 2048 } service lmtp { client_limit = 1 inet_listener lmtp { address = * port = 24 } process_limit = 25 } service managesieve-login { inet_listener sieve { address = * port = 4190 } service_count = 1 } service pop3-login { inet_listener pop3 { address = * port = 110 } process_min_avail = 4 service_count = 0 vsz_limit = 1 G } service pop3-postlogin { executable = script-login /usr/local/sbin/pop3-postlogin.sh } service pop3 { executable = pop3 pop3-postlogin process_limit = 2048 } ssl = no userdb { args = /etc/dovecot/dovecot-ldap.conf.ext driver = ldap } protocol lmtp { mail_plugins = quota zlib sieve } protocol imap { imap_client_workarounds = delay-newmail mail_plugins = quota zlib imap_quota } protocol pop3 { mail_plugins = quota zlib pop3_client_workarounds = outlook-no-nuls oe-ns-eoh pop3_uidl_format = UID%u-%v } protocol sieve { managesieve_logout_format = bytes=%i/%o } -jf From preacher_net at gmx.net Mon Jan 2 18:17:10 2012 From: preacher_net at gmx.net (Preacher) Date: Mon, 02 Jan 2012 17:17:10 +0100 Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration Message-ID: <4F01D886.6070905@gmx.net> I have a mail server running Debian 6.0 with Courier IMAP to store project-related mail. Currently the maildir of the archive (one user) contains about 37GB of data. Our staff accesses the archive via Outlook 2007, dragging messages from their Exchange inbox or sent items into it. The problem with courier is that it sometimes mixes up headers with message bodies, so I wanted to migrate to dovecot. I tried this on my proxy running Debian 7.0 with some test data and it worked fine (OK, I spent some hours getting the config files done - Dovecot without authentication). The Dovecot version here is 2.0.15. Tried it with our production system today, but got Dovecot 1.2.15 installed on Debian 6.0. The config files and parameters I took from my test system were not compatible and I didn't get it to work. So I forced the installation of the Debian 7.0 packages with 2.0.15 and finally got the server running; I also restarted the whole machine to empty the caches. But the problem I got was that in the huge folder hierarchy the downloaded headers in the individual folders disappeared: some folders showed a few very old messages, some none. Also some subfolders disappeared. I checked this with Outlook and Thunderbird. The difference was that Thunderbird showed more messages (but not all) than Outlook in some folders, but also none in some others. Outlook brought up a message in some cases that the connection had timed out, although I set the timeout to 60s. Frustrated, I uninstalled dovecot and went back to Courier, and folder contents are displayed correctly again. Anyone have a clue what's wrong here?
Finally some config information: proxy-server:~# dovecot -n # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid auth_debug_passwords = yes auth_mechanisms = plain login disable_plaintext_auth = no namespace { inbox = yes location = prefix = INBOX. separator = . type = private } passdb { driver = pam } plugin { sieve = ~/.dovecot.sieve sieve_dir = ~/sieve } protocols = imap ssl = no ssl_cert = From stan at hardwarefreak.com (Stan Hoeppner) Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration In-Reply-To: <4F01D886.6070905@gmx.net> References: <4F01D886.6070905@gmx.net> Message-ID: <4F020328.7090303@hardwarefreak.com> On 1/2/2012 10:17 AM, Preacher wrote: ... > So I forced to install the Debisn 7.0 packages with 2.0.15 and finally > got the server running, I also restarted the whole machine to empty caches. > But the problem I got was that in the huge folder hierarchy the > downloaded headers in the individual folders disappeared, some folders > showed a few very old messages, some none. Also some subfolders > disappeared. > I checked this with Outlook and Thunderbird. The difference was, that > Thunderbird shows more messages (but not all) than Outlook in some > folders, but also none in some others. Outlook brought up a message in > some cases, that the connection timed out, although I set the timeout to > 60s. ... > Anyone a clue what's wrong here? Absolutely. What's wrong is a lack of planning, self-education, and patience on the part of the admin. Dovecot gets its speed from its indexes. How long do you think it takes Dovecot to index 37GB of maildir messages, many thousands per directory, hundreds of directories, millions of files total? Until those indexes are built you will not see a complete folder tree, and all kinds of stuff will be missing. For your education: Dovecot indexes every message, and these indexes are the key to its speed. Normally indexing occurs during delivery when using deliver or lmtp, so the index updates are small and incremental, keeping performance high. You tried to do this and expected Dovecot to instantly process it all: http://www.youtube.com/watch?v=THVz5aweqYU If you don't know, that's a coal train car being dumped. 100 tons of coal in a few seconds. Visuals are always good teaching tools. I think this drives the point home rather well. -- Stan From mpapet at yahoo.com Tue Jan 3 08:48:15 2012 From: mpapet at yahoo.com (Michael Papet) Date: Mon, 2 Jan 2012 22:48:15 -0800 (PST) Subject: [Dovecot] Newbie: LDA Isn't Logging Message-ID: <1325573295.74202.YahooMailClassic@web125405.mail.ne1.yahoo.com> Hi, I'm a newbie having some trouble getting deliver to log anything. Related to this, there are no return values unless the -d flag is missing. I'm using LDAP to store virtual domain and user account information. Test #1: /usr/lib/dovecot/deliver -e -f mpapet at yahoo.com -d zed at mailswansong.dom < bad.mail Expected result: supposed to fail, as there's no zed account via ldap lookup, and supposed to return an error code per the wiki at http://wiki2.dovecot.org/LDA. Supposed to log too. Actual result: nothing gets delivered, no return code, nothing is logged. Test #2: /usr/lib/dovecot/deliver -f mpapet at yahoo.com -d dude at mailswansong.dom < good.mail Expected result: deliver to dude and return 0. Actual result: delivers, but no return code. Nothing logged. The wiki is vague about the difficulties of getting deliver LDA to log, but I thought I had it covered in my config. I even opened permissions up wide (777) on my log files specified below. Nothing gets logged. The ONLY thing changed in 15-lda.conf is as follows.
protocol lda { # Space separated list of plugins to load (default is global mail_plugins). #mail_plugins = $mail_plugins log_path = /var/log/dovecot/lda.log info_log_path = /var/log/dovecot/lda-info.log service auth { unix_listener auth-client { mode = 0600 user = vmail } } I'm running plain Debian Testing and used dovecot from Debian's repository. The end goal is to write a qpsmtpd queue plugin, but I need to figure out what's the matter first. Thanks in advance. mpapet From janfrode at tanso.net Tue Jan 3 10:14:49 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Tue, 3 Jan 2012 09:14:49 +0100 Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs) In-Reply-To: <4EFEBFB8.1070301@hardwarefreak.com> References: <20111224152050.GA3958@dibs.tanso.net> <20111229084916.GA5895@dibs.tanso.net> <4EFC6453.8020304@hardwarefreak.com> <20111230144124.GA3936@dibs.tanso.net> <4EFE5984.9080905@hardwarefreak.com> <20111231065649.GA19046@dibs.tanso.net> <4EFEBFB8.1070301@hardwarefreak.com> Message-ID: <20120103081449.GA26269@dibs.tanso.net> On Sat, Dec 31, 2011 at 01:54:32AM -0600, Stan Hoeppner wrote: > Nice setup. I've mentioned GPFS for cluster use on this list before, > but I think you're the only operator to confirm using it. I'm sure > others would be interested in hearing of your first hand experience: > pros, cons, performance, etc. And a ball park figure on the licensing > costs, whether one can only use GPFS on IBM storage or if storage from > others vendors is allowed in the GPFS pool. I used to work for IBM, so I've been a bit uneasy about pushing GPFS too hard publicly, for risk of being accused of being biased. But I changed jobs in November, so now I'm only a satisfied customer :-) Pros: Extremely simple to configure and manage. Assuming root on all nodes can ssh freely, and port 1191/tcp is open between the nodes, these are the commands to create the cluster, create an NSD (network shared disk), and create a filesystem: # echo hostname1:manager-quorum > NodeFile # "manager" means this node can be selected as filesystem manager # echo hostname2:manager-quorum >> NodeFile # "quorum" means this node has a vote in the quorum selection # echo hostname3:manager-quorum >> NodeFile # all my nodes are usually the same, so they all have the same roles. # mmcrcluster -n NodeFile -p $(hostname) -A ### sdb1 is either a local disk on hostname1 (in which case the other nodes will access it over tcp to ### hostname1), or a SAN-disk that they can access directly over FC/iSCSI. # echo sdb1:hostname1::dataAndMetadata:: > DescFile # This disk can be used for both data and metadata # mmcrnsd -F DescFile # mmstartup -A # starts GPFS services on all nodes # mmcrfs /gpfs1 gpfs1 -F DescFile # mount /gpfs1 You can add and remove disks from the filesystem, and change most settings without downtime. You can scale out your workload by adding more nodes (SAN attached or not), and scale out your disk performance by adding more disks on the fly. (IBM uses GPFS to create scale-out NAS solutions http://www-03.ibm.com/systems/storage/network/sonas/ , which highlights a few of the features available with GPFS) There's no problem running GPFS on other vendors' disk systems. I've used Nexsan SATAboy earlier, for an HPC cluster. One can easily move from one disk system to another without downtime. Cons: It has its own page cache, statically configured. So you don't get the "all available memory used for page caching" behaviour as you normally do on linux.
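The page pool can at least be resized cluster-wide if the default turns out to be too small for a mail workload. A minimal sketch (the 4G figure is only an example, not a value from this setup):

# mmchconfig pagepool=4G   # depending on the GPFS release this may need mmshutdown/mmstartup on the nodes to take effect
# mmlsconfig pagepool      # verify the configured value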
There is a kernel module that needs to be rebuilt on every upgrade. It's a simple process, but it needs to be done and means we can't just run "yum update ; reboot" to upgrade. % export SHARKCLONEROOT=/usr/lpp/mmfs/src % cp /usr/lpp/mmfs/src/config/site.mcr.proto /usr/lpp/mmfs/src/config/site.mcr % vi /usr/lpp/mmfs/src/config/site.mcr # correct GPFS_ARCH, LINUX_DISTRIBUTION and LINUX_KERNEL_VERSION % cd /usr/lpp/mmfs/src/ ; make clean ; make World % su - root # export SHARKCLONEROOT=/usr/lpp/mmfs/src # cd /usr/lpp/mmfs/src/ ; make InstallImages > > To this point IIRC everyone here doing clusters is using NFS, GFS, or > OCFS. Each has its downsides, mostly because everyone is using maildir. > NFS has locking issues with shared dovecot index files. GFS and OCFS > have filesystem metadata performance issues. How does GPFS perform with > your maildir workload? Maildir is likely a worst-case type workload for filesystems. Millions of tiny-tiny files, making all IO random, and getting minimal controller read cache utilized (unless you can cache all active files). So I've concluded that our performance issues are mostly design errors (and the fact that there were no better mail storage formats than maildir at the time these servers were implemented). I expect moving to mdbox will fix all our performance issues. I *think* GPFS is as good as it gets for maildir storage on clusterfs, but have no numbers to back that up ... Would be very interesting if we could somehow compare numbers for a few clusterfs'. I believe our main limitation in this setup is the iops we can get from the backend storage system. It's hard to balance the IO over enough RAID arrays (the fs is spread over 11 RAID5 arrays of 5 disks each), and we're always having hotspots. Right now two arrays are doing <100 iops, while others are doing 400-500 iops. Would very much like to replace it with something smarter where we can utilize SSDs for active data and something slower for stale data. GPFS can manage this by itself through its ILM interface, but we don't have the very fast storage to put in as tier-1. -jf From tss at iki.fi Tue Jan 3 10:49:27 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 3 Jan 2012 10:49:27 +0200 Subject: [Dovecot] Compressing existing maildirs In-Reply-To: <4EFEBFB8.1070301@hardwarefreak.com> References: <20111224152050.GA3958@dibs.tanso.net> <20111229084916.GA5895@dibs.tanso.net> <4EFC6453.8020304@hardwarefreak.com> <20111230144124.GA3936@dibs.tanso.net> <4EFE5984.9080905@hardwarefreak.com> <20111231065649.GA19046@dibs.tanso.net> <4EFEBFB8.1070301@hardwarefreak.com> Message-ID: On 31.12.2011, at 9.54, Stan Hoeppner wrote: > Timo, is there any technical or sanity based upper bound on mdbox size? > Anything wrong with using 64MB, 128MB, or even larger for > mdbox_rotate_size? Should be fine. The only issue is the extra disk I/O required to recreate the files during doveadm purge. From ludek.finstrle at pzkagis.cz Mon Jan 2 20:20:15 2012 From: ludek.finstrle at pzkagis.cz (Ludek Finstrle) Date: Mon, 2 Jan 2012 19:20:15 +0100 Subject: [Dovecot] Small LOGIN_MAX_INBUF_SIZE for GSSAPI with samba4 (AD) Message-ID: <20120102182014.GA20872@pzkagis.cz> Hello, I faced a problem with samba (AD) + mutt (gssapi) + dovecot (imap).
From dovecot log: Jan 2 17:58:42 server dovecot: imap-login: Disconnected: Input buffer full (no auth attempts): rip=192.167.14.16, lip=192.167.14.16, secured My situation: CentOS 6.2 IMAP: dovecot --version: 2.0.9 (CentOS 6.2) MUA: mutt 1.5.20 (CentOS 6.2) Kerberos: samba4 4.0.0alpha17 as AD PDC $ klist -e Ticket cache: FILE:/tmp/krb5cc_1002_Mmg2Rc Default principal: luf at TEST Valid starting Expires Service principal 01/02/12 15:56:16 01/03/12 01:56:16 krbtgt/TEST at TEST renew until 01/03/12 01:56:16, Etype (skey, tkt): arcfour-hmac, arcfour-hmac 01/02/12 16:33:19 01/03/12 01:56:16 imap/server.test at TEST Etype (skey, tkt): arcfour-hmac, arcfour-hmac I fixed this problem by enlarging LOGIN_MAX_INBUF_SIZE. I also read about wrong lower/uppercase but that's definitely not my problem (I tried all possibilities of lower/uppercase in the login). I sniffed the plain communication and the "a0000 AUTHENTICATE GSSAPI" line has around 1873 chars. When I enlarged the LOGIN_MAX_INBUF_SIZE to 2048 the problem disappeared and I'm now able to login to dovecot using gssapi in mutt client. I also use thunderbird (on windows with sspi) and it works ok with LOGIN_MAX_INBUF_SIZE = 1024. Does anybody have any idea why it's so large or how to fix it another way? It's terrible to patch each version of the dovecot rpm package. Or is there any possibility to change the constant? I have no idea how much this should affect memory usage. The simple patch I have to use is attached. Please cc: to me (luf at pzkagis dot cz) as I'm not a member of this list. Best regards, Ludek Finstrle -------------- next part -------------- diff -cr dovecot-2.0.9.orig/src/login-common/client-common.h dovecot-2.0.9/src/login-common/client-common.h *** dovecot-2.0.9.orig/src/login-common/client-common.h 2012-01-02 18:09:53.371909782 +0100 --- dovecot-2.0.9/src/login-common/client-common.h 2012-01-02 18:30:58.057787619 +0100 *************** *** 10,16 **** IMAP: Max. length of a single parameter POP3: Max. length of a command line (spec says 512 would be enough) */ ! #define LOGIN_MAX_INBUF_SIZE 1024 /* max. size of output buffer. if it gets full, the client is disconnected. SASL authentication gives the largest output. */ #define LOGIN_MAX_OUTBUF_SIZE 4096 --- 10,16 ---- IMAP: Max. length of a single parameter POP3: Max. length of a command line (spec says 512 would be enough) */ ! #define LOGIN_MAX_INBUF_SIZE 2048 /* max. size of output buffer. if it gets full, the client is disconnected. SASL authentication gives the largest output. */ #define LOGIN_MAX_OUTBUF_SIZE 4096 From tss at iki.fi Tue Jan 3 13:16:29 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 13:16:29 +0200 Subject: [Dovecot] Small LOGIN_MAX_INBUF_SIZE for GSSAPI with samba4 (AD) In-Reply-To: <20120102182014.GA20872@pzkagis.cz> References: <20120102182014.GA20872@pzkagis.cz> Message-ID: <1325589389.6987.55.camel@innu> On Mon, 2012-01-02 at 19:20 +0100, Ludek Finstrle wrote: > Jan 2 17:58:42 server dovecot: imap-login: Disconnected: Input buffer full (no auth attempts): rip=192.167.14.16, lip=192.167.14.16, secured .. > I fixed this problem with enlarging LOGIN_MAX_INBUF_SIZE. I also red about wrong lower/uppercase > but it's not definitely my problem (I tried all possibilities of lower/uppercas in login). > > I sniffed the plain communication and the "a0000 AUTHENTICATE GSSAPI" line has around 1873 chars. > When I enlarged the LOGIN_MAX_INBUF_SIZE to 2048 the problem disappeared and I'm now able to login > to dovecot using gssapi in mutt client.
There was already code that allowed 16 kB SASL messages, but that didn't work for the initial SASL response with the IMAP SASL-IR extension. > I use also thunderbird (on windows with sspi) and it works ok with LOGIN_MAX_INBUF_SIZE = 1024. TB probably doesn't support SASL-IR. > Does anybody have any idea why it's so large or how to fix it another way? It's terrible to > patch each version of dovecot rpm package. Or is there any possibility to change constant? > I have no idea how much this should affect memory usage. > > The simple patch I have to use is attached. I increased it to 4 kB: http://hg.dovecot.org/dovecot-2.0/rev/d06061408f6d From tss at iki.fi Tue Jan 3 13:29:36 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 13:29:36 +0200 Subject: [Dovecot] error bad file number with compressed mbox files In-Reply-To: <77e69f67dbffe67a6205ed1de7d2d0df@imapproxy.hrz> References: <77e69f67dbffe67a6205ed1de7d2d0df@imapproxy.hrz> Message-ID: <1325590176.6987.57.camel@innu> On Mon, 2012-01-02 at 15:33 +0100, Jürgen Obermann wrote: > can dsync convert from compressed mbox to compressed mdbox format? > > When I use compressed mbox files, either with gzip or with bzip2, I can > read the mails as usual, but I find the following errors in dovecots log > file: > > imap(userxy): Error: nfs_flush_fcntl: > fcntl(/home/hrz/userxy/Mail/mymbox.gz, F_RDLCK) failed: Bad file number > imap(userxy): Error: nfs_flush_fcntl: > fcntl(/home/hrz/userxy/Mail/mymbox.bz2, F_RDLCK) failed: Bad file number This happens because of mail_nfs_* settings. You can either ignore those errors, or disable the settings. Those settings are useful only if you attempt to access the same mailbox from multiple servers at the same time, which is randomly going to fail even with those settings, so they aren't hugely useful. From tss at iki.fi Tue Jan 3 13:42:13 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 13:42:13 +0200 Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration In-Reply-To: <4F01D886.6070905@gmx.net> References: <4F01D886.6070905@gmx.net> Message-ID: <1325590933.6987.59.camel@innu> On Mon, 2012-01-02 at 17:17 +0100, Preacher wrote: > So I forced to install the Debisn 7.0 packages with 2.0.15 and finally > got the server running, I also restarted the whole machine to empty caches. > But the problem I got was that in the huge folder hierarchy the > downloaded headers in the individual folders disappeared, some folders > showed a few very old messages, some none. Also some subfolders disappeared. > I checked this with Outlook and Thunderbird. The difference was, that > Thunderbird shows more messages (but not all) than Outlook in some > folders, but also none in some others. Outlook brought up a message in > some cases, that the connection timed out, although I set the timeout to > 60s. Did you run the Courier migration script? http://wiki2.dovecot.org/Migration/Courier Also explicitly setting mail_location would be a good idea. From tss at iki.fi Tue Jan 3 13:52:12 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 13:52:12 +0200 Subject: [Dovecot] Multiple Maildirs per Virtual User In-Reply-To: <4F010936.7080107@gmail.com> References: <4F010936.7080107@gmail.com> Message-ID: <1325591532.6987.60.camel@innu> On Sun, 2012-01-01 at 20:32 -0500, Ruslan Nabioullin wrote: > How would it be possible to configure dovecot (2.0.16) in such a way > that it would serve several maildirs (e.g., INBOX, INBOX.Drafts, > INBOX.Sent, forum_email, [Gmail].Trash, etc.) per virtual user?
> > I am only able to specify a single maildir, but I want all maildirs in > /home/my-username/mail/account1/ to be served. Sounds like you want LAYOUT=fs rather than the default LAYOUT=maildir++. http://wiki2.dovecot.org/MailboxFormat/Maildir#Directory_Structure From tss at iki.fi Tue Jan 3 13:55:01 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 13:55:01 +0200 Subject: [Dovecot] dsync / separator / namespace config-problem In-Reply-To: <20111229200345.GA17871@dibs.tanso.net> References: <20111229111455.GA9344@dibs.tanso.net> <3F4112A3-FF46-4ABA-9EC5-E04651D50E87@iki.fi> <20111229134234.GB11809@dibs.tanso.net> <20111229200345.GA17871@dibs.tanso.net> Message-ID: <1325591701.6987.62.camel@innu> On Thu, 2011-12-29 at 21:03 +0100, Jan-Frode Myklebust wrote: > On Thu, Dec 29, 2011 at 03:49:57PM +0200, Timo Sirainen wrote: > > >> > > >> With mdbox the internal separator is '/', but it's not valid to have "INBOX." prefix then (it should be "INBOX/"). > > > > > > But how should this be handled in the migration phase from maildir to > > > mdbox then? Can we have different namespaces for users with maildirs vs. > > > mdboxes? (..or am i misunderstanding something?) > > > > You'll most likely want to keep the '.' separator with mdbox, at > least initially. Some clients don't like if the separator changes. > Perhaps in future if you want to allow users to use '.' character in > mailbox names you could change it, or possibly make it a per-user > setting. > > > > Sorry for being so dense, but I don't quite get it still. Do you suggest > dropping the trailing dot from prefix=INBOX. ? I.e. > > namespace { > inbox = yes > location = > prefix = INBOX > type = private > separator = . > } > > when we do the migration to mdbox? And this should work without issues > for both current maildir users, and mdbox users ? With that setup you can't even start up Dovecot. The prefix must end with the separator. So initially just do it like above, but with "prefix=INBOX." > Ideally I don't want to use the . as a separator, since it's causing > problems for our users who expect to be able to use them in folder > names. But I don't understand if I can change them without causing > problems to existing users.. or how these problems will appear to the > users. It's going to be problematic to change the separator for existing users. Clients can become confused. From tss at iki.fi Tue Jan 3 14:00:08 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 14:00:08 +0200 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <20120101195907.GA21500@dibs.tanso.net> References: <20120101195907.GA21500@dibs.tanso.net> Message-ID: <1325592008.6987.63.camel@innu> On Sun, 2012-01-01 at 20:59 +0100, Jan-Frode Myklebust wrote: > I'm in the processes of running our first dsync backup of all users > (from maildir to mdbox on remote server), and one problem I'm hitting > that dsync will work fine on first run for some users, and then > reliably fail whenever I try a new run: > > $ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net > $ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net > dsync-remote(janfrode at example.net): Error: Can't delete mailbox directory INBOX/a: Mailbox has children, delete them first > > The problem here seems to be that this user has a maildir named > ".a.b". On the backup side I see this as "a/b/". 
> > So dsync doesn't quite seem to agree with itself for how to handle > folders with dot in the name. So here on source you have namespace separator '.' and in destination you have separator '/'? Maybe that's the problem? Try with both having '.' separator. From janfrode at tanso.net Tue Jan 3 14:12:22 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Tue, 3 Jan 2012 13:12:22 +0100 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <1325592008.6987.63.camel@innu> References: <20120101195907.GA21500@dibs.tanso.net> <1325592008.6987.63.camel@innu> Message-ID: <20120103121222.GA30793@dibs.tanso.net> On Tue, Jan 03, 2012 at 02:00:08PM +0200, Timo Sirainen wrote: > > So here on source you have namespace separator '.' and in destination > you have separator '/'? Maybe that's the problem? Try with both having > '.' separator. I added this namespace on the destination: namespace { inbox = yes location = prefix = INBOX. separator = . type = private } and am getting the same error: dsync-remote(janfrode at tanso.net): Error: Can't delete mailbox directory INBOX.a: Mailbox has children, delete them first This was with a freshly created .a.b folder on source. With no messages in .a.b and also no plain .a folder on source: $ find /usr/local/atmail/users/j/a/janfrode at tanso.net/.a* /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/maildirfolder /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/cur /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/new /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/tmp /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/dovecot-uidlist -jf From tss at iki.fi Tue Jan 3 14:15:45 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 14:15:45 +0200 Subject: [Dovecot] Newbie: LDA Isn't Logging In-Reply-To: <1325573295.74202.YahooMailClassic@web125405.mail.ne1.yahoo.com> References: <1325573295.74202.YahooMailClassic@web125405.mail.ne1.yahoo.com> Message-ID: <1325592945.6987.70.camel@innu> On Mon, 2012-01-02 at 22:48 -0800, Michael Papet wrote: > Hi, > > I'm a newbie having some trouble getting deliver to log anything. Related to this, there are no return values unless the -d is missing. I'm using LDAP to store virtual domain and user account information. > > Test #1: /usr/lib/dovecot/deliver -e -f mpapet at yahoo.com -d zed at mailswansong.dom < bad.mail > Expected result: supposed to fail, there's no zed account via ldap lookup and supposed to get a return code per the wiki at http://wiki2.dovecot.org/LDA. Supposed to log too. > Actual result: nothing gets delivered, no return code, nothing is logged. As in return code is 0? Something's definitely wrong there then. First check that deliver at least reads the config file. Add something broken in there, such as: "foo=bar" at the beginning of dovecot.conf. Does deliver fail now? Also running deliver via strace could show something useful. 
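For example, a concrete version of both checks, reusing the deliver path and addresses from the earlier mail (a sketch only; the trace file name is arbitrary and the sed line assumes GNU sed):

# sed -i '1i foo=bar' /etc/dovecot/dovecot.conf   # temporarily break the config; deliver should now fail if it reads this file. Remove the line afterwards.
$ /usr/lib/dovecot/deliver -f mpapet at yahoo.com -d dude at mailswansong.dom < good.mail ; echo exit=$?
$ strace -f -o /tmp/deliver.trace /usr/lib/dovecot/deliver -f mpapet at yahoo.com -d dude at mailswansong.dom < good.mail
$ grep -E 'dovecot\.conf|lda.*\.log' /tmp/deliver.trace   # see which config and log files deliver actually opens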
From tss at iki.fi Tue Jan 3 14:34:59 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 14:34:59 +0200 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <20120103121222.GA30793@dibs.tanso.net> References: <20120101195907.GA21500@dibs.tanso.net> <1325592008.6987.63.camel@innu> <20120103121222.GA30793@dibs.tanso.net> Message-ID: <1325594099.6987.71.camel@innu> On Tue, 2012-01-03 at 13:12 +0100, Jan-Frode Myklebust wrote: > dsync-remote(janfrode at tanso.net): Error: Can't delete mailbox directory INBOX.a: Mailbox has children, delete them first Oh, this happens only with dsync backup, and only with Maildir++ -> FS layout change. You can simply ignore this error, or patch with http://hg.dovecot.org/dovecot-2.0/rev/69c6d7436f7f that hides it. From tss at iki.fi Tue Jan 3 14:36:52 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 14:36:52 +0200 Subject: [Dovecot] lmtp-postlogin ? In-Reply-To: <20111230130804.GA2107@dibs.tanso.net> References: <20111230090053.GA30820@dibs.tanso.net> <16B30E6C-AE5E-44CB-8F48-66274FEAB357@iki.fi> <20111230130804.GA2107@dibs.tanso.net> Message-ID: <1325594212.6987.73.camel@innu> On Fri, 2011-12-30 at 14:08 +0100, Jan-Frode Myklebust wrote: > > Maybe create a new plugin for this using notify plugin. > > Is there any documentation for this plugin? I've tried searching both > this list, and the wiki's. Nope. You could look at mail-log and http://dovecot.org/patches/2.0/touch-plugin.c and write something based on them. From janfrode at tanso.net Tue Jan 3 14:54:10 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Tue, 3 Jan 2012 13:54:10 +0100 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <1325594099.6987.71.camel@innu> References: <20120101195907.GA21500@dibs.tanso.net> <1325592008.6987.63.camel@innu> <20120103121222.GA30793@dibs.tanso.net> <1325594099.6987.71.camel@innu> Message-ID: <20120103125410.GA2966@dibs.tanso.net> On Tue, Jan 03, 2012 at 02:34:59PM +0200, Timo Sirainen wrote: > On Tue, 2012-01-03 at 13:12 +0100, Jan-Frode Myklebust wrote: > > dsync-remote(janfrode at tanso.net): Error: Can't delete mailbox directory INBOX.a: Mailbox has children, delete them first > > Oh, this happens only with dsync backup, and only with Maildir++ -> FS > layout change. You can simply ignore this error, or patch with > http://hg.dovecot.org/dovecot-2.0/rev/69c6d7436f7f that hides it. Oh, it was so quick to fail that I didn't realize it had successfully updated the remote mailboxes :-) Thanks! But isn't it a bug that users are allowed to create folders named .a.b, or that dovecot creates this as a folder named .a.b instead of .a/.b when the separator is "." ? -jf From mikko at woima.fi Tue Jan 3 16:54:11 2012 From: mikko at woima.fi (Mikko Lampikoski) Date: Tue, 3 Jan 2012 16:54:11 +0200 Subject: [Dovecot] What is normal CPU usage of dovecot imap? Message-ID: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi> I have a Dual Core Intel Xeon CPU at 3.00GHz, over 1000 mailboxes, and almost 1 dovecot login / second (peak time). Server stats say that the load is continually over 2 and CPU usage is 60%; top says that imap is generating this load. Virtual users are in a MySQL database, and mysqld is running on another server (that server is OK). Do I need a better CPU, or is there something going on that I do not understand?
# dovecot -n # 1.1.11: /etc/dovecot/dovecot.conf # OS: Linux 2.6.32-4-pve i686 Ubuntu 9.10 nfs log_timestamp: %Y-%m-%d %H:%M:%S protocols: imap imaps pop3 pop3s listen: *, [::] ssl_ca_file: /etc/ssl/**********.crt ssl_cert_file: /etc/ssl/**********.crt ssl_key_file: /etc/ssl/**********.key ssl_key_password: ********** disable_plaintext_auth: no verbose_ssl: yes shutdown_clients: no login_dir: /var/run/dovecot/login login_executable(default): /usr/lib/dovecot/imap-login login_executable(imap): /usr/lib/dovecot/imap-login login_executable(pop3): /usr/lib/dovecot/pop3-login login_greeting_capability(default): yes login_greeting_capability(imap): yes login_greeting_capability(pop3): no login_process_size: 128 login_processes_count: 10 login_max_processes_count: 2048 mail_max_userip_connections(default): 10 mail_max_userip_connections(imap): 10 mail_max_userip_connections(pop3): 3 first_valid_uid: 99 last_valid_uid: 100 mail_privileged_group: mail mail_location: maildir:/var/vmail/%d/%n:INDEX=/var/indexes/%d/%n fsync_disable: yes mail_nfs_storage: yes mbox_write_locks: fcntl mbox_min_index_size: 4 mail_executable(default): /usr/lib/dovecot/imap mail_executable(imap): /usr/lib/dovecot/imap mail_executable(pop3): /usr/lib/dovecot/pop3 mail_process_size: 2048 mail_plugin_dir(default): /usr/lib/dovecot/modules/imap mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3 imap_client_workarounds(default): outlook-idle imap_client_workarounds(imap): outlook-idle imap_client_workarounds(pop3): pop3_client_workarounds(default): pop3_client_workarounds(imap): pop3_client_workarounds(pop3): outlook-no-nuls auth default: mechanisms: plain login cram-md5 cache_size: 1024 passdb: driver: sql args: /etc/dovecot/dovecot-sql.conf userdb: driver: static args: uid=99 gid=99 home=/var/vmail/%d/%n allow_all_users=yes socket: type: listen client: path: /var/spool/postfix/private/auth-client mode: 432 user: postfix group: postfix master: path: /var/run/dovecot/auth-master mode: 384 user: vmail From tss at iki.fi Tue Jan 3 17:08:14 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 3 Jan 2012 17:08:14 +0200 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <20120103125410.GA2966@dibs.tanso.net> References: <20120101195907.GA21500@dibs.tanso.net> <1325592008.6987.63.camel@innu> <20120103121222.GA30793@dibs.tanso.net> <1325594099.6987.71.camel@innu> <20120103125410.GA2966@dibs.tanso.net> Message-ID: <9D352B5F-77C3-4473-92E1-9ED2AFB5FFFB@iki.fi> On 3.1.2012, at 14.54, Jan-Frode Myklebust wrote: > But isn't it a bug that users are allowed to create folders named .a.b, The folder name is "a.b", it just exists in filesystem with Maildir++ as ".a.b". > or that dovecot creates this as a folder named .a.b instead of .a/.b > when the separator is "." ? The separator is the IMAP separator, not the filesystem separator. From tss at iki.fi Tue Jan 3 17:12:34 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 3 Jan 2012 17:12:34 +0200 Subject: [Dovecot] What is normal CPU usage of dovecot imap? In-Reply-To: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi> References: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi> Message-ID: On 3.1.2012, at 16.54, Mikko Lampikoski wrote: > I got Dual Core Intel Xeon CPU 3.00GHz, over 1000 mailbox and almost 1 dovecot login / second (peak time). > Server stats says that load is continually over 2 and cpu usage is 60%. top says that imap is making this load. You mean an actual "imap" process? 
Or more than one imap process? Or something else, e.g. an "imap-login" process? If there's one long-running IMAP process eating CPU, it might have simply gone into an infinite loop, and upgrading could help. > virtual users are in mysql database and mysqld is running on another server (this server is ok). > > Do I need better CPU or is there something going on that I do not understand? Your CPU usage should probably be closer to 0%. > login_process_size: 128 > login_processes_count: 10 > login_max_processes_count: 2048 Switching to http://wiki2.dovecot.org/LoginProcess#High-performance_mode may be helpful. > mail_nfs_storage: yes Do you have more than one Dovecot server? This setting doesn't work reliably anyway. If you have only one server accessing mails, you can set this to "no". From stan at hardwarefreak.com Tue Jan 3 17:20:28 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Tue, 03 Jan 2012 09:20:28 -0600 Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs) In-Reply-To: <20120103081449.GA26269@dibs.tanso.net> References: <20111224152050.GA3958@dibs.tanso.net> <20111229084916.GA5895@dibs.tanso.net> <4EFC6453.8020304@hardwarefreak.com> <20111230144124.GA3936@dibs.tanso.net> <4EFE5984.9080905@hardwarefreak.com> <20111231065649.GA19046@dibs.tanso.net> <4EFEBFB8.1070301@hardwarefreak.com> <20120103081449.GA26269@dibs.tanso.net> Message-ID: <4F031CBC.60302@hardwarefreak.com> On 1/3/2012 2:14 AM, Jan-Frode Myklebust wrote: > On Sat, Dec 31, 2011 at 01:54:32AM -0600, Stan Hoeppner wrote: >> Nice setup. I've mentioned GPFS for cluster use on this list before, >> but I think you're the only operator to confirm using it. I'm sure >> others would be interested in hearing of your first hand experience: >> pros, cons, performance, etc. And a ball park figure on the licensing >> costs, whether one can only use GPFS on IBM storage or if storage from >> others vendors is allowed in the GPFS pool. > > I used to work for IBM, so I've been a bit uneasy about pushing GPFS too > hard publicly, for risk of being accused of being biased. But I changed job in > November, so now I'm only a satisfied customer :-) Fascinating. And good timing. :) > Pros: > Extremely simple to configure and manage. Assuming root on all > nodes can ssh freely, and port 1191/tcp is open between the > nodes, these are the commands to create the cluster, create a > NSD (network shared disks), and create a filesystem: > > # echo hostname1:manager-quorum > NodeFile # "manager" means this node can be selected as filesystem manager > # echo hostname2:manager-quorum >> NodeFile # "quorum" means this node has a vote in the quorum selection > # echo hostname3:manager-quorum >> NodeFile # all my nodes are usually the same, so they all have same roles. > # mmcrcluster -n NodeFile -p $(hostname) -A > > ### sdb1 is either a local disk on hostname1 (in which case the other nodes will access it over tcp to > ### hostname1), or a SAN-disk that they can access directly over FC/iSCSI. > # echo sdb1:hostname1::dataAndMetadata:: > DescFile # This disk can be used for both data and metadata > # mmcrnsd -F DescFile > > # mmstartup -A # starts GPFS services on all nodes > # mmcrfs /gpfs1 gpfs1 -F DescFile > # mount /gpfs1 > > You can add and remove disks from the filesystem, and change most > settings without downtime. You can scale out your workload by adding > more nodes (SAN attached or not), and scale out your disk performance > by adding more disks on the fly.
(IBM uses GPFS to create > scale-out NAS solutions http://www-03.ibm.com/systems/storage/network/sonas/ , > which highlights a few of the features available with GPFS) > > There's no problem running GPFS on other vendors disk systems. I've used Nexsan > SATAboy earlier, for a HPC cluster. One can easily move from one disksystem to > another without downtime. That's good to know. The only FC SAN arrays I've installed/used are IBM FAStT600 and Nexsan SataBlade/Boy. I much prefer the web management interface on the Nexsan units, much more intuitive, more flexible. The FAStT is obviously much more suitable for random IOPS workloads with its 15k FC disks vs 7.2K SATA disks in the Nexsan units (although Nexsan has offered 15K SAS disks and SSDs for a while now). > Cons: > It has it's own page cache, staticly configured. So you don't get the "all > available memory used for page caching" behaviour as you normally do on linux. Yep, that's ugly. > There is a kernel module that needs to be rebuilt on every > upgrade. It's a simple process, but it needs to be done and means we > can't just run "yum update ; reboot" to upgrade. > > % export SHARKCLONEROOT=/usr/lpp/mmfs/src > % cp /usr/lpp/mmfs/src/config/site.mcr.proto /usr/lpp/mmfs/src/config/site.mcr > % vi /usr/lpp/mmfs/src/config/site.mcr # correct GPFS_ARCH, LINUX_DISTRIBUTION and LINUX_KERNEL_VERSION > % cd /usr/lpp/mmfs/src/ ; make clean ; make World > % su - root > # export SHARKCLONEROOT=/usr/lpp/mmfs/src > # cd /usr/lpp/mmfs/src/ ; make InstallImages So is this, but it's totally expected since this is proprietary code and not in mainline. >> To this point IIRC everyone here doing clusters is using NFS, GFS, or >> OCFS. Each has its downsides, mostly because everyone is using maildir. >> NFS has locking issues with shared dovecot index files. GFS and OCFS >> have filesystem metadata performance issues. How does GPFS perform with >> your maildir workload? > > Maildir is likely a worst case type workload for filesystems. Millions > of tiny-tiny files, making all IO random, and getting minimal controller > read cache utilized (unless you can cache all active files). So I've Yep. Which is the reason I've stuck with mbox everywhere I can over the years, minor warts and all, and will be moving to mdbox at some point. IMHO maildir solved one set of problems but created a bigger problem. Many sites hailed maildir as a savior in many ways, then decried it as their user base and IO demands exceeded their storage, scrambling for budget money to fix an "unforeseen" problem which was absolutely clear from day one. At least for anyone with more than a cursory knowledge of filesystem design and hardware performance. > concluded that our performance issues are mostly design errors (and the > fact that there were no better mail storage formats than maildir at the > time these servers were implemented). I expect moving to mdbox will > fix all our performance issues. Yeah, it should decrease FS IOPS by a couple orders of magnitude, especially if you go with large mdbox files. The larger the better. > I *think* GPFS is as good as it gets for maildir storage on clusterfs, > but have no number to back that up ... Would be very interesting if we > could somehow compare numbers for a few clusterfs'. Apparently no one (vendor) with the resources to do so has the desire to do so. > I believe our main limitation in this setup is the iops we can get from > the backend storage system.
> It's hard to balance the IO over enough RAID arrays (the fs is spread over 11 RAID5 arrays of 5 disks each), and we're always having hotspots. Right now two arrays are doing <100 iops, while others are doing 4-500 iops. Would very much like to replace it by something smarter where we can utilize SSDs for active data and something slower for stale data. GPFS can manage this by itself trough it's ILM interface, but we don't have the very fast storage to put in as tier-1. Obviously not news to you: balancing mail workload IO across large filesystems and wide disk farms will always be a problem, due to which users are logged in at a given moment and the fact that you can't stripe all users' small mail files across all disks. And this is true of all mailbox formats to one degree or another, obviously worst with maildir. A properly engineered XFS can get far closer to linear IO distribution across arrays than most filesystems due to its allocation group design, but it still won't be perfect. Simply getting away from maildir, with its extraneous metadata IOs, is a huge win for decreasing clusterFS and SAN IOPs. I'm anxious to see your report on your SAN IOPs after you've converted to mdbox, especially if you go with 16/32MB or larger mdbox files. -- Stan From mikko at woima.fi Tue Jan 3 17:38:48 2012 From: mikko at woima.fi (Mikko Lampikoski) Date: Tue, 3 Jan 2012 17:38:48 +0200 Subject: [Dovecot] What is normal CPU usage of dovecot imap? In-Reply-To: References: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi> Message-ID: <3B6D056C-1D1E-46F4-AB56-FDD5B98BC669@woima.fi> On 3.1.2012, at 17.12, Timo Sirainen wrote: > On 3.1.2012, at 16.54, Mikko Lampikoski wrote: > >> I got Dual Core Intel Xeon CPU 3.00GHz, over 1000 mailbox and almost 1 dovecot login / second (peak time). >> Server stats says that load is continually over 2 and cpu usage is 60%. top says that imap is making this load. > > You mean an actual "imap" process? Or more than one imap processes? Or something else, e.g. "imap-login" process? If there's one long running IMAP process eating CPU, it might have simply gone to an infinite loop, and upgrading could help. It is an "imap" process; it takes CPU for 10-30 seconds and then the PID changes to another imap process (the process also takes 10% of memory = 150MB). Restarting dovecot does not help. >> virtual users are in mysql database and mysqld is running on another server (this server is ok). >> Do I need better CPU or is there something going on that I do not understand? > > Your CPU usage should probably be closer to 0%. I think so too, but I ran out of good ideas. If someone has lots of mails in a mailbox, can it have an effect like this? >> login_process_size: 128 >> login_processes_count: 10 >> login_max_processes_count: 2048 > > Switching to http://wiki2.dovecot.org/LoginProcess#High-performance_mode may be helpful. This loses much of the security benefits, no thanks. >> mail_nfs_storage: yes > > Do you have more than one Dovecot server? This setting doesn't anyway work reliably. If you've only one server accessing mails, you can set this to "no". Trying this too, but I think it's not going to help.. From tss at iki.fi Tue Jan 3 17:44:21 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 3 Jan 2012 17:44:21 +0200 Subject: [Dovecot] What is normal CPU usage of dovecot imap?
In-Reply-To: <3B6D056C-1D1E-46F4-AB56-FDD5B98BC669@woima.fi> References: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi> <3B6D056C-1D1E-46F4-AB56-FDD5B98BC669@woima.fi> Message-ID: <8188AE59-6646-4686-9320-F11D25A42F5D@iki.fi> On 3.1.2012, at 17.38, Mikko Lampikoski wrote: > On 3.1.2012, at 17.12, Timo Sirainen wrote: > >> On 3.1.2012, at 16.54, Mikko Lampikoski wrote: >> >>> I got Dual Core Intel Xeon CPU 3.00GHz, over 1000 mailbox and almost 1 dovecot login / second (peak time). >>> Server stats says that load is continually over 2 and cpu usage is 60%. top says that imap is making this load. >> >> You mean an actual "imap" process? Or more than one imap processes? Or something else, e.g. "imap-login" process? If there's one long running IMAP process eating CPU, it might have simply gone to an infinite loop, and upgrading could help. > > It is "imap" process and process takes cpu like 10-30 seconds and then PID changes to another imap process (process also takes 10% of memory = 150MB). > Restarting dovecot does not help. Is the IMAP process always for the same user (or the same few users)? verbose_proctitle=yes shows the username in ps output. > If someone have lots of mails in mailbox can it make effect like this? Possibly. maildir_very_dirty_syncs=yes is helpful with huge maildirs (I don't remember if it exists in v1.1 yet). From preacher_net at gmx.net Tue Jan 3 18:47:23 2012 From: preacher_net at gmx.net (Preacher) Date: Tue, 03 Jan 2012 17:47:23 +0100 Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration In-Reply-To: <4F020328.7090303@hardwarefreak.com> References: <4F01D886.6070905@gmx.net> <4F020328.7090303@hardwarefreak.com> Message-ID: <4F03311B.8030109@gmx.net> Actually I took a look inside the folders right after starting up and waited for two hours to let Dovecot work. Saving the whole Maildir into a tar on the same partition also took only 2 hours before. But nothing changed, and when looking at activity with top, the server was idle and dovecot was not indexing. Also I wasn't able to drag new messages to the folder hierarchy. With courier it takes no more than 5 seconds to download the headers in a folder containing more than 3.000 messages. Stan Hoeppner schrieb: > On 1/2/2012 10:17 AM, Preacher wrote: > ... >> So I forced to install the Debisn 7.0 packages with 2.0.15 and finally >> got the server running, I also restarted the whole machine to empty caches. >> But the problem I got was that in the huge folder hierarchy the >> downloaded headers in the individual folders disappeared, some folders >> showed a few very old messages, some none. Also some subfolders >> disappeared. >> I checked this with Outlook and Thunderbird. The difference was, that >> Thunderbird shows more messages (but not all) than Outlook in some >> folders, but also none in some others. Outlook brought up a message in >> some cases, that the connection timed out, although I set the timeout to >> 60s. > ... >> Anyone a clue what's wrong here? > Absolutely. What's wrong is a lack of planning, self education, and > patience on the part of the admin. > > Dovecot gets its speed from its indexes. How long do you think it takes > Dovecot to index 37GB of maildir messages, many thousands per directory, > hundreds of directories, millions of files total? Until those indexes > are built you will not see a complete folder tree and all kinds of stuff > will be missing. > > For your education: Dovecot indexes every message and these indexes are > the key to its speed. Normally indexing occurs during delivery when
Normally indexing occurs during delivery when > using deliver or lmtp, so the index updates are small and incremental, > keeping performance high. You tried to do this and expected Dovecot to > instantly process it all: > > http://www.youtube.com/watch?v=THVz5aweqYU > > If you don't know, that's a coal train car being dumped. 100 tons of > coal in a few seconds. Visuals are always good teaching tools. I think > this drives the point home rather well. > From preacher_net at gmx.net Tue Jan 3 18:50:51 2012 From: preacher_net at gmx.net (Preacher) Date: Tue, 03 Jan 2012 17:50:51 +0100 Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration In-Reply-To: <1325590933.6987.59.camel@innu> References: <4F01D886.6070905@gmx.net> <1325590933.6987.59.camel@innu> Message-ID: <4F0331EB.2000006@gmx.net> Yes i did, followed this guide you mentioned, it said that it found the 3 mailboxes I have set-up in total, conversion took only a few moments. I guess the mail location was automaticall set correctly as the folder hierachy was displayed correctly Timo Sirainen schrieb: > On Mon, 2012-01-02 at 17:17 +0100, Preacher wrote: >> So I forced to install the Debisn 7.0 packages with 2.0.15 and finally >> got the server running, I also restarted the whole machine to empty caches. >> But the problem I got was that in the huge folder hierarchy the >> downloaded headers in the individual folders disappeared, some folders >> showed a few very old messages, some none. Also some subfolders disappeared. >> I checked this with Outlook and Thunderbird. The difference was, that >> Thunderbird shows more messages (but not all) than Outlook in some >> folders, but also none in some others. Outlook brought up a message in >> some cases, that the connection timed out, although I set the timeout to >> 60s. > Did you run the Courier migration script? > http://wiki2.dovecot.org/Migration/Courier > > Also explicitly setting mail_location would be a good idea. > > > From rnabioullin at gmail.com Tue Jan 3 19:33:31 2012 From: rnabioullin at gmail.com (Ruslan Nabioullin) Date: Tue, 03 Jan 2012 12:33:31 -0500 Subject: [Dovecot] Multiple Maildirs per Virtual User In-Reply-To: <1325591532.6987.60.camel@innu> References: <4F010936.7080107@gmail.com> <1325591532.6987.60.camel@innu> Message-ID: <4F033BEB.4070103@gmail.com> On 01/03/2012 06:52 AM, Timo Sirainen wrote: > On Sun, 2012-01-01 at 20:32 -0500, Ruslan Nabioullin wrote: >> How would it be possible to configure dovecot (2.0.16) in such a way >> that it would serve several maildirs (e.g., INBOX, INBOX.Drafts, >> INBOX.Sent, forum_email, [Gmail].Trash, etc.) per virtual user? >> >> I am only able to specify a single maildir, but I want all maildirs in >> /home/my-username/mail/account1/ to be served. > > Sounds like you want LAYOUT=fs rather than the default LAYOUT=maildir++. > http://wiki2.dovecot.org/MailboxFormat/Maildir#Directory_Structure > > I changed /etc/dovecot/passwd to: my-username_account1:{PLAIN}password:my-username:my-group::::userdb_mail=maildir:/home/my-username/mail/account1:LAYOUT=fs Dovecot creates {tmp,new,cur} dirs within account1 (the root), apparently not recognizing the maildirs within the root (e.g., /home/my-username/mail/account1/INBOX/{tmp,new,cur}). -- Ruslan Nabioullin rnabioullin at gmail.com -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 900 bytes Desc: OpenPGP digital signature URL: From tss at iki.fi Tue Jan 3 19:47:52 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 3 Jan 2012 19:47:52 +0200 Subject: [Dovecot] Multiple Maildirs per Virtual User In-Reply-To: <4F033BEB.4070103@gmail.com> References: <4F010936.7080107@gmail.com> <1325591532.6987.60.camel@innu> <4F033BEB.4070103@gmail.com> Message-ID: <7543064C-1F66-49D1-8694-4793958CCFD8@iki.fi> On 3.1.2012, at 19.33, Ruslan Nabioullin wrote: > I changed /etc/dovecot/passwd to: > my-username_account1:{PLAIN}password:my-username:my-group::::userdb_mail=maildir:/home/my-username/mail/account1:LAYOUT=fs > > Dovecot creates {tmp,new,cur} dirs within account1 (the root), > apparently not recognizing the maildirs within the root (e.g., > /home/my-username/mail/account1/INBOX/{tmp,new,cur}). Your client probably only shows subscribed folders, and none are subscribed. From stan at hardwarefreak.com Tue Jan 3 19:55:27 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Tue, 03 Jan 2012 11:55:27 -0600 Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration In-Reply-To: <4F03311B.8030109@gmx.net> References: <4F01D886.6070905@gmx.net> <4F020328.7090303@hardwarefreak.com> <4F03311B.8030109@gmx.net> Message-ID: <4F03410F.6080404@hardwarefreak.com> On 1/3/2012 10:47 AM, Preacher wrote: > Actually I took a look inside the folders right after starting up and > waited for two hours to let Dovecot work. So two hours after clicking on an IMAP folder the contents of that folder were still not displayed correctly? > Saving the whole Maildir into a tar on the same partition also took only > 2 hours before. This doesn't have any relevance. > But nothing did change and when looking at activities with top, the > server was idle, dovecot not indexing. > Also I wasn't able to drag new messages to the folder hierachy. Then something is seriously wrong. The fact that you "forced" the Wheezy Dovecot package into a Squeeze system may have something, if not everything, to do with your problem (somehow I missed this fact in your previous email). Debian testing/sid packages are compiled against newer system libraries. If you check various logs you'll likely see problems related to this. This is also why the Backports project exists--TESTING packages are compiled against the STABLE libraries so newer application revs can be used on the STABLE distribution. Currently there is no Dovecot 2.x backport for Squeeze. I would suggest you thoroughly remove the Wheezy 2.0.15 package and install the 1.2.15-7 STABLE package. Read the documentation for 1.2.x and configure it properly. Then things will likely work as they should. -- Stan From Ralf.Hildebrandt at charite.de Tue Jan 3 20:09:29 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Tue, 3 Jan 2012 19:09:29 +0100 Subject: [Dovecot] Deliver all addresses to the same mdbox:? 
Message-ID: <20120103180929.GA21651@charite.de> For archiving purposes I'm delivering all addresses to the same mdbox: like this: passdb { driver = passwd-file args = username_format=%u /etc/dovecot/passwd } userdb { driver = static args = uid=1000 gid=1000 home=/home/copymail allow_all_users=yes } Yet I'm getting this: Jan 3 19:03:27 mail postfix/lmtp[29378]: 3THjg02wfWzFvmL: to=, relay=mail.charite.de[private/dovecot-lmtp], conn_use=20, delay=323, delays=323/0/0/0, dsn=4.1.1, status=SOFTBOUNCE (host mail.charite.de[private/dovecot-lmtp] said: 550 5.1.1 <"firstname.lastname at charite.de"@backup.invalid> User doesn't exist: "firstname.lastname at charite.de"@backup.invalid (in reply to RCPT TO command)) (using soft_bounce = yes here in Postfix) In short: backup.invalid is delivered to dovecot by means of LMTP (local socket). I thought my static mapping in userdb would enable the lmtp listener to accept ALL recipients and map their $home to /home/copymail - why is that not working? -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From tss at iki.fi Tue Jan 3 20:34:11 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 3 Jan 2012 20:34:11 +0200 Subject: [Dovecot] Deliver all addresses to the same mdbox:? In-Reply-To: <20120103180929.GA21651@charite.de> References: <20120103180929.GA21651@charite.de> Message-ID: On 3.1.2012, at 20.09, Ralf Hildebrandt wrote: > For archiving purposes I'm delivering all addresses to the same mdbox: > like this: > > passdb { > driver = passwd-file > args = username_format=%u /etc/dovecot/passwd > } > > userdb { > driver = static > args = uid=1000 gid=1000 home=/home/copymail allow_all_users=yes > } allow_all_users=yes is used only when the passdb is incapable of telling if the user exists or not. > Yet I'm getting this: > > Jan 3 19:03:27 mail postfix/lmtp[29378]: 3THjg02wfWzFvmL: to=, > relay=mail.charite.de[private/dovecot-lmtp], conn_use=20, delay=323, delays=323/0/0/0, dsn=4.1.1, status=SOFTBOUNCE (host > mail.charite.de[private/dovecot-lmtp] said: 550 5.1.1 <"firstname.lastname at charite.de"@backup.invalid> User doesn't exist: "firstname.lastname at charite.de"@backup.invalid (in reply to RCPT TO > command)) Fails because user doesn't exist in passwd-file, I guess. Maybe use passdb static? If you also need authentication to work, put passdb static in protocol lmtp {} and passdb passwd-file in protocol !lmtp {} From Ralf.Hildebrandt at charite.de Tue Jan 3 20:43:38 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Tue, 3 Jan 2012 19:43:38 +0100 Subject: [Dovecot] Deliver all addresses to the same mdbox:? In-Reply-To: References: <20120103180929.GA21651@charite.de> Message-ID: <20120103184338.GC21651@charite.de> * Timo Sirainen : > On Wed, 2012-01-04 at 15:06 +0100, Ralf Hildebrandt wrote: > > For archiving purposes I'm delivering all addresses to the same mdbox: > > like this: > > > > passdb { > > driver = passwd-file > > args = username_format=%u /etc/dovecot/passwd > > } > > > > userdb { > > driver = static > > args = uid=1000 gid=1000 home=/home/copymail allow_all_users=yes > > } > > allow_all_users=yes is used only when the passdb is incapable of telling if the user exists or not. Ah, damn :| > Fails because user doesn't exist in passwd-file, I guess. Indeed. > Maybe use passdb static?
Right now I simply solved it by using + addressing like this: Jan 3 19:42:49 mail postfix/lmtp[2728]: 3THkfd20f1zFvlF: to=, relay=mail.charite.de[private/dovecot-lmtp], delay=0.01, delays=0.01/0/0/0, dsn=2.0.0, status=sent (250 2.0.0 IHdDM9VLA0/aCwAAY73zkw Saved) Call me lazy :) > If you also need authentication to work, put passdb static in protocol > lmtp {} and passdb passwd-file in protocol !lmtp {} Ah yes, good idea. -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From djonas at vitalwerks.com Tue Jan 3 21:14:28 2012 From: djonas at vitalwerks.com (David Jonas) Date: Tue, 03 Jan 2012 11:14:28 -0800 Subject: [Dovecot] Maildir migration and uids In-Reply-To: <81E45F76-34A4-4666-9F10-7566B7BD496C@iki.fi> References: <4EF28D7B.8050601@vitalwerks.com> <81E45F76-34A4-4666-9F10-7566B7BD496C@iki.fi> Message-ID: <4F035394.8090701@vitalwerks.com> On 12/29/11 5:35 AM, Timo Sirainen wrote: > On 22.12.2011, at 3.52, David Jonas wrote: > >> I'm in the process of migrating a large number of maildirs to a 3rd >> party dovecot server (from a dovecot server). Tests have shown that >> using imap to sync the accounts doesn't preserve the uidl for pop3 access. >> >> My current attempt is to convert the maildir to mbox and add an X-UIDL >> header in the process. Run a second dovecot that serves the converted >> mbox. But dovecot's docs say, "None of these headers are sent to >> IMAP/POP3 clients when they read the mail". > > That's rather complex. Thanks, Timo. Unfortunately I don't have shell access at the new dovecot servers. They have a migration tool that doesn't keep the uids intact when I sync via imap. Looks like I'm going to have to sync twice, once with POP3 (which maintains uids) and once with imap skipping the inbox. Ugh. >> Is there any way to sync these maildirs to the new server and maintain >> the uids? > > What Dovecot versions? dsync could do this easily. You could simply install the dsync binary even if you're using Dovecot v1.x. Good idea with dsync though, I had forgotten about that. Perhaps they'll do something custom for me. > You could also log in with POP3 and get the UIDL list and write a script to add them to dovecot-uidlist. > From CMarcus at Media-Brokers.com Tue Jan 3 22:40:02 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 03 Jan 2012 15:40:02 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? Message-ID: <4F0367A2.1000807@Media-Brokers.com> Hi everyone, Was just perusing this article about how trivial it is to decrypt passwords that are stored using most (standard) encryption methods (like MD5), and was wondering - is it possible to use bcrypt with dovecot+postfix+mysql (or postgres)? -- Best regards, Charles From CMarcus at Media-Brokers.com Tue Jan 3 22:59:39 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 03 Jan 2012 15:59:39 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F0367A2.1000807@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> Message-ID: <4F036C3B.5080904@Media-Brokers.com> On 2012-01-03 3:40 PM, Charles Marcus wrote: > Hi everyone, > > Was just perusing this article about how trivial it is to decrypt > passwords that are stored using most (standard) encryption methods (like > MD5), and was wondering - is it possible to use bcrypt with > dovecot+postfix+mysql (or postgres)? Ooop... forgot the link: http://codahale.com/how-to-safely-store-a-password/ But after perusing the wiki: http://wiki2.dovecot.org/Authentication/PasswordSchemes it appears not? Timo - any chance for adding support for it? Or is that web page incorrect? -- Best regards, Charles From david at blue-labs.org Tue Jan 3 23:03:34 2012 From: david at blue-labs.org (David Ford) Date: Tue, 03 Jan 2012 16:03:34 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F036C3B.5080904@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> Message-ID: <4F036D26.9010409@blue-labs.org> md5 is deprecated, *nix has used sha1 for a while now From bill-dovecot at carpenter.org Wed Jan 4 00:10:13 2012 From: bill-dovecot at carpenter.org (WJCarpenter) Date: Tue, 03 Jan 2012 14:10:13 -0800 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F036C3B.5080904@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> Message-ID: <4F037CC5.9030900@carpenter.org> >> Was just perusing this article about how trivial it is to decrypt >> passwords that are stored using most (standard) encryption methods (like >> MD5), and was wondering - is it possible to use bcrypt with >> dovecot+postfix+mysql (or postgres)? > > Ooop... forgot the link: > > http://codahale.com/how-to-safely-store-a-password/ AFAIK, that web page is correct in a relative sense, but getting bcrypt support might not be the most urgent priority. In his description, he uses the example of passwords which are "lowercase, alphanumeric, and 6 characters long" (and in another place the example is "lowercase, alphabetic passwords which are ≤7 characters", I guess to illustrate that things have gotten faster). If you are allowing your users to create such weak passwords, using bcrypt will not save you/them. Attackers will just be wasting more of your CPU time making attempts. If they get a copy of your hashed passwords, they'll likely be wasting their own CPU time, but they have plenty of that, too. There are plenty of recommendations for what makes a good password / passphrase. If you are not already enforcing such rules (perhaps also with a lookaside to one or more of the leaked tables of passwords floating around), then IMHO that's much more urgent. (One of the best twists I read somewhere [sorry, I forget where] was to require at least one uppercase and one digit, but to not count them as fulfilling the requirement if they were used as the first or last character.) Side note, but for the sake of precision ... attackers are not literally decrypting passwords. They are guessing passwords and then performing a one-way hash to see if they guessed correctly. As a practical matter, that means that you have to ask your users to update their passwords any time you change the password storage scheme. (I don't know enough about bcrypt to know if that would be required if you wanted to simply increase the work factor.)
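For anyone who wants to experiment in the meantime, the salted CRYPT schemes Dovecot already knows give you per-user salts and, on newer systems, an adjustable work factor. A rough sketch, assuming a 2.0-era doveadm and a glibc whose crypt() understands the $6$ format -- the password here is just an example:

# generate a salted SHA512-CRYPT hash; doveadm picks a random salt each time
doveadm pw -s SHA512-CRYPT -p 'example-password'
# output looks like {SHA512-CRYPT}$6$<salt>$<hash> -- store the whole string
# in your SQL password column, and in dovecot-sql.conf.ext set:
#   default_pass_scheme = SHA512-CRYPT

Newer doveadm builds also take -r to raise the rounds, which shows up as a standard glibc "$6$rounds=N$..." string. Since each stored entry can carry its own {SCHEME} prefix, which overrides default_pass_scheme, you can migrate users gradually instead of rehashing everyone at once.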
From CMarcus at Media-Brokers.com Wed Jan 4 00:27:16 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 03 Jan 2012 17:27:16 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F036D26.9010409@blue-labs.org> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F036D26.9010409@blue-labs.org> Message-ID: <4F0380C4.8040205@Media-Brokers.com> On 2012-01-03 4:03 PM, David Ford wrote: > md5 is deprecated, *nix has used sha1 for a while now That link lumps sha1 in with MD5 and others: "Why Not {MD5, SHA1, SHA256, SHA512, SHA-3, etc}?" -- Best regards, Charles From CMarcus at Media-Brokers.com Wed Jan 4 00:30:30 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 03 Jan 2012 17:30:30 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F037CC5.9030900@carpenter.org> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> Message-ID: <4F038186.3030505@Media-Brokers.com> On 2012-01-03 5:10 PM, WJCarpenter wrote: > In his description, he uses the example of passwords which are > "lowercase, alphanumeric, and 6 characters long" (and in another place > the example is "lowercase, alphabetic passwords which are ≤7 > characters", I guess to illustrate that things have gotten faster). If > you are allowing your users to create such weak passwords, using bcrypt > will not save you/them. Attackers will just be wasting more of your CPU > time making attempts. If they get a copy of your hashed passwords, > they'll likely be wasting their own CPU time, but they have plenty of > that, too. I require strong passwords of 15 characters in length. What's more, they are assigned (by me), and the user cannot change it. But he isn't talking about brute force attacks against the server.
He is > talking about if someone gained access to the SQL database where the > passwords are stored (as has happened countless times in the last few > years), and then had the luxury of brute forcing an attack locally (on > their own systems) against your password database. when it comes to brute force, passwords like "51k$jh#21hiaj2" are significantly weaker than "wePut85umbrellasIn2shoes". the former is considerably more difficult for humans, which makes them far more likely to write it on a sticky note and leave it handily available. From simon.brereton at buongiorno.com Wed Jan 4 00:38:36 2012 From: simon.brereton at buongiorno.com (Simon Brereton) Date: Tue, 3 Jan 2012 17:38:36 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F038186.3030505@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> Message-ID: On 3 January 2012 17:30, Charles Marcus wrote: > On 2012-01-03 5:10 PM, WJCarpenter wrote: >> >> In his description, he uses the example of passwords which are >> "lowercase, alphanumeric, and 6 characters long" (and in another place >> the example is "lowercase, alphabetic passwords which are ≤7 >> characters", I guess to illustrate that things have gotten faster). If >> you are allowing your users to create such weak passwords, using bcrypt >> will not save you/them. Attackers will just be wasting more of your CPU >> time making attempts. If they get a copy of your hashed passwords, >> they'll likely be wasting their own CPU time, but they have plenty of >> that, too. > > > I require strong passwords of 15 characters in length. What's more, they are > assigned (by me), and the user cannot change it. But he isn't talking about > brute force attacks against the server. He is talking about if someone > gained access to the SQL database where the passwords are stored (as has > happened countless times in the last few years), and then had the luxury of > brute forcing an attack locally (on their own systems) against your password > database. 24+ characters would be better... http://xkcd.com/936/ Simon From dg at dguhl.org Wed Jan 4 00:48:05 2012 From: dg at dguhl.org (Dennis Guhl) Date: Tue, 3 Jan 2012 23:48:05 +0100 Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration In-Reply-To: <4F03410F.6080404@hardwarefreak.com> References: <4F01D886.6070905@gmx.net> <4F020328.7090303@hardwarefreak.com> <4F03311B.8030109@gmx.net> <4F03410F.6080404@hardwarefreak.com> Message-ID: <20120103224804.GA16434@laptop-dg.leere.eu> On Tue, Jan 03, 2012 at 11:55:27AM -0600, Stan Hoeppner wrote: [..] > I would suggest you thoroughly remove the Wheezy 2.0.15 package and Not to use the Wheezy package might be wise. > install the 1.2.15-7 STABLE package. Read the documentation for 1.2.x Alternatively you could use Stephan Bosch's repository: deb http://xi.rename-it.nl/debian/ stable-auto/dovecot-2.0 main Despite the warning at http://wiki2.dovecot.org/PrebuiltBinaries#Automatically_Built_Packages they work very stably. > and configure it properly. Then things will likely work as they should. Dennis From bill-dovecot at carpenter.org Wed Jan 4 01:12:50 2012 From: bill-dovecot at carpenter.org (WJCarpenter) Date: Tue, 03 Jan 2012 15:12:50 -0800 Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> Message-ID: <4F038B72.1000003@carpenter.org> On 1/3/2012 2:38 PM, Simon Brereton wrote: > http://xkcd.com/936/ As the saying goes, entropy ain't what it used to be. https://www.grc.com/haystack.htm However, both links actually illustrate the same point: once you get past dictionary attacks, the length of the password is the dominant factor in the strength of the password against a brute force attack. From gedalya at gedalya.net Wed Jan 4 01:59:28 2012 From: gedalya at gedalya.net (Gedalya) Date: Tue, 03 Jan 2012 18:59:28 -0500 Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration In-Reply-To: <20120103224804.GA16434@laptop-dg.leere.eu> References: <4F01D886.6070905@gmx.net> <4F020328.7090303@hardwarefreak.com> <4F03311B.8030109@gmx.net> <4F03410F.6080404@hardwarefreak.com> <20120103224804.GA16434@laptop-dg.leere.eu> Message-ID: <4F039660.1010903@gedalya.net> On 01/03/2012 05:48 PM, Dennis Guhl wrote: > On Tue, Jan 03, 2012 at 11:55:27AM -0600, Stan Hoeppner wrote: > > [..] > >> I would suggest you thoroughly remove the Wheezy 2.0.15 package and > Not to use the Wheezy package might be wise. > >> install the 1.2.15-7 STABLE package. Read the documentation for 1.2.x > Alternatively you could use Stephan Bosch's repository: > > deb http://xi.rename-it.nl/debian/ stable-auto/dovecot-2.0 main > > Despite the warning at > http://wiki2.dovecot.org/PrebuiltBinaries#Automatically_Built_Packages > they work very stably. > >> and configure it properly. Then things will likely work as they should. > Dennis See http://www.prato.linux.it/~mnencia/debian/dovecot-squeeze/ and http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=592959 I have the packages from this repository running in production on a squeeze system, working fine. From CMarcus at Media-Brokers.com Wed Jan 4 03:25:02 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 03 Jan 2012 20:25:02 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F038B72.1000003@carpenter.org> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> Message-ID: <4F03AA6E.30003@Media-Brokers.com> On 2012-01-03 6:12 PM, WJCarpenter wrote: > On 1/3/2012 2:38 PM, Simon Brereton wrote: >> http://xkcd.com/936/ > > As the saying goes, entropy ain't what it used to be. > > https://www.grc.com/haystack.htm > > However, both links actually illustrate the same point: once you get > past dictionary attacks, the length of the password is the dominant factor > in the strength of the password against a brute force attack. I think y'all are missing the point... not sure, because I'm still not completely sure that this is saying what I think it is saying (that's why I asked)... I'm not worried about *active* brute force attacks against my server using the standard smtp or imap protocols - fail2ban takes care of those in a hurry. What I'm worried about is the worst case scenario of someone getting ahold of the entire user database of *stored* passwords, where they can then take their time and brute force them at their leisure, on *their* *own* systems, without having to hammer my server over smtp/imap and without the automated limit of *my* fail2ban getting in their way. As for people writing their passwords down...
our policy is that it is a potentially *firable* *offense* (never even encountered one case of anyone posting their password, and I'm on these systems off and on all the time) if they do post these anywhere that is not under lock and key. Also, I always set up their email clients for them (on their workstations and on their phones) - and of course I tell it to remember the password, so they basically never have to enter it. -- Best regards, Charles From david at blue-labs.org Wed Jan 4 03:37:21 2012 From: david at blue-labs.org (David Ford) Date: Tue, 03 Jan 2012 20:37:21 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F03AA6E.30003@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> Message-ID: <4F03AD51.7080506@blue-labs.org> On 01/03/2012 08:25 PM, Charles Marcus wrote: > > I think y'all are missing the point... not sure, because I'm still not > completely sure that this is saying what I think it is saying (that's > why I asked)... > > I'm not worried about *active* brute force attacks against my server > using the standard smtp or imap protocols - fail2ban takes care of > those in a hurry. > > What I'm worried about is the worst case scenario of someone getting > ahold of the entire user database of *stored* passwords, where they > can then take their time and brute force them at their leisure, on > *their* *own* systems, without having to hammer my server over > smtp/imap and without the automated limit of *my* fail2ban getting in > their way. > > As for people writing their passwords down... our policy is that it is > a potentially *firable* *offense* (never even encountered one case of > anyone posting their password, and I'm on these systems off and on all > the time) if they do post these anywhere that is not under lock and > key. Also, I always set up their email clients for them (on their > workstations and on their phones) - and of course I tell it to remember > the password, so they basically never have to enter it. perhaps. part of my point, along with brute force resistance, is that when security becomes onerous to the typical user such as requiring non-repeat passwords of "10 characters including punctuation and mixed case", even stalwart policy followers start tending toward avoiding it. if anyone has a stressful job, spends a lot of time working, missing sleep, is thereby prone to memory lapse, it's almost a sure guarantee they *will* write it down/store it somewhere -- usually not in a password safe. or, they'll export their saved passwords to make a backup plain text copy, and leave it on their Desktop folder but coyly named and prefixed with a few random emails to grandma, so mr. sysadmin doesn't notice it. on a tangent, you should worry about active brute force attacks. fail2ban and iptables heuristics become meaningless when the brute forcing is done by bot nets, which are more and more common than single-host attacks these days. one IP per attempt in a 10-20 minute window will probably never trigger any of these methods. From michael at orlitzky.com Wed Jan 4 03:58:51 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Tue, 03 Jan 2012 20:58:51 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F03AA6E.30003@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> Message-ID: <4F03B25B.2020309@orlitzky.com> On 01/03/2012 08:25 PM, Charles Marcus wrote: > > What I'm worried about is the worst case scenario of someone getting > ahold of the entire user database of *stored* passwords, where they can > then take their time and brute force them at their leisure, on *their* > *own* systems, without having to hammer my server over smtp/imap and > without the automated limit of *my* fail2ban getting in their way. To prevent rainbow table attacks, salt your passwords. You can make them a little bit more difficult in plenty of ways, but salt is the /solution/. > As for people writing their passwords down... our policy is that it is a > potentially *firable* *offense* (never even encountered one case of > anyone posting their password, and I'm on these systems off and on all > the time) if they do post these anywhere that is not under lock and key. > Also, I always set up their email clients for them (on their > workstations and on their phones - and of course tell it to remember the > password, so they basically never have to enter it. You realize they're just walking around with a $400 post-it note with the password written on it, right? From bill-dovecot at carpenter.org Wed Jan 4 05:07:47 2012 From: bill-dovecot at carpenter.org (WJCarpenter) Date: Tue, 03 Jan 2012 19:07:47 -0800 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F03AA6E.30003@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> Message-ID: <4F03C283.6070005@carpenter.org> On 1/3/2012 5:25 PM, Charles Marcus wrote: > I think ya'll are missing the point... not sure, because I'm still not > completely sure that this is saying what I think it is saying (that's > why I asked)... I'm sure I'm not missing the point. My comment was that password length and complexity are probably more important than bcrypt versus sha1, and you've already addressed those. Given that you use strong 15-character passwords, pretty much all hash functions are already out of reach for brute force. bcrypt is probably better in the same sense that it's harder to drive a car to Saturn than it is to drive to Mars. From list at airstreamcomm.net Wed Jan 4 08:09:39 2012 From: list at airstreamcomm.net (=?utf-8?B?bGlzdEBhaXJzdHJlYW1jb21tLm5ldA==?=) Date: Wed, 04 Jan 2012 00:09:39 -0600 Subject: [Dovecot] =?utf-8?q?GPFS_for_mail-storage_=28Was=3A_Re=3A_Compres?= =?utf-8?q?sing_existing_maildirs=29?= Message-ID: <20120104060924.0E7C727659@osmtp-1.airstreamcomm.net> Great information, thank you. Could you remark on GPFS services hosting mail storage over a WAN between two geographically separated data centers? ----- Reply message ----- From: "Jan-Frode Myklebust" To: "Stan Hoeppner" Cc: "Timo Sirainen" , Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs) Date: Tue, Jan 3, 2012 2:14 am On Sat, Dec 31, 2011 at 01:54:32AM -0600, Stan Hoeppner wrote: > Nice setup. I've mentioned GPFS for cluster use on this list before, > but I think you're the only operator to confirm using it. 
I'm sure > others would be interested in hearing of your first hand experience: > pros, cons, performance, etc. And a ball park figure on the licensing > costs, whether one can only use GPFS on IBM storage or if storage from > others vendors is allowed in the GPFS pool. I used to work for IBM, so I've been a bit uneasy about pushing GPFS too hard publicly, for risk of being accused of being biased. But I changed job in November, so now I'm only a satisfied customer :-) Pros: Extremely simple to configure and manage. Assuming root on all nodes can ssh freely, and port 1191/tcp is open between the nodes, these are the commands to create the cluster, create a NSD (network shared disks), and create a filesystem: # echo hostname1:manager-quorum > NodeFile # "manager" means this node can be selected as filesystem manager # echo hostname2:manager-quorum >> NodeFile # "quorum" means this node has a vote in the quorum selection # echo hostname3:manager-quorum >> NodeFile # all my nodes are usually the same, so they all have same roles. # mmcrcluster -n NodeFile -p $(hostname) -A ### sdb1 is either a local disk on hostname1 (in which case the other nodes will access it over tcp to ### hostname1), or a SAN-disk that they can access directly over FC/iSCSI. # echo sdb1:hostname1::dataAndMetadata:: > DescFile # This disk can be used for both data and metadata # mmcrnsd -F DescFile # mmstartup -A # starts GPFS services on all nodes # mmcrfs /gpfs1 gpfs1 -F DescFile # mount /gpfs1 You can add and remove disks from the filesystem, and change most settings without downtime. You can scale out your workload by adding more nodes (SAN attached or not), and scale out your disk performance by adding more disks on the fly. (IBM uses GPFS to create scale-out NAS solutions http://www-03.ibm.com/systems/storage/network/sonas/ , which highlights a few of the features available with GPFS) There's no problem running GPFS on other vendors disk systems. I've used Nexsan SATAboy earlier, for a HPC cluster. One can easily move from one disksystem to another without downtime. Cons: It has it's own page cache, staticly configured. So you don't get the "all available memory used for page caching" behaviour as you normally do on linux. There is a kernel module that needs to be rebuilt on every upgrade. It's a simple process, but it needs to be done and means we can't just run "yum update ; reboot" to upgrade. % export SHARKCLONEROOT=/usr/lpp/mmfs/src % cp /usr/lpp/mmfs/src/config/site.mcr.proto /usr/lpp/mmfs/src/config/site.mcr % vi /usr/lpp/mmfs/src/config/site.mcr # correct GPFS_ARCH, LINUX_DISTRIBUTION and LINUX_KERNEL_VERSION % cd /usr/lpp/mmfs/src/ ; make clean ; make World % su - root # export SHARKCLONEROOT=/usr/lpp/mmfs/src # cd /usr/lpp/mmfs/src/ ; make InstallImages > > To this point IIRC everyone here doing clusters is using NFS, GFS, or > OCFS. Each has its downsides, mostly because everyone is using maildir. > NFS has locking issues with shared dovecot index files. GFS and OCFS > have filesystem metadata performance issues. How does GPFS perform with > your maildir workload? Maildir is likely a worst case type workload for filesystems. Millions of tiny-tiny files, making all IO random, and getting minimal controller read cache utilized (unless you can cache all active files). So I've concluded that our performance issues are mostly design errors (and the fact that there were no better mail storage formats than maildir at the time these servers were implemented). 
I expect moving to mdbox will fix all our performance issues. I *think* GPFS is as good as it gets for maildir storage on clusterfs, but have no number to back that up ... Would be very interesting if we could somehow compare numbers for a few clusterfs'. I believe our main limitation in this setup is the iops we can get from the backend storage system. It's hard to balance the IO over enough RAID arrays (the fs is spread over 11 RAID5 arrays of 5 disks each), and we're always having hotspots. Right now two arrays are doing <100 iops, while others are doing 400-500 iops. Would very much like to replace it with something smarter where we can utilize SSDs for active data and something slower for stale data. GPFS can manage this by itself through its ILM interface, but we don't have the very fast storage to put in as tier-1. -jf From janfrode at tanso.net Wed Jan 4 09:33:55 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 4 Jan 2012 08:33:55 +0100 Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs) In-Reply-To: <20120104060924.0E7C727659@osmtp-1.airstreamcomm.net> References: <20120104060924.0E7C727659@osmtp-1.airstreamcomm.net> Message-ID: <20120104073355.GA20482@dibs.tanso.net> On Wed, Jan 04, 2012 at 12:09:39AM -0600, list at airstreamcomm.net wrote: > Could you remark on GPFS services hosting mail storage over a WAN between two geographically separated data centers? I haven't tried that, but know the theory quite well. There are 2 or 3 options: 1 - shared SAN between the data centers. Should work the same as a single data center, but you'd want to use disk quorum or a quorum node on a 3rd site to avoid split brain. 2 - different SANs on the two sites. Disks on SAN1 would belong to failure group 1 and disks on SAN2 would belong to failure group 2. GPFS will write every block to disks in different failure groups. Nodes on location 1 will use SAN1 directly, and write to SAN2 via tcp/ip to nodes on location 2 (and vice versa). It's configurable if you want to return success when the first block is written (asynchronous replication), or if you need both replicas to be written. Ref: mmcrfs -K: http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r4.gpfs300.doc%2Fbl1adm_mmcrfs.html With asynchronous replication it will try to allocate both replicas, but if it fails you can re-establish the replication level later using "mmrestripefs". Reading will happen from a direct attached disk if possible, and over tcp/ip if there is no local replica of the needed block. Again you'll need a quorum node on a 3rd site to avoid split brain. 3 - GPFS multi-cluster. Separate GPFS clusters on the two locations. Let them mount each other's filesystems over IP, and access disks over either SAN or IP network. Each cluster is managed locally; if one site goes down, the other site also loses access to the fs. I don't have any experience with this kind of config, but believe it's quite popular to use to share fs between HPC-sites.
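To make the quorum part of options 1 and 2 concrete, a rough sketch from memory -- untested here, node and NSD names made up, so verify against the GPFS administration guide for your version:

# mmchnode --quorum -N tiebreaker.site3.example.com # designate a small 3rd-site node as an extra quorum node
# mmchconfig tiebreakerDisks="nsd_san1_01;nsd_san2_01;nsd_site3_01" # or, with only two sites, let one or three shared disks break ties
# mmcrfs /gpfs1 gpfs1 -F DescFile -m 2 -M 2 -r 2 -R 2 # for option 2: two replicas of data and metadata, so each failure group holds a full copy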
http://www.ibm.com/developerworks/systems/library/es-multiclustergpfs/index.html http://www.cisl.ucar.edu/hss/ssg/presentations/storage/NCAR-GPFS_Elahi.pdf -jf From Juergen.Obermann at hrz.uni-giessen.de Wed Jan 4 11:40:25 2012 From: Juergen.Obermann at hrz.uni-giessen.de (=?utf-8?b?SsO8cmdlbg==?= Obermann) Date: Wed, 04 Jan 2012 10:40:25 +0100 Subject: [Dovecot] error bad file number with compressed mbox files In-Reply-To: <1325590176.6987.57.camel@innu> References: <77e69f67dbffe67a6205ed1de7d2d0df@imapproxy.hrz> <1325590176.6987.57.camel@innu> Message-ID: <20120104104025.13503j06dkxnqg08@webmail.hrz.uni-giessen.de> > On Mon, 2012-01-02 at 15:33 +0100, Jürgen Obermann wrote: > >> can dsync convert from compressed mbox to compressed mdbox format? >> >> When I use compressed mbox files, either with gzip or with bzip2, I can >> read the mails as usual, but I find the following errors in Dovecot's log >> file: >> >> imap(userxy): Error: nfs_flush_fcntl: >> fcntl(/home/hrz/userxy/Mail/mymbox.gz, F_RDLCK) failed: Bad file number >> imap(userxy): Error: nfs_flush_fcntl: >> fcntl(/home/hrz/userxy/Mail/mymbox.bz2, F_RDLCK) failed: Bad file number > > This happens because of mail_nfs_* settings. You can either ignore those > errors, or disable the settings. Those settings are useful only if you > attempt to access the same mailbox from multiple servers at the same > time, which is randomly going to fail even with those settings, so they > aren't hugely useful. > > > After removing the mail_nfs_* settings this problem went away. Thank you, Timo. Greetings, Jürgen From openbsd at e-solutions.re Wed Jan 4 15:08:35 2012 From: openbsd at e-solutions.re (Wesley M.) Date: Wed, 04 Jan 2012 17:08:35 +0400 Subject: [Dovecot] migrate dovecot files 1.2.16 to 2.0.13 (OpenBSD 5.0) Message-ID: <9183836c1dc45b710712d4985f04df81@localhost> Hi, I have a mailserver (Postfix+MySql) on OpenBSD 4.9 with Dovecot 1.2.16, and all works fine. Now I want to do the same but on OpenBSD 5.0. I am having problems using Dovecot 2.0.13 on OpenBSD 5.0. Some tests (on the box): telnet 127.0.0.1 110 Trying 127.0.0.1... Connected to 127.0.0.1. Escape character is '^]'. Connection closed by foreign host. telnet 127.0.0.1 143 Trying 127.0.0.1... Connected to 127.0.0.1. Escape character is '^]'. Connection closed by foreign host. It seems that POP3/IMAP don't work. 'netstat -anf inet' tcp 0 0 *.993 *.* LISTEN tcp 0 0 *.143 *.* LISTEN tcp 0 0 *.995 *.* LISTEN tcp 0 0 *.110 *.* LISTEN Therefore, the ports are open. When I use Roundcube webmail, I get the error "error imap connection". If someone can help me out. Thank you very much.
Files to migrate (already tried to modify them) : dovecot.conf / dovecot-sql.conf / and 'dovecot -n ' ###############::::::::dovecot.conf:::::::::::################################# base_dir = /var/dovecot/ protocols = imap pop3 ssl_cert = /etc/ssl/dovecotcert.pem ssl_key = /etc/ssl/private/dovecot.pem ssl_cipher_list = HIGH:MEDIUM:+TLSv1:!SSLv2:+SSLv3 disable_plaintext_auth = yes default_login_user = _dovecot default_internal_user = _dovecot login_process_per_connection = no login_process_size = 64 mail_location = maildir:/var/mailserv/mail/%d/%n first_valid_uid = 1000 mmap_disable = yes protocol imap { mail_plugins = quota imap_quota autocreate imap_client_workarounds = delay-newmail } protocol pop3 { pop3_uidl_format = %08Xv%08Xu mail_plugins = quota pop3_client_workarounds = outlook-no-nuls oe-ns-eoh } protocol lda { mail_plugins = sieve quota postmaster_address = postmaster at mailr130.localdomain sendmail_path = /usr/sbin/sendmail auth_socket_path = /var/run/dovecot-auth-master } auth default { mechanisms = plain login digest-md5 cram-md5 apop passdb { driver=sql args = /etc/dovecot/dovecot-sql.conf } userdb { driver=sql args = /etc/dovecot/dovecot-sql.conf } user = root socket listen { client { path = /var/spool/postfix/private/auth mode = 0660 user = _postfix group = _postfix } master { path = /var/run/dovecot-auth-master mode = 0600 user = _dovecot # User running Dovecot LDA group = _dovecot # Or alternatively mode 0660 + LDA user in this group } } } plugin { sieve=~/.dovecot.sieve sieve_storage=~/sieve } plugin { quota = maildir quota_rule = *:storage=5G quota_rule2 = Trash:storage=100M quota_warning = storage=95%% /usr/local/bin/quota-warning.sh 95 quota_warning2 = storage=80%% /usr/local/bin/quota-warning.sh 80 } plugin { autocreate = Trash autocreate2 = Spam autocreate3 = Sent autocreate4 = Drafts autosubscribe = Trash autosubscribe2 = Spam autosubscribe3 = Sent autosubscribe4 = Drafts } plugin { antispam_signature = X-Spam-Flag antispam_signature_missing = move # move silently without training antispam_trash = trash;Trash;Deleted Items; Deleted Messages antispam_spam = SPAM;Spam;spam;Junk;junk antispam_mail_sendmail = /usr/local/bin/sa-learn antispam_mail_sendmail_args = --username=%u antispam_mail_spam = --spam antispam_mail_notspam = --ham antispam_mail_tmpdir = /tmp } ###############::::::::dovecot-sql.conf:::::::################################## driver = mysql connect = host=localhost dbname=mail user=postfix password=postfix default_pass_scheme = PLAIN password_query = SELECT email as user, password FROM users WHERE email = '%u' user_query = SELECT id as uid, id as gid, home, concat('*:storage=', quota, 'M') AS quota_rule FROM users WHERE email = '%u' ################### dovecot -n######################################## # 2.0.13: /etc/dovecot/dovecot.conf # OS: OpenBSD 5.0 i386 ffs auth_mechanisms = plain login digest-md5 cram-md5 apop base_dir = /var/dovecot/ default_internal_user = _dovecot default_login_user = _dovecot first_valid_uid = 1000 mail_location = maildir:/var/mailserv/mail/%d/%n mmap_disable = yes passdb { args = /etc/dovecot/dovecot-sql.conf driver = sql } plugin { antispam_mail_notspam = --ham antispam_mail_sendmail = /usr/local/bin/sa-learn antispam_mail_sendmail_args = --username=%u antispam_mail_spam = --spam antispam_mail_tmpdir = /tmp antispam_signature = X-Spam-Flag antispam_signature_missing = move antispam_spam = SPAM;Spam;spam;Junk;junk antispam_trash = trash;Trash;Deleted Items; Deleted Messages autocreate = Trash autocreate2 = Spam 
autocreate3 = Sent autocreate4 = Drafts autosubscribe = Trash autosubscribe2 = Spam autosubscribe3 = Sent autosubscribe4 = Drafts quota = maildir quota_rule = *:storage=5G quota_rule2 = Trash:storage=100M quota_warning = storage=95%% /usr/local/bin/quota-warning.sh 95 quota_warning2 = storage=80%% /usr/local/bin/quota-warning.sh 80 sieve = ~/.dovecot.sieve sieve_storage = ~/sieve } protocols = imap pop3 service auth { unix_listener /var/run/dovecot-auth-master { group = _dovecot mode = 0600 user = _dovecot } unix_listener /var/spool/postfix/private/auth { group = _postfix mode = 0660 user = _postfix } user = root } service imap-login { service_count = 0 vsz_limit = 64 M } service pop3-login { service_count = 0 vsz_limit = 64 M } ssl_cert = /etc/ssl/dovecotcert.pem ssl_cipher_list = HIGH:MEDIUM:+TLSv1:!SSLv2:+SSLv3 ssl_key = /etc/ssl/private/dovecot.pem userdb { args = /etc/dovecot/dovecot-sql.conf driver = sql } protocol imap { imap_client_workarounds = delay-newmail mail_plugins = quota imap_quota autocreate } protocol pop3 { mail_plugins = quota pop3_client_workarounds = outlook-no-nuls oe-ns-eoh pop3_uidl_format = %08Xv%08Xu } protocol lda { auth_socket_path = /var/run/dovecot-auth-master mail_plugins = sieve quota postmaster_address = postmaster at mailr130.localdomain sendmail_path = /usr/sbin/sendmail } Cheers, Wesley. M www.mouedine.net From Ralf.Hildebrandt at charite.de Wed Jan 4 16:06:40 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Wed, 4 Jan 2012 15:06:40 +0100 Subject: [Dovecot] doveadm move from one user's mailbox to another user's mailbox? Message-ID: <20120104140640.GT5536@charite.de> Is something along the lines: doveadm move -u sourceuser destinationuser:/inbox search_query possible with 2.0.16? I want to move mails from a backup mailbox (which has no valid password assigned) to a "restore" mailbox (which *HAS* a password assigned to it). -- Ralf Hildebrandt Gesch?ftsbereich IT | Abteilung Netzwerk Charit? - Universit?tsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From tss at iki.fi Wed Jan 4 16:11:26 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 04 Jan 2012 16:11:26 +0200 Subject: [Dovecot] doveadm move from one user's mailbox to another user's mailbox? In-Reply-To: <20120104140640.GT5536@charite.de> References: <20120104140640.GT5536@charite.de> Message-ID: <1325686286.6987.82.camel@innu> On Wed, 2012-01-04 at 15:06 +0100, Ralf Hildebrandt wrote: > Is something along the lines: > doveadm move -u sourceuser destinationuser:/inbox search_query > possible with 2.0.16? > > I want to move mails from a backup mailbox (which has no valid > password assigned) to a "restore" mailbox (which *HAS* a password > assigned to it). I guess: doveadm import -u dest maildir:/source/Maildir "" search_query There's no direct command to move mails between users. Or you could create a shared namespace.. From Ralf.Hildebrandt at charite.de Wed Jan 4 16:33:01 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Wed, 4 Jan 2012 15:33:01 +0100 Subject: [Dovecot] doveadm move from one user's mailbox to another user's mailbox? 
In-Reply-To: <1325686286.6987.82.camel@innu> References: <20120104140640.GT5536@charite.de> <1325686286.6987.82.camel@innu> Message-ID: <20120104143301.GU5536@charite.de> * Timo Sirainen : > On Wed, 2012-01-04 at 15:06 +0100, Ralf Hildebrandt wrote: > > Is something along the lines: > > doveadm move -u sourceuser destinationuser:/inbox search_query > > possible with 2.0.16? > > > > I want to move mails from a backup mailbox (which has no valid > > password assigned) to a "restore" mailbox (which *HAS* a password > > assigned to it). > > I guess: > > doveadm import -u dest maildir:/source/Maildir "" search_query Yes, just the other way round. It's even better, since the MOVE operation would have REMOVED the mails from the backup. IMPORT instead only copies what it needs. -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From ludek.finstrle at pzkagis.cz Wed Jan 4 17:11:41 2012 From: ludek.finstrle at pzkagis.cz (Ludek Finstrle) Date: Wed, 4 Jan 2012 16:11:41 +0100 Subject: [Dovecot] Small LOGIN_MAX_INBUF_SIZE for GSSAPI with samba4 (AD) In-Reply-To: <1325589389.6987.55.camel@innu> References: <20120102182014.GA20872@pzkagis.cz> <1325589389.6987.55.camel@innu> Message-ID: <20120104151141.GA5755@pzkagis.cz> Hi Timo, Tue, Jan 03, 2012 at 01:16:29PM +0200, Timo Sirainen wrote: > On Mon, 2012-01-02 at 19:20 +0100, Ludek Finstrle wrote: > > > Jan 2 17:58:42 server dovecot: imap-login: Disconnected: Input buffer full (no auth attempts): rip=192.167.14.16, lip=192.167.14.16, secured > .. > > I fixed this problem by enlarging LOGIN_MAX_INBUF_SIZE. I also read about wrong lower/uppercase, > > but that's definitely not my problem (I tried all lower/uppercase possibilities in the login). > > > > I sniffed the plain communication and the "a0000 AUTHENTICATE GSSAPI" line has around 1873 chars. > > When I enlarged LOGIN_MAX_INBUF_SIZE to 2048 the problem disappeared and I'm now able to log in > > to dovecot using gssapi in the mutt client. > > There was already code that allowed 16kB SASL messages, but that didn't > work for the initial SASL response with the IMAP SASL-IR extension. > > > The simple patch I have to use is attached. > > I increased it to 4 kB: > http://hg.dovecot.org/dovecot-2.0/rev/d06061408f6d thank you very much for such a fast reaction and such a good piece of SW. Luf From bra at fsn.hu Wed Jan 4 17:19:33 2012 From: bra at fsn.hu (Attila Nagy) Date: Wed, 04 Jan 2012 16:19:33 +0100 Subject: [Dovecot] assertion failed in mail-index.c Message-ID: <4F046E05.8070000@fsn.hu> Hi, I have this: Jan 04 15:55:21 pop3(jfm47): Panic: file mail-index.c: line 257 (mail_index_keyword_lookup_or_create): assertion failed: (*keyword != '\0') Jan 04 15:55:21 master: Error: service(pop3): child 3391 killed with signal 6 (core not dumped - set service pop3 { drop_priv_before_exec=yes }) I don't know why this happened, but shouldn't the "self healing" (seen in the wiki I think :) kick in here? I mean it's even better to completely remove the index than to die and make the mailbox inaccessible.
Thanks, From tss at iki.fi Wed Jan 4 17:28:15 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 04 Jan 2012 17:28:15 +0200 Subject: [Dovecot] assertion failed in mail-index.c In-Reply-To: <4F046E05.8070000@fsn.hu> References: <4F046E05.8070000@fsn.hu> Message-ID: <1325690895.6987.88.camel@innu> On Wed, 2012-01-04 at 16:19 +0100, Attila Nagy wrote: > Hi, > > I have this: > Jan 04 15:55:21 pop3(jfm47): Panic: file mail-index.c: line 257 > (mail_index_keyword_lookup_or_create): assertion failed: (*keyword != '\0') > Jan 04 15:55:21 master: Error: service(pop3): child 3391 killed with > signal 6 (core not dumped - set service pop3 { drop_priv_before_exec=yes }) > I don't know why this happened, but shouldn't the "self healing" (seen > in the wiki I think :) kick in here? > I mean it's even better to completely remove the index than to die and > make the mailbox inaccessible. See if http://hg.dovecot.org/dovecot-2.0/raw-rev/5ef791398c8c helps. If not, I'd need a gdb backtrace to find out what is causing it: http://dovecot.org/bugreport.html From sottilette at rfx.it Wed Jan 4 19:08:52 2012 From: sottilette at rfx.it (sottilette at rfx.it) Date: Wed, 4 Jan 2012 18:08:52 +0100 (CET) Subject: [Dovecot] POP3 problems Message-ID: Migrated a 1.0.2 server to 2.0.16 (same old box). IMAP seems to be working OK. POP3 gives problems with some clients (Outlook 2010 and Thunderbird reported). Seems to be an authentication problem. Below is my doveconf -n (debug enabled, but no answers found to the problems). Any hints? Thanks, P. # doveconf -n # 2.0.16: /etc/dovecot/dovecot.conf doveconf: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:791: add auth_ prefix to all settings inside auth {} and remove the auth {} section completely doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:864: passdb {} has been replaced by passdb { driver= } doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:935: userdb passwd {} has been replaced by userdb { driver=passwd } doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:998: auth_user has been replaced by service auth { user } doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:1131: ssl_disable has been renamed to ssl # OS: Linux 2.6.9-42.0.10.ELsmp i686 CentOS release 4.9 (Final) auth_debug = yes auth_debug_passwords = yes auth_mechanisms = plain login auth_verbose = yes disable_plaintext_auth = no info_log_path = /var/log/mail/dovecot.info.log listen = * log_path = /var/log/mail/dovecot.log mail_full_filesystem_access = yes mail_location = mbox:~/:INBOX=/var/mail/%u mbox_read_locks = dotlock fcntl passdb { driver = pam } protocols = pop3 imap service auth { user = root } ssl = no ssl_cert = </etc/pki/dovecot/certs/dovecot.pem ssl_key = </etc/pki/dovecot/private/dovecot.pem userdb { driver = passwd } userdb { driver = passwd } protocol lda { postmaster_address = postmaster at example.com } From wgillespie+dovecot at es2eng.com Wed Jan 4 19:16:08 2012 From: wgillespie+dovecot at es2eng.com (Willie Gillespie) Date: Wed, 04 Jan 2012 10:16:08 -0700 Subject: [Dovecot] POP3 problems In-Reply-To: References: Message-ID: <4F048958.5070208@es2eng.com> On 01/04/2012 10:08 AM, sottilette at rfx.it wrote: > Migrated a 1.0.2 server to 2.0.16 (same old box).
Some of the configuration settings changed between 1.x and 2.x > doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:791: add auth_ prefix to all settings inside auth {} and remove the auth {} section completely > doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:864: passdb {} has been replaced by passdb { driver= } > doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:935: userdb passwd {} has been replaced by userdb { driver=passwd } > doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:998: auth_user has been replaced by service auth { user } > doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:1131: ssl_disable has been renamed to ssl You'll probably want to make sure everything is correct as per a 2.x config. From tss at iki.fi Wed Jan 4 19:24:28 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 4 Jan 2012 19:24:28 +0200 Subject: [Dovecot] POP3 problems In-Reply-To: References: Message-ID: <55076E79-BD81-455B-BD68-9ABCFE53ED22@iki.fi> On 4.1.2012, at 19.08, sottilette at rfx.it wrote: > Migrated a 1.0.2 server to 2.0.16 (same old box). > IMAP seems to be working OK. > POP3 gives problems with some clients (Outlook 2010 and Thunderbird reported). > Seems to be an authentication problem. > Below is my doveconf -n (debug enabled, but no answers found to the problems) What do the logs say when a client logs in? The debug logs should tell everything. > doveconf: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf You should do this and replace your old dovecot.conf with the new generated one. > userdb { > driver = passwd > } > userdb { > driver = passwd > } Also remove the duplicated userdb passwd. From sottilette at rfx.it Wed Jan 4 20:11:47 2012 From: sottilette at rfx.it (sottilette at rfx.it) Date: Wed, 4 Jan 2012 19:11:47 +0100 (CET) Subject: [Dovecot] POP3 problems In-Reply-To: <55076E79-BD81-455B-BD68-9ABCFE53ED22@iki.fi> References: <55076E79-BD81-455B-BD68-9ABCFE53ED22@iki.fi> Message-ID: On Wed, 4 Jan 2012, Timo Sirainen wrote: >> Migrated a 1.0.2 server to 2.0.16 (same old box). >> IMAP seems to be working OK. >> POP3 gives problems with some clients (Outlook 2010 and Thunderbird reported). >> Seems to be an authentication problem. >> Below is my doveconf -n (debug enabled, but no answers found to the problems) > > What do the logs say when a client logs in? The debug logs should tell > everything. Yes, but my problem is that this is a production server with a really fast-growing log, so (with my limited Dovecot skills) I have some difficulty selecting the interesting rows from it (I hoped this period was less busy, but my customers don't have the same idea ... ;-) ) Thanks for any hints on selecting the interesting rows. >> doveconf: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf > > You should do this and replace your old dovecot.conf with the new generated one. > >> userdb { >> driver = passwd >> } >> userdb { >> driver = passwd >> } > > Also remove the duplicated userdb passwd. This was an experimental config manually derived from the old 1.0.2 one (a mix of old working and new). If I replace it with a new config (below), authentication seems OK, but fetching mail from the client is very slow (compared with the old 1.0.2). Thanks for your very fast support ;-) P.
# doveconf -n # 2.0.16: /etc/dovecot/dovecot.conf # OS: Linux 2.6.9-42.0.10.ELsmp i686 CentOS release 4.9 (Final) auth_mechanisms = plain login disable_plaintext_auth = no info_log_path = /var/log/mail/dovecot.info.log log_path = /var/log/mail/dovecot.log mail_full_filesystem_access = yes mail_location = mbox:~/:INBOX=/var/mail/%u mbox_read_locks = dotlock fcntl passdb { driver = pam } protocols = imap pop3 ssl_cert = References: <55076E79-BD81-455B-BD68-9ABCFE53ED22@iki.fi> Message-ID: <4F04B6B2.1030903@enas.net> Am 04.01.2012 19:11, schrieb sottilette at rfx.it: > On Wed, 4 Jan 2012, Timo Sirainen wrote: > >>> Migrated a 1.0.2 server to 2.0.16 (same old box). >>> IMAP seems working Ok. >>> POP3 give problems with some clients (Outlook 2010 and Thunderbird >>> reported). >>> Seems authentication problem >>> Below my doveconf -n (debug enbled, but no answer found to the >>> problems) >> >> What do the logs say when a client logs in? The debug logs should >> tell everything. > > Yes, but my problem is that this is a production server with a really > fast increasing log, so (in my limited skill of dovecot), I have some > difficult to select interesting rows from it (I hoped this period was > less busy, but my customers don't have the same idea ... ;-) ) > Thanks for hints in selecting interesting rows. Try to run "tail -f $MAILLOG | grep $USERNAME" until the user logs in and tries to fetch his emails. $MAILLOG = logfile to which dovecot logs all info $USERNAME = Username of your client which has the problems > >>> doveconf: Warning: NOTE: You can get a new clean config file with: >>> doveconf -n > dovecot-new.conf >> >> You should do this and replace your old dovecot.conf with the new >> generated one. >> >>> userdb { >>> driver = passwd >>> } >>> userdb { >>> driver = passwd >>> } >> >> Also remove the duplicated userdb passwd. > > > This was an experimental configs manually derived from old 1.0.2 (mix > of old working and new). > > If I replace it with a new config (below), authentication seems Ok, > but fetch of mail from client is very slow (compared with old 1.0.2). > > Thanks for your very fast support ;-) > > P. > > > > # doveconf -n > # 2.0.16: /etc/dovecot/dovecot.conf > # OS: Linux 2.6.9-42.0.10.ELsmp i686 CentOS release 4.9 (Final) > auth_mechanisms = plain login > disable_plaintext_auth = no > info_log_path = /var/log/mail/dovecot.info.log > log_path = /var/log/mail/dovecot.log > mail_full_filesystem_access = yes > mail_location = mbox:~/:INBOX=/var/mail/%u > mbox_read_locks = dotlock fcntl > passdb { > driver = pam > } > protocols = imap pop3 > ssl_cert = ssl_key = userdb { > driver = passwd > } > protocol pop3 { > pop3_uidl_format = %08Xu%08Xv > } > From dmiller at amfes.com Thu Jan 5 02:24:40 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Wed, 04 Jan 2012 16:24:40 -0800 Subject: [Dovecot] Possible mdbox corruption Message-ID: I thought I had cleared out the corruption I had before - perhaps I was mistaken. What steps should I take to help locate these issues? Currently using 2.1rc1. 
From dmiller at amfes.com Thu Jan 5 02:24:40 2012
From: dmiller at amfes.com (Daniel L. Miller)
Date: Wed, 04 Jan 2012 16:24:40 -0800
Subject: [Dovecot] Possible mdbox corruption

I thought I had cleared out the corruption I had before - perhaps I was mistaken. What steps should I take to help locate these issues? Currently using 2.1rc1.

I see the following errors in my logs, including out-of-memory errors and message size issues (at 15:30):

Jan 4 05:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x3ed0a) [0x7f6e17cbfd0a] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3ed56) [0x7f6e17cbfd56] -> /usr/local/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7f6e17c98d08] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x4f310) [0x7f6e17cd0310] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3b965) [0x7f6e17cbc965] -> /usr/local/lib/dovecot/libdovecot.so.0(buffer_write+0x7c) [0x7f6e17cbd0ec] -> /usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3292) [0x7f6e164b7292] -> /usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3a97) [0x7f6e164b7a97] -> /usr/local/lib/dovecot/lib20_fts_plugin.so(fts_backend_update_set_build_key+0x2c) [0x7f6e166c4abc] -> /usr/local/lib/dovecot/lib20_fts_plugin.so(fts_build_mail+0x2d1) [0x7f6e166c5561] -> /usr/local/lib/dovecot/lib20_fts_plugin.so(+0xc630) [0x7f6e166ca630] -> dovecot/indexer-worker [user1 at domain.com Sent - 5500/7510]() [0x40245f] -> dovecot/indexer-worker [user1 at domain.com Sent - 5500/7510]() [0x4027dd] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7f6e17ccc0f6] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f6e17ccd17f] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f6e17ccc098] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f6e17cb9123] -> dovecot/indexer-worker [user1 at domain.com Sent - 5500/7510](main+0x109) [0x401f29] -> /lib/libc.so.6(__libc_start_main+0xfe) [0x7f6e1791cd8e] -> dovecot/indexer-worker [user1 at domain.com Sent - 5500/7510]() [0x401d19]
Jan 4 05:17:17 bubba dovecot: indexer: Error: Indexer worker disconnected, discarding 1 requests for user1 at domain.com
Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it))

[The same failure then repeats every hour, at 06:17 through 16:17. Each repeat logs "indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory", an identical raw backtrace through buffer_write -> lib21_fts_solr_plugin -> fts_backend_update_set_build_key -> fts_build_mail, an "Indexer worker disconnected, discarding 1 requests for user1 at domain.com" error, and a "master: Error: service(indexer-worker): child NNNNN returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it))" error, with child PIDs running from 10896 up to 22927. Only the PIDs, the map addresses, and the Sent-mailbox progress counter in the process title (5500/7510 through 5500/7517) differ between repeats. In the middle of that series:]

Jan 4 15:30:48 bubba dovecot: imap(user2 at domain.com): Error: Cached message size smaller than expected (822 < 1493)
Jan 4 15:30:48 bubba dovecot: imap(user2 at domain.com): Error: Corrupted index cache file /var/mail/amfes.com/lmiller/mdbox/mailboxes/Sent/dbox-Mails/dovecot.index.cache: Broken physical size for mail UID 1786
Jan 4 15:30:48 bubba dovecot: imap(user2 at domain.com): Error: read(/var/mail/amfes.com/lmiller/mdbox/storage/m.208) failed: Input/output error (FETCH for mailbox Sent UID 1786)

-- 
Daniel
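Two leads the log itself offers, as a sketch rather than a verified fix. The master error names vsz_limit, and the progress counter in the worker's process title sits at message 5500 of the Sent mailbox in every repeat, which suggests the worker keeps dying on the same (possibly oversized) message while feeding it to fts-solr. Raising the worker's limit and force-resyncing the damaged mailboxes would look like this - the 512M value is an assumption, and doveadm force-resync is the 2.x rebuild command:

  # in dovecot.conf: give indexer-worker more headroom
  service indexer-worker {
    vsz_limit = 512M
  }

  # then rebuild the broken index/cache for the affected users
  doveadm force-resync -u user1@domain.com Sent
  doveadm force-resync -u user2@domain.com Sent

If the same message still blows the limit afterwards, that points at the message itself rather than at index corruption.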
From user+dovecot at localhost.localdomain.org Thu Jan 5 03:19:37 2012
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Date: Thu, 05 Jan 2012 02:19:37 +0100
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F0367A2.1000807@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com>
Message-ID: <4F04FAA9.3020908@localhost.localdomain.org>

On 01/03/2012 09:40 PM Charles Marcus wrote:

> Hi everyone,
>
> Was just perusing this article about how trivial it is to decrypt
> passwords that are stored using most (standard) encryption methods (like
> MD5), and was wondering - is it possible to use bcrypt with
> dovecot+postfix+mysql (or postgres)?

Yes, it is possible to use bcrypt with Dovecot. Currently you only have to write your own password scheme plugin. The bcrypt algorithm is described at http://en.wikipedia.org/wiki/Bcrypt.

If you are using Dovecot >= 2.0, 'doveadm pw' supports these schemes:
  *BSD: Blowfish-Crypt
  *Linux (since glibc 2.7): SHA-256-Crypt and SHA-512-Crypt
Some distributions have also added support for Blowfish-Crypt.
See also: doveadm-pw(1)

If you are using Dovecot < 2.0 you can also use any of the algorithms supported by your system's libc. But then you have to prefix the hashes with {CRYPT} - not {BLF-CRYPT}, {SHA256-CRYPT} or {SHA512-CRYPT}.

Regards,
Pascal
-- 
The trapper recommends today: deadbeef.1200501 at localdomain.org
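A quick sketch of what Pascal describes, assuming a Linux box with glibc >= 2.7 (the hash shown is a placeholder - the salt is random, so real output differs on every run):

  # generate a salted SHA-512-crypt hash for a mail account
  $ doveadm pw -s SHA512-CRYPT
  Enter new password:
  Retype new password:
  {SHA512-CRYPT}$6$...

  # on *BSD, or a distribution whose libc supports Blowfish, bcrypt instead:
  $ doveadm pw -s BLF-CRYPT

The {SCHEME}-prefixed string is what would be stored in the passdb; Dovecot picks the verification algorithm from the prefix.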
From noel.butler at ausics.net Thu Jan 5 03:59:12 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 11:59:12 +1000
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F03B25B.2020309@orlitzky.com>
Message-ID: <1325728752.9555.8.camel@tardis>

On Tue, 2012-01-03 at 20:58 -0500, Michael Orlitzky wrote:

> To prevent rainbow table attacks, salt your passwords. You can make them
> a little bit more difficult in plenty of ways, but salt is the /solution/.

Agreed...
We use Crypt::PasswdMD5's unix_md5_crypt() for all general password storage, including mail/ftp etc., except for the web, where we need to use apache_md5_crypt().

From patrickdk at patrickdk.com Thu Jan 5 04:06:44 2012
From: patrickdk at patrickdk.com (Patrick Domack)
Date: Wed, 04 Jan 2012 21:06:44 -0500
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325728752.9555.8.camel@tardis>
Message-ID: <20120104210644.Horde.YEJENpLnE6FPBQW0C1KEd8A@kishi.patrickdk.com>

Quoting Noel Butler:

> Agreed...
> We use Crypt::PasswdMD5's unix_md5_crypt() for all general password
> storage, including mail/ftp etc., except for the web, where we need to
> use apache_md5_crypt().

But still, the results are all the same: if they get the hash, it can be broken, given time. More CPU-expensive methods (adding salt, a more complex hash) make it take longer, but the end result is that they will have it if they want it.

The only solution is to use two-factor authentication, or to rotate your passwords more quickly than they can be broken.

From user+dovecot at localhost.localdomain.org Thu Jan 5 04:26:59 2012
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Date: Thu, 05 Jan 2012 03:26:59 +0100
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325728752.9555.8.camel@tardis>
Message-ID: <4F050A73.7090300@localhost.localdomain.org>

On 01/05/2012 02:59 AM Noel Butler wrote:

> We use Crypt::PasswdMD5's unix_md5_crypt() for all general password
> storage, including mail/ftp etc., except for the web, where we need to
> use apache_md5_crypt().

Huh, why do you need to store passwords in Apache's md5 crypt() format?

,--[ Apache config ]--
| AuthType Basic
| AuthName "bla ?"
| AuthBasicProvider dbm
| AuthDBMUserFile /path/2/.htpasswd
| Require valid-user
| Order allow,deny
| Allow from 203.0.113.0/24 2001:db8::/32
| Satisfy any
`--

,--[ stdin/stdout ]--
| user at localhost ~ $ python
| Python 2.5.4 (r254:67916, Feb 17 2009, 20:16:45)
| [GCC 4.3.3] on linux2
| Type "help", "copyright", "credits" or "license" for more information.
| >>> import anydbm
| >>> dbm = anydbm.open('/path/2/.htpasswd')
| >>> dbm['user']
| '$6$Rn6L.3hT2x6dnX0t$d0/Tx.Ps3KSRxxm.ggFBYqum54/k8JmDzUcpoCXre88cBEXK8WB.Vdb1YzN.8fOvz3fJU4uLgW0/AlTiB9Ui2.::Real Name'
| >>>
`--

Regards,
Pascal
-- 
The trapper recommends today: deadbeef.1200503 at localdomain.org
From noel.butler at ausics.net Thu Jan 5 04:31:37 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 12:31:37 +1000
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <20120104210644.Horde.YEJENpLnE6FPBQW0C1KEd8A@kishi.patrickdk.com>
Message-ID: <1325730697.9555.15.camel@tardis>

On Wed, 2012-01-04 at 21:06 -0500, Patrick Domack wrote:

> But still, the results are all the same: if they get the hash, it can be
> broken, given time. More CPU-expensive methods (adding salt, a more
> complex hash) make it take longer, but the end result is that they will
> have it if they want it.
>
> The only solution is to use two-factor authentication, or to rotate your
> passwords more quickly than they can be broken.

Yup, anything can be broken, given time and resources, no matter what. But using crypted MD5 is better than using plain MD5 (as sadly far too many do) and having easy rainbow-table attacks succeed in mere seconds. No matter how good your database security is, always assume the worst: too many people think a DB compromise just can't happen to them, and as Murphy's law shows, they're usually the ones it does happen to.

From noel.butler at ausics.net Thu Jan 5 04:36:38 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 12:36:38 +1000
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F050A73.7090300@localhost.localdomain.org>
Message-ID: <1325730998.9555.21.camel@tardis>

On Thu, 2012-01-05 at 03:26 +0100, Pascal Volk wrote:

> Huh, why do you need to store passwords in Apache's md5 crypt() format?

Because with multiple servers, we store them all in (replicated) MySQL :) (the same as with postfix/dovecot), and as I'm sure you are aware, Apache does not understand standard crypted MD5, hence why there is the second option of apache_md5_crypt().
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325730998.9555.21.camel@tardis>
Message-ID: <4F051391.404@localhost.localdomain.org>

On 01/05/2012 03:36 AM Noel Butler wrote:

> Because with multiple servers, we store them all in (replicated) MySQL
> :) (the same as with postfix/dovecot), and as I'm sure you are aware,
> Apache does not understand standard crypted MD5, hence why there is the
> second option of apache_md5_crypt().

Oh, let me guess: you are using Windows, Netware, or TPF as the OS for your web servers? ;-)

  man htpasswd | grep -- '-d '
    -d  Use crypt() encryption for passwords. This is not supported by
        the httpd server on Windows and Netware and TPF.

As you may have seen in my previous mail, the password is generated using crypt(). HTTP authentication works with that password hash, even with the httpd from the ASF.

Regards,
Pascal
-- 
The trapper recommends today: cafefeed.1200504 at localdomain.org

From david at blue-labs.org Thu Jan 5 05:16:15 2012
From: david at blue-labs.org (David Ford)
Date: Wed, 04 Jan 2012 22:16:15 -0500
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325730998.9555.21.camel@tardis>
Message-ID: <4F0515FF.9050101@blue-labs.org>

> Because with multiple servers, we store them all in (replicated) MySQL
> :) (the same as with postfix/dovecot), and as I'm sure you are aware,
> Apache does not understand standard crypted MD5, hence why there is the
> second option of apache_md5_crypt().

With multiple servers, we use PAM & NSS with a replicated LDAP backend. This serves all auth requests for all services, and no service cares whether it is SHA, MD5, or a crypt method.

-d
From noel.butler at ausics.net Thu Jan 5 09:19:10 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 17:19:10 +1000
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F051391.404@localhost.localdomain.org>
Message-ID: <1325747950.5349.31.camel@tardis>

On Thu, 2012-01-05 at 04:05 +0100, Pascal Volk wrote:

> Oh, let me guess: you are using Windows, Netware, or TPF as the OS for
> your web servers? ;-)
>
> As you may have seen in my previous mail, the password is generated
> using crypt(). HTTP authentication works with that password hash, even
> with the httpd from the ASF.

I think you need to do some homework, and although I now have 3.25 days of holidays remaining, I don't intend to waste them educating anybody, hehe.

Assuming you even know what I'm talking about - which I suspect you don't, since you keep using console commands and things like htpasswd, which does not write to a MySQL DB - you don't seem to have comprehended that I do not work with flat files, nor locally, so that is irrelevant. I use Perl scripts for all systems management, so I hope you are not going to suggest that I should make a system call when I can do it natively in Perl.

But please, by all means, create a MySQL DB using a system-crypted MD5 password. I'll even help ya:

  openssl passwd -1 foobartilly
  $1$e3a.f3uW$SYRQiMlEhC5XlnSxtxiNC/

Pop the entry into the DB and go for your life trying to authenticate.

And when you've gone through half a bottle of bourbon trying to figure out why it's not working, try the Apache crypted MD5 version:

  $apr1$yKxk.DrQ$ybcmM8mC1qD5t5FvoY9820

If you stop and think about what I've said, you just might wake up to what I've been saying.

Cheers

PS: Me, use windaz? Wash your bloody mouth out, mister! ;) (Slackware all the way)

From noel.butler at ausics.net Thu Jan 5 09:28:10 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 17:28:10 +1000
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F0515FF.9050101@blue-labs.org>
Message-ID: <1325748490.5349.37.camel@tardis>

On Wed, 2012-01-04 at 22:16 -0500, David Ford wrote:

> With multiple servers, we use PAM & NSS with a replicated LDAP backend.

Publicly accessible mode :P Oh, don't start me on that - but luckily I'm not subjected to its dangers... and telling Pascal about bourbon made me realise it's time to head out for the last couple of nights of freedom and have a few.
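For the record, the two hash flavours Noel contrasts above can be produced side by side. A sketch - the first command reproduces his exact hash because MD5-crypt is deterministic once the salt is fixed, while htpasswd picks a random apr1 salt, so the second output differs on every run:

  # unix md5-crypt ($1$); with Noel's salt this reproduces his hash exactly
  $ openssl passwd -1 -salt e3a.f3uW foobartilly
  $1$e3a.f3uW$SYRQiMlEhC5XlnSxtxiNC/

  # Apache's apr1 variant of md5-crypt, as produced by apache_md5_crypt()
  $ htpasswd -nbm someuser foobartilly
  someuser:$apr1$...

Same password, same construction, but the $1$ and $apr1$ formats are mutually incompatible on disk - which is Noel's point.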
From openbsd at e-solutions.re Thu Jan 5 09:45:06 2012
From: openbsd at e-solutions.re (Wesley M.)
Date: Thu, 05 Jan 2012 11:45:06 +0400
Subject: [Dovecot] dovecot-lda error

Hi,

I use Dovecot 2.0.13 on OpenBSD 5.0. When I try to send emails I have the following error in /var/log/maillog:

Jan 5 11:23:49 mail50 postfix/pipe[29423]: D951842244C: to=, relay=dovecot, delay=0.02, delays=0.01/0/0/0.01, dsn=5.3.0, status=bounced (command line usage error. Command output: deliver: unknown option -- n  Usage: dovecot-lda [-c ] [-a ] [-d ] [-p ] [-f ] [-m ] [-e] [-k])
Jan 5 11:23:49 mail50 postfix/qmgr[13787]: D951842244C: removed

In my /etc/postfix/master.cf:

  # Dovecot LDA
  dovecot unix - n n - - pipe
    flags=ODR user=_dovecot:_dovecot argv=/usr/local/libexec/dovecot/deliver -f ${sender} -d ${user}@${nexthop} -n -m ${extension}

How can I resolve that? Thank you very much for your replies.

Cheers,
Wesley.

From e-frog at gmx.de Thu Jan 5 10:39:50 2012
From: e-frog at gmx.de (e-frog)
Date: Thu, 05 Jan 2012 09:39:50 +0100
Subject: Re: [Dovecot] dovecot-lda error
Message-ID: <4F0561D6.4020300@gmx.de>

On 05.01.2012 08:45, Wesley M. wrote:

> I use Dovecot 2.0.13 on OpenBSD 5.0. When I try to send emails I have
> the following error in /var/log/maillog:
>
> Jan 5 11:23:49 mail50 postfix/pipe[29423]: D951842244C: [...]
> status=bounced (command line usage error. Command output: deliver:
> unknown option -- n [...])

Look at the bottom of this page: http://wiki2.dovecot.org/Upgrading/2.0
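A sketch of the likely fix, based on that wiki page: deliver's -n flag was removed in 2.0 (mailbox autocreation is controlled by a setting instead), so the pipe entry would become something like the following. Treat the exact setting name below as an assumption to verify against the wiki:

  # /etc/postfix/master.cf - drop the removed -n option
  dovecot unix - n n - - pipe
    flags=ODR user=_dovecot:_dovecot argv=/usr/local/libexec/dovecot/dovecot-lda -f ${sender} -d ${user}@${nexthop} -m ${extension}

  # dovecot.conf - if the old "-n" (no autocreation) behaviour is wanted:
  lda_mailbox_autocreate = no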
From CMarcus at Media-Brokers.com Thu Jan 5 13:24:26 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 05 Jan 2012 06:24:26 -0500
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F03AD51.7080506@blue-labs.org>
Message-ID: <4F05886A.4080907@Media-Brokers.com>

On 2012-01-03 8:37 PM, David Ford wrote:

> part of my point along that of brute force resistance, is that when
> security becomes onerous to the typical user, such as requiring
> non-repeat passwords of "10 characters including punctuation and mixed
> case", even stalwart policy followers start tending toward avoiding it.

Our policy is that we also don't force password changes unless/until there is a reason (an account is hacked/abused). I've been managing this mail system for 11+ years now, and this has *never* happened (knock wood). I'm not saying we're immune, or that it can never happen; I'm simply saying it has never happened, so our policy is working as far as I'm concerned.

> if anyone has a stressful job, spends a lot of time working, missing
> sleep, is thereby prone to memory lapse, it's almost a sure guarantee
> they *will* write it down/store it somewhere -- usually not in a
> password safe.

Again - there is no *need* for them to write it down. Once their workstation/home computer/phone is set up, it remembers the password for them.

> or, they'll export their saved passwords to make a backup plain text
> copy, and leave it in their Desktop folder, coyly named and prefixed
> with a few random emails to grandma, so mr. sysadmin doesn't notice it.

And if I don't notice it, no one else will either, most likely. There is *no* perfect way, but ours works and has been working for 11+ years.

> on a tangent, you should worry about active brute force attacks.
> fail2ban and iptables heuristics become meaningless when the brute
> forcing is done by bot nets, which is more and more common than
> single-host attacks these days. one IP per attempt in a 10-20 minute
> window will probably never trigger any of these methods.

Nor will it ever be successful in brute-forcing a strong password, because a botnet has to try the same user with different passwords, and it is easy to monitor for an excessive number of failures for the same user login and notify the sysadmin (me) well in advance of the hack attempt being successful.

-- 
Best regards,
Charles

From CMarcus at Media-Brokers.com Thu Jan 5 13:26:17 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 05 Jan 2012 06:26:17 -0500
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F03B25B.2020309@orlitzky.com>
Message-ID: <4F0588D9.1030709@Media-Brokers.com>

On 2012-01-03 8:58 PM, Michael Orlitzky wrote:

> On 01/03/2012 08:25 PM, Charles Marcus wrote:
>> What I'm worried about is the worst-case scenario of someone getting
>> ahold of the entire user database of *stored* passwords, where they can
>> then take their time and brute-force them at their leisure, on *their*
>> *own* systems, without having to hammer my server over smtp/imap and
>> without the automated limit of *my* fail2ban getting in their way.
>
> To prevent rainbow table attacks, salt your passwords. You can make them
> a little bit more difficult in plenty of ways, but salt is the /solution/.

Go read that link (you obviously didn't yet), because he claims that salting passwords is next to *useless*...

>> As for people writing their passwords down - our policy is that it is a
>> potentially *firable* *offense* if they post these anywhere that is not
>> under lock and key (I've never even encountered one case of anyone
>> posting their password, and I'm on these systems off and on all the
>> time). Also, I always set up their email clients for them (on their
>> workstations and on their phones - and of course tell it to remember
>> the password), so they basically never have to enter it.
>
> You realize they're just walking around with a $400 post-it note with
> the password written on it, right?

Nope, you are wrong - as I have patiently explained before. They do not *need* to write their password down.

-- 
Best regards,
Charles
From CMarcus at Media-Brokers.com Thu Jan 5 13:31:32 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 05 Jan 2012 06:31:32 -0500
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F04FAA9.3020908@localhost.localdomain.org>
Message-ID: <4F058A14.2060303@Media-Brokers.com>

On 2012-01-04 8:19 PM, Pascal Volk wrote:

> Yes, it is possible to use bcrypt with Dovecot. Currently you only have
> to write your own password scheme plugin. The bcrypt algorithm is
> described at http://en.wikipedia.org/wiki/Bcrypt.
>
> If you are using Dovecot >= 2.0, 'doveadm pw' supports these schemes:
>   *BSD: Blowfish-Crypt
>   *Linux (since glibc 2.7): SHA-256-Crypt and SHA-512-Crypt
> Some distributions have also added support for Blowfish-Crypt.
> See also: doveadm-pw(1)
>
> If you are using Dovecot < 2.0 you can also use any of the algorithms
> supported by your system's libc. But then you have to prefix the hashes
> with {CRYPT}.

Hmmm... thanks very much Pascal. I think that gets me half-way to an answer (but since IANAP, this is mostly Greek to me, so it is not quite a solution I can implement yet)...

You said above that 'yes, I can use it with dovecot' - but what about postfix and mysql... where/how do they fit into this mix?

My thought was that there are two issues here:

1. Storing them in bcrypted form, and
2. The clients must support *decrypting* them...

So, since I use postfixadmin, I'm guessing that for #1 it will have to support encrypting them in bcrypt form, and then I have to worry about dovecot - and since I'm planning on using postfix+dovecot-sasl, once dovecot supports it, postfix will too...

Is that about right?

Thanks again,
-- 
Best regards,
Charles

From patrickdk at patrickdk.com Thu Jan 5 16:53:38 2012
From: patrickdk at patrickdk.com (Patrick Domack)
Date: Thu, 05 Jan 2012 09:53:38 -0500
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325747950.5349.31.camel@tardis>
Message-ID: <20120105095338.Horde.6Wa7KJLnE6FPBbly7kZFh-A@kishi.patrickdk.com>

Quoting Noel Butler:

> But please, by all means, create a MySQL DB using a system-crypted MD5
> password. I'll even help ya:
>
>   openssl passwd -1 foobartilly
>   $1$e3a.f3uW$SYRQiMlEhC5XlnSxtxiNC/
>
> Pop the entry into the DB and go for your life trying to authenticate.
> And when you've gone through half a bottle of bourbon trying to figure
> out why it's not working, try the Apache crypted MD5 version:
>
>   $apr1$yKxk.DrQ$ybcmM8mC1qD5t5FvoY9820

MySQL supports crypt() right in it, so you can just submit the password to the MySQL crypt function. We know Perl has to support it also.

The first thing I did when I was hired was to convert the password database from MD5 to $6$. After that, I secured the machines that could get access to the list and severely limited which of them could. About a month or two after this, we had about a thousand accounts compromised. So someone obviously got the list from how the old system was set up, as every compromised password contains only lowercase letters and is less than 8 characters long.

I won't say salted anything is bad, but keep the salt lengths up - start with at least 8 bytes. crypt()'s new option to support rounds also makes it a lot of fun; too bad I haven't seen consistent support for it yet, so I haven't been able to make use of that option.
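Patrick's point about crypt() living inside MySQL can be sketched directly. This assumes a Unix MySQL server whose libc crypt() understands the $1$ scheme, and it reuses the hash Noel posted - passing the stored hash as the salt argument makes crypt() reuse its embedded salt:

  $ mysql -e 'SELECT ENCRYPT("foobartilly", "$1$e3a.f3uW$")'
  # -> $1$e3a.f3uW$SYRQiMlEhC5XlnSxtxiNC/

In practice you would usually just store the crypt hash and let Dovecot's {CRYPT} or {MD5-CRYPT} scheme do the verification, but the SQL function is handy for generating or checking hashes in place.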
From michael at orlitzky.com Thu Jan 5 17:28:26 2012
From: michael at orlitzky.com (Michael Orlitzky)
Date: Thu, 05 Jan 2012 10:28:26 -0500
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F0588D9.1030709@Media-Brokers.com>
Message-ID: <4F05C19A.4030603@orlitzky.com>

On 01/05/12 06:26, Charles Marcus wrote:

>> To prevent rainbow table attacks, salt your passwords. You can make them
>> a little bit more difficult in plenty of ways, but salt is the /solution/.
>
> Go read that link (you obviously didn't yet), because he claims that
> salting passwords is next to *useless*...

He doesn't claim that, but he's a crackpot anyway. Use a slow algorithm (others already mentioned bcrypt) to prevent brute-force search, and use salt to prevent pre-computed lookups. Anyone who tells you otherwise can probably be ignored. Extraordinary claims require extraordinary evidence.

>> You realize they're just walking around with a $400 post-it note with
>> the password written on it, right?
>
> Nope, you are wrong - as I have patiently explained before. They do not
> *need* to write their password down.

They have them written down on their phones. If someone gets a hold of the phone, he can just read the password off of it.
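Michael's split - a slow hash against brute force, salt against precomputation - can be seen in miniature with the commands already used in this thread. A sketch; the salts and hashes below are placeholders, since openssl picks a fresh random salt each run:

  $ openssl passwd -1 secret
  $1$<random salt A>$<hash A>
  $ openssl passwd -1 secret
  $1$<random salt B>$<hash B>   # same password, different salt, different hash

A table precomputed for unsalted MD5 (or for any one fixed salt) matches neither output, so the attacker has to redo the work once per salt. That is all salting buys - and it is exactly what it is meant to buy.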
From michael at orlitzky.com Thu Jan 5 17:32:32 2012
From: michael at orlitzky.com (Michael Orlitzky)
Date: Thu, 05 Jan 2012 10:32:32 -0500
Subject: Re: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <20120104210644.Horde.YEJENpLnE6FPBQW0C1KEd8A@kishi.patrickdk.com>
Message-ID: <4F05C290.5020308@orlitzky.com>

On 01/04/12 21:06, Patrick Domack wrote:

> But still, the results are all the same: if they get the hash, it can be
> broken, given time. More CPU-expensive methods (adding salt, a more
> complex hash) make it take longer, but the end result is that they will
> have it if they want it.

Unless someone breaks either math or the hash algorithm, this is false. My password will be of little use to you in 10^20 years.
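The rough arithmetic behind a claim like that, as a sketch: assume a random 16-character password over the 94 printable ASCII symbols and an attacker testing 10^12 guesses per second:

  $ echo '94^16 / (10^12 * 60*60*24*365)' | bc
  # on the order of 10^12 years to exhaust the keyspace

The exact exponent depends entirely on the assumed length, alphabet, and guess rate; the point is only that a long random password pushes exhaustive search past any useful horizon, while a short all-lowercase one (like the compromised passwords Patrick described) does not.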
Simply saying he's a crackpot means nothing. Also... > Use a slow algorithm (others already mentioned bcrypt)to prevent > brute-force search, Actually, that (bcrypt) is precisely what *the author of the article* (the one who you are saying is a crackpot) is suggesting to use - I guess you didn't even bother to read it or else you'd know that, so why bother commenting? > and use salt to prevent pre-computed lookups. Anyone who tells you > otherwise can probably be ignored. Extraordinary claims require > extraordinary evidence. I don't see it as an extraordinary claim, and anyone who goes around claiming someone else is a crackpot without evidence to support the claim is just yammering. >>> You realize they're just walking around with a $400 post-it note with >>> the password written on it, right? >> Nope, you are wrong - as I have patiently explained before. They do not >> *need* to write their password down. > They have them written down on their phones. If someone gets a hold of > the phone, he can just read the password off of it. No, they don't, your claim is baseless and without merit. Most people have never even known what their password *is*, much less written it down, because as I said (more than once), *I* set up their email clients (workstations, home computers and phones) *for them*. -- Best regards, Charles From wgillespie at es2eng.com Thu Jan 5 18:21:45 2012 From: wgillespie at es2eng.com (Willie Gillespie) Date: Thu, 05 Jan 2012 09:21:45 -0700 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F05CC5C.7020807@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> Message-ID: <4F05CE19.8030204@es2eng.com> On 1/5/2012 9:14 AM, Charles Marcus wrote: > On 2012-01-05 10:28 AM, Michael Orlitzky wrote: >> On 01/05/12 06:26, Charles Marcus wrote: >>>> You realize they're just walking around with a $400 post-it note with >>>> the password written on it, right? > >>> Nope, you are wrong - as I have patiently explained before. They do not >>> *need* to write their password down. > >> They have them written down on their phones. If someone gets a hold of >> the phone, he can just read the password off of it. > > No, they don't, your claim is baseless and without merit. > > Most people have never even known what their password *is*, much less > written it down, because as I said (more than once), *I* set up their > email clients (workstations, home computers and phones) *for them*. If the phone knows the password and I have the phone, then I have the password. Similarly, if I compromise the workstation that knows the password, then I also have the password. Even if the user doesn't know the password, the phone/workstation does. And it has to be stored in a retrievable way. That's what he's trying to say when he was talking about a "$400 post-it note." From michael at orlitzky.com Thu Jan 5 18:31:17 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Thu, 05 Jan 2012 11:31:17 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? 
In-Reply-To: <4F05CC5C.7020807@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> Message-ID: <4F05D055.7020305@orlitzky.com> On 01/05/12 11:14, Charles Marcus wrote: > > Ummm... yes, he does... from tfa: > > "Salts Will Not Help You > > It's important to note that salts are useless for preventing dictionary > attacks or brute force attacks. You can use huge salts or many salts or > hand-harvested, shade-grown, organic Himalayan pink salt. It doesn't > affect how fast an attacker can try a candidate password, given the hash > and the salt from your database. > > Salt or no, if you're using a general-purpose hash function designed for > speed you're well and truly effed." Ugh, sorry. I went to the link that someone else quoted: https://www.grc.com/haystack.htm The article you posted is correct. Salt will not prevent brute-force search, but it isn't meant to. Salt is meant to prevent the attacker from using precomputed tables of hashed passwords, called rainbow tables. To prevent brute-force search, you use a better algorithm, like the author says. >> but he's a crackpot anyway. Gibson *is* a renowned crackpot. > Why? I asked because I'm genuinely unsure (don't know enough about the > innards of the different encryption methods), and that's why I asked. > Simply saying he's a crackpot means nothing. > > Also... > >> Use a slow algorithm (others already mentioned bcrypt) to prevent >> brute-force search, > > Actually, that (bcrypt) is precisely what *the author of the article* > (the one who you are saying is a crackpot) is suggesting to use - I > guess you didn't even bother to read it or else you'd know that, so why > bother commenting? Again, sorry, I don't always know how to work my email client. > > I don't see it as an extraordinary claim, and anyone who goes around > claiming someone else is a crackpot without evidence to support the > claim is just yammering. > Your article is fine, but you should always be skeptical because for every article like the one you posted, there are 100 like Gibson's. > > No, they don't, your claim is baseless and without merit. > > Most people have never even known what their password *is*, much less > written it down, because as I said (more than once), *I* set up their > email clients (workstations, home computers and phones) *for them*. > The password is on the phone, in plain text. If I have the phone, I can read it as easily as if it was written in sharpie. From yubao.liu at gmail.com Thu Jan 5 20:23:56 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Fri, 06 Jan 2012 02:23:56 +0800 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs Message-ID: <4F05EABC.7070309@gmail.com> Hi all, I have no idea about that message, here is my configuration, what's wrong?
Debian testing, Dovecot 2.0.15 $ doveconf -n # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid auth_default_realm = corp.example.com auth_krb5_keytab = /etc/dovecot.keytab auth_master_user_separator = * auth_mechanisms = gssapi digest-md5 auth_realms = corp.example.com auth_username_format = %n first_valid_gid = 1000 first_valid_uid = 1000 mail_location = mdbox:/srv/mail/%u/Mail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave passdb { args = /etc/dovecot/master-users driver = passwd-file master = yes pass = yes } passdb { driver = pam } plugin { sieve = /srv/mail/%u/.dovecot.sieve sieve_dir = /srv/mail/%u/sieve } protocols = " imap lmtp sieve" service auth { unix_listener auth-client { group = Debian-exim mode = 0660 } } ssl_cert = References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> <4F05CE19.8030204@es2eng.com> Message-ID: <4F05ED9B.10901@Media-Brokers.com> On 2012-01-05 11:21 AM, Willie Gillespie wrote: > If the phone knows the password and I have the phone, then I have the > password. Similarly, if I compromise the workstation that knows the > password, then I also have the password. Interesting... I thought they were stored encrypted. I definitely use a (strong) Master Password in Thunderbird to protect the passwords, so it would take some doing on the workstations. > Even if the user doesn't know the password, the phone/workstation does. > And it has to be stored in a retrievable way. Yes, if an attacker has unfettered physical access to the workstation/phone, it can be compromised... > That's what he's trying to say when he was talking about a "$400 post-it > note." Got it... As I said, there is no perfect system... but ours has worked well in the 11+ years we've been doing it this way. -- Best regards, Charles From CMarcus at Media-Brokers.com Thu Jan 5 20:37:58 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Thu, 05 Jan 2012 13:37:58 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F05D055.7020305@orlitzky.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> <4F05D055.7020305@orlitzky.com> Message-ID: <4F05EE06.8070302@Media-Brokers.com> On 2012-01-05 11:31 AM, Michael Orlitzky wrote: > Ugh, sorry. I went to the link that someone else quoted: > > https://www.grc.com/haystack.htm > Gibson*is* a renowned crackpot. Don't know about that, but I do know from long experience Spinrite rocks! Maybe -- Best regards, Charles From david at blue-labs.org Thu Jan 5 20:47:58 2012 From: david at blue-labs.org (David Ford) Date: Thu, 05 Jan 2012 13:47:58 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? 
In-Reply-To: <4F05EE06.8070302@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> <4F05D055.7020305@orlitzky.com> <4F05EE06.8070302@Media-Brokers.com> Message-ID: <4F05F05E.60104@blue-labs.org> On 01/05/2012 01:37 PM, Charles Marcus wrote: > On 2012-01-05 11:31 AM, Michael Orlitzky wrote: >> Ugh, sorry. I went to the link that someone else quoted: >> >> https://www.grc.com/haystack.htm > >> Gibson*is* a renowned crackpot. > > Don't know about that, but I do know from long experience Spinrite rocks! > > Maybe he often piggybacks on common sense but makes it into an elaborate grandiose presentation. a lot of his topics tend to wander out to left field come half-time. -d From wgillespie+dovecot at es2eng.com Thu Jan 5 21:22:47 2012 From: wgillespie+dovecot at es2eng.com (Willie Gillespie) Date: Thu, 05 Jan 2012 12:22:47 -0700 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F05ED9B.10901@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> <4F05CE19.8030204@es2eng.com> <4F05ED9B.10901@Media-Brokers.com> Message-ID: <4F05F887.70204@es2eng.com> On 01/05/2012 11:36 AM, Charles Marcus wrote: > On 2012-01-05 11:21 AM, Willie Gillespie wrote: >> If the phone knows the password and I have the phone, then I have the >> password. Similarly, if I compromise the workstation that knows the >> password, then I also have the password. > > Interesting... I thought they were stored encrypted. I definitely use a > (strong) Master Password in Thunderbird to protect the passwords, so it > would take some doing on the workstations. True. If you are using a master password, they are encrypted. From user+dovecot at localhost.localdomain.org Fri Jan 6 00:28:27 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Thu, 05 Jan 2012 23:28:27 +0100 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F058A14.2060303@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F04FAA9.3020908@localhost.localdomain.org> <4F058A14.2060303@Media-Brokers.com> Message-ID: <4F06240B.2040101@localhost.localdomain.org> On 01/05/2012 12:31 PM Charles Marcus wrote: > ? > You said above that 'yes, I can use it with dovecot' - but what about > postfix and mysql... where/how do they fit into this mix? My thought was > that there are two issues here: > > 1. Storing them in bcrypted form, and For MySQL the bcrypted password is just a varchar. > 2. The clients must support *decrypting* them... Sorry, i don't know if clients need to know anything about the used password scheme. The used password scheme is mostly relevant for Dovecot. Don't mix password scheme and authentication scheme. 
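A minimal sketch of that scheme-versus-mechanism distinction, using doveadm (the password here is arbitrary, and a bcrypt-style BLF-CRYPT scheme is only available where the system's crypt() supports it):

  $ doveadm pw -s MD5-CRYPT -p secret
  {MD5-CRYPT}$1$...
  $ doveadm pw -s SHA512-CRYPT -p secret
  {SHA512-CRYPT}$6$...

The scheme is only the storage format of the hash. With any of the crypt-style schemes above the client still authenticates through a plaintext mechanism (auth_mechanisms = plain login, ideally over TLS) and Dovecot hashes and compares; only digest-style schemes such as CRAM-MD5 tie the stored format to a particular mechanism.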
> So, since I use postfixadmin, I'm guessing that for #1, it will have to > support encrypting them in bcrypt form, and then I have to worry about > dovecot - and since I'm planning on using postfix+dovecot-sasl, once > dovecot supports it, postfix will too... > > Is that about right? I think that's correct. Postfix uses Dovecot for the authentication stuff. If I'm wrong, please let me know it. Regards, Pascal -- The trapper recommends today: c01dcafe.1200523 at localdomain.org From Ralf.Hildebrandt at charite.de Fri Jan 6 12:09:53 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Fri, 6 Jan 2012 11:09:53 +0100 Subject: [Dovecot] Deduplication active - but how good does it perform? Message-ID: <20120106100953.GV24134@charite.de> I have deduplication active in my first mdbox: type mailbox, but how do I find out how well the deduplication works? Is there a way of finding out how much disk space I saved (if I saved some :) )? -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From nick+dovecot at bunbun.be Fri Jan 6 12:52:34 2012 From: nick+dovecot at bunbun.be (Nick Rosier) Date: Fri, 06 Jan 2012 11:52:34 +0100 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <4F05EABC.7070309@gmail.com> References: <4F05EABC.7070309@gmail.com> Message-ID: <4F06D272.5010200@bunbun.be> Yubao Liu wrote: > Hi all, > > I have no idea about that message, here is my configuration, what's wrong? You have 2 passdb entries; 1 with a file and 1 with pam. I'm pretty sure PAM doesn't support DIGEST-MD5 authentication. Could be the cause of the problem. > Debian testing, Dovecot 2.0.15 > > $ doveconf -n > # 2.0.15: /etc/dovecot/dovecot.conf > # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid > auth_default_realm = corp.example.com > auth_krb5_keytab = /etc/dovecot.keytab > auth_master_user_separator = * > auth_mechanisms = gssapi digest-md5 > auth_realms = corp.example.com > auth_username_format = %n > first_valid_gid = 1000 > first_valid_uid = 1000 > mail_location = mdbox:/srv/mail/%u/Mail > managesieve_notify_capability = mailto > managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave > passdb { > args = /etc/dovecot/master-users > driver = passwd-file > master = yes > pass = yes > } > passdb { > driver = pam > } > plugin { > sieve = /srv/mail/%u/.dovecot.sieve > sieve_dir = /srv/mail/%u/sieve > } > protocols = " imap lmtp sieve" > service auth { > unix_listener auth-client { > group = Debian-exim > mode = 0660 > } > } > ssl_cert = ssl_key = userdb { > args = home=/srv/mail/%u > driver = passwd > } > protocol lmtp { > mail_plugins = " sieve" > } > protocol lda { > mail_plugins = " sieve" > } > > # cat /etc/dovecot/master-users > xxx at corp.example.com:zzzzzzzz > > The zzzzz is obtained by "doveadm pw -s digest-md5 -u > xxx at corp.example.com", > I tried to add prefix "{DIGEST-MD5}" before the generated hash and/or add > "scheme=DIGEST-MD5" to the passwd-file passdb's "args" option, both > don't help.
> > The error message: > dovecot: master: Dovecot v2.0.15 starting up (core dumps disabled) > dovecot: auth: Fatal: DIGEST-MD5 mechanism can't be supported with given > passdbs > gold dovecot: master: Error: service(auth): command startup failed, > throttling > > I opened debug auth log, it showed dovecot read /etc/dovecot/master-users > and parsed one line, then the error occurred. Doesn't passwd-file > passdb support > digest-md5 password scheme? If it doesn't support, how do I configure > digest-md5 auth > mechanism with digest-md5 password scheme for virtual users? > > Regards, > Yubao Liu > Rgds, N. From tss at iki.fi Fri Jan 6 12:54:19 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 12:54:19 +0200 Subject: [Dovecot] Deduplication active - but how good does it perform? In-Reply-To: <20120106100953.GV24134@charite.de> References: <20120106100953.GV24134@charite.de> Message-ID: On 6.1.2012, at 12.09, Ralf Hildebrandt wrote: > I have deduplication active in my first mdbox: type mailbox, but how > do I find out how well the deduplication works? Is there a way of > finding out how much disk space I saved (if I saved some :) )? You could look at the files in the attachments directory, and see how many links they have. Each file has 2 initially. Each additional link has saved you bytes of space. From tss at iki.fi Fri Jan 6 12:55:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 12:55:49 +0200 Subject: [Dovecot] Possible mdbox corruption In-Reply-To: References: Message-ID: On 5.1.2012, at 2.24, Daniel L. Miller wrote: > I thought I had cleared out the corruption I had before - perhaps I was mistaken. What steps should I take to help locate these issues? Currently using 2.1rc1. I see the following errors in my logs, including out of memory and message size issues (at 15:30): .. > Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it)) > Jan 4 06:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory The problem is clearly that index-worker's vsz_limit is too low. Increase it (or default_vsz_limit). From tss at iki.fi Fri Jan 6 12:57:43 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 12:57:43 +0200 Subject: [Dovecot] Possible mdbox corruption In-Reply-To: References: Message-ID: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> On 6.1.2012, at 12.55, Timo Sirainen wrote: >> Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it)) >> Jan 4 06:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory > > The problem is clearly that index-worker's vsz_limit is too low. Increase it (or default_vsz_limit). Although the source of the out-of-memory /usr/local/lib/dovecot/libdovecot.so.0(buffer_write+0x7c) [0x7f0ec1a550ec] -> /usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3292) [0x7f0ec024f292] -> is something that shouldn't really be happening. I guess the Solr plugin wastes memory unnecessarily, I'll see what I can do about it. But for now just increase vsz limit. From nick+dovecot at bunbun.be Fri Jan 6 13:04:51 2012 From: nick+dovecot at bunbun.be (Nick Rosier) Date: Fri, 06 Jan 2012 12:04:51 +0100 Subject: [Dovecot] Deduplication active - but how good does it perform? 
In-Reply-To: <20120106100953.GV24134@charite.de> References: <20120106100953.GV24134@charite.de> Message-ID: <4F06D553.2010605@bunbun.be> Ralf Hildebrandt wrote: > I have deduplication active in my first mdbox: type mailbox, but how > do I find out how well the deduplication works? Is there a way of > finding out how much disk space I saved (if I saved some :) )? You could check how much disk space all the mail uses (or the mail of a user) and compare it to the quota Dovecot reports. But I think you would need quotas activated for this. E.g. on my small server the disk usage is 2GB, whereas doveadm quota reports all users use 3.1GB. From adrian.minta at gmail.com Fri Jan 6 13:07:05 2012 From: adrian.minta at gmail.com (Adrian Minta) Date: Fri, 06 Jan 2012 13:07:05 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? Message-ID: <4F06D5D9.20001@gmail.com> Hello, is it possible to disable indexing on dovecot-lda ? Right now postfix delivers the mail directly to the nfs server without any problems. If I switch to dovecot-lda the system crashes due to the high I/O and locking. Indexing on lda is not very useful because the number of imap logins is less than 5% that of incoming mails, so a user could wait 3 seconds to get his mail index, but a new mail can't. Dovecot version 1.2.15 mail_nfs_storage = yes mail_nfs_index = yes Thank you! From tss at iki.fi Fri Jan 6 13:27:41 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 13:27:41 +0200 Subject: [Dovecot] Possible mdbox corruption In-Reply-To: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> References: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> Message-ID: <1325849261.17774.0.camel@hurina> On Fri, 2012-01-06 at 12:57 +0200, Timo Sirainen wrote: > On 6.1.2012, at 12.55, Timo Sirainen wrote: > > >> Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it)) > >> Jan 4 06:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory > > > > The problem is clearly that indexer-worker's vsz_limit is too low. Increase it (or default_vsz_limit). > > Although the source of the out-of-memory > > /usr/local/lib/dovecot/libdovecot.so.0(buffer_write+0x7c) [0x7f0ec1a550ec] -> /usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3292) [0x7f0ec024f292] -> > > is something that shouldn't really be happening. I guess the Solr plugin wastes memory unnecessarily, I'll see what I can do about it. But for now just increase vsz limit. I don't see any obvious reason why it would be using a lot of memory, unless you have a message that has huge (MIME) headers. See if http://hg.dovecot.org/dovecot-2.1/rev/380b0667e0a5 helps / logs a warning about it. From tss at iki.fi Fri Jan 6 13:39:44 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 13:39:44 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <4F06D5D9.20001@gmail.com> References: <4F06D5D9.20001@gmail.com> Message-ID: <1325849985.17774.10.camel@hurina> On Fri, 2012-01-06 at 13:07 +0200, Adrian Minta wrote: > Hello, > is it possible to disable indexing on dovecot-lda ? protocol lda { mail_location = whatever-you-have-now:INDEX=MEMORY } > Right now postfix delivers the mail directly to the nfs server without > any problems. If I switch to dovecot-lda the system crashes due to the > high I/O and locking. Disabling indexing won't disable writing to dovecot-uidlist file.
So I don't know if disabling indexes actually helps. From alexis.lelion at gmail.com Fri Jan 6 13:36:15 2012 From: alexis.lelion at gmail.com (Alexis Lelion) Date: Fri, 6 Jan 2012 12:36:15 +0100 Subject: [Dovecot] ACL with IMAP proxying Message-ID: Hello, I'm trying to use ACLs to restrict subscription on public mailboxes, but I ran into trouble. My setup is made of two servers, and users are shared between them via a proxy. User authentication is done with LDAP, and credentials aren't shared between the mailservers. Instead, the proxies are using a master password. The thing is that when the ACLs are checked, it actually doesn't give the user login, but the master login, which is useless. Is there a way to use the first part of destuser as it is done when fetching info from the userdb? Any help is appreciated, Thanks! Alexis -------------------------------------------------- ACL bug logs : 104184 Jan 6 12:09:35 mail02 dovecot: imap(user at domain): Debug: acl: acl username = proxy 104185 Jan 6 12:09:35 mail02 dovecot: imap(user at domain): Debug: acl: owner = 0 104186 Jan 6 12:09:35 mail02 dovecot: imap(user at domain): Debug: acl vfile: Global ACL directory: (none) 104187 Jan 6 12:09:35 mail02 dovecot: imap(user at domain): Debug: Namespace : type=public, prefix=Shared., sep=., inbox=no, hidden=no, list=yes, subscriptions=no location=maildir:/var/vmail/domain/Shared -------------------------------------------------- Output of "dovecot -n" # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 2.6.32-5-amd64 x86_64 Debian 6.0.3 ext3 auth_debug = yes auth_master_user_separator = * auth_socket_path = /var/run/dovecot/auth-userdb auth_verbose = yes first_valid_uid = 150 lmtp_proxy = yes login_trusted_networks = mail01.ip mail_debug = yes mail_location = maildir:/var/vmail/%d/%n mail_nfs_storage = yes mail_plugins = acl mail_privileged_group = mail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave namespace { inbox = yes location = maildir:/var/vmail/%d/%n prefix = separator = . type = private } namespace { location = maildir:/var/vmail/domain/Shared prefix = Shared. separator = . subscriptions = no type = public } passdb { args = /etc/dovecot/master-users driver = passwd-file master = yes } passdb { args = /etc/dovecot/dovecot-ldap.conf driver = ldap } plugin { acl = vfile:/etc/dovecot/global-acls:cache_secs=300 recipient_delimiter = + sieve_after = /var/lib/dovecot/sieve/after.d/ sieve_before = /var/lib/dovecot/sieve/pre.d/ sieve_dir = /var/vmail/%d/%n/sieve sieve_global_path = /var/lib/dovecot/sieve/default.sieve } postmaster_address = user at domain protocols = " imap lmtp sieve" service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0666 user = postfix } unix_listener auth-userdb { group = mail mode = 0600 user = vmail } } service lmtp { inet_listener lmtp { address = mail02.ip port = 24 } unix_listener /var/spool/postfix/private/dovecot-lmtp { group = postfix mode = 0660 user = postfix } } ssl = required ssl_cert = References: Message-ID: <1325850528.17774.13.camel@hurina> On Fri, 2012-01-06 at 12:36 +0100, Alexis Lelion wrote: > The thing is that when the ACLs are checked, it actually doesn't give > the user login, but the master login, which is useless. Yes, this is intentional.
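For readers following along: the global ACLs that Alexis's config points at (acl = vfile:/etc/dovecot/global-acls:cache_secs=300) are plain-text files in that directory, one file per mailbox name, each line an identifier plus rights. A made-up example; the mailbox, user name, and rights letters are placeholders:

  # /etc/dovecot/global-acls/Shared.Announce
  anyone lr
  user=editor at example.com lrwstipekx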
> Is there a way to use the first part of destuser as it is done when > fetching info from the userdb? You should be able to work around this with modifying userdb's query: user_query = select '%n' AS master_user, ... From stan at hardwarefreak.com Fri Jan 6 13:50:13 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Fri, 06 Jan 2012 05:50:13 -0600 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <4F06D5D9.20001@gmail.com> References: <4F06D5D9.20001@gmail.com> Message-ID: <4F06DFF5.40707@hardwarefreak.com> On 1/6/2012 5:07 AM, Adrian Minta wrote: > Hello, > is it possible to disable indexing on dovecot-lda ? > > Right now postfix delivers the mail directly to the nfs server without > any problems. If I switch to dovecot-lda the system crashes do to the > high I/O and locking. > Indexing on lda is not very useful because the number of of imap logins > is less than 5% that of incoming mails, so an user could wait for 3 sec > to get his mail index, but a new mail can't. Then why bother with Dovecot LDA w/disabled indexing (the main reason for using it in the first place) instead of simply sticking with Postfix Local(8)? -- Stan From CMarcus at Media-Brokers.com Fri Jan 6 13:58:16 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Fri, 06 Jan 2012 06:58:16 -0500 Subject: [Dovecot] Deduplication active - but how good does it perform? In-Reply-To: References: <20120106100953.GV24134@charite.de> Message-ID: <4F06E1D8.7090507@Media-Brokers.com> On 2012-01-06 5:54 AM, Timo Sirainen wrote: > On 6.1.2012, at 12.09, Ralf Hildebrandt wrote: >> I have deduplication active in my first mdbox: type mailbox, but how >> do I find out how well the deduplication works? Is there a way of >> finding out how much disk space I saved (if I saved some :) )? > You could look at the files in the attachments directory, and see how > many links they have. Each file has 2 initially. Each additional link > has saved you bytes of space. Maybe there could be a doveadm command for this? That would be really useful for some kind of stats applications... especially for promoting its use in environments where large attachments are common... -- Best regards, Charles From CMarcus at Media-Brokers.com Fri Jan 6 14:09:05 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Fri, 06 Jan 2012 07:09:05 -0500 Subject: [Dovecot] Deduplication active - but how good does it perform? In-Reply-To: <4F06E1D8.7090507@Media-Brokers.com> References: <20120106100953.GV24134@charite.de> <4F06E1D8.7090507@Media-Brokers.com> Message-ID: <4F06E461.3010906@Media-Brokers.com> On 2012-01-06 6:58 AM, Charles Marcus wrote: > On 2012-01-06 5:54 AM, Timo Sirainen wrote: >> On 6.1.2012, at 12.09, Ralf Hildebrandt wrote: >>> I have deduplication active in my first mdbox: type mailbox, but how >>> do I find out how well the deduplication works? Is there a way of >>> finding out how much disk space I saved (if I saved some :) )? > >> You could look at the files in the attachments directory, and see how >> many links they have. Each file has 2 initially. Each additional link >> has saved you bytes of space. > > Maybe there could be a doveadm command for this? Incidentally, I use rsnapshot (which is simply a wrapper script for rsync) for my disk based backups. 
It uses hard links so that you can have hourly/daily/weekly/monthly (or whatever naming scheme you want) snapshots of your backups, but each snapshot simply contains hardlinks to the previous snapshots, so you can literally have hundreds of snapshots that only consume a little more space than one single whole snapshot. Anyway, rsnapshot has to leverage the du command to determine the amount of disk space each snapshot uses (when considered as a separate/standalone snapshot), or how much *actual* space each snapshot consumes (i.e., only the files that are *not* hardlinked against a previous backup)... Maybe this could be a starting point for how to do this... http://rsnapshot.org/rsnapshot.html#usage and scroll down to the rsnapshot du command... -- Best regards, Charles From alexis.lelion at gmail.com Fri Jan 6 14:22:02 2012 From: alexis.lelion at gmail.com (Alexis Lelion) Date: Fri, 6 Jan 2012 13:22:02 +0100 Subject: [Dovecot] ACL with IMAP proxying In-Reply-To: <1325850528.17774.13.camel@hurina> References: <1325850528.17774.13.camel@hurina> Message-ID: Hi Timo, Thanks for your prompt answer, I wasn't expecting an answer that soon ;-) I just tried your workaround, and actually, master_user is properly set to the username, but then is overridden with the proxy login again : Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: mail=maildir:/var/vmail/domain/user Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/quota=dirsize:storage=0 Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/master_user=user Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/master_user=proxy Is there any other flag I can set to avoid this? (Something like Y for the password)? Alexis On Fri, Jan 6, 2012 at 12:48 PM, Timo Sirainen wrote: > On Fri, 2012-01-06 at 12:36 +0100, Alexis Lelion wrote: > > The thing is that when the ACLs are checked, it actually doesn't give > > the user login, but the master login, which is useless. > > Yes, this is intentional. > > > Is there a way to use the first part of destuser as it is done when > > fetching info from the userdb? > > You should be able to work around this with modifying userdb's query: > > user_query = select '%n' AS master_user, ... > > > From tss at iki.fi Fri Jan 6 14:26:37 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 14:26:37 +0200 Subject: [Dovecot] doveadm + dsync merging In-Reply-To: <4EFC76F0.2050705@localhost.localdomain.org> References: <20111229125326.GA2295@state-of-mind.de> <4EFC76F0.2050705@localhost.localdomain.org> Message-ID: <1325852800.17774.17.camel@hurina> On Thu, 2011-12-29 at 15:19 +0100, Pascal Volk wrote: > >> b) Don't have the dsync prefix: > >> > >> dsync mirror -> doveadm mirror > >> dsync backup -> doveadm backup > >> dsync server -> doveadm dsync-server (could be hidden from the doveadm commands list) I did this now, with mirror -> sync. > I'd prefer doveadm commands with the dsync prefix. (a)) Because: > > * doveadm already has other 'command groups' like mailbox, director ... > * that's the way to avoid command clashes (w/o hiding anything) There are already many mail related commands that don't have any prefix. For example I think "doveadm import" and "doveadm backup" are quite related. Also "dsync" is perhaps more about the internal implementation, so in future it's possible that sync/backup works some other way..
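For scripts, the renames map roughly like this (the user and mailbox location are placeholders; the old spelling keeps working through the compatibility symlink):

  dsync -u jane mirror mdbox:~/mdbox-replica     # old
  doveadm sync -u jane mdbox:~/mdbox-replica     # new, two-way sync
  doveadm backup -u jane mdbox:~/mdbox-replica   # new, one-way backup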
From tss at iki.fi Fri Jan 6 14:30:12 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 14:30:12 +0200 Subject: [Dovecot] ACL with IMAP proxying In-Reply-To: References: <1325850528.17774.13.camel@hurina> Message-ID: <1325853012.17774.19.camel@hurina> On Fri, 2012-01-06 at 13:22 +0100, Alexis Lelion wrote: > Thanks for your prompt answer, I wasn't expecting an answer that soon ;-) > I just tried your workaround, and actually, master_user is properly set to > the username, but then is overriden with the proxy login again : > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > mail=maildir:/var/vmail/domain/user > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > plugin/quota=dirsize:storage=0 > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > plugin/master_user=user > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > plugin/master_user=proxy I thought it would have been the other way around.. See if http://hg.dovecot.org/dovecot-2.0/raw-rev/684381041dc4 helps? > Is there any other flag I can set to avoid this? (Something like Y for the > password)? Nope. From alexis.lelion at gmail.com Fri Jan 6 14:55:03 2012 From: alexis.lelion at gmail.com (Alexis Lelion) Date: Fri, 6 Jan 2012 13:55:03 +0100 Subject: [Dovecot] ACL with IMAP proxying In-Reply-To: <1325853012.17774.19.camel@hurina> References: <1325850528.17774.13.camel@hurina> <1325853012.17774.19.camel@hurina> Message-ID: Thanks Timo. I'm actually using a packaged version of Dovecot 2.0 from Debian, so I can't apply the patch easily right now. I'll try do build dovecot this weekend and see if it solves the issue. Cheers Alexis On Fri, Jan 6, 2012 at 1:30 PM, Timo Sirainen wrote: > On Fri, 2012-01-06 at 13:22 +0100, Alexis Lelion wrote: > > > Thanks for your prompt answer, I wasn't expecting an answer that soon ;-) > > I just tried your workaround, and actually, master_user is properly set > to > > the username, but then is overriden with the proxy login again : > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > mail=maildir:/var/vmail/domain/user > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > plugin/quota=dirsize:storage=0 > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > plugin/master_user=user > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > plugin/master_user=proxy > > I thought it would have been the other way around.. See if > http://hg.dovecot.org/dovecot-2.0/raw-rev/684381041dc4 helps? > > > Is there any other flag I can set to avoid this? (Something like Y for > the > > password)? > > Nope. > > > From tss at iki.fi Fri Jan 6 14:57:24 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 14:57:24 +0200 Subject: [Dovecot] ACL with IMAP proxying In-Reply-To: References: <1325850528.17774.13.camel@hurina> <1325853012.17774.19.camel@hurina> Message-ID: <1325854644.17774.20.camel@hurina> Another possibility: http://wiki2.dovecot.org/PostLoginScripting and set MASTER_USER environment. On Fri, 2012-01-06 at 13:55 +0100, Alexis Lelion wrote: > Thanks Timo. > I'm actually using a packaged version of Dovecot 2.0 from Debian, so I > can't apply the patch easily right now. > I'll try do build dovecot this weekend and see if it solves the issue. 
> > Cheers > > Alexis > > On Fri, Jan 6, 2012 at 1:30 PM, Timo Sirainen wrote: > > > On Fri, 2012-01-06 at 13:22 +0100, Alexis Lelion wrote: > > > > > Thanks for your prompt answer, I wasn't expecting an answer that soon ;-) > > > I just tried your workaround, and actually, master_user is properly set > > to > > > the username, but then is overriden with the proxy login again : > > > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > mail=maildir:/var/vmail/domain/user > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > plugin/quota=dirsize:storage=0 > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > plugin/master_user=user > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > plugin/master_user=proxy > > > > I thought it would have been the other way around.. See if > > http://hg.dovecot.org/dovecot-2.0/raw-rev/684381041dc4 helps? > > > > > Is there any other flag I can set to avoid this? (Something like Y for > > the > > > password)? > > > > Nope. > > > > > > From adrian.minta at gmail.com Fri Jan 6 15:01:52 2012 From: adrian.minta at gmail.com (Adrian Minta) Date: Fri, 06 Jan 2012 15:01:52 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <1325849985.17774.10.camel@hurina> References: <4F06D5D9.20001@gmail.com> <1325849985.17774.10.camel@hurina> Message-ID: <4F06F0C0.30906@gmail.com> On 01/06/12 13:39, Timo Sirainen wrote: > On Fri, 2012-01-06 at 13:07 +0200, Adrian Minta wrote: >> Hello, >> is it possible to disable indexing on dovecot-lda ? > protocol lda { > mail_location = whatever-you-have-now:INDEX=MEMORY > } > >> Right now postfix delivers the mail directly to the nfs server without >> any problems. If I switch to dovecot-lda the system crashes do to the >> high I/O and locking. > Disabling indexing won't disable writing to dovecot-uidlist file. So I > don't know if disabling indexes actually helps. > I don't have mail_location under "protocol lda": protocol lda { # Address to use when sending rejection mails. postmaster_address = postmaster at xxx sendmail_path = /usr/lib/sendmail auth_socket_path = /var/run/dovecot/auth-master mail_plugins = quota syslog_facility = mail } The mail_location is present only global. What to do then ? From adrian.minta at gmail.com Fri Jan 6 15:02:31 2012 From: adrian.minta at gmail.com (Adrian Minta) Date: Fri, 06 Jan 2012 15:02:31 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <4F06DFF5.40707@hardwarefreak.com> References: <4F06D5D9.20001@gmail.com> <4F06DFF5.40707@hardwarefreak.com> Message-ID: <4F06F0E7.904@gmail.com> On 01/06/12 13:50, Stan Hoeppner wrote: > On 1/6/2012 5:07 AM, Adrian Minta wrote: >> Hello, >> is it possible to disable indexing on dovecot-lda ? >> >> Right now postfix delivers the mail directly to the nfs server without >> any problems. If I switch to dovecot-lda the system crashes do to the >> high I/O and locking. >> Indexing on lda is not very useful because the number of of imap logins >> is less than 5% that of incoming mails, so an user could wait for 3 sec >> to get his mail index, but a new mail can't. > Then why bother with Dovecot LDA w/disabled indexing (the main reason > for using it in the first place) instead of simply sticking with Postfix > Local(8)? > Because of sieve and quota support. Another possible advantage will be the support for hashed mailbox directories. 
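For context, the usual Postfix-to-dovecot-lda handoff on a 1.x setup is a pipe transport along these lines (the vmail user and the deliver path are examples, season to taste):

  /etc/postfix/master.cf:
    dovecot unix - n n - - pipe
      flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -f ${sender} -d ${recipient}

  /etc/postfix/main.cf:
    virtual_transport = dovecot
    dovecot_destination_recipient_limit = 1

This is what buys the sieve and quota handling at delivery time; whether the extra index and uidlist I/O on NFS is tolerable is the open question in this thread.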
From tss at iki.fi Fri Jan 6 15:08:26 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 15:08:26 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <4F06F0C0.30906@gmail.com> References: <4F06D5D9.20001@gmail.com> <1325849985.17774.10.camel@hurina> <4F06F0C0.30906@gmail.com> Message-ID: <1325855306.17774.21.camel@hurina> On Fri, 2012-01-06 at 15:01 +0200, Adrian Minta wrote: > > protocol lda { > > mail_location = whatever-you-have-now:INDEX=MEMORY > > } > > > I don't have mail_location under "protocol lda": Just add it there. From alexis.lelion at gmail.com Fri Jan 6 15:20:26 2012 From: alexis.lelion at gmail.com (Alexis Lelion) Date: Fri, 6 Jan 2012 14:20:26 +0100 Subject: [Dovecot] ACL with IMAP proxying In-Reply-To: <1325854644.17774.20.camel@hurina> References: <1325850528.17774.13.camel@hurina> <1325853012.17774.19.camel@hurina> <1325854644.17774.20.camel@hurina> Message-ID: It worked! Thanks a lot for your help and have a wonderful day! On Fri, Jan 6, 2012 at 1:57 PM, Timo Sirainen wrote: > Another possibility: http://wiki2.dovecot.org/PostLoginScripting > > and set MASTER_USER environment. > > On Fri, 2012-01-06 at 13:55 +0100, Alexis Lelion wrote: > > Thanks Timo. > > I'm actually using a packaged version of Dovecot 2.0 from Debian, so I > > can't apply the patch easily right now. > > I'll try do build dovecot this weekend and see if it solves the issue. > > > > Cheers > > > > Alexis > > > > On Fri, Jan 6, 2012 at 1:30 PM, Timo Sirainen wrote: > > > > > On Fri, 2012-01-06 at 13:22 +0100, Alexis Lelion wrote: > > > > > > > Thanks for your prompt answer, I wasn't expecting an answer that > soon ;-) > > > > I just tried your workaround, and actually, master_user is properly > set > > > to > > > > the username, but then is overriden with the proxy login again : > > > > > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > > mail=maildir:/var/vmail/domain/user > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > > plugin/quota=dirsize:storage=0 > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > > plugin/master_user=user > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > > plugin/master_user=proxy > > > > > > I thought it would have been the other way around.. See if > > > http://hg.dovecot.org/dovecot-2.0/raw-rev/684381041dc4 helps? > > > > > > > Is there any other flag I can set to avoid this? (Something like Y > for > > > the > > > > password)? > > > > > > Nope. > > > > > > > > > > > > From adrian.minta at gmail.com Fri Jan 6 15:25:11 2012 From: adrian.minta at gmail.com (Adrian Minta) Date: Fri, 06 Jan 2012 15:25:11 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <1325855306.17774.21.camel@hurina> References: <4F06D5D9.20001@gmail.com> <1325849985.17774.10.camel@hurina> <4F06F0C0.30906@gmail.com> <1325855306.17774.21.camel@hurina> Message-ID: <4F06F637.3070504@gmail.com> On 01/06/12 15:08, Timo Sirainen wrote: > On Fri, 2012-01-06 at 15:01 +0200, Adrian Minta wrote: >>> protocol lda { >>> mail_location = whatever-you-have-now:INDEX=MEMORY >>> } >>> >> I don't have mail_location under "protocol lda": > Just add it there. > Thank you ! 
Dovecot didn't complain after restart and the "dovecot -a" reports it correctly: lda: postmaster_address: postmaster at xxx sendmail_path: /usr/lib/sendmail auth_socket_path: /var/run/dovecot/auth-master mail_plugins: quota syslog_facility: mail mail_location: maildir:/var/virtual/%d/%u:INDEX=MEMORY I will do a test with this. From yubao.liu at gmail.com Fri Jan 6 18:15:55 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Sat, 07 Jan 2012 00:15:55 +0800 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <4F06D272.5010200@bunbun.be> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> Message-ID: <4F071E3B.2060405@gmail.com> On 01/06/2012 06:52 PM, Nick Rosier wrote: > Yubao Liu wrote: >> Hi all, >> >> I have no idea about that message, here is my configuration, what's wrong? > You have 2 passdb entries; 1 with a file and 1 with pam. I'm pretty sure > PAM doesn't support DIGEST-MD5 authentication. Could be the cause of the > problem. > Thanks, that is indeed the cause. http://hg.dovecot.org/dovecot-2.0/file/684381041dc4/src/auth/auth.c 121 static bool auth_passdb_list_have_lookup_credentials(struct auth *auth) 122 { 123 struct auth_passdb *passdb; 124 125 for (passdb = auth->passdbs; passdb != NULL; passdb = passdb->next) { 126 if (passdb->passdb->iface.lookup_credentials != NULL) 127 return TRUE; 128 } 129 return FALSE; 130 } I don't know why this function doesn't check auth->masterdbs; if I insert these lines after line 128, that error goes away, and dovecot's imap-login process happily does DIGEST-MD5 authentication [1]. In my configuration, "masterdbs" contains "passdb passwd-file", "passdbs" contains " passdb pam". for (passdb = auth->masterdbs; passdb != NULL; passdb = passdb->next) { if (passdb->passdb->iface.lookup_credentials != NULL) return TRUE; } [1] But the authentication for "user*master" always fails. I realized master users can't log in as other users via the DIGEST-MD5 or CRAM-MD5 auth mechanisms, because these authentication mechanisms use "user*master" as the username in the hash algorithm, not just "master".
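The mechanics behind that: DIGEST-MD5 folds the login name into the stored secret (HA1 = MD5("user:realm:password"), per RFC 2831), so a hash stored for "webmail" can never verify a client that hashed "dieken*webmail". CRAM-MD5's response is just an HMAC-MD5 keyed with the password over the server's challenge (RFC 2195); the username travels beside the digest, not inside it. A sketch of the CRAM-MD5 computation with openssl (the challenge string here is made up):

  printf '%s' '<12345.67890@imap.example.com>' | openssl dgst -md5 -hmac '123456'

The hex output is what the client sends after its username, which is why the user*master syntax doesn't break this mechanism.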
Regards, Yubao Liu >> Debian testing, Dovecot 2.0.15 >> >> $ doveconf -n >> # 2.0.15: /etc/dovecot/dovecot.conf >> # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid >> auth_default_realm = corp.example.com >> auth_krb5_keytab = /etc/dovecot.keytab >> auth_master_user_separator = * >> auth_mechanisms = gssapi digest-md5 >> auth_realms = corp.example.com >> auth_username_format = %n >> first_valid_gid = 1000 >> first_valid_uid = 1000 >> mail_location = mdbox:/srv/mail/%u/Mail >> managesieve_notify_capability = mailto >> managesieve_sieve_capability = fileinto reject envelope >> encoded-character vacation subaddress comparator-i;ascii-numeric >> relational regex imap4flags copy include variables body enotify >> environment mailbox date ihave >> passdb { >> args = /etc/dovecot/master-users >> driver = passwd-file >> master = yes >> pass = yes >> } >> passdb { >> driver = pam >> } >> plugin { >> sieve = /srv/mail/%u/.dovecot.sieve >> sieve_dir = /srv/mail/%u/sieve >> } >> protocols = " imap lmtp sieve" >> service auth { >> unix_listener auth-client { >> group = Debian-exim >> mode = 0660 >> } >> } >> ssl_cert => ssl_key => userdb { >> args = home=/srv/mail/%u >> driver = passwd >> } >> protocol lmtp { >> mail_plugins = " sieve" >> } >> protocol lda { >> mail_plugins = " sieve" >> } >> >> # cat /etc/dovecot/master-users >> xxx at corp.example.com:zzzzzzzz >> >> The zzzzz is obtained by "doveadm pw -s digest-md5 -u >> xxx at corp.example.com", >> I tried to add prefix "{DIGEST-MD5}" before the generated hash and/or add >> "scheme=DIGEST-MD5" to the passwd-file passdb's "args" option, both >> don't help. >> >> The error message: >> dovecot: master: Dovecot v2.0.15 starting up (core dumps disabled) >> dovecot: auth: Fatal: DIGEST-MD5 mechanism can't be supported with given >> passdbs >> gold dovecot: master: Error: service(auth): command startup failed, >> throttling >> >> I opened debug auth log, it showed dovecot read /etc/dovecot/master-users >> and parsed one line, then the error occurred. Doesn't passwd-file >> passdb support >> digest-md5 password scheme? If it doesn't support, how do I configure >> digest-md5 auth >> mechanism with digest-md5 password scheme for virtual users? >> >> Regards, >> Yubao Liu >> > Rgds, > N. From tss at iki.fi Fri Jan 6 18:41:48 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 18:41:48 +0200 Subject: [Dovecot] v2.0.17 released Message-ID: <1325868113.17774.28.camel@hurina> http://dovecot.org/releases/2.0/dovecot-2.0.17.tar.gz http://dovecot.org/releases/2.0/dovecot-2.0.17.tar.gz.sig Among other changes: + Proxying now supports sending SSL client certificate to server with ssl_client_cert/key settings. + doveadm dump: Added support for dumping dbox headers/metadata. - Fixed memory leaks in login processes with SSL connections - vpopmail support was broken in v2.0.16 From tss at iki.fi Fri Jan 6 18:42:07 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 18:42:07 +0200 Subject: [Dovecot] v2.1.rc2 released Message-ID: <1325868127.17774.29.camel@hurina> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz.sig Lots of fixes since rc1. Some of the changes were larger than I wanted at RC stage, but they had to be done now.. Hopefully it's all over now, and we can have v2.1.0 soon. :) Some of the more important changes: * dsync was merged into doveadm. There is still "dsync" symlink pointing to "doveadm", which you can use the old way for now. 
The preferred ways to run dsync are "doveadm sync" (for old "dsync mirror") and "doveadm backup". + IMAP SPECIAL-USE extension to describe mailboxes + Added mailbox {} sections, which deprecate autocreate plugin + lib-fs: Added "mode" parameter to "posix" backend to specify mode for created files/dirs (for mail_attachment_dir). + inet_listener names are now used to figure out what type the socket is when useful. For example naming service auth { inet_listener } to auth-client vs. auth-userdb has different behavior. + Added pop3c (= POP3 client) storage backend. - LMTP proxying code was simplified, hopefully fixing its problems. - dsync: Don't remove user's subscriptions for subscriptions=no namespaces. From tss at iki.fi Fri Jan 6 18:44:44 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 18:44:44 +0200 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <4F071E3B.2060405@gmail.com> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> Message-ID: <1325868288.17774.30.camel@hurina> On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote: > I don't know why this function doesn't check auth->masterdbs, if I > insert these lines after line 128, that error goes away, and dovecot's > imap-login process happily does DIGEST-MD5 authentication [1]. > In my configuration, "masterdbs" contains "passdb passwd-file", > "passdbs" contains " passdb pam". So .. you want DIGEST-MD5 authentication for the master users, but not for anyone else? I hadn't really thought anyone would want that.. From lists at luigirosa.com Fri Jan 6 19:13:20 2012 From: lists at luigirosa.com (Luigi Rosa) Date: Fri, 06 Jan 2012 18:13:20 +0100 Subject: [Dovecot] v2.1.rc2 released In-Reply-To: <1325868127.17774.29.camel@hurina> References: <1325868127.17774.29.camel@hurina> Message-ID: <4F072BB0.7040507@luigirosa.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Timo Sirainen said the following on 06/01/12 17:42: > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz Making all in doveadm make[3]: Entering directory `/usr/src/dovecot-2.1.rc2/src/doveadm' Making all in dsync make[4]: Entering directory `/usr/src/dovecot-2.1.rc2/src/doveadm/dsync' gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../../src/lib -I../../../src/lib-test - -I../../../src/lib-settings -I../../../src/lib-master -I../../../src/lib-mail - -I../../../src/lib-imap -I../../../src/lib-index -I../../../src/lib-storage - -I../../../src/doveadm -std=gnu99 -g -O2 -Wall -W -Wmissing-prototypes - -Wmissing-declarations -Wpointer-arith -Wchar-subscripts -Wformat=2 - -Wbad-function-cast -Wstrict-aliasing=2 -I/usr/kerberos/include -MT doveadm-dsync.o -MD -MP -MF .deps/doveadm-dsync.Tpo -c -o doveadm-dsync.o doveadm-dsync.c doveadm-dsync.c:17:27: error: doveadm-dsync.h: No such file or directory doveadm-dsync.c:386: warning: no previous prototype for ?doveadm_dsync_main? make[4]: *** [doveadm-dsync.o] Error 1 make[4]: Leaving directory `/usr/src/dovecot-2.1.rc2/src/doveadm/dsync' make[3]: *** [all-recursive] Error 1 make[3]: Leaving directory `/usr/src/dovecot-2.1.rc2/src/doveadm' make[2]: *** [all-recursive] Error 1 make[2]: Leaving directory `/usr/src/dovecot-2.1.rc2/src' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/usr/src/dovecot-2.1.rc2' make: *** [all] Error 2 In fact the file doveadm-dsync.h is not in the tarball Ciao, luigi - -- / +--[Luigi Rosa]-- \ Non cercare di vincere mai un gatto in testardaggine. --Robert A. 
Heinlein -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk8HK68ACgkQ3kWu7Tfl6ZRCkgCgwUGMxj12NBI3p8FO0W2AIBwW uSAAn3YuEAtm5ulsvWaPuPeylK2e/Vpc =kzD0 -----END PGP SIGNATURE----- From yubao.liu at gmail.com Fri Jan 6 19:29:14 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Sat, 07 Jan 2012 01:29:14 +0800 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <1325868288.17774.30.camel@hurina> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> Message-ID: <4F072F6A.8050801@gmail.com> On 01/07/2012 12:44 AM, Timo Sirainen wrote: > On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote: > >> I don't know why this function doesn't check auth->masterdbs, if I >> insert these lines after line 128, that error goes away, and dovecot's >> imap-login process happily does DIGEST-MD5 authentication [1]. >> In my configuration, "masterdbs" contains "passdb passwd-file", >> "passdbs" contains " passdb pam". > So .. you want DIGEST-MD5 authentication for the master users, but not > for anyone else? I hadn't really thought anyone would want that.. > I hope users use GSSAPI authentication from native MUA, but RoundCube webmail doesn't support that, so that I have to use DIGEST-MD5/CRAM-MD5/ PLAIN/LOGIN for authentication between RoundCube and Dovecot, and let RoundCube login as master user for normal user. I really don't like to transfer password as plain text, so I prefer DIGEST-MD5 and CRAM-MD5 for both auth mechanisms and password schemes. My last email is partially wrong, DIGEST-MD5 can't be used for master users because 'real_user*master_user' is used to calculate digest in IMAP client, this can't be consistent with digest in passdb because only 'master_user' is used to calculate digest. But CRAM-MD5 doesn't use user name to calculate digest, I just tried it successfully with my rude patch to src/auth/auth.c in my previous email:-) # doveadm pw -s CRAM-MD5 -u webmail (use 123456 as passwd) # cat > /etc/dovecot/master-users webmail:{CRAM-MD5}dd59f669267e9bb13d42a1ba57c972c5b13a4b2ae457c9ada8035dc7d8bae41b ^D $ gsasl --imap imap.corp.example.com --verbose -m CRAM-MD5 -a 'dieken*webmail at corp.example.com' -p 123456 Trying `gold.corp.example.com'... * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS LOGINDISABLED AUTH=GSSAPI AUTH=DIGEST-MD5 AUTH=CRAM-MD5] Dovecot ready. . CAPABILITY * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS LOGINDISABLED AUTH=GSSAPI AUTH=DIGEST-MD5 AUTH=CRAM-MD5 . OK Pre-login capabilities listed, post-login capabilities have more. . STARTTLS . OK Begin TLS negotiation now. . CAPABILITY * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE AUTH=GSSAPI AUTH=DIGEST-MD5 AUTH=CRAM-MD5 . OK Pre-login capabilities listed, post-login capabilities have more. . AUTHENTICATE CRAM-MD5 + PDM1OTIzODgxNjgyNzUxMjUuMTMyNTg3MDQwMkBnb2xkPg== ZGlla2VuKndlYm1haWxAY29ycC5leGFtcGxlLmNvbSBkYjRlZWJlMTUwZGZjZjg5NTVkODZhNDBlMGJiZmQzNA== * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS Client authentication finished (server trusted)... 
Enter application data (EOF to finish): It's also OK to use "-a 'dieken*webmail'" instead of "-a 'dieken*webmail at corp.example.com'. # doveconf -n # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid auth_debug = yes auth_debug_passwords = yes auth_default_realm = corp.example.com auth_krb5_keytab = /etc/dovecot.keytab auth_master_user_separator = * auth_mechanisms = gssapi digest-md5 cram-md5 auth_realms = corp.example.com auth_username_format = %n auth_verbose = yes auth_verbose_passwords = plain first_valid_gid = 1000 first_valid_uid = 1000 mail_debug = yes mail_location = mdbox:/srv/mail/%u/Mail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave passdb { args = /etc/dovecot/master-users driver = passwd-file master = yes } passdb { driver = pam } plugin { sieve = /srv/mail/%u/.dovecot.sieve sieve_dir = /srv/mail/%u/sieve } protocols = " imap lmtp sieve" service auth { unix_listener auth-client { group = Debian-exim mode = 0660 } } ssl_cert = References: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> Message-ID: On 1/6/2012 2:57 AM, Timo Sirainen wrote: > On 6.1.2012, at 12.55, Timo Sirainen wrote: > >>> Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it)) >>> Jan 4 06:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory >> The problem is clearly that index-worker's vsz_limit is too low. Increase it (or default_vsz_limit). > Although the source of the out-of-memory > > /usr/local/lib/dovecot/libdovecot.so.0(buffer_write+0x7c) [0x7f0ec1a550ec] -> /usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3292) [0x7f0ec024f292] -> > > is something that shouldn't really be happening. I guess the Solr plugin wastes memory unnecessarily, I'll see what I can do about it. But for now just increase vsz limit. > I set default_vsz_limit = 1024M. Those errors appear gone - but I do have messages like: Jan 6 09:22:42 bubba dovecot: indexer-worker(user1 at domain.com): Error: fts_solr: Indexing failed: 400 Illegal character ((CTRL-CHAR, code 18)) at [row,col {unknown-source}]: [482765,16] Jan 6 09:22:42 bubba dovecot: indexer-worker: Error: Google seems to indicate that Solr cannot handle "invalid" characters - and that it is the responsibility of the calling program to strip out such. A quick search shows me a both an individual character comparison in Java and a regex used for the purpose. Is there any "illegal character protection" in the Dovecot Solr plugin? -- Daniel From dmiller at amfes.com Fri Jan 6 19:35:34 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Fri, 06 Jan 2012 09:35:34 -0800 Subject: [Dovecot] FTS-Solr plugin Message-ID: Solr plugin appears to break when mailbox names have an ampersand in the name. The messages appear to indicate '&' gets translated to '&--'. -- Daniel From tss at iki.fi Fri Jan 6 19:36:41 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 19:36:41 +0200 Subject: [Dovecot] Possible mdbox corruption In-Reply-To: References: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> Message-ID: <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> On 6.1.2012, at 19.30, Daniel L. 
> Jan 6 09:22:42 bubba dovecot: indexer-worker(user1 at domain.com): Error: fts_solr: Indexing failed: 400 Illegal character ((CTRL-CHAR, code 18)) at [row,col {unknown-source}]: [482765,16]
> Jan 6 09:22:42 bubba dovecot: indexer-worker: Error:
>
> Google seems to indicate that Solr cannot handle "invalid" characters - and that it is the responsibility of the calling program to strip them out. A quick search shows me both an individual character comparison in Java and a regex used for the purpose. Is there any "illegal character protection" in the Dovecot Solr plugin?

Yes, there is. So I'm not really sure what it's complaining about. Are you using the "solr" or "solr_old" backend?

From yubao.liu at gmail.com Fri Jan 6 19:45:15 2012
From: yubao.liu at gmail.com (Yubao Liu)
Date: Sat, 07 Jan 2012 01:45:15 +0800
Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs
In-Reply-To: <1325868288.17774.30.camel@hurina>
References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina>
Message-ID: <4F07332B.70708@gmail.com>

On 01/07/2012 12:44 AM, Timo Sirainen wrote:
> On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote:
>
>> I don't know why this function doesn't check auth->masterdbs, if I
>> insert these lines after line 128, that error goes away, and dovecot's
>> imap-login process happily does DIGEST-MD5 authentication [1].
>> In my configuration, "masterdbs" contains "passdb passwd-file",
>> "passdbs" contains " passdb pam".
> So .. you want DIGEST-MD5 authentication for the master users, but not
> for anyone else? I hadn't really thought anyone would want that..
>

Is there any special reason that the master passdb isn't taken into account in src/auth/auth.c:auth_passdb_list_have_lookup_credentials()? I feel a master passdb is also a kind of passdb.

http://wiki2.dovecot.org/PasswordDatabase
> You can use multiple databases, so if the password doesn't match
> in the first database, Dovecot checks the next one. This can be useful
> if you want to easily support having both virtual users and also local
> system users (see Authentication/MultipleDatabases).

This is exactly my use case: I use Kerberos for system users, and I'm curious why the master passdb isn't used to check the "have_lookup_credentials" capability.

http://wiki2.dovecot.org/Authentication/MultipleDatabases
> Currently the fallback works only with the PLAIN authentication mechanism.

I hope this limitation can be relaxed.

Regards,
Yubao Liu

From tss at iki.fi Fri Jan 6 19:51:49 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 6 Jan 2012 19:51:49 +0200
Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs
In-Reply-To: <4F07332B.70708@gmail.com>
References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> <4F07332B.70708@gmail.com>
Message-ID: 

On 6.1.2012, at 19.45, Yubao Liu wrote:
> On 01/07/2012 12:44 AM, Timo Sirainen wrote:
>> On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote:
>>
>>> I don't know why this function doesn't check auth->masterdbs, if I
>>> insert these lines after line 128, that error goes away, and dovecot's
>>> imap-login process happily does DIGEST-MD5 authentication [1].
>>> In my configuration, "masterdbs" contains "passdb passwd-file",
>>> "passdbs" contains " passdb pam".
>> So .. you want DIGEST-MD5 authentication for the master users, but not
>> for anyone else?
>> I hadn't really thought anyone would want that..
>
> Is there any special reason that the master passdb isn't taken into
> account in src/auth/auth.c:auth_passdb_list_have_lookup_credentials()?
> I feel a master passdb is also a kind of passdb.

I guess it could be changed. It wasn't done intentionally that way.

> This is exactly my use case: I use Kerberos for system users, and I'm
> curious why the master passdb isn't used to check the
> "have_lookup_credentials" capability.
> http://wiki2.dovecot.org/Authentication/MultipleDatabases
>> Currently the fallback works only with the PLAIN authentication mechanism.
> I hope this limitation can be relaxed.

It might already be .. I don't remember. In any case you have only a PAM passdb, so it shouldn't matter. GSSAPI isn't a passdb.

From tss at iki.fi Fri Jan 6 21:40:44 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 06 Jan 2012 21:40:44 +0200
Subject: [Dovecot] v2.1.rc3 released
Message-ID: <1325878845.17774.38.camel@hurina>

http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc3.tar.gz
http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc3.tar.gz.sig

Whoops, rc2 was missing a file. I always run "make distcheck", which should catch these, but recently it has always failed due to clang static checking giving one "error" that I didn't really want to fix. Because of that the distcheck didn't finish and didn't check for the missing file. So, anyway, I've made clang happy again, and now that I see how bad an idea it is to just ignore a failed distcheck, I won't do that again in the future. :)

From mailinglist at ngong.de Fri Jan 6 18:37:22 2012
From: mailinglist at ngong.de (mailinglist)
Date: Fri, 06 Jan 2012 17:37:22 +0100
Subject: [Dovecot] change initial permissions on creation of mail folder
Message-ID: <4F072342.1090901@ngong.de>

Installed Dovecot from the Debian .deb file. Creating a new account for a system user sets the permissions to user-only. Where do I change the initial permissions used when the mail folder and its subdirectories are created?

Installed Dovecot using "apt-get install dovecot-imapd dovecot-pop3d". Any time I create a new account in my mail client for a system user, Dovecot tries to create ~/mail/.imap/INBOX. The permissions for mail and .imap are set to 0700. With these permissions INBOX cannot be created, leading to an error message in the log files. When I manually change the permissions to 0770, INBOX is created.
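For now I am working around it by fixing the permissions by hand after the first login attempt has created the directories. A sketch of what I run as root (the user name is just an example):

chmod 0770 /home/exampleuser/mail /home/exampleuser/mail/.imap

Is there a setting so that Dovecot creates these directories group-accessible in the first place?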
From doctor at doctor.nl2k.ab.ca Fri Jan 6 22:12:56 2012
From: doctor at doctor.nl2k.ab.ca (The Doctor)
Date: Fri, 6 Jan 2012 13:12:56 -0700
Subject: [Dovecot] v2.1.rc2 released
In-Reply-To: <1325868127.17774.29.camel@hurina>
References: <1325868127.17774.29.camel@hurina>
Message-ID: <20120106201255.GA20598@doctor.nl2k.ab.ca>

On Fri, Jan 06, 2012 at 06:42:07PM +0200, Timo Sirainen wrote:
> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz
> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz.sig
>
> Lots of fixes since rc1. Some of the changes were larger than I wanted
> at RC stage, but they had to be done now.. Hopefully it's all over now,
> and we can have v2.1.0 soon. :)
>
> Some of the more important changes:
>
> * dsync was merged into doveadm. There is still "dsync" symlink
> pointing to "doveadm", which you can use the old way for now.
> The preferred ways to run dsync are "doveadm sync" (for old "dsync
> mirror") and "doveadm backup".
>
> + IMAP SPECIAL-USE extension to describe mailboxes
> + Added mailbox {} sections, which deprecate autocreate plugin
> + lib-fs: Added "mode" parameter to "posix" backend to specify mode
> for created files/dirs (for mail_attachment_dir).
> + inet_listener names are now used to figure out what type the socket
> is when useful. For example naming service auth { inet_listener } to
> auth-client vs. auth-userdb has different behavior.
> + Added pop3c (= POP3 client) storage backend.
> - LMTP proxying code was simplified, hopefully fixing its problems.
> - dsync: Don't remove user's subscriptions for subscriptions=no
> namespaces.
>

Suggestion:

Get rid of the --as-needed ld flag. This is a show stopper for me.

Also,

Making all in doveadm
Making all in dsync
gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../../src/lib -I../../../src/lib-test -I../../../src/lib-settings -I../../../src/lib-master -I../../../src/lib-mail -I../../../src/lib-imap -I../../../src/lib-index -I../../../src/lib-storage -I../../../src/doveadm -std=gnu99 -g -O2 -Wall -W -Wmissing-prototypes -Wmissing-declarations -Wpointer-arith -Wchar-subscripts -Wformat=2 -Wbad-function-cast -I/usr/contrib/include -MT doveadm-dsync.o -MD -MP -MF .deps/doveadm-dsync.Tpo -c -o doveadm-dsync.o doveadm-dsync.c
doveadm-dsync.c:17:27: doveadm-dsync.h: No such file or directory
doveadm-dsync.c:386: warning: no previous prototype for `doveadm_dsync_main'
*** Error code 1

Stop.
*** Error code 1

Stop.
*** Error code 1

Stop.
*** Error code 1

Stop.
*** Error code 1

Looks like rc3 is needed.

-- 
Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca
God, Queen and country! Never Satan President Republic! Beware AntiChrist rising!
https://www.fullyfollow.me/rootnl2k
Merry Christmas 2011 and Happy New Year 2012 !

From doctor at doctor.nl2k.ab.ca Fri Jan 6 22:19:14 2012
From: doctor at doctor.nl2k.ab.ca (The Doctor)
Date: Fri, 6 Jan 2012 13:19:14 -0700
Subject: [Dovecot] v2.1.rc2 released
In-Reply-To: <20120106201255.GA20598@doctor.nl2k.ab.ca>
References: <1325868127.17774.29.camel@hurina> <20120106201255.GA20598@doctor.nl2k.ab.ca>
Message-ID: <20120106201914.GC20598@doctor.nl2k.ab.ca>

On Fri, Jan 06, 2012 at 01:12:56PM -0700, The Doctor wrote:
> On Fri, Jan 06, 2012 at 06:42:07PM +0200, Timo Sirainen wrote:
> > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz
> > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz.sig
> >
> > Lots of fixes since rc1. Some of the changes were larger than I wanted
> > at RC stage, but they had to be done now.. Hopefully it's all over now,
> > and we can have v2.1.0 soon. :)
> >
> > Some of the more important changes:
> >
> > * dsync was merged into doveadm. There is still "dsync" symlink
> > pointing to "doveadm", which you can use the old way for now.
> > The preferred ways to run dsync are "doveadm sync" (for old "dsync
> > mirror") and "doveadm backup".
> > - dsync: Don't remove user's subscriptions for subscriptions=no
> > namespaces.
> >
>
> Suggestion:
>
> Get rid of the --as-needed ld flag. This is a show stopper for me.
>
> Also,
>
> Making all in doveadm
> Making all in dsync
> gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../../src/lib -I../../../src/lib-test -I../../../src/lib-settings -I../../../src/lib-master -I../../../src/lib-mail -I../../../src/lib-imap -I../../../src/lib-index -I../../../src/lib-storage -I../../../src/doveadm -std=gnu99 -g -O2 -Wall -W -Wmissing-prototypes -Wmissing-declarations -Wpointer-arith -Wchar-subscripts -Wformat=2 -Wbad-function-cast -I/usr/contrib/include -MT doveadm-dsync.o -MD -MP -MF .deps/doveadm-dsync.Tpo -c -o doveadm-dsync.o doveadm-dsync.c
> doveadm-dsync.c:17:27: doveadm-dsync.h: No such file or directory
> doveadm-dsync.c:386: warning: no previous prototype for `doveadm_dsync_main'
> *** Error code 1
>
> Stop.
> *** Error code 1
>
> Stop.
> *** Error code 1
>
> Stop.
> *** Error code 1
>
> Stop.
> *** Error code 1
>
> Looks like rc3 is needed.
>

Just noted your rc3 notice.

Can you get an rc4 going where the above 2 mentions are fixed?

> -- 
> Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca
> God, Queen and country! Never Satan President Republic! Beware AntiChrist rising!
> https://www.fullyfollow.me/rootnl2k
> Merry Christmas 2011 and Happy New Year 2012 !

-- 
Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca
God, Queen and country! Never Satan President Republic! Beware AntiChrist rising!
https://www.fullyfollow.me/rootnl2k
Merry Christmas 2011 and Happy New Year 2012 !

From tss at iki.fi Fri Jan 6 22:24:45 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 6 Jan 2012 22:24:45 +0200
Subject: [Dovecot] v2.1.rc2 released
In-Reply-To: <20120106201914.GC20598@doctor.nl2k.ab.ca>
References: <1325868127.17774.29.camel@hurina> <20120106201255.GA20598@doctor.nl2k.ab.ca> <20120106201914.GC20598@doctor.nl2k.ab.ca>
Message-ID: <01600D7A-F1E9-4DD9-8182-B3A5CB9A2859@iki.fi>

On 6.1.2012, at 22.19, The Doctor wrote:
>> doveadm-dsync.c:17:27: doveadm-dsync.h: No such file or directory
>> doveadm-dsync.c:386: warning: no previous prototype for `doveadm_dsync_main'
>> *** Error code 1
>> Looks like rc3 is needed.
>
> Just noted your rc3 notice.
>
> Can you get an rc4 going where the above 2 mentions are fixed?

rc3 fixes these.

From dmiller at amfes.com Fri Jan 6 22:32:54 2012
From: dmiller at amfes.com (Daniel L. Miller)
Date: Fri, 06 Jan 2012 12:32:54 -0800
Subject: [Dovecot] Possible mdbox corruption
In-Reply-To: <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi>
References: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi>
Message-ID: 

On 1/6/2012 9:36 AM, Timo Sirainen wrote:
> On 6.1.2012, at 19.30, Daniel L. Miller wrote:
>
>> Jan 6 09:22:42 bubba dovecot: indexer-worker(user1 at domain.com): Error: fts_solr: Indexing failed: 400 Illegal character ((CTRL-CHAR, code 18)) at [row,col {unknown-source}]: [482765,16]
>> Jan 6 09:22:42 bubba dovecot: indexer-worker: Error:
>>
>> Google seems to indicate that Solr cannot handle "invalid" characters - and that it is the responsibility of the calling program to strip them out. A quick search shows me both an individual character comparison in Java and a regex used for the purpose. Is there any "illegal character protection" in the Dovecot Solr plugin?
> Yes, there is. So I'm not really sure what it's complaining about.
> Are you using the "solr" or "solr_old" backend?

"Solr".

plugin {
  fts = solr
  fts_solr = url=http://localhost:8983/solr/
}
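If it helps narrow it down, I can trigger what looks like the same 400 by posting a bare control character straight to Solr's XML update handler ('\022' is octal for the CTRL-CHAR code 18 from the log; the field names here are just from my schema, adjust to taste):

$ printf '<add><doc><field name="id">ctrl-char-test</field><field name="body">bad \022 byte</field></doc></add>' \
    | curl -sS -H 'Content-Type: text/xml' --data-binary @- 'http://localhost:8983/solr/update'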
Are you using the "solr" or "solr_old" backend? > > "Solr". plugin { fts = solr fts_solr = url=http://localhost:8983/solr/ } -- Daniel From david at paperclipsystems.com Fri Jan 6 22:44:51 2012 From: david at paperclipsystems.com (David Egbert) Date: Fri, 06 Jan 2012 13:44:51 -0700 Subject: [Dovecot] failed: Too many levels of symbolic links Message-ID: <4F075D43.8090706@paperclipsystems.com> All, My dovecot install works great except for one error I keep seeing this in my logs. The folder has 7138 messages in it. I am informed the user they needed to reduce the number of messages in the folder and believe this will fix the problem. My question is about where the problem lies. Is the problem related to an internal limit with Dovecot v2.0.15 or with my Debian (3.1.0-1-amd64)? Thanks --- dovecot: imap(XXXXX at XXXXX.com): Error: readdir(/XXXX/XXXX/XXXXXXXXX/XXXXX/XXXXXXX/XXXXXXXXXXXXXXXXXXX/XXX) failed: Too many levels of symbolic links David Egbert Paperclip Systems, LLC --- This message, its contents, and attachments are confidential and are only authorized for the intended recipient. Disclosure, re-distribution, or use of said information is strictly prohibited, and may be excluded from disclosure by applicable law. If you are not the intended recipient, or their intermediary, please notify the sender and delete this message. From tss at iki.fi Fri Jan 6 23:16:33 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 23:16:33 +0200 Subject: [Dovecot] failed: Too many levels of symbolic links In-Reply-To: <4F075D43.8090706@paperclipsystems.com> References: <4F075D43.8090706@paperclipsystems.com> Message-ID: <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> On 6.1.2012, at 22.44, David Egbert wrote: > dovecot: imap(XXXXX at XXXXX.com): Error: readdir(/XXXX/XXXX/XXXXXXXXX/XXXXX/XXXXXXX/XXXXXXXXXXXXXXXXXXX/XXX) failed: Too many levels of symbolic links You have a symlink loop. Either a symlink that points to itself or one of the parent directories. From e-frog at gmx.de Fri Jan 6 23:25:49 2012 From: e-frog at gmx.de (e-frog) Date: Fri, 06 Jan 2012 22:25:49 +0100 Subject: [Dovecot] 2.1.rc1 (056934abd2ef): virtual plugin mailbox search pattern In-Reply-To: <4EF4BB6C.3050902@gmx.de> References: <4EF4BB6C.3050902@gmx.de> Message-ID: <4F0766DD.1060805@gmx.de> ON 23.12.2011 18:33, wrote e-frog: > Hello Timo, > > With dovecot 2.1.rc1 (056934abd2ef) there seems to be something wrong > with virtual plugin mailbox search patterns. > > I'm using a virtual mailbox 'unread' with the following dovecot-virtual > file > > $ cat dovecot-virtual > * > unseen > > For testing propose I created the following folders with each containing > one unread message > > INBOX, INBOX/level1 and INBOX/level1/level2 > > 2.1.rc1 (056934abd2ef) > > 1 LIST "" "*" > * LIST (\HasChildren) "/" "INBOX" > * LIST (\HasChildren) "/" "INBOX/level1" > * LIST (\HasNoChildren) "/" "INBOX/level1/level2" > * LIST (\HasChildren) "/" "virtual" > * LIST (\HasNoChildren) "/" "virtual/unread" > 1 OK List completed. > 2 STATUS "INBOX" (UNSEEN) > * STATUS "INBOX" (UNSEEN 1) > 2 OK Status completed. > 3 STATUS "INBOX/level1" (UNSEEN) > * STATUS "INBOX/level1" (UNSEEN 1) > 3 OK Status completed. > 4 STATUS "INBOX/level1/level2" (UNSEEN) > * STATUS "INBOX/level1/level2" (UNSEEN 1) > 4 OK Status completed. > 5 STATUS "virtual/unread" (UNSEEN) > * STATUS "virtual/unread" (UNSEEN 1) > 5 OK Status completed. > > Result: virtual/unread shows only 1 unseen message. Further tests showed > it's the one from INBOX. 
> The mails from the deeper levels are not found.
>
> Downgrading to 2.0.16 restores the correct behavior:
>
> 1 LIST "" "*"
> * LIST (\HasChildren) "/" "INBOX"
> * LIST (\HasChildren) "/" "INBOX/level1"
> * LIST (\HasNoChildren) "/" "INBOX/level1/level2"
> * LIST (\HasChildren) "/" "virtual"
> * LIST (\HasNoChildren) "/" "virtual/unread"
> 1 OK List completed.
> 2 STATUS "INBOX" (UNSEEN)
> * STATUS "INBOX" (UNSEEN 1)
> 2 OK Status completed.
> 3 STATUS "INBOX/level1" (UNSEEN)
> * STATUS "INBOX/level1" (UNSEEN 1)
> 3 OK Status completed.
> 4 STATUS "INBOX/level1/level2" (UNSEEN)
> * STATUS "INBOX/level1/level2" (UNSEEN 1)
> 4 OK Status completed.
> 5 STATUS "virtual/unread" (UNSEEN)
> * STATUS "virtual/unread" (UNSEEN 3)
> 5 OK Status completed.
>
> Result: virtual/unread shows 3 unseen messages as it should
>
> The namespace configuration is as follows
>
> namespace {
>   hidden = no
>   inbox = yes
>   list = yes
>   location =
>   prefix =
>   separator = /
>   subscriptions = yes
>   type = private
> }
> namespace {
>   location = virtual:~/virtual
>   prefix = virtual/
>   separator = /
>   subscriptions = no
>   type = private
> }
>
> I've also tried this with location = virtual:~/virtual:LAYOUT=maildir++
> leading to the same result.
>
> Thanks,
> e-frog

Just tested this on 2.1.rc3 and it still doesn't work like in v2.0. It seems like the search stops at the first hierarchy separator. Is there anything more I can do to help fix this issue?

Thanks,
e-frog

From david at paperclipsystems.com Fri Jan 6 23:41:04 2012
From: david at paperclipsystems.com (David Egbert)
Date: Fri, 06 Jan 2012 14:41:04 -0700
Subject: [Dovecot] failed: Too many levels of symbolic links
In-Reply-To: <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi>
References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi>
Message-ID: <4F076A70.3090905@paperclipsystems.com>

On 1/6/2012 2:16 PM, Timo Sirainen wrote:
> On 6.1.2012, at 22.44, David Egbert wrote:
>
>> dovecot: imap(XXXXX at XXXXX.com): Error: readdir(/XXXX/XXXX/XXXXXXXXX/XXXXX/XXXXXXX/XXXXXXXXXXXXXXXXXXX/XXX) failed: Too many levels of symbolic links
> You have a symlink loop. Either a symlink that points to itself, or to one of the parent directories.
>
I thought that might have been the case, but I checked and there are no symlinks in that directory, or any of the directories above it in the path. All of the directories and files were created by Dovecot.
I didn't notice this in the logs until recently. The files are stored on an NFS RAID if that makes any difference.

--- David Egbert

From tss at iki.fi Fri Jan 6 23:51:41 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 6 Jan 2012 23:51:41 +0200
Subject: [Dovecot] failed: Too many levels of symbolic links
In-Reply-To: <4F076A70.3090905@paperclipsystems.com>
References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> <4F076A70.3090905@paperclipsystems.com>
Message-ID: 

On 6.1.2012, at 23.41, David Egbert wrote:
> On 1/6/2012 2:16 PM, Timo Sirainen wrote:
>> On 6.1.2012, at 22.44, David Egbert wrote:
>>
>>> dovecot: imap(XXXXX at XXXXX.com): Error: readdir(/XXXX/XXXX/XXXXXXXXX/XXXXX/XXXXXXX/XXXXXXXXXXXXXXXXXXX/XXX) failed: Too many levels of symbolic links
>> You have a symlink loop. Either a symlink that points to itself, or to one of the parent directories.
>>
> I thought that might have been the case, but I checked and there are no symlinks in that directory, or any of the directories above it in the path. All of the directories and files were created by Dovecot. I didn't notice this in the logs until recently. The files are stored on an NFS RAID if that makes any difference.

Well, then.. You have a bit too many Xes in there for me to guess which readdir() is the one failing. I guess it's /new or /cur for a Maildir?

Anyway, readdir() is failing with ELOOP. Does it always fail with "Too many levels of symbolic links" or is it sometimes different? This sounds like a bug in the Linux NFS client code. Can you reproduce this every time with this one user's Maildir? Can you do "ls" in the directory?

From david at paperclipsystems.com Sat Jan 7 00:10:32 2012
From: david at paperclipsystems.com (David Egbert)
Date: Fri, 06 Jan 2012 15:10:32 -0700
Subject: [Dovecot] failed: Too many levels of symbolic links
In-Reply-To: 
References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> <4F076A70.3090905@paperclipsystems.com>
Message-ID: <4F077158.4000500@paperclipsystems.com>

On 1/6/2012 2:51 PM, Timo Sirainen wrote:
> Well, then.. You have a bit too many Xes in there for me to guess which readdir() is the one failing. I guess it's /new or /cur for a Maildir?
>
> Anyway, readdir() is failing with ELOOP. Does it always fail with "Too many levels of symbolic links" or is it sometimes different? This sounds like a bug in the Linux NFS client code. Can you reproduce this every time with this one user's Maildir? Can you do "ls" in the directory?
>
Sorry about the X's... it is a client directory. We support many domains and their privacy is paramount. You are correct: it is in the /cur directory. I can "ls" all of the directories without problems. This user has 10+GB in his mailbox spread across 352 subscribed folders.
As for the logs: it is always the same directory, always the same error.

David Egbert

From tss at iki.fi Sat Jan 7 00:30:37 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 7 Jan 2012 00:30:37 +0200
Subject: [Dovecot] failed: Too many levels of symbolic links
In-Reply-To: <4F077158.4000500@paperclipsystems.com>
References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> <4F076A70.3090905@paperclipsystems.com> <4F077158.4000500@paperclipsystems.com>
Message-ID: <4A0E9695-E78A-487F-AE53-888D27981EF1@iki.fi>

On 7.1.2012, at 0.10, David Egbert wrote:
>> Anyway, readdir() is failing with ELOOP. Does it always fail with "Too many levels of symbolic links" or is it sometimes different? This sounds like a bug in the Linux NFS client code. Can you reproduce this every time with this one user's Maildir? Can you do "ls" in the directory?
>>
> Sorry about the X's... it is a client directory. We support many domains and their privacy is paramount. You are correct: it is in the /cur directory. I can "ls" all of the directories without problems. This user has 10+GB in his mailbox spread across 352 subscribed folders. As for the logs: it is always the same directory, always the same error.

Try the attached test program. Run it as: ./readdir /path/to/Maildir/cur

Does it also give a non-zero error?
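If compiling it is a bother, the same check works as a quick one-liner. This isn't the attached program, just an equivalent readdir() loop that reports the errno:

$ perl -e 'opendir(my $d, shift) or die "opendir: $!\n";
    $! = 0; 1 while defined readdir($d);
    die "readdir: $!\n" if $!;
    print "readdir: ok\n"' /path/to/Maildir/cur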
Another related question is "pass" option in master passdb, if I set it to "yes", the authentication fails: Jan 7 11:26:00 gold dovecot: auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011secured#011lip=127.0.1.1#011rip=127.0.0.1#011lport=143#011rport=51771 Jan 7 11:26:00 gold dovecot: auth: Debug: client out: CONT#0111#011PDk4NjcwMDY1MTU3NzI3MjguMTMyNTkwNjc2MEBnb2xkPg== Jan 7 11:26:00 gold dovecot: auth: Debug: client in: CONT#0111#011ZGlla2VuKndlYm1haWwgYmNkMzFiMWE1YjQ1OWQ0OGRkZWQ4ZmIzZDhmMjVhZTc= Jan 7 11:26:00 gold dovecot: auth: Debug: auth(webmail,127.0.0.1,master): Master user lookup for login: dieken Jan 7 11:26:00 gold dovecot: auth: Debug: passwd-file(webmail,127.0.0.1,master): lookup: user=webmail file=/etc/dovecot/master-users Jan 7 11:26:00 gold dovecot: auth: passdb(webmail,127.0.0.1,master): Master user logging in as dieken Jan 7 11:26:00 gold dovecot: auth: Error: passdb(dieken,127.0.0.1): No passdbs support skipping password verification - pass=yes can't be used in master passdb Jan 7 11:26:00 gold dovecot: auth: Debug: password(dieken,127.0.0.1): passdb doesn't support credential lookups My normal passdb is a PAM passdb, it doesn't support credential lookups, that's reasonable, but I feel the comment for "pass" option is confusing: $ less /etc/dovecot/conf.d/auth-master.conf.ext .... # Example master user passdb using passwd-file. You can use any passdb though. passdb { driver = passwd-file master = yes args = /etc/dovecot/master-users # Unless you're using PAM, you probably still want the destination user to # be looked up from passdb that it really exists. pass=yes does that. pass = yes } According the comment, it's to check whether the real user exists, why not to check userdb but another passdb? Even it must check against passdb, in this case, it's obvious not necessary to lookup credentials, it's enough to to lookup user name only. Regards, Yubao Liu -------------- next part -------------- A non-text attachment was scrubbed... Name: schemeA-count-master-passdb-as-passdb-too.patch Type: text/x-patch Size: 1357 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: schemeB-also-check-against-master-passdbs.patch Type: text/x-patch Size: 1187 bytes Desc: not available URL: From phil at kernick.org Sat Jan 7 02:21:53 2012 From: phil at kernick.org (Phil Kernick) Date: Sat, 07 Jan 2012 10:51:53 +1030 Subject: [Dovecot] Attribute Cache flush errors on FreeBSD 8.2 Message-ID: <4F079021.4090001@kernick.org> I'm running dovecot 2.0.16 on FreeBSD 8.2 with the mail spool and indexes on an NFS server. Lines like the following keep appearing in syslog for access to each mailbox: Error: nfs_flush_attr_cache_fd_locked: fchown(/home/philk/Mail/Deleted) failed: Bad file descriptor This is coming from nfs-workarounds.c line 210, which tracing back seems to be coming from the call to mbox_lock on lib-storage/index/mbox/mbox-lock.c line 774. I have /home mounted with options acregmin=0,acregmax=0,acdirmin=0,acdirmax=0 (as FreeBSD doesn't have a noac option), but it throws the same error either way. The output of dovecot -n is below. Phil. 
# 2.0.16: /usr/local/etc/dovecot/dovecot.conf
# OS: FreeBSD 8.2-RELEASE-p3 i386
auth_mechanisms = plain login
auth_username_format = %Lu
disable_plaintext_auth = no
first_valid_gid = 1000
first_valid_uid = 1000
listen = *, [::]
mail_fsync = always
mail_location = mbox:~/Mail/:INBOX=/var/mail/%u
mail_nfs_index = yes
mail_nfs_storage = yes
mail_privileged_group = mail
mmap_disable = yes
passdb {
  args = session=yes dovecot
  driver = pam
}
protocols = imap pop3
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0660
    user = postfix
  }
  user = root
}
ssl_cert = 

From sven at svenhartge.de (Sven Hartge)
Subject: [Dovecot] Providing shared folders with multiple backend servers
Message-ID: <68fd4hi9kbv8@mids.svenhartge.de>

Hi *,

I am currently in the planning stage for a "new and improved" mail system at my university.

Right now, everything is on one big backend server but this is causing me increasing amounts of pain, beginning with the time a full backup takes.

So naturally, I want to split this big server into smaller ones.

To keep things simple, I want to pin a user to a server so I can avoid things like NFS or cluster-aware filesystems. The mapping for each account is then inserted into the LDAP object for each user, and the frontend proxy (perdition at the moment) then uses this information to route each access to the correct backend storage server running Dovecot.

So far this has been working nicely with my test setup.

But: I also have to provide shared folders for users. Thankfully users don't have the right to share their own folders, which makes things easier (I hope).

Right now, the setup works like this, using Courier:

- complete virtual mail setup
- global shared folders configured in /etc/courier/shared/index
- inside /home/shared-folder-name/Maildir/courierimapacl specific users
  get access to a folder
- each folder a user has access to is mapped to the namespace #shared,
  like #shared.shared-folder-name

Now, if I split my backend storage server into multiple ones and user-A is on server-1 and user-B is on server-2, but both need to access the same shared folder, I have a problem.

I could of course move all users needing access to a shared folder to the same server, but in the end this will be a nightmare for me, because I foresee having to move users around on a daily basis.

Right now, I am pondering using an additional server with just the shared folders on it and using NFS (or a cluster FS) to mount the shared folder filesystem on each backend storage server, so each user has potential access to a shared folder's data.

Ideas? Suggestions? Nudges in the right direction?
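On the Dovecot side I imagine the shared-folder server would be mounted on every backend and mapped in with a public namespace, roughly like this (untested; the location and prefix are just guesses on my part):

namespace {
  type = public
  prefix = shared/
  separator = /
  location = maildir:/srv/shared/Maildir
  subscriptions = no
}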
You also didn't mention how you're doing this full backup (tar, IMAP; D2D or tape), where the backup bottleneck is, what mailbox storage format you're using, total mailbox count and filesystem space occupied. What is your disk storage configuration? Direct attach? Hardware or software RAID? What RAID level? How many disks? SAS or SATA? It's highly likely your problems can be solved without the drastic architecture change, and new problems it will introduce, that you describe below. > So naturally, I want to split this big server into smaller ones. Naturally? Many OPs spend significant x/y/z resources trying to avoid the "shared nothing" storage backend setup below. > To keep things simple, I want to pin a user to a server so I can avoid > things like NFS or cluster aware filesystems. The mapping for each > account is then inserted into the LDAP object for each user and the > frontend proxy (perdition at the moment) then uses this information to > route each access to the correct backend storage server running dovecot. Splitting the IMAP workload like this isn't keeping things simple, but increases complexity, on many levels. And there's nothing wrong with NFS and cluster filesystems if they are used correctly. > So far this has been working nice with my test setup. > > But: I also have to provide shared folders for users. Thankfully users > don't have the right to share their own folders, which makes things > easier (I hope). > > Right now, the setup works like this, using Courier: > > - complete virtual mail setup > - global shared folders configured in /etc/courier/shared/index > - inside /home/shared-folder-name/Maildir/courierimapacl specific user > get access to a folder > - each folder a user has access is mapped to the namespace #shared > like #shared.shared-folder-name > > Now, if I split my backend storage server into multiple ones and user-A > is on server-1 and user-B is on server-2, but both need to access the > same shared folder, I have a problem. Yes, you do. > I could of course move all users needing access to a shared folder to > the same server, but in the end, this will be a nightmare for me, > because I forsee having to move users around on a daily basis. See my comments above. > Right now, I am pondering with using an additional server with just the > shared folders on it and using NFS (or a cluster FS) to mount the shared > folder filesystem to each backend storage server, so each user has > potential access to a shared folders data. So you're going to implement a special case of what you're desperately trying to avoid? This makes no sense. > Ideas? Suggestions? Nudges in the right direction? Yes. We need more real information. Please provide: 1. Mailbox count, total maildir file count and size 2. Average/peak concurrent user connections 3. CPU type/speed/total core count, total RAM, free RAM (incl buffers) 4. Storage configuration--total spindles, RAID level, hard or soft RAID 5. Filesystem type 6. Backup software/method 7. Operating system Instead of telling us what you think the solution to your unidentified bottleneck is and then asking "yeah or nay", tell us what the problem is and allow us to recommend solutions. This way you'll get some education and multiple solutions that may very well be a better fit, will perform better, and possibly cost less in capital outlay and administration time/effort. 
-- 
Stan

From sven at svenhartge.de Sun Jan 8 03:55:28 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Sun, 8 Jan 2012 02:55:28 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com>
Message-ID: <78fdevu9kbv8@mids.svenhartge.de>

Stan Hoeppner wrote:

> It's highly likely your problems can be solved without the drastic
> architecture change, and new problems it will introduce, that you
> describe below.

The main reason is I need to replace the hardware, as its service contract ends this year and I am not able to extend it further.

The box so far is fine; there are normally no problems during normal operations with speed or responsiveness towards the end-user.

Sometimes, higher peak loads tend to strain the system a bit, and this is starting to occur more often.

First thought was to move this setup into our VMware cluster (yeah, I know, spare me the screams), since the hardware used there is way more powerful than the hardware used now and I wouldn't have to buy new servers for my mail system (which is kind of painful to do in a university environment, especially in Germany, if you want to invest money above a certain threshold).

But then I thought about the problems with VMs this size and got to the idea of the distributed setup, splitting the one server into 4 or 6 backend servers.

As I said: "idea". Other ideas making my life easier are more than welcome.

>> Ideas? Suggestions? Nudges in the right direction?

> Yes. We need more real information. Please provide:

> 1. Mailbox count, total maildir file count and size

about 10,000 Maildir++ boxes

900GB of mail for 1300GB used; "df -i" says 11 million inodes used

I know, this is very _tiny_ compared to the systems ISPs are using.

> 2. Average/peak concurrent user connections

IMAP: Average 800 concurrent user connections, peaking at about 1400.
POP3: Average 300 concurrent user connections, peaking at about 600.

> 3. CPU type/speed/total core count, total RAM, free RAM (incl buffers)

Currently dual-core AMD Opteron 2210, 1.8GHz.

Right now, in the middle of the night (2:30 AM here) on a Sunday, thus a low point in the usage pattern:

             total       used       free     shared    buffers     cached
Mem:      12335820    9720252    2615568          0      53112     680424
-/+ buffers/cache:    8986716    3349104
Swap:      5855676      10916    5844760

The system reaches its 7th year this summer, which is the end of its service contract.

> 4. Storage configuration--total spindles, RAID level, hard or soft RAID

RAID 6 with 12 SATA 1.5Gb/s disks, external 4Gbit FC

Back in 2005, a SAS enclosure was way too expensive for us to afford.

> 5. Filesystem type

XFS in a LVM to allow snapshots for backup

I of course aligned the partitions on the RAID correctly and of course created a filesystem with the correct parameters wrt. spindles, chunk size, etc.

> 6. Backup software/method

Full backup with Bacula, taking about 24 hours right now. Because of this, I switched to virtual full backups, only ever doing incremental and differential backups off of the real system and creating synthetic full backups inside Bacula. Works fine though, incremental taking 2 hours, differential about 4 hours.

The main problem for the backup time is Maildir++. During a test, I copied the mail storage to a spare box, converted it to mdbox (50MB file size) and the backup was lightning fast compared to the Maildir++ format.
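The conversion on the spare box was basically just dsync per user -- a sketch, where the 50MB rotation comes from mdbox_rotate_size and the user list is left as an exercise:

# dovecot.conf on the spare box:
#   mail_location = mdbox:~/mdbox
#   mdbox_rotate_size = 50M

for u in $(cat userlist.txt); do
    dsync -u "$u" mirror maildir:~/Maildir
done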
Additionally, compressing the mails inside the mdbox and not having Bacula compress them for me reduces the backup time further (and speeds up access through IMAP and POP3).

So this is the way to go, I think, regardless of which way I implement the backend mail server.
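For the record, the compression bits are just the zlib plugin on the test box (a sketch; the compression level is picked arbitrarily):

mail_plugins = $mail_plugins zlib

plugin {
  zlib_save = gz
  zlib_save_level = 6
}

Newly saved mails are then written gzipped, so Bacula reads far fewer bytes.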
> 7. Operating system

Debian Linux Lenny, currently with kernel 2.6.39

> Instead of telling us what you think the solution to your unidentified
> bottleneck is and then asking "yea or nay", tell us what the problem is
> and allow us to recommend solutions.

I am not asking for "yay or nay", I just pointed out my idea, but I am open to other suggestions.

If the general idea is to buy a new big single storage system, I am more than happy to do just this, because this will prevent any problems I might have with a distributed one before they even can occur.

Maybe two HP DL180s (one for production and one as a test/standby system) with a SAS-attached enclosure for storage?

Keeping in mind the new system has to work for some time (again 5 to 7 years), I have to be able to extend the storage space without too much hassle.

Grüße,
Sven.

-- 
Sigmentation fault. Core dumped.

From yubao.liu at gmail.com Sun Jan 8 04:56:33 2012
From: yubao.liu at gmail.com (Yubao Liu)
Date: Sun, 08 Jan 2012 10:56:33 +0800
Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs
In-Reply-To: <4F07BDBB.3060204@gmail.com>
References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> <4F07332B.70708@gmail.com> <4F07BDBB.3060204@gmail.com>
Message-ID: <4F0905E1.9090603@gmail.com>

Hi Timo,

Did you review the patches in my previous email? I tested the two patches against my configuration (pasted in this thread too); they both work well. I prefer the first patch, but I'm not sure whether it breaks something else.

Regards,
Yubao Liu

On 01/07/2012 11:36 AM, Yubao Liu wrote:
> On 01/07/2012 01:51 AM, Timo Sirainen wrote:
>> On 6.1.2012, at 19.45, Yubao Liu wrote:
>>> On 01/07/2012 12:44 AM, Timo Sirainen wrote:
>>>> On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote:
>>>>> I don't know why this function doesn't check auth->masterdbs, if I
>>>>> insert these lines after line 128, that error goes away, and dovecot's
>>>>> imap-login process happily does DIGEST-MD5 authentication [1].
>>>>> In my configuration, "masterdbs" contains "passdb passwd-file",
>>>>> "passdbs" contains " passdb pam".
>>>> So .. you want DIGEST-MD5 authentication for the master users, but not
>>>> for anyone else? I hadn't really thought anyone would want that..
>>> Is there any special reason that the master passdb isn't taken into
>>> account in src/auth/auth.c:auth_passdb_list_have_lookup_credentials()?
>>> I feel a master passdb is also a kind of passdb.
>> I guess it could be changed. It wasn't done intentionally that way.
>>
> I guess this change broke the old way:
> http://hg.dovecot.org/dovecot-2.0/rev/b05793c609ac
>
> In the old version, "auth->passdbs" contained all passdbs; this revision
> changed "auth->passdbs" to only contain non-master passdbs.
>
> I'm not sure which fix is better, or even whether my proposal is correct
> or complete:
>  a) in src/auth/auth.c:auth_passdb_preinit(), insert master passdbs into
>     auth->passdbs too, and remove the duplicate code for masterdbs
>     in auth_init() and auth_deinit().
>  b) add similar code for masterdbs in
>     auth_passdb_list_have_verify_plain(),
>     auth_passdb_list_have_lookup_credentials(),
>     auth_passdb_list_have_set_credentials().
>
>>> This is exactly my use case: I use Kerberos for system users, and I'm
>>> curious why the master passdb isn't used to check the
>>> "have_lookup_credentials" capability.
>>> http://wiki2.dovecot.org/Authentication/MultipleDatabases
>>>> Currently the fallback works only with the PLAIN authentication mechanism.
>>> I hope this limitation can be relaxed.
>> It might already be .. I don't remember. In any case you have only a PAM
>> passdb, so it shouldn't matter. GSSAPI isn't a passdb.
>
> If the fix above is added, then I can use CRAM-MD5 with the master
> passwd-file passdb and the normal PAM passdb; otherwise the imap-login
> process can't start up due to the check in auth_mech_list_verify_passdb().
>
> Attached are two patches against the dovecot-2.0 branch for the two
> schemes; the first is cleaner but may affect other logic in other
> source files.
>
> Another related question is the "pass" option in the master passdb: if
> I set it to "yes", authentication fails:
>
> Jan 7 11:26:00 gold dovecot: auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011secured#011lip=127.0.1.1#011rip=127.0.0.1#011lport=143#011rport=51771
> Jan 7 11:26:00 gold dovecot: auth: Debug: client out: CONT#0111#011PDk4NjcwMDY1MTU3NzI3MjguMTMyNTkwNjc2MEBnb2xkPg==
> Jan 7 11:26:00 gold dovecot: auth: Debug: client in: CONT#0111#011ZGlla2VuKndlYm1haWwgYmNkMzFiMWE1YjQ1OWQ0OGRkZWQ4ZmIzZDhmMjVhZTc=
> Jan 7 11:26:00 gold dovecot: auth: Debug: auth(webmail,127.0.0.1,master): Master user lookup for login: dieken
> Jan 7 11:26:00 gold dovecot: auth: Debug: passwd-file(webmail,127.0.0.1,master): lookup: user=webmail file=/etc/dovecot/master-users
> Jan 7 11:26:00 gold dovecot: auth: passdb(webmail,127.0.0.1,master): Master user logging in as dieken
> Jan 7 11:26:00 gold dovecot: auth: Error: passdb(dieken,127.0.0.1): No passdbs support skipping password verification - pass=yes can't be used in master passdb
> Jan 7 11:26:00 gold dovecot: auth: Debug: password(dieken,127.0.0.1): passdb doesn't support credential lookups
>
> My normal passdb is a PAM passdb; it doesn't support credential lookups,
> and that's reasonable, but I feel the comment for the "pass" option is
> confusing:
>
> $ less /etc/dovecot/conf.d/auth-master.conf.ext
> ....
> # Example master user passdb using passwd-file. You can use any passdb though.
> passdb {
>   driver = passwd-file
>   master = yes
>   args = /etc/dovecot/master-users
>
>   # Unless you're using PAM, you probably still want the destination user to
>   # be looked up from passdb that it really exists. pass=yes does that.
>   pass = yes
> }
>
> According to the comment, it's there to check whether the real user
> exists -- but why check another passdb instead of the userdb? Even if it
> must check against a passdb, in this case it's obviously not necessary
> to look up credentials; it's enough to look up the user name only.
>
> Regards,
> Yubao Liu
>

From stan at hardwarefreak.com Sun Jan 8 15:09:00 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Sun, 08 Jan 2012 07:09:00 -0600
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <78fdevu9kbv8@mids.svenhartge.de>
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de>
Message-ID: <4F09956C.1030109@hardwarefreak.com>

On 1/7/2012 7:55 PM, Sven Hartge wrote:
> Stan Hoeppner wrote:
>
>> It's highly likely your problems can be solved without the drastic
>> architecture change, and new problems it will introduce, that you
>> describe below.
>
> The main reason is I need to replace the hardware, as its service
> contract ends this year and I am not able to extend it further.
>
> The box so far is fine; there are normally no problems during normal
> operations with speed or responsiveness towards the end-user.
> > The main reason is I need to replace the hardware as its service > contract ends this year and I am not able to extend it further. > > The box so far is fine, there are normally no problems during normal > operations with speed or responsiveness towards the end-user. > > Sometimes, higher peak loads tend to strain the system a bit and this is > starting to occur more often. ... > First thought was to move this setup into our VMware cluster (yeah, I > know, spare me the screams), since the hardware used there is way more > powerfull than the hardware used now and I wouldn't have to buy new > servers for my mail system (which is kind of painful to do in an > universitary environment, especially in Germany, if you want to invest > an amount of money above a certain amount). What's wrong with moving it onto VMware? This actually seems like a smart move given your description of the node hardware. It also gives you much greater backup flexibility with VCB (or whatever they call it today). You can snapshot the LUN over the SAN during off peak hours to a backup server and do the actual backup to the library at your leisure. Forgive me if the software names have changed as I've not used VMware since ESX3 back in 07. > But then I thought about the problems with VMs this size and got to the > idea with the distributed setup, splitting the one server into 4 or 6 > backend servers. Not sure what you mean by "VMs this size". Do you mean memory requirements or filesystem size? If the nodes have enough RAM that's no issue. And surely you're not thinking of using a .vmdk for the mailbox storage. You'd use an RDM SAN LUN. In fact you should be able to map in the existing XFS storage LUN and use it as is. Assuming it's not going into retirement as well. If an individual VMware node don't have sufficient RAM you could build a VM based Dovecot cluster, run these two VMs on separate nodes, and thin out the other VMs allowed to run on these nodes. Since you can't directly share XFS, build a tiny Debian NFS server VM and map the XFS LUN to it, export the filesystem to the two Dovecot VMs. You could install the Dovecot director on this NFS server VM as well. Converting from maildir to mdbox should help eliminate the NFS locking problems. I would do the conversion before migrating to this VM setup with NFS. Also, run the NFS server VM on the same physical node as one of the Dovecot servers. The NFS traffic will be a memory-memory copy instead of going over the GbE wire, decreasing IO latency and increasing performance for that Dovecot server. If it's possible to have Dovecot director or your fav load balancer weight more connections to one Deovecot node, funnel 10-15% more connections to this one. (I'm no director guru, in fact haven't use it yet). Assuming the CPUs in the VMware cluster nodes are clocked a decent amount higher than 1.8GHz I wouldn't monkey with configuring virtual smp for these two VMs, as they'll be IO bound not CPU bound. > As I said: "idea". Other ideas making my life easier are more than > welcome. I hope my suggestions contribute to doing so. :) >>> Ideas? Suggestions? Nudges in the right direction? > >> Yes. We need more real information. Please provide: > >> 1. Mailbox count, total maildir file count and size > > about 10,000 Maildir++ boxes > > 900GB for 1300GB used, "df -i" says 11 million inodes used Converting to mdbox will take a large burden off your storage, as you've seen. 
I'd recommend running xfs_fsr when the frag factor exceeds ~20-30%. The XFS developers recommend against running xfs_fsr too often as it can actually increase free space fragmentation while it decreases file fragmentation, especially on filesystems that are relatively full. Having heavily fragmented free space is worse than having fragmented files, as newly created files will automatically be fragged.

> I know, this is very _tiny_ compared to the systems ISPs are using.

Not everyone is an ISP, including me. :)

>> 2. Average/peak concurrent user connections
>
> IMAP: Average 800 concurrent user connections, peaking at about 1400.
> POP3: Average 300 concurrent user connections, peaking at about 600.

>> 3. CPU type/speed/total core count, total RAM, free RAM (incl buffers)
>
> Currently dual-core AMD Opteron 2210, 1.8GHz.

Heheh, yeah, a bit long in the tooth, but not horribly underpowered for 1100 concurrent POP/IMAP users. Though this may be the reason for the sluggishness when you hit that 2000 concurrent user peak. Any chance you have some top output for the peak period?

> Right now, in the middle of the night (2:30 AM here) on a Sunday, thus a
> low point in the usage pattern:
>
>              total       used       free     shared    buffers     cached
> Mem:      12335820    9720252    2615568          0      53112     680424
> -/+ buffers/cache:    8986716    3349104
> Swap:      5855676      10916    5844760

Ugh... "-m" and "-g" options exist for a reason. :) So this box has 12GB RAM, currently ~2.5GB free during off-peak hours. It would be interesting to see free RAM and swap usage values during peak. That would tell us whether we're CPU or RAM starved. If both turned up clean then we'd need to look at iowait.

If you're not RAM starved then moving to VMware nodes with 16/24/32GB RAM should work fine, as long as you don't stack many other VMs on top. Enabling memory dedup may help a little.

> The system reaches its 7th year this summer, which is the end of its
> service contract.

Enjoy your retirement, old workhorse. :)

>> 4. Storage configuration--total spindles, RAID level, hard or soft RAID
>
> RAID 6 with 12 SATA 1.5Gb/s disks, external 4Gbit FC

I assume this means a LUN on a SAN array somewhere on the other end of that multi-mode cable, yes? Can you tell us what brand/model the box is?

> Back in 2005, a SAS enclosure was way too expensive for us to afford.

How one affords an FC SAN array but not a less expensive direct attach SAS enclosure is a mystery... :)

>> 5. Filesystem type
>
> XFS in a LVM to allow snapshots for backup

XFS is the only way to fly, IMNSHO.

> I of course aligned the partitions on the RAID correctly and of course
> created a filesystem with the correct parameters wrt. spindles, chunk
> size, etc.

Which is critical for mitigating the RMW penalty of parity RAID. Speaking of which, why RAID6 for maildir? Given that your array is 90% vacant, why didn't you go with RAID10 for 3-5 times the random write performance?
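The alignment math for a RAID10 rebuild of those 12 drives is simple, by the way. Six mirror pairs striped with a hypothetical 64KB strip per drive gives (device name is an example):

$ mkfs.xfs -d su=64k,sw=6 /dev/sdX

su = the hardware strip size, sw = the number of striped mirror pairs.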
>> 6. Backup software/method
>
> Full backup with Bacula, taking about 24 hours right now. Because of
> this, I switched to virtual full backups, only ever doing incremental
> and differential backups off of the real system and creating synthetic
> full backups inside Bacula. Works fine though, incremental taking 2
> hours, differential about 4 hours.

Move to VMware and use VCB. You'll fall in love.

> The main problem for the backup time is Maildir++. During a test, I
> copied the mail storage to a spare box, converted it to mdbox (50MB
> file size) and the backup was lightning fast compared to the Maildir++
> format.

Well of course. You were surprised by this? How long has it been since you used mbox? mbox backs up even faster than mdbox. Why? Larger files and fewer of them. Which means the disks can actually do streaming reads, and don't have to beat their heads to death jumping all over the platters to read maildir files, which are scattered all over the place when created. Which is why maildir is described as a "random" IO workload.

> Additionally, compressing the mails inside the mdbox and not having Bacula
> compress them for me reduces the backup time further (and speeds up
> access through IMAP and POP3).

Again, no surprise here. When files exist on disk already compressed it takes less IO bandwidth to read the file data for a given actual file size. So if you have say 10MB files that compress down to 5MB, you can read twice as many files when the pipe is saturated, twice as much file data.

> So this is the way to go, I think, regardless of which way I implement
> the backend mail server.

Which is why I asked my questions. :) mdbox would have been one of my recommendations, but you already discovered it.

>> 7. Operating system
>
> Debian Linux Lenny, currently with kernel 2.6.39

:) Debian, XFS, Dovecot, FC SAN storage--I like your style. Lenny with 2.6.39? Is that a backport or a rolled kernel? Not Squeeze? Interesting. I'm running Squeeze with rolled vanilla 2.6.38.6. It's been about 6 months, so it's 'bout time I roll a new one. :)

>> Instead of telling us what you think the solution to your unidentified
>> bottleneck is and then asking "yea or nay", tell us what the problem is
>> and allow us to recommend solutions.
>
> I am not asking for "yay or nay", I just pointed out my idea, but I am
> open to other suggestions.

I think you've already discovered the best suggestions on your own.

> If the general idea is to buy a new big single storage system, I am more
> than happy to do just this, because this will prevent any problems I might
> have with a distributed one before they even can occur.

One box is definitely easier to administer and troubleshoot. Though I must say that even though it's more complex, I think the VM architecture I described is worth a serious look. If your current 12x1.5TB SAN array is being retired as well, you could piggyback onto the array(s) feeding the VMware farm, or expand them if necessary/possible. Adding drives is usually much cheaper than buying a new populated array chassis.

Given your service contract comments it's unlikely you're the type to build your own servers. Being a hardwarefreak, I nearly always build my servers and storage from scratch. This may be worth a look merely for educational purposes. I just happened to have finished spec'ing out a new high-volume 20TB IMAP server recently which should handle 5000 concurrent users without breaking a sweat, for only ~$7500 USD:

Full parts list:
http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=17069985

Summary:
2GHz 8-core 12MB L3 cache Magny Cours Opteron
SuperMicro MBD-H8SGL-O w/32GB qualified quad channel reg ECC DDR3/1333
dual Intel 82574 GbE ports
LSI 512MB PCIe 2.0 x8 RAID, 24 port SAS expander, 20x1TB 7.2k WD RE4
20 bay SAS/SATA 6G hot swap Norco chassis

Create a RAID1 pair for /boot, the root filesystem, a swap partition of say 8GB, and a 2GB partition for an external XFS log; you should have ~900GB left for utilitarian purposes. Configure two spares. Configure the remaining 16 drives as RAID10 with a 64KB stripe size (8KB, 16 sector strip size), yielding 8TB raw for the XFS-backed mdbox mailstore. Enable the BBWC write cache (dang, forgot the battery module, +$175). This should yield approximately 8*150 = 1200 IOPS peak to/from disk, many thousands to BBWC, more than plenty for 5000 concurrent users given the IO behavior of most MUAs. Channel bond the NICs to the switch or round robin DNS the two IPs if pathing for redundancy.

What's that? You want to support 10K users? Simply drop in another 4 sticks of the 8GB Kingston Reg ECC RAM for 64GB total, and plug one of these into the external SFF8088 port on the LSI card:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816133047

populated with 18 of the 1TB RE4 drives. Configure 16 drives the same as the primary array, grow it into your existing XFS. Since you have two identical arrays comprising the filesystem, sunit/swidth values are still valid so you don't need to add mount options. Configure 2 drives as hot spares. The additional 16-drive RAID10 doubles our disk IOPS to ~2400, maintaining our concurrent user to IOPS ratio at ~4:1, and doubles our mail storage to ~16TB. This expansion hardware will run an additional ~$6200.

Grand total to support ~10K concurrent users (maybe more) with a quality DIY build is just over $14K USD, or ~$1.40 per mailbox. Not too bad for an 8-core, 64GB server with 32TB of hardware RAID10 mailbox storage and 38 total 1TB disks.

I haven't run the numbers for a comparable HP system, but an educated guess says it would be quite a bit more expensive, not the server so much, but the storage. HP's disk drive prices are outrageous, though not approaching anywhere near the level of larceny EMC commits with its drive sales. $2400 for a $300 Seagate drive wearing an EMC cape? Please....

> Maybe two HP DL180s (one for production and one as a test/standby system)
> with a SAS-attached enclosure for storage?

If you're hooked on 1U chassis (I hate em) go with the DL165 G7. If not I'd go 2U, the DL385 G7. Magny Cours gives you more bang for the buck in this class of machines. The performance is excellent, and, if everybody buys Intel, AMD goes bankrupt, and then Chipzilla charges whatever it desires. They've already been sanctioned, and fined by the FTC at least twice. They paid Intergraph $800 million in an antitrust settlement in 2000 after they forced them out of the hardware business. They recently paid AMD $1 billion in an antitrust settlement. They're just like Microsoft, putting competitors out of business by any and all means necessary, even if their conduct is illegal. Yes, I'd much rather give AMD my business, given they had superior CPUs to Intel for many years, and their current chips are still more than competitive. /end rant.
> So this is the way to go, I think, regardless of which way I implement
> the backend mail server.

Which is why I asked my questions. :) mdbox would have been one of my
recommendations, but you already discovered it.

>> 7. Operating system
>
> Debian Linux Lenny, currently with kernel 2.6.39

:) Debian, XFS, Dovecot, FC SAN storage--I like your style. Lenny with
2.6.39? Is that a backport or rolled kernel? Not Squeeze? Interesting.
I'm running Squeeze with rolled vanilla 2.6.38.6. It's been about 6
months so it's 'bout time I roll a new one. :)

>> Instead of telling us what you think the solution to your unidentified
>> bottleneck is and then asking "yeah or nay", tell us what the problem is
>> and allow us to recommend solutions.
>
> I am not asking for "yay or nay", I just pointed out my idea, but I am
> open to other suggestions.

I think you've already discovered the best suggestions on your own.

> If the general idea is to buy a new big single storage system, I am more
> than happy to do just this, because this will prevent any problems I might
> have with a distributed one before they even can occur.

One box is definitely easier to administer and troubleshoot. Though I
must say that even though it's more complex, I think the VM architecture
I described is worth a serious look. If your current 12x1.5TB SAN array
is being retired as well, you could piggy back onto the array(s) feeding
the VMware farm, or expand them if necessary/possible. Adding drives is
usually much cheaper than buying a new populated array chassis. Given
your service contract comments it's unlikely you're the type to build
your own servers. Being a hardwarefreak, I nearly always build my
servers and storage from scratch. This may be worth a look merely for
educational purposes.

I just happened to have finished spec'ing out a new high volume 20TB
IMAP server recently which should handle 5000 concurrent users without
breaking a sweat, for only ~$7500 USD:

Full parts list:
http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=17069985

Summary:
2GHz 8-core 12MB L3 cache Magny Cours Opteron
SuperMicro MBD-H8SGL-O w/32GB qualified quad channel reg ECC DDR3/1333
dual Intel 82574 GbE ports
LSI 512MB PCIe 2.0 x8 RAID, 24 port SAS expander, 20x1TB 7.2k WD RE4
20 bay SAS/SATA 6G hot swap Norco chassis

Create a RAID1 pair for /boot, the root filesystem, a swap partition of
say 8GB, and a 2GB partition for an external XFS log; that should leave
~900GB for utilitarian purposes. Configure two spares. Configure the
remaining 16 drives as RAID10 with a 64KB stripe size (8KB, 16 sector
strip size), yielding 8TB raw for the XFS backed mdbox mailstore.
Enable the BBWC write cache (dang, forgot the battery module, +$175).
This should yield approximately 8*150 = 1200 IOPS peak to/from disk,
many thousands to BBWC, more than plenty for 5000 concurrent users
given the IO behavior of most MUAs. Channel bond the NICs to the
switch, or round robin DNS the two IPs for path redundancy.

What's that? You want to support 10K users? Simply drop in another 4
sticks of the 8GB Kingston Reg ECC RAM for 64GB total, and plug one of
these into the external SFF8088 port on the LSI card:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816133047

populated with 18 of the 1TB RE4 drives. Configure 16 drives the same
as the primary array, grow it into your existing XFS. Since you have
two identical arrays comprising the filesystem, sunit/swidth values are
still valid so you don't need to add mount options. Configure 2 drives
as hot spares.
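To make the geometry talk concrete, a sketch of the filesystem creation
and the later grow step -- device name and mountpoint are hypothetical;
su/sw mirror the 8KB strip across the 8 effective RAID10 data spindles
described above:

$ mkfs.xfs -d su=8k,sw=8 /dev/sdb   # 8KB strip x 8 data spindles = 64KB stripe
$ mount /dev/sdb /srv/mail
# later, after the controller presents the grown volume:
$ xfs_growfs /srv/mail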
The additional 16 drive RAID10 doubles our disk IOPS to ~2400,
maintaining our concurrent user to IOPS ratio at ~4:1, and doubles our
mail storage to ~16TB. This expansion hardware will run an additional
~$6200.

Grand total to support ~10K concurrent users (maybe more) with a
quality DIY build is just over $14K USD, or ~$1.40 per mailbox. Not too
bad for an 8-core, 64GB server with 32TB of hardware RAID10 mailbox
storage and 38 total 1TB disks.

I haven't run the numbers for a comparable HP system, but an educated
guess says it would be quite a bit more expensive, not the server so
much, but the storage. HP's disk drive prices are outrageous, though
not approaching anywhere near the level of larceny EMC commits with its
drive sales. $2400 for a $300 Seagate drive wearing an EMC cape?
Please....

> Maybe two HP DL180s (one for production and one as test/standby-system)
> with an SAS attached enclosure for storage?

If you're hooked on 1U chassis (I hate em) go with the DL165 G7. If not
I'd go 2U, the DL385 G7. Magny Cours gives you more bang for the buck
in this class of machines. The performance is excellent, and, if
everybody buys Intel, AMD goes bankrupt, and then Chipzilla charges
whatever it desires. They've already been sanctioned and fined by the
FTC at least twice. They paid Intergraph $800 million in an antitrust
settlement in 2000 after they forced them out of the hardware business.
They recently paid AMD $1 billion in an antitrust settlement. They're
just like Microsoft, putting competitors out of business by any and all
means necessary, even if their conduct is illegal. Yes, I'd much rather
give AMD my business, given they had superior CPUs to Intel for many
years, and their current chips are still more than competitive.

/end rant. ;)

> Keeping in mind the new system has to work for some time (again 5 to 7
> years) I have to be able to extend the storage space without too much
> hassle.

Given you're currently only using ~1.3TB of ~15TB do you really see
this as an issue? Will you be changing your policy or quotas? Will the
university double its enrollment? If not I would think a new 12-16TB
raw array would be more than plenty. If you really want growth
potential get a SATABeast and start with 14 2TB SATA drives. You'll
still have 28 empty SAS/SATA slots in the 4U chassis, 42 total. Max
capacity is 84TB. You get dual 8Gb/s FC LC ports and dual GbE iSCSI
ports per controller, all ports active, two controllers max. The
really basic SKU runs about $20-25K USD with the single controller and
a few small drives, before institutional/educational discounts.

www.nexsan.com/satabeast

I've used the SATABlade and SATABoy models (8 and 14 drives) and really
like the simplicity of design and the httpd management interface. Good
products, and one of the least expensive and feature rich in this
class.

Sorry this was so windy. I am the hardwarefreak after all. :)

-- 
Stan

From xamiw at arcor.de Sun Jan 8 17:37:10 2012
From: xamiw at arcor.de (xamiw at arcor.de)
Date: Sun, 8 Jan 2012 16:37:10 +0100 (CET)
Subject: [Dovecot] uid / gid and systemusers
Message-ID: <1809497881.1135529.1326037030206.JavaMail.ngmail@webmail10.arcor-online.net>

Hi all,

I'm facing a problem when a user (q and g in this example) is logging
into dovecot. Can anybody give me a hint? Thanks in advance.

George

/var/log/mail.log:
...
Jan 8 16:18:28 test dovecot: User q is missing UID (see mail_uid setting)
Jan 8 16:18:28 test dovecot: imap-login: Internal login failure (auth
failed, 1 attempts): user=, method=PLAIN, rip=AAA.BBB.CCC.DDD,
lip=EEE.FFF.GGG.HHH TLS <--- edited by me
Jan 8 16:18:28 test dovecot: dovecot: User g is missing UID (see mail_uid setting)
Jan 8 16:18:28 test dovecot: imap-login: Internal login failure (auth
failed, 1 attempts): user=, method=PLAIN, rip=AAA.BBB.CCC.DDD,
lip=EEE.FFF.GGG.HHH TLS <--- edited by me

/etc/dovecot/dovecot.conf:

protocols = imaps
disable_plaintext_auth = yes
shutdown_clients = yes
log_timestamp = "%Y-%m-%d %H:%M:%S "
ssl = yes
ssl_cert_file = /etc/ssl/certs/dovecot.pem
ssl_key_file = /etc/ssl/private/dovecot.pem
mail_location = mbox:~/mail:INBOX=/var/mail/%u
mail_privileged_group = mail
mbox_write_locks = fcntl dotlock
auth default {
  mechanisms = plain
  passdb shadow {
  }
}

/etc/passwd:
...
g:x:1000:1000:test1,,,:/home/g:/bin/bash
q:x:1001:1001:test2,,,:/home/q:/bin/bash

/etc/group:
...
g:x:1000:
q:x:1001:
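The "missing UID" error above is Dovecot saying it has no userdb to
ask; the config shows a passdb but no userdb at all. A sketch of the
missing block for this 1.x-style config, which would let q and g
resolve their uid/gid/home from /etc/passwd:

auth default {
  mechanisms = plain
  passdb shadow {
  }
  userdb passwd {
  }
}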
From sven at svenhartge.de Sun Jan 8 17:39:45 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Sun, 8 Jan 2012 16:39:45 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com>
Message-ID: <88fev069kbv8@mids.svenhartge.de>

Stan Hoeppner wrote:
> On 1/7/2012 7:55 PM, Sven Hartge wrote:
>> Stan Hoeppner wrote:
>>
>>> It's highly likely your problems can be solved without the drastic
>>> architecture change, and new problems it will introduce, that you
>>> describe below.
>>
>> The main reason is I need to replace the hardware as its service
>> contract ends this year and I am not able to extend it further.
>>
>> The box so far is fine, there are normally no problems during normal
>> operations with speed or responsiveness towards the end-user.
>>
>> Sometimes, higher peak loads tend to strain the system a bit and this
>> is starting to occur more often.
> ...
>> First thought was to move this setup into our VMware cluster (yeah, I
>> know, spare me the screams), since the hardware used there is way more
>> powerful than the hardware used now and I wouldn't have to buy new
>> servers for my mail system (which is kind of painful to do in a
>> university environment, especially in Germany, if you want to invest
>> an amount of money above a certain threshold).

> What's wrong with moving it onto VMware? This actually seems like a
> smart move given your description of the node hardware. It also gives
> you much greater backup flexibility with VCB (or whatever they call it
> today). You can snapshot the LUN over the SAN during off peak hours to
> a backup server and do the actual backup to the library at your
> leisure. Forgive me if the software names have changed as I've not
> used VMware since ESX3 back in 07.

VCB as it was back in the days is dead. But yes, one of the reasons to
use a VM was to be able to easily backup the whole shebang.

>> But then I thought about the problems with VMs this size and got to
>> the idea with the distributed setup, splitting the one server into 4
>> or 6 backend servers.

> Not sure what you mean by "VMs this size". Do you mean memory
> requirements or filesystem size? If the nodes have enough RAM that's
> no issue.

Memory size. I am a bit hesitant to deploy a VM with 16GB of RAM. My
cluster nodes each have 48GB, so no problem on this side though.

> And surely you're not thinking of using a .vmdk for the mailbox
> storage. You'd use an RDM SAN LUN.

No, I was not planning to use a VMDK backed disk for this.

> In fact you should be able to map in the existing XFS storage LUN and
> use it as is. Assuming it's not going into retirement as well.

It is going to be retired as well, as it is as old as the server. It is
also not connected to any SAN, only local to the backend server. And
our VMware SAN is iSCSI based, so there is no way to plug an FC-based
storage into it.

> If an individual VMware node doesn't have sufficient RAM you could
> build a VM based Dovecot cluster, run these two VMs on separate nodes,
> and thin out the other VMs allowed to run on these nodes. Since you
> can't directly share XFS, build a tiny Debian NFS server VM and map
> the XFS LUN to it, export the filesystem to the two Dovecot VMs. You
> could install the Dovecot director on this NFS server VM as well.
> Converting from maildir to mdbox should help eliminate the NFS locking
> problems. I would do the conversion before migrating to this VM setup
> with NFS. Also, run the NFS server VM on the same physical node as one
> of the Dovecot servers. The NFS traffic will be a memory-memory copy
> instead of going over the GbE wire, decreasing IO latency and
> increasing performance for that Dovecot server. If it's possible to
> have Dovecot director or your fav load balancer weight more
> connections to one Dovecot node, funnel 10-15% more connections to
> this one. (I'm no director guru, in fact haven't used it yet.)

So, this reads like my idea in the first place.

Only you place all the mails on the NFS server, whereas my idea was to
just share the shared folders from a central point and keep the normal
user dirs local to the different nodes, thus reducing network impact
for the way more common user access.
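For readers following along, the director wiring mentioned above boils
down to a couple of core settings; a sketch with hypothetical IPs (the
login/proxy listener configuration documented on the wiki is omitted
here). The director's whole job is to keep all of a user's connections
landing on the same backend:

# on the director host(s), Dovecot 2.0+
director_servers = 10.0.0.10
director_mail_servers = 10.0.0.21 10.0.0.22 10.0.0.23 10.0.0.24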
> Assuming the CPUs in the VMware cluster nodes are clocked a decent
> amount higher than 1.8GHz I wouldn't monkey with configuring virtual
> smp for these two VMs, as they'll be IO bound not CPU bound.

2.3GHz for most VMware nodes.

>>>> Ideas? Suggestions? Nudges in the right direction?
>>
>>> Yes. We need more real information. Please provide:
>>
>>> 1. Mailbox count, total maildir file count and size
>>
>> about 10,000 Maildir++ boxes
>>
>> 900GB for 1300GB used, "df -i" says 11 million inodes used

> Converting to mdbox will take a large burden off your storage, as
> you've seen. With ~1.3TB consumed of ~15TB you should have plenty of
> space to convert to mdbox while avoiding filesystem fragmentation.

You got the numbers wrong. And I got a word wrong ;) Should have read
"900GB _of_ 1300GB used". I am using 900GB of 1300GB.

The disks are SATA1.5 (not SATA3 or SATA6) as in data transfer rate.
The disks each are 150GB in size, so the maximum storage size of my
underlying VG is 1500GB.

root@ms1:~# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  vg01   1   6   0 wz--n- 70.80G  40.80G
  vg02   1   1   0 wz--n-  1.45T 265.00G
  vg03   1   1   0 wz--n-  1.09T       0

Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg02-home_lv    1.2T  867G  357G  71% /home
/dev/mapper/vg03-backup_lv  1.1T  996G  122G  90% /backup

So not much wiggle room left.

But modifications to our systems have been made which allow me to
temp-disable a user, convert and move his mailbox and re-enable him.
This allows me to move users one at a time from the old system to the
new one, without losing a mail or disrupting service too long or too
often.
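That per-user conversion maps onto dsync; a sketch (Dovecot 2.0+, with
a hypothetical username), run while that user is temporarily disabled,
after which the user's mail_location (or userdb mail field) is flipped
to the new mdbox:

dsync -u someuser@example.de mirror mdbox:~/mdbox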
>> Right now, in the middle of the night (2:30 AM here) on a Sunday,
>> thus a low point in the usage pattern:
>>
>>              total       used       free     shared    buffers     cached
>> Mem:      12335820    9720252    2615568          0      53112     680424
>> -/+ buffers/cache:    8986716    3349104
>> Swap:      5855676      10916    5844760

> Ugh... "-m" and "-g" options exist for a reason. :) So this box has
> 12GB RAM, currently ~2.5GB free during off peak hours. It would be
> interesting to see free RAM and swap usage values during peak. That
> would tell us whether we're CPU or RAM starved. If both turned up
> clean then we'd need to look at iowait. If you're not RAM starved
> then moving to VMware nodes with 16/24/32GB RAM should work fine, as
> long as you don't stack many other VMs on top. Enabling memory dedup
> may help a little.

Well, peak hours are somewhat between 10:00 and 14:00 o'clock. Will
check then.

>> System reaches its 7th year this summer, which is the end of its
>> service contract.

> Enjoy your retirement old workhorse. :)

>>> 4. Storage configuration--total spindles, RAID level, hard or soft RAID
>>
>> RAID 6 with 12 SATA1.5 disks, external 4Gbit FC

> I assume this means a LUN on a SAN array somewhere on the other end of
> that multi-mode cable, yes? Can you tell us what brand/model the box
> is?

This is a Transtec Provigo 610, a 24 disk enclosure with 12 disks of
150GB (7,200 rpm) each for the main mail storage in RAID6 and another
10 disks of 150GB (5,400 rpm) for a backup LUN. I daily rsnapshot my
/home onto this local backup (20 days of retention), because it is
easier to restore from than firing up Bacula, which has the long
retention time of 90 days. But most users need a restore of mails from
$yesterday or $the_day_before.

>> Back in 2005, a SAS enclosure was way too expensive for us to afford.

> How one affords an FC SAN array but not a less expensive direct attach
> SAS enclosure is a mystery... :)

Well, it was either Parallel-SCSI or FC back then, as far as I can
remember. The price difference between the U320 version and the FC one
was not so big and I wanted to avoid having to route those big
SCSI-U320 cables through my racks.

>>> 5. Filesystem type
>>
>> XFS in a LVM to allow snapshots for backup

> XFS is the only way to fly, IMNSHO.

>> I of course aligned the partitions on the RAID correctly and of course
>> created a filesystem with the correct parameters wrt. spindles, chunk
>> size, etc.

> Which is critical for mitigating the RMW penalty of parity RAID.
> Speaking of which, why RAID6 for maildir? Given that your array is 90%
> vacant, why didn't you go with RAID10 for 3-5 times the random write
> performance?

See above, not 1500GB disks, but 150GB ones. RAID6, because I wanted
the double security. I have been kind of burned by the previous system
and I tend to get nervous while thinking about data loss in my mail
storage, because I know my users _will_ give me hell if that happens.

>>> 6. Backup software/method
>>
>> Full backup with Bacula, taking about 24 hours right now. Because of
>> this, I switched to virtual full backups, only ever doing incremental
>> and differential backups off of the real system and creating
>> synthetic full backups inside Bacula. Works fine though, incremental
>> taking 2 hours, differential about 4 hours.

> Move to VMware and use VCB. You'll fall in love.

>> The main problem of the backup time is Maildir++. During a test, I
>> copied the mail storage to a spare box, converted it to mdbox (50MB
>> file size) and the backup was lightning fast compared to the Maildir++
>> format.

> Well of course. You were surprised by this?

No, I was not surprised by the speedup, I _knew_ mdbox would back up
faster. Just not how big the difference would be. That a backup of 100
big files is faster than a backup of 100,000 little files is not
exactly rocket science.

> How long has it been since you used mbox? mbox backs up even faster
> than mdbox. Why? Larger files and fewer of them. Which means the
> disks can actually do streaming reads, and don't have to beat their
> heads to death jumping all over the platters to read maildir files,
> which are scattered all over the place when created. Which is why
> maildir is described as a "random" IO workload.

I never used mbox as an admin. The box before the box before this one
used uw-imapd with mbox and I experienced the system as a user, and it
was horrific. Most users back then had never heard of IMAP folders and
just stored their mails inside of INBOX, which of course got huge. If
one of those users with a big mbox then deleted mails, it would
literally lock the box up for everyone, as uw-imapd was copying (for
example) a 600MB mbox file around to delete one mail.

Of course, this was mostly because of the crappy uw-imapd and secondly
because of some poor design choices in the server itself (underpowered
RAID controller, too small a cache and a RAID5 setup, low RAM in the
server).

So the first thing we did back then, in 2004, was to change to Courier
and convert from mbox to maildir, which made the mailsystem fly again,
even on the same hardware, only the disk setup changed to RAID10.

Then we bought new hardware (the one previous to the current one), this
time with more RAM, a better RAID controller and a smarter disk setup.
We outgrew this one really fast and a disk upgrade was not possible; it
lasted only 2 years.

So the next one got this external 24 disk array with 12 disks used at
deployment.
But Courier is showing its age and things like Sieve are only possible
with great pain, so I want to avoid it.

>> So this is the way to go, I think, regardless of which way I implement
>> the backend mail server.

> Which is why I asked my questions. :) mdbox would have been one of my
> recommendations, but you already discovered it.

And this is why I kind of hold this upgrade back until dovecot 2.1 is
released, as it has some optimizations here.
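For reference, the mdbox layout discussed throughout this thread is a
two-setting affair; a sketch matching the 50MB rotation size used in
the backup test above (the default rotation size is much smaller):

mail_location = mdbox:~/mdbox
mdbox_rotate_size = 50M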
>>> 7. Operating system
>>
>> Debian Linux Lenny, currently with kernel 2.6.39

> :) Debian, XFS, Dovecot, FC SAN storage--I like your style. Lenny with
> 2.6.39? Is that a backport or rolled kernel? Not Squeeze?

That is a BPO-kernel. Not-yet Squeeze. I admin over 150 different
systems here, plus I am the main VMware and SAN admin. So upgrades take
some time until I grow an extra pair of eyes and arms. ;)

And since I have been planning to re-implement the mailsystem for some
time now, I held the update to the storage backends back. No use in
disrupting service for the end user if I'm going to replace the whole
thing with a new one in the end.

>>> Instead of telling us what you think the solution to your unidentified
>>> bottleneck is and then asking "yeah or nay", tell us what the problem is
>>> and allow us to recommend solutions.
>>
>> I am not asking for "yay or nay", I just pointed out my idea, but I am
>> open to other suggestions.

> I think you've already discovered the best suggestions on your own.

>> If the general idea is to buy a new big single storage system, I am more
>> than happy to do just this, because this will prevent any problems I might
>> have with a distributed one before they even can occur.

> One box is definitely easier to administer and troubleshoot. Though I
> must say that even though it's more complex, I think the VM
> architecture I described is worth a serious look. If your current
> 12x1.5TB SAN array is being retired as well, you could piggy back onto
> the array(s) feeding the VMware farm, or expand them if
> necessary/possible. Adding drives is usually much cheaper than buying
> a new populated array chassis. Given your service contract comments
> it's unlikely you're the type to build your own servers. Being a
> hardwarefreak, I nearly always build my servers and storage from
> scratch.

Naa, I have been doing this for too long. While I am perfectly capable
of building such a server myself, I am now the kind of guy who wants to
"yell" at a vendor when their hardware fails.

Which does not mean I am using any "Express" package or preconfigured
server. I still read the specs and pick the parts which make the most
sense for a job, and then have that one custom built by HP or IBM or
Dell or ...

Personally built PCs and servers made of single parts have been nothing
but a nightmare for me. And: my coworkers need to be able to service
them as well while I am not available, and they are not as much of a
hardware aficionado as I am.

So "professional" hardware with a 5 to 7 year support contract is the
way to go for me.

> If you're hooked on 1U chassis (I hate em) go with the DL165 G7. If
> not I'd go 2U, the DL385 G7. Magny Cours gives you more bang for the
> buck in this class of machines.

I have plenty of space for 2U systems and already use DL385 G7s. I am
not fixed on Intel or AMD, I'll gladly use the one which is the most
fit for a given job.

Grüße,
Sven

-- 
Sigmentation fault. Core dumped.

From sven at svenhartge.de Sun Jan 8 22:15:22 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Sun, 8 Jan 2012 21:15:22 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de>
Message-ID:

Sven Hartge wrote:
> Stan Hoeppner wrote:

>> If an individual VMware node doesn't have sufficient RAM you could
>> build a VM based Dovecot cluster, run these two VMs on separate
>> nodes, and thin out the other VMs allowed to run on these nodes.
>> Since you can't directly share XFS, build a tiny Debian NFS server VM
>> and map the XFS LUN to it, export the filesystem to the two Dovecot
>> VMs. You could install the Dovecot director on this NFS server VM as
>> well. Converting from maildir to mdbox should help eliminate the NFS
>> locking problems. I would do the conversion before migrating to this
>> VM setup with NFS. Also, run the NFS server VM on the same physical
>> node as one of the Dovecot servers. The NFS traffic will be a
>> memory-memory copy instead of going over the GbE wire, decreasing IO
>> latency and increasing performance for that Dovecot server. If it's
>> possible to have Dovecot director or your fav load balancer weight
>> more connections to one Dovecot node, funnel 10-15% more connections
>> to this one. (I'm no director guru, in fact haven't used it yet.)

> So, this reads like my idea in the first place.

> Only you place all the mails on the NFS server, whereas my idea was to
> just share the shared folders from a central point and keep the normal
> user dirs local to the different nodes, thus reducing network impact
> for the way more common user access.

To be a bit more concrete on this one (the shared-folder plumbing is
sketched below):

a) X backend servers which my frontend (being perdition or dovecot
   director) redirects users to, fixed, no random redirects.

   I might start with 4 backend servers, but I can easily scale them,
   either vertically by adding more RAM or vCPUs or horizontally by
   adding more VMs and reshuffling some mailboxes during the night.

   Why 4 and not 2? If I'm going to build a cluster, I already have to
   do the work to implement this, and with 4 backends I can distribute
   the load even further without much additional administrative
   overhead. The load impact on each node gets lower with more nodes,
   if I am able to evenly spread my users across those nodes (like
   md5'ing the username and using the first 2 bits from that to
   determine which node the user resides on).

b) 1 backend server for the public shared mailboxes, exporting them via
   NFS to the user backend servers.

Configuration like this, from
http://wiki2.dovecot.org/SharedMailboxes/Public

,----
| # User's private mail location
| mail_location = mdbox:~/mdbox
|
| # When creating any namespaces, you must also have a private namespace:
| namespace {
|   type = private
|   separator = .
|   prefix = INBOX.
|   #location defaults to mail_location.
|   inbox = yes
| }
|
| namespace {
|   type = public
|   separator = .
|   prefix = #shared.
|   location = mdbox:/srv/shared/
|   subscriptions = no
| }
`----

With /srv/shared being the NFS mountpoint from my central public shared
mailbox server.

This setup would keep the amount of data transferred via NFS small
(only a tiny fraction of my 10,000 users have access to a shared
folder, mostly users in the IT-Team or in the administration of the
university.)
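A sketch of the plumbing item b) implies -- hostnames are hypothetical,
and the dovecot.conf settings are the guards the Dovecot NFS wiki page
names for any backend that mounts the share:

# /etc/exports on the central shared-folder server
/srv/shared  backend1.example.de(rw,sync,no_subtree_check)
/srv/shared  backend2.example.de(rw,sync,no_subtree_check)

# dovecot.conf on each backend mounting /srv/shared
mmap_disable = yes
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes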
Wouldn't such a setup be the "Best of Both Worlds"? Having the main
traffic going to local disks (being RDMs) and also being able to
provide shared folders to every user who needs them, without the need
to move those users onto one server?

Grüße,
Sven.

-- 
Sigmentation fault. Core dumped.

From sven at svenhartge.de Sun Jan 8 23:07:11 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Sun, 8 Jan 2012 22:07:11 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de>
Message-ID:

Sven Hartge wrote:
> Sven Hartge wrote:
>> Stan Hoeppner wrote:

>>> If an individual VMware node doesn't have sufficient RAM you could
>>> build a VM based Dovecot cluster, run these two VMs on separate
>>> nodes, and thin out the other VMs allowed to run on these nodes.
>>> Since you can't directly share XFS, build a tiny Debian NFS server
>>> VM and map the XFS LUN to it, export the filesystem to the two
>>> Dovecot VMs. You could install the Dovecot director on this NFS
>>> server VM as well. Converting from maildir to mdbox should help
>>> eliminate the NFS locking problems. I would do the conversion before
>>> migrating to this VM setup with NFS. Also, run the NFS server VM on
>>> the same physical node as one of the Dovecot servers. The NFS
>>> traffic will be a memory-memory copy instead of going over the GbE
>>> wire, decreasing IO latency and increasing performance for that
>>> Dovecot server. If it's possible to have Dovecot director or your
>>> fav load balancer weight more connections to one Dovecot node,
>>> funnel 10-15% more connections to this one. (I'm no director guru,
>>> in fact haven't used it yet.)

>> So, this reads like my idea in the first place.

>> Only you place all the mails on the NFS server, whereas my idea was
>> to just share the shared folders from a central point and keep the
>> normal user dirs local to the different nodes, thus reducing network
>> impact for the way more common user access.

> To be a bit more concrete on this one:

> a) X backend servers which my frontend (being perdition or dovecot
>    director) redirects users to, fixed, no random redirects.

>    I might start with 4 backend servers, but I can easily scale them,
>    either vertically by adding more RAM or vCPUs or horizontally by
>    adding more VMs and reshuffling some mailboxes during the night.

>    Why 4 and not 2? If I'm going to build a cluster, I already have to
>    do the work to implement this, and with 4 backends I can distribute
>    the load even further without much additional administrative
>    overhead. The load impact on each node gets lower with more nodes,
>    if I am able to evenly spread my users across those nodes (like
>    md5'ing the username and using the first 2 bits from that to
>    determine which node the user resides on).

Ah, I forgot: I _already_ have the mechanisms in place to statically
redirect/route accesses for users to different backends, since some of
the users are already redirected to a different mailsystem at another
location of my university.

So using this mechanism to also redirect/route users internal to _my_
location is no big deal.

This is what got me into the idea of several independent backend
storages without the need to share the _whole_ storage, but just the
shared folders for some users.

(Are my words making any sense? I got the feeling I'm writing German
with English words and nobody is really understanding anything ...)

Grüße,
Sven.

-- 
Sigmentation fault. Core dumped.
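The static mapping described above can be sketched in a couple of lines
of shell -- bash, username hypothetical; the first hex digit of the md5
taken modulo 4 stands in for "the first 2 bits":

user="jdoe"
idx=$(( 0x$(printf '%s' "$user" | md5sum | cut -c1) % 4 ))
echo "$user -> backend$idx"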
From dmiller at amfes.com Mon Jan 9 01:40:48 2012
From: dmiller at amfes.com (Daniel L. Miller)
Date: Sun, 08 Jan 2012 15:40:48 -0800
Subject: [Dovecot] Possible mdbox corruption
In-Reply-To: <4F075A76.1040807@amfes.com>
References: <4F04EDC8.6060809@amfes.com> <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com>
Message-ID:

On 1/6/2012 12:32 PM, Daniel L. Miller wrote:
> On 1/6/2012 9:36 AM, Timo Sirainen wrote:
>> On 6.1.2012, at 19.30, Daniel L. Miller wrote:
>>
>>> Jan 6 09:22:42 bubba dovecot: indexer-worker(user1 at domain.com):
>>> Error: fts_solr: Indexing failed: 400 Illegal character ((CTRL-CHAR,
>>> code 18)) at [row,col {unknown-source}]: [482765,16]
>>> Jan 6 09:22:42 bubba dovecot: indexer-worker: Error:
>>>
>>> Google seems to indicate that Solr cannot handle "invalid"
>>> characters - and that it is the responsibility of the calling
>>> program to strip them out. A quick search shows me both an
>>> individual character comparison in Java and a regex used for the
>>> purpose. Is there any "illegal character protection" in the Dovecot
>>> Solr plugin?
>> Yes, there is. So I'm not really sure what it's complaining about.
>> Are you using the "solr" or "solr_old" backend?
>>
> "Solr".
>
> plugin {
>   fts = solr
>   fts_solr = url=http://localhost:8983/solr/
> }
>

Now seeing:

Jan 8 15:40:09 bubba dovecot: imap(user1 at domain.com): Error: fts_solr:
Lookup failed: 400 undefined field CC
Jan 8 15:40:09 bubba dovecot: imap: Error:

-- 
Daniel

From dmiller at amfes.com Mon Jan 9 01:48:29 2012
From: dmiller at amfes.com (Daniel L. Miller)
Date: Sun, 08 Jan 2012 15:48:29 -0800
Subject: [Dovecot] Solr plugin
In-Reply-To: <4F0A2980.7050003@amfes.com>
References: <4F04EDC8.6060809@amfes.com> <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com> <4F0A2980.7050003@amfes.com>
Message-ID:

On 1/8/2012 3:40 PM, Daniel L. Miller wrote:
> On 1/6/2012 12:32 PM, Daniel L. Miller wrote:
>> On 1/6/2012 9:36 AM, Timo Sirainen wrote:
>>> On 6.1.2012, at 19.30, Daniel L. Miller wrote:
>>>
>
> Jan 8 15:40:09 bubba dovecot: imap(user1 at domain.com): Error:
> fts_solr: Lookup failed: 400 undefined field CC
> Jan 8 15:40:09 bubba dovecot: imap: Error:
>

Looking at the Solr output - looks like the CC parameter is being
capitalized while all the other fieldnames are lowercase.

-- 
Daniel
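If the capitalized CC really is the culprit, one workaround on the Solr
side is to declare the field name the client actually sends; a sketch
for schema.xml, assuming the field type the stock Dovecot Solr schema
uses for its other header fields:

<field name="CC" type="text" indexed="true" stored="false" />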
From Ralf.Hildebrandt at charite.de Mon Jan 9 09:40:57 2012
From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt)
Date: Mon, 9 Jan 2012 08:40:57 +0100
Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well?
Message-ID: <20120109074057.GC22506@charite.de>

Today I encountered these errors:

Jan 9 08:30:06 mail dovecot: lmtp(31174, backup at backup.invalid): Error:
Log synchronization error at seq=858,offset=44672 for
/home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID
282388, but next_uid = 282389
Jan 9 08:30:06 mail dovecot: lmtp(31819, backup at backup.invalid): Error:
Log synchronization error at seq=858,offset=44672 for
/home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID
282388, but next_uid = 282389
Jan 9 08:30:06 mail dovecot: lmtp(32148, backup at backup.invalid): Error:
Log synchronization error at seq=858,offset=44672 for
/home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID
282388, but next_uid = 282389

After that, the SAVEDON date for all mails was reset to today:

mail:~# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-09 | wc -l
75650
mail:~# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-08 | wc -l
0
mail:~# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-07 | wc -l
0

Before, I was running this:

vorgestern=`date -d "-2 day" +"%Y-%m-%d"`
doveadm expunge -u backup at backup.invalid mailbox INBOX SAVEDBEFORE $vorgestern
doveadm purge -u backup at backup.invalid

Is there a way of restoring the SAVEDON info?

-- 
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebrandt at charite.de | http://www.charite.de

From junk4 at klunky.co.uk Mon Jan 9 11:16:58 2012
From: junk4 at klunky.co.uk (J4K)
Date: Mon, 09 Jan 2012 10:16:58 +0100
Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts)
Message-ID: <4F0AB08A.7050605@klunky.co.uk>

Morning everyone,

On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired. I
replaced it with a new one on the 9th of Jan. I tested this with
Thunderbird and all is well.

This morning people tell me they cannot get their email using their
mobile telephones: K9 Mail.

I have reverted the SSL cert back to the old one just in case.
Thunderbird still works.

Dovecot 1:1.2.15-7 running on Debian 6

The messages in the logs are:

Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth
attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth
attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected

In dovecot.conf I have this set:

disable_plaintext_auth = no

And the auth default mechanisms are set to:

mechanisms = plain login

What is strange is the only item that changed is the SSL cert, which
has since been changed back to the old one (which has expired... ^^).

Any ideas where I may look or change?

Regards, S

From robert at schetterer.org Mon Jan 9 11:27:26 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Mon, 09 Jan 2012 10:27:26 +0100
Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts)
In-Reply-To: <4F0AB08A.7050605@klunky.co.uk>
References: <4F0AB08A.7050605@klunky.co.uk>
Message-ID: <4F0AB2FE.8000306@schetterer.org>

Am 09.01.2012 10:16, schrieb J4K:
> Morning everyone,
>
> On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired. I
> replaced it with a new one on the 9th of Jan. I tested this with
> Thunderbird and all is well.
> > This morning people tell me they cannot get their email using their > mobile telephones : K9 Mail > > I have reverted the SSL cert back to the old one just in case. > Thunderbird will works. > > > Dovecot 1:1.2.15-7 running on Debian 6 > > The messages in the logs are: > > Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth > attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected > Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth > attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected > > In dovecot.conf I have this set : > > disable_plaintext_auth = no > > And the auth default mechanisms are set to: > mechanisms = plain login > > What is strange is the only item that changed is the SSL cert, which has > since been changed back to the old one (which has expired... ^^). > > Any ideas where I may look or change? > > Regards, S if you only changed the crt etc, and youre sure you did everything right perhaps you have forgot adding a needed intermediate cert ? read here http://www.trustico.co.uk/install/how-to-install-ssl-certificate.php Required Intermediate Certificates (CA Certificates) To successfully install your SSL Certificate you may be required to install an Intermediate CA Certificate. Please review the above installation instructions carefully to determine if an Intermediate CA Certificate is required, how to obtain it and correctly import it into your system. For more information please Contact Us. Alternatively, and for systems not covered by the above installation instructions, please use our Intermediate Certificate Wizard to find the correct CA Certificate or Root Bundle that is required for your SSL Certificate to function correctly. Find Out More Information -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From junk4 at klunky.co.uk Mon Jan 9 11:39:24 2012 From: junk4 at klunky.co.uk (J4K) Date: Mon, 09 Jan 2012 10:39:24 +0100 Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts) In-Reply-To: <4F0AB2FE.8000306@schetterer.org> References: <4F0AB08A.7050605@klunky.co.uk> <4F0AB2FE.8000306@schetterer.org> Message-ID: <4F0AB5CC.108@klunky.co.uk> On 09/01/12 10:27, Robert Schetterer wrote: > Am 09.01.2012 10:16, schrieb J4K: >> Morning everyone, >> >> On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired. I >> replaced it with a new on the 9th of Jan. I tested this with >> Thunderbird and all is well. >> >> This morning people tell me they cannot get their email using their >> mobile telephones : K9 Mail >> >> I have reverted the SSL cert back to the old one just in case. >> Thunderbird will works. >> >> >> Dovecot 1:1.2.15-7 running on Debian 6 >> >> The messages in the logs are: >> >> Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth >> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected >> Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth >> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected >> >> In dovecot.conf I have this set : >> >> disable_plaintext_auth = no >> >> And the auth default mechanisms are set to: >> mechanisms = plain login >> >> What is strange is the only item that changed is the SSL cert, which has >> since been changed back to the old one (which has expired... ^^). >> >> Any ideas where I may look or change? >> >> Regards, S > if you only changed the crt etc, and youre sure you did everything right > > perhaps you have forgot adding a needed intermediate cert ? 
> > read here > http://www.trustico.co.uk/install/how-to-install-ssl-certificate.php > > Required Intermediate Certificates (CA Certificates) > > To successfully install your SSL Certificate you may be required to > install an Intermediate CA Certificate. Please review the above > installation instructions carefully to determine if an Intermediate CA > Certificate is required, how to obtain it and correctly import it into > your system. For more information please Contact Us. > Alternatively, and for systems not covered by the above installation > instructions, please use our Intermediate Certificate Wizard to find the > correct CA Certificate or Root Bundle that is required for your SSL > Certificate to function correctly. Find Out More Information You may have some email problems with the mobile phone because of the certificates. Thunderbird and webmail are fine. You have only to access its complaint about a unknown certificate. I am working on the certificate problem. From robert at schetterer.org Mon Jan 9 11:41:18 2012 From: robert at schetterer.org (Robert Schetterer) Date: Mon, 09 Jan 2012 10:41:18 +0100 Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts) In-Reply-To: <4F0AB556.3040103@klunky.co.uk> References: <4F0AB08A.7050605@klunky.co.uk> <4F0AB2FE.8000306@schetterer.org> <4F0AB556.3040103@klunky.co.uk> Message-ID: <4F0AB63E.4020205@schetterer.org> Am 09.01.2012 10:37, schrieb Simon Loewenthal: > On 09/01/12 10:27, Robert Schetterer wrote: >> Am 09.01.2012 10:16, schrieb J4K: >>> Morning everyone, >>> >>> On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired. I >>> replaced it with a new on the 9th of Jan. I tested this with >>> Thunderbird and all is well. >>> >>> This morning people tell me they cannot get their email using their >>> mobile telephones : K9 Mail >>> >>> I have reverted the SSL cert back to the old one just in case. >>> Thunderbird will works. >>> >>> >>> Dovecot 1:1.2.15-7 running on Debian 6 >>> >>> The messages in the logs are: >>> >>> Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth >>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected >>> Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth >>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected >>> >>> In dovecot.conf I have this set : >>> >>> disable_plaintext_auth = no >>> >>> And the auth default mechanisms are set to: >>> mechanisms = plain login >>> >>> What is strange is the only item that changed is the SSL cert, which has >>> since been changed back to the old one (which has expired... ^^). >>> >>> Any ideas where I may look or change? >>> >>> Regards, S >> if you only changed the crt etc, and youre sure you did everything right >> >> perhaps you have forgot adding a needed intermediate cert ? >> >> read here >> http://www.trustico.co.uk/install/how-to-install-ssl-certificate.php >> >> Required Intermediate Certificates (CA Certificates) >> >> To successfully install your SSL Certificate you may be required to >> install an Intermediate CA Certificate. Please review the above >> installation instructions carefully to determine if an Intermediate CA >> Certificate is required, how to obtain it and correctly import it into >> your system. For more information please Contact Us. 
>> Alternatively, and for systems not covered by the above installation >> instructions, please use our Intermediate Certificate Wizard to find the >> correct CA Certificate or Root Bundle that is required for your SSL >> Certificate to function correctly. Find Out More Information > I know that the intermediate certs are messed up, which is why I rolled > back to the old expired certificate. I did not expect an expired > certificate to block authentication, and it does not mean that it does > block. The problem may be elsewhere. that might be a k9 problem ( older versions ) or in android older versions, is there a ignore ssl failure option as workaround what does thunderbird tell you about the new cert ? but for sure the problem may elsewhere > > -- > PGP is optional: 4BA78604 > simon @ klunky . org > simon @ klunky . co.uk > I won't accept your confidentiality > agreement, and your Emails are kept. > ~???~ > -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From junk4 at klunky.co.uk Mon Jan 9 11:52:22 2012 From: junk4 at klunky.co.uk (J4K) Date: Mon, 09 Jan 2012 10:52:22 +0100 Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts) In-Reply-To: <4F0AB63E.4020205@schetterer.org> References: <4F0AB08A.7050605@klunky.co.uk> <4F0AB2FE.8000306@schetterer.org> <4F0AB556.3040103@klunky.co.uk> <4F0AB63E.4020205@schetterer.org> Message-ID: <4F0AB8D6.8060204@klunky.co.uk> On 09/01/12 10:41, Robert Schetterer wrote: > Am 09.01.2012 10:37, schrieb Simon Loewenthal: >> On 09/01/12 10:27, Robert Schetterer wrote: >>> Am 09.01.2012 10:16, schrieb J4K: >>>> Morning everyone, >>>> >>>> On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired. I >>>> replaced it with a new on the 9th of Jan. I tested this with >>>> Thunderbird and all is well. >>>> >>>> This morning people tell me they cannot get their email using their >>>> mobile telephones : K9 Mail >>>> >>>> I have reverted the SSL cert back to the old one just in case. >>>> Thunderbird will works. >>>> >>>> >>>> Dovecot 1:1.2.15-7 running on Debian 6 >>>> >>>> The messages in the logs are: >>>> >>>> Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth >>>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected >>>> Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth >>>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected >>>> >>>> In dovecot.conf I have this set : >>>> >>>> disable_plaintext_auth = no >>>> >>>> And the auth default mechanisms are set to: >>>> mechanisms = plain login >>>> >>>> What is strange is the only item that changed is the SSL cert, which has >>>> since been changed back to the old one (which has expired... ^^). >>>> >>>> Any ideas where I may look or change? >>>> >>>> Regards, S >>> if you only changed the crt etc, and youre sure you did everything right >>> >>> perhaps you have forgot adding a needed intermediate cert ? >>> >>> read here >>> http://www.trustico.co.uk/install/how-to-install-ssl-certificate.php >>> >>> Required Intermediate Certificates (CA Certificates) >>> >>> To successfully install your SSL Certificate you may be required to >>> install an Intermediate CA Certificate. Please review the above >>> installation instructions carefully to determine if an Intermediate CA >>> Certificate is required, how to obtain it and correctly import it into >>> your system. For more information please Contact Us. 
>>> Alternatively, and for systems not covered by the above installation
>>> instructions, please use our Intermediate Certificate Wizard to find
>>> the correct CA Certificate or Root Bundle that is required for your
>>> SSL Certificate to function correctly. Find Out More Information

>> I know that the intermediate certs are messed up, which is why I
>> rolled back to the old expired certificate. I did not expect an
>> expired certificate to block authentication, and it does not mean
>> that it does block. The problem may be elsewhere.

> that might be a k9 problem ( older versions ) or in android older
> versions, is there an ignore ssl failure option as workaround
>
> what does thunderbird tell you about the new cert ?
>
> but for sure the problem may be elsewhere

>> --
>> PGP is optional: 4BA78604
>> simon @ klunky . org
>> simon @ klunky . co.uk
>> I won't accept your confidentiality
>> agreement, and your Emails are kept.
>> ~???~
>>

TB says unknown, and I know why. I have set the class 1 and class 2
certificate chain keys to the same, when these should be different.
Damn, StartCom's certs are difficult to set up.

Workaround for K9 (latest version) is to go to the Account Settings ->
Fetching -> Incoming Server, and click Next. It will attempt to
authenticate and then complain about the certificate. One can ignore
the warning and accept the certificate.

Cheers all.

Simon

From janm at transactionware.com Sun Jan 8 10:38:04 2012
From: janm at transactionware.com (Jan Mikkelsen)
Date: Sun, 8 Jan 2012 19:38:04 +1100
Subject: [Dovecot] Building 2.1.rc1 with clucene, but without libstemmer
In-Reply-To: <1324377324.3597.47.camel@innu>
References: <1324377324.3597.47.camel@innu>
Message-ID: <8D81449C-C294-4983-961E-17907EBDBF6A@transactionware.com>

On 20/12/2011, at 9:35 PM, Timo Sirainen wrote:
> [...]
>> and libtextcat is dovecot 2.1.rc1 intended to be used against?
>
> http://www.let.rug.nl/vannoord/TextCat/ probably.. Basically I've just
> used the libstemmer and libtextcat that are in Debian.

Hmm. That seems to have been turned into libtextcat here:
http://software.wise-guys.nl/libtextcat/

Dovecot builds against this version, so I'm hopeful it will work OK.

Thanks for the answers, I'm going to test out 2.1-rc3 tomorrow.

Regards,

Jan.

From mpapet at yahoo.com Mon Jan 9 07:34:31 2012
From: mpapet at yahoo.com (Michael Papet)
Date: Sun, 8 Jan 2012 21:34:31 -0800 (PST)
Subject: [Dovecot] Newbie: LDA Isn't Logging
Message-ID: <1326087271.17295.YahooMailClassic@web125406.mail.ne1.yahoo.com>

I did some testing on a Debian testing VM. I built 2.0.17 from sources
and copied the config straight over from the malfunctioning machine.
LDA logging worked. So, it could be something about my system.

But, running /usr/lib/dovecot/deliver still doesn't return a value on
the command line as documented on the wiki.

I've attached strace files from both the malfunctioning Debian packages
machine and the built-from-sources VM. Unfortunately, I'm a new strace
user, so I don't know what it all means.

Michael

--- On Tue, 1/3/12, Timo Sirainen wrote:

> From: Timo Sirainen
> Subject: Re: [Dovecot] Newbie: LDA Isn't Logging
> To: "Michael"
> Cc: dovecot at dovecot.org
> Date: Tuesday, January 3, 2012, 4:15 AM
> On Mon, 2012-01-02 at 22:48 -0800, Michael wrote:
> > Hi,
> >
> > I'm a newbie having some trouble getting deliver to log anything.
> > Related to this, there are no return values unless the -d is missing.
> > I'm using LDAP to store virtual domain and user account information.
> > Test #1: /usr/lib/dovecot/deliver -e -f mpapet at yahoo.com -d
> > zed at mailswansong.dom < bad.mail
> >
> > Expected result: supposed to fail, there's no zed account via ldap
> > lookup and supposed to get a return code per the wiki at
> > http://wiki2.dovecot.org/LDA. Supposed to log too.
> >
> > Actual result: nothing gets delivered, no return code, nothing is
> > logged.
>
> As in return code is 0? Something's definitely wrong there then.
>
> First check that deliver at least reads the config file. Add something
> broken in there, such as: "foo=bar" at the beginning of dovecot.conf.
> Does deliver fail now?
>
> Also running deliver via strace could show something useful.

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: sources_2.0.17_strace.txt
URL: 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: malfunctioning_debian_strace.txt
URL: 
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: sources_2.0.17_no-user.txt
URL: 

From bind at enas.net Mon Jan 9 12:18:50 2012
From: bind at enas.net (Urban Loesch)
Date: Mon, 09 Jan 2012 11:18:50 +0100
Subject: [Dovecot] Proxy login failures
Message-ID: <4F0ABF0A.1080404@enas.net>

Hi,

I'm using two dovecot pop3/imap proxies in front of our dovecot
servers. For some days now I have been seeing many of the following
errors in the logs of the two proxy servers:

...
dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected:
Connection closed: Connection reset by peer (state=0): user=,
method=PLAIN, rip=remote-ip, lip=localip
...
dovecot: imap-login: Error: proxy: Remote "IPV6-IP":143 disconnected:
Connection closed: Connection reset by peer (state=0): user=,
method=PLAIN, rip=remote-ip, lip=localip
...

When this happens the client gets the following error from the proxy:
-ERR [IN-USE] Account is temporarily unavailable.

System details:
OS: Debian Linux
Proxy: 2.0.5-0~auto+23
Backend: 2.0.13-0~auto+54

Have you any idea what could cause this type of error?
Thanks and regards
Urban Loesch

doveconf -n from one of our backend servers:

# 2.0.13 (02d97fb66047): /etc/dovecot/dovecot.conf
# OS: Linux 2.6.38.8-vs2.3.0.37-rc17-rol-em64t-timerp x86_64 Debian 6.0.2 ext4
auth_cache_negative_ttl = 0
auth_cache_size = 40 M
auth_cache_ttl = 12 hours
auth_mechanisms = plain login
auth_username_format = %Lu
auth_verbose = yes
deliver_log_format = msgid=%m: %$ %p %w
disable_plaintext_auth = no
login_trusted_networks = our Proxy IP's (v4 and v6)
mail_gid = mailstore
mail_location = mdbox:/home/vmail/%d/%n:INDEX=/home/dovecotindex/%d/%n
mail_plugins = " quota mail_log notify zlib"
mail_uid = mailstore
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave imapflags notify
mdbox_rotate_size = 5 M
passdb {
  args = /etc/dovecot/dovecot-sql-account.conf
  driver = sql
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size from
  mail_log_group_events = no
  quota = dict:Storage used::file:%h/dovecot-quota
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  sieve_extensions = +notify +imapflags
  sieve_max_redirects = 10
  zlib_save = gz
  zlib_save_level = 5
}
protocols = imap pop3 lmtp sieve
service imap-login {
  inet_listener imap {
    port = 143
  }
  service_count = 0
  vsz_limit = 256 M
}
service lmtp {
  inet_listener lmtp {
    address = *
    port = 24
  }
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0666
    user = postfix
  }
  vsz_limit = 512 M
}
service pop3-login {
  inet_listener pop3 {
    port = 110
  }
  service_count = 0
  vsz_limit = 256 M
}
ssl = no
ssl_cert = <

References: <4F0AB08A.7050605@klunky.co.uk> <4F0AB2FE.8000306@schetterer.org> <4F0AB556.3040103@klunky.co.uk> <4F0AB63E.4020205@schetterer.org> <4F0AB8D6.8060204@klunky.co.uk>
Message-ID: <4F0AD221.6060007@arx.net>

> TB says unknown, and I know why. I have set the class 1 and class 2
> certificate chain keys to the same, when these should be different.
> Damn, StartCom's certs are difficult to set up.

read this:
http://binblog.info/2010/02/02/lengthy-chains/

basically, you start with YOUR cert and work your way up to the root CA
with

openssl x509 -in your_servers.{crt|pem} -subject -issuer > server-allinone.crt
openssl x509 -in intermediate_authority.{crt|pem} -subject -issuer >> server-allinone.crt
openssl x509 -in root_ca.{crt|pem} -subject -issuer >> server-allinone.crt

then, in dovecot.conf

---8<---
ssl_cert_file = /path/to/server-allinone.crt
ssl_key_file = /path/to/private.key
---8<---

It works for me but YMMV of course. Androids before 2.2 do not have
startcom as a trusted CA and will complain anyhow.

Best Regards,
Thanos Chatziathanassiou

> Workaround for K9 (latest version) is to go to the Account Settings ->
> Fetching -> Incoming Server, and click Next. It will attempt to
> authenticate and then complain about the certificate. One can ignore
> the warning and accept the certificate.
>
> Cheers all.
>
> Simon

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4271 bytes
Desc: S/MIME cryptographic signature
URL: 
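A quick way to check which chain a server actually hands out (and
therefore whether an intermediate is missing, as in this thread) is
openssl's built-in client; hostname and port here are hypothetical:

$ openssl s_client -connect mail.example.com:993 -showcerts

Each certificate in the presented chain is printed with its subject and
issuer, so a gap between the server cert and the root is easy to spot.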
From stan at hardwarefreak.com Mon Jan 9 14:28:55 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Mon, 09 Jan 2012 06:28:55 -0600
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <88fev069kbv8@mids.svenhartge.de>
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de>
Message-ID: <4F0ADD87.5080103@hardwarefreak.com>

On 1/8/2012 9:39 AM, Sven Hartge wrote:
> Memory size. I am a bit hesitant to deploy a VM with 16GB of RAM. My
> cluster nodes each have 48GB, so no problem on this side though.

Shouldn't be a problem if you're going to spread the load over 2 to 4
cluster nodes. 16/2 = 8GB per VM, 16/4 = 4GB per Dovecot VM. This,
assuming you are able to evenly spread user load.

> And our VMware SAN is iSCSI based, so no way to plug a FC-based
> storage into it.

There are standalone FC-iSCSI bridges but they're marketed to bridge FC
SAN islands over an IP WAN. Director class SAN switches can connect
anything to anything, just buy the cards you need. Both of these are
rather pricey. These wouldn't make sense in your environment. I'm just
pointing out that it can be done.

> So, this reads like my idea in the first place.
>
> Only you place all the mails on the NFS server, whereas my idea was to
> just share the shared folders from a central point and keep the normal
> user dirs local to the different nodes, thus reducing network impact
> for the way more common user access.

To be quite honest, after thinking this through a bit, many traditional
advantages of a single shared mail store start to disappear. Whether
you use NFS or a clusterFS, or 'local' disk (RDMs), all IO goes to the
same array, so the traditional IO load balancing advantage disappears.
The other main advantage, replacing a dead hardware node, simply
mapping the LUNs to the new one and booting it up, also disappears due
to VMware's unique abilities, including vmotion. Efficient use of
storage isn't an issue as you can just as easily slice off a small LUN
to each of 2/4 Dovecot VMs as a larger one to the NFS VM. So the only
disadvantage I see is with the 'local' disk RDM mailstore: 'manual'
connection/mailbox/size balancing, all increasing administrator burden.

> 2.3GHz for most VMware nodes.

How many total cores per VMware node (all sockets)?

> You got the numbers wrong. And I got a word wrong ;)
>
> Should have read "900GB _of_ 1300GB used".

My bad. I misunderstood.

> So not much wiggle room left.

And that one is retiring anyway as you state below. So do you have
plenty of space on your VMware SAN arrays? If not can you add disks or
do you need another array chassis?

> But modifications to our systems have been made which allow me to
> temp-disable a user, convert and move his mailbox and re-enable him.
> This allows me to move users one at a time from the old system to the
> new one, without losing a mail or disrupting service too long or too
> often.

As it should be.

> This is a Transtec Provigo 610, a 24 disk enclosure with 12 disks of
> 150GB (7,200 rpm) each for the main mail storage in RAID6 and another
> 10 disks of 150GB (5,400 rpm) for a backup LUN. I daily rsnapshot my
> /home onto this local backup (20 days of retention), because it is
> easier to restore from than firing up Bacula, which has the long
> retention time of 90 days. But most users need a restore of mails from
> $yesterday or $the_day_before.
And your current iSCSI SAN array(s) backing the VMware farm? Total disks? Is it monolithic, or do you have multiple array chassis from one or multiple vendors? > Well, it was either Parallel-SCSI or FC back then, as far as I can > remember. The price difference between the U320 version and the FC one > was not so big and I wanted to avoid having to route those big SCSI-U320 > through my racks. Can't blame you there. I take it you hadn't built the iSCSI SAN yet at that point? > See above, not 1500GB disks, but 150GB ones. RAID6, because I wanted the > double security. I have been kind of burned by the previous system and I > tend to get nervous while tinking about data loss in my mail storage, > because I know my users _will_ give me hell if that happens. And as it turns out RAID10 wouldn't have provided you enough bytes. > I never used mbox as an admin. The box before the box before this one > uses uw-imapd with mbox and I experienced the system as a user and it > was horriffic. Most users back then never heard of IMAP folders and just > stored their mails inside of INBOX, which of course got huge. If one of > those users with a big mbox then deleted mails, it would literally lock > the box up for everyone, as uw-imapd was copying (for example) a 600MB > mbox file around to delete one mail. Yeah, ouch. IMAP with mbox works pretty well when users are marginally smart about organizing their mail, or a POP then delete setup. I'd bet if that was maildir in that era on that box it would have slowed things way down as well. Especially if the filesystem was XFS, which had horrible, abysmal really, unlink performance until 2.6.35 (2009). > Of course, this was mostly because of the crappy uw-imapd and secondly > by some poor design choices in the server itself (underpowered RAID > controller, to small cache and a RAID5 setup, low RAM in the server). That's a recipe for disaster. > So the first thing we did back then, in 2004, was to change to Courier > and convert from mbox to maildir, which made the mailsystem fly again, > even on the same hardware, only the disk setup changed to RAID10. I wonder how much gain you'd have seen if you stuck with RAID5 instead... > Then we bought new hardware (the one previous to the current one), this > time with more RAM, better RAID controller, smarter disk setup. We > outgrew this one really fast and a disk upgrade was not possible; it > lasted only 2 years. Did you need more space or more spindles? > But Courier is showing its age and things like Sieve are only possible > with great pain, so I want to avoid it. Don't blame ya. Lots of people migrate from Courier for Dovecot for similar reasons. > And this is why I kind of hold this upgrade back until dovecot 2.1 is > released, as it has some optimizations here. Sounds like it's going to be a bit more than an 'upgrade'. ;) > That is a BPO-kernel. Not-yet Squeeze. I admin over 150 different > systems here, plus I am the main VMware and SAN admin. So upgrades take > some time until I grow an extra pair of eyes and arms. ;) /me nods > And since I have been planning to re-implement the mailsystem for some > time now, I held the update to the storage backends back. No use in > disrupting service for the end user if I'm going to replace the whole > thing with a new one in the end. /me nods > Naa, I have been doing this for too long. While I am perfectly capable > of building such a server myself, I am now the kind of guy who wants to > "yell" at a vendor, when their hardware fails. 
At your scale it would simply be impractical, and impossible from a time management standpoint. > Personal build PCs and servers out of single parts have been nothing > than a nightmare for me. I've had nothing but good luck with "DIY" systems. My background is probably a bit different than most though. Hardware has been in my blood since I was a teenager in about '86. I used to design and build relatively high end custom -48vdc white box servers and SCSI arrays for telcos back in the day, along with standard 115v servers for SMBs. Also, note the RHS of my email address. ;) That is a nickname given to me about 13 years ago. I decided to adopt it for my vanity domain. > And: my cowworkers need to be able to service > them as well while I am not available and they are not as a hardware > aficionado as I am. That's the biggest reason right there. DIY is only really feasible if you run your own show, and will likely continue to be running it for a while. Or if staff is similarly skilled. Most IT folks these days aren't hardware oriented people. > So "professional" hardware with a 5 to 7 year support contract is the > way to go for me. Definitely. > I have plenty space for 2U systems and already use DL385 G7s, I am not > fixed on Intel or AMD, I'll gladly use the one which is the most fit for > a given jobs. Just out of curiosity do you have any Power or SPARC systems, or all x86? -- Stan From stan at hardwarefreak.com Mon Jan 9 15:13:40 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Mon, 09 Jan 2012 07:13:40 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> Message-ID: <4F0AE804.5070002@hardwarefreak.com> On 1/8/2012 2:15 PM, Sven Hartge wrote: > Wouldn't such a setup be the "Best of Both Worlds"? Having the main > traffic going to local disks (being RDMs) and also being able to provide > shared folders to every user who needs them without the need to move > those users onto one server? The only problems I can see at this time are: 1. Some users will have much larger mailboxes than others. Each year ~1/4 of your student population rotates, so if you manually place existing mailboxes now based on current size you have no idea who the big users are in the next freshman class, or the next. So you may have to do manual re-balancing of mailboxes, maybe frequently. 2. If you lose a Dovecot VM guest due to image file or other corruption, or some other rare cause, you can't restart that guest, but will have to build a new image from a template. This could cause either minor or significant downtime for ~1/4 of your mail users w/4 nodes. This is likely rare enough it's not worth consideration. 3. You will consume more SAN volumes and LUNs. Most arrays have a fixed number of each. May or may not be an issue. 
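One mitigation for #1: instead of placing mailboxes by current size,
derive the node from a stable hash of the username. Just a sketch,
untested, assuming 4 nodes and bash:

# pick one of 4 mailstore nodes from the md5 of the username
node=$(( 0x$(printf '%s' "$user" | md5sum | cut -c1-8) % 4 ))

New freshman classes would then spread themselves evenly without any
manual re-balancing.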
-- Stan From stan at hardwarefreak.com Mon Jan 9 15:38:20 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Mon, 09 Jan 2012 07:38:20 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> Message-ID: <4F0AEDCC.10109@hardwarefreak.com> On 1/8/2012 3:07 PM, Sven Hartge wrote: > Ah, I forgot: I _already_ have the mechanisms in place to statically > redirect/route accesses for users to different backends, since some of > the users are already redirected to a different mailsystem at another > location of my university. I assume you mean IMAP/POP connections, not SMTP. > So using this mechanism to also redirect/route users internal to _my_ > location is no big deal. > > This is what got me into the idea of several independant backend > storages without the need to share the _whole_ storage, but just the > shared folders for some users. > > (Are my words making any sense? I got the feeling I'm writing German with > English words and nobody is really understanding anything ...) You're making perfect sense, and frankly, if not for the .de TLD in your email address, I'd have thought you were an American. Your written English is probably better than mine, and it's my only language. To be fair to the Brits, I speak/write American English. ;) I'm guessing no one else has interest in this thread, or maybe simply lost interest as the replies have been lengthy, and not wholly Dovecot related. I accept some blame for that. -- Stan From sven at svenhartge.de Mon Jan 9 15:48:22 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 14:48:22 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0ADD87.5080103@hardwarefreak.com> Message-ID: <08fhdkhkv5v8@mids.svenhartge.de> Stan Hoeppner wrote: > On 1/8/2012 9:39 AM, Sven Hartge wrote: >> Memory size. I am a bit hesistant to deploy a VM with 16GB of RAM. My >> cluster nodes each have 48GB, so no problem on this side though. > Shouldn't be a problem if you're going to spread the load over 2 to 4 > cluster nodes. 16/2 = 8GB per VM, 16/4 = 4GB per Dovecot VM. This, > assuming you are able to evenly spread user load. I think I will be able to do that. If I devide my users by using a hash like MD5 or SHA1 over their username, this should give me an even distribution. >> So, this reads like my idea in the first place. >> >> Only you place all the mails on the NFS server, whereas my idea was to >> just share the shared folders from a central point and keep the normal >> user dirs local to the different nodes, thus reducing network impact for >> the way more common user access. > To be quite honest, after thinking this through a bit, many traditional > advantages of a single shared mail store start to disappear. Whether > you use NFS or a clusterFS, or 'local' disk (RDMs), all IO goes to the > same array, so the traditional IO load balancing advantage disappears. > The other main advantage, replacing a dead hardware node, simply mapping > the LUNs to the new one and booting it up, also disappears due to > VMware's unique abilities, including vmotion. 
> Efficient use of storage isn't an issue as you can just as easily
> slice off a small LUN to each of 2/4 Dovecot VMs as a larger one to
> the NFS VM.

Yes. Plus I can much more easily increase a LUN's size, if the need
arises.

> So the only disadvantages I see are with the 'local' disk RDM
> mailstore location: 'manual' connection/mailbox/size balancing, all
> increasing administrator burden.

Well, I don't see size balancing as a problem, since I can increase the
size of the disk for a node very easily. Load should be fairly even if
I distribute the 10,000 users across the nodes. Even if there is a
slight imbalance, the systems should have enough power to smooth that
out.

I could measure the load every user creates and use that as a
distribution key, but I believe this to be a wee bit over-engineered
for my scenario.

Initial placement of a new user will be automatic, during the
activation of the account, so no administrative burden there.

It seems my initial idea was not so bad after all ;) Now I "just" need
to build a little test setup, put some dummy users on it and see if
anything bad happens while accessing the shared folders, and how the
system reacts should the shared folder server be down.

>> 2.3GHz for most VMware nodes.

> How many total cores per VMware node (all sockets)?

8

>> You got the numbers wrong. And I got a word wrong ;)
>>
>> Should have read "900GB _of_ 1300GB used".

> My bad. I misunderstood.

Here are the memory statistics at 14:30:

             total       used       free     shared    buffers     cached
Mem:         12046      11199        847          0         88       7926
-/+ buffers/cache:       3185       8861
Swap:         5718         10       5707

>> So not much wiggle room left.

> And that one is retiring anyway as you state below. So do you have
> plenty of space on your VMware SAN arrays? If not can you add disks
> or do you need another array chassis?

The SAN has plenty of space: over 70TiB at this time, with another
70TiB having just arrived and waiting to be connected.

>> This is a Transtec Provigo 610. This is a 24 disk enclosure, 12 disks
>> with 150GB (7.200k) each for the main mail storage in RAID6 and
>> another 10 disks with 150GB (5.400k) for a backup LUN. I daily
>> rsnapshot my /home onto this local backup (20 days of retention),
>> because it is easier to restore from than firing up Bacula, which has
>> the long retention time of 90 days. But must users need a restore of
>> mails from $yesterday or $the_day_before.

> And your current iSCSI SAN array(s) backing the VMware farm? Total
> disks? Is it monolithic, or do you have multiple array chassis from
> one or multiple vendors?

The iSCSI storage nodes (HP P4500) use 600GB SAS6 at 15k rpm with 12
disks per node, configured in 2 RAID5 sets with 6 disks each. But this
is internal to each storage node, which is kind of a black box and has
to be treated as such.

The HP P4500 is a bit unique, since it does not consist of a head node
with storage arrays connected to it, but of individual storage nodes
forming a self-balancing iSCSI cluster. (The nodes consist of DL320s
G2.) So far, I have had no performance or other problems with this
setup, and it scales quite nicely, as you buy as you grow.

And again, price was also a factor: deploying an FC SAN would have cost
us more than three times what the iSCSI deployment did, because the
latter is "just" Ethernet, while the former would have needed a lot of
totally new components.

>> Well, it was either Parallel-SCSI or FC back then, as far as I can
>> remember.
>> The price difference between the U320 version and the FC one was not
>> so big and I wanted to avoid having to route those big SCSI-U320
>> through my racks.

> Can't blame you there. I take it you hadn't built the iSCSI SAN yet at
> that point?

No, at that time (2005/2006) nobody thought of a SAN. That is a fairly
"new" idea here, first implemented for the VMware cluster in 2008.

>> Then we bought new hardware (the one previous to the current one),
>> this time with more RAM, better RAID controller, smarter disk setup.
>> We outgrew this one really fast and a disk upgrade was not possible;
>> it lasted only 2 years.

> Did you need more space or more spindles?

More space. The IMAP usage became more prominent, which caused a steep
rise in the space needed on the mail storage server. But 74GiB SCA
drives were expensive and 130GiB SCA drives were not available at that
time.

>> And this is why I kind of hold this upgrade back until dovecot 2.1 is
>> released, as it has some optimizations here.

> Sounds like it's going to be a bit more than an 'upgrade'. ;)

Well, yes. It is more a re-implementation than an upgrade.

>> I have plenty space for 2U systems and already use DL385 G7s, I am
>> not fixed on Intel or AMD, I'll gladly use the one which is the most
>> fit for a given jobs.

> Just out of curiosity do you have any Power or SPARC systems, or all
> x86?

Central IT here these days only uses x86-based systems. There were some
Sun SPARC systems, but both have been decommissioned. New SPARC
hardware is just too expensive for our scale. And if you want to use
virtualization, you can either use only SPARC systems and partition
them, or use x86-based systems. And then there is the need to
virtualize Windows, so x86 is the only option.

Most bigger universities in Germany make nearly exclusive use of SPARC
systems, but they have had a central IT with big iron (IBM, HP, etc.)
since back in the 1960s, so naturally they continue on that path.

Grüße,
Sven.

-- 
Sigmentation fault. Core dumped.

From philip at turmel.org Mon Jan 9 15:50:49 2012
From: philip at turmel.org (Phil Turmel)
Date: Mon, 09 Jan 2012 08:50:49 -0500
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <4F0AEDCC.10109@hardwarefreak.com>
References: <68fd4hi9kbv8@mids.svenhartge.de>
 <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de>
 <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de>
 <4F0AEDCC.10109@hardwarefreak.com>
Message-ID: <4F0AF0B9.7030406@turmel.org>

On 01/09/2012 08:38 AM, Stan Hoeppner wrote:
> On 1/8/2012 3:07 PM, Sven Hartge wrote:
[...]
>> (Are my words making any sense? I got the feeling I'm writing German
>> with English words and nobody is really understanding anything ...)
>
> You're making perfect sense, and frankly, if not for the .de TLD in
> your email address, I'd have thought you were an American. Your
> written English is probably better than mine, and it's my only
> language. To be fair to the Brits, I speak/write American English. ;)

Concur. My American ear is also perfectly happy.

> I'm guessing no one else has interest in this thread, or maybe simply
> lost interest as the replies have been lengthy, and not wholly Dovecot
> related. I accept some blame for that.

I've been following this thread with great interest, but no advice to
offer. The content is entirely appropriate, and appreciated. Don't be
embarrassed by your enthusiasm, Stan.

Sven, a follow-up report when you have it all working as desired would
also be appreciated (and appropriate).
Thanks,

Phil

From sven at svenhartge.de Mon Jan 9 15:52:27 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 14:52:27 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de>
 <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de>
 <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de>
 <4F0AEDCC.10109@hardwarefreak.com>
Message-ID: <18fhg73kv5v8@mids.svenhartge.de>

Stan Hoeppner wrote:
> On 1/8/2012 3:07 PM, Sven Hartge wrote:

>> Ah, I forgot: I _already_ have the mechanisms in place to statically
>> redirect/route accesses for users to different backends, since some
>> of the users are already redirected to a different mailsystem at
>> another location of my university.

> I assume you mean IMAP/POP connections, not SMTP.

Yes. perdition uses its popmap feature to redirect users of the other
location to the IMAP/POP servers there. So we only need one central
mailserver for the users to configure, while we are able to physically
store their mails at different datacenters.

> I'm guessing no one else has interest in this thread, or maybe simply
> lost interest as the replies have been lengthy, and not wholly Dovecot
> related. I accept some blame for that.

I will open a new thread with more concrete problems/questions after I
set up my test setup. This will be more technical and less
philosophical, I hope :)

Grüße,
Sven

-- 
Sigmentation fault. Core dumped.

From sven at svenhartge.de Mon Jan 9 16:08:12 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 15:08:12 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de>
 <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de>
 <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de>
 <4F0AE804.5070002@hardwarefreak.com>
Message-ID: <28fhgdvkv5v8@mids.svenhartge.de>

Stan Hoeppner wrote:
> On 1/8/2012 2:15 PM, Sven Hartge wrote:

>> Wouldn't such a setup be the "Best of Both Worlds"? Having the main
>> traffic going to local disks (being RDMs) and also being able to
>> provide shared folders to every user who needs them without the need
>> to move those users onto one server?

> The only problems I can see at this time are:

> 1. Some users will have much larger mailboxes than others.
> Each year ~1/4 of your student population rotates, so if you
> manually place existing mailboxes now based on current size
> you have no idea who the big users are in the next freshman
> class, or the next. So you may have to do manual re-balancing
> of mailboxes, maybe frequently.

The quota for students is 1GiB here. If I provide each of my 4 nodes
with 500GiB of storage space, this gives me 2TiB now, which should be
sufficient. If a node fills, I increase its storage space. Only if it
fills too fast might I have to rebalance users.

And I never wanted to place the users based on their current size. I
knew this was not going to work, for the reasons you mentioned.

I just want to hash their username and use this as a function to
distribute the users, keeping it simple and stupid.

> 2. If you lose a Dovecot VM guest due to image file or other
> corruption, or some other rare cause, you can't restart that guest,
> but will have to build a new image from a template. This could
> cause either minor or significant downtime for ~1/4 of your mail
> users w/4 nodes. This is likely rare enough it's not worth
> consideration.

Yes, I know.
But right now, if I lose my one and only mail storage server, all
users' mailboxes will be offline until I am either a) able to repair
the server, b) move the disks to my identical backup system (or the
backup system to the location of the failed one) or c) start the
backup system and lose all mails not rsynced since the last rsync run.

It is not easy designing a mail system without a SPoF which still
performs under load.

For example, at one time I had a DRBD (active/passive) setup between
the two storage systems. This would allow me to start my standby
system without losing (nearly) any mail. But this was awfully slow and
sluggish.

> 3. You will consume more SAN volumes and LUNs. Most arrays have a
> fixed number of each. May or may not be an issue.

Not really an issue here. The SAN is exclusive to the VMware cluster,
so most LUNs are quite big (1TiB to 2TiB) but there are not many of
them.

Grüße,
Sven.

-- 
Sigmentation fault. Core dumped.

From tom at elysium.ltd.uk Mon Jan 9 16:41:55 2012
From: tom at elysium.ltd.uk (Tom Clark)
Date: Mon, 9 Jan 2012 14:41:55 -0000
Subject: [Dovecot] Resetting a UID
Message-ID: <025c01cccedc$d5ccd680$81668380$@elysium.ltd.uk>

Hi,

We've got a client with a Blackberry that has deleted his emails off his
Blackberry device. The BES won't re-download the messages as it believes
it has already downloaded them (apparently it matches on UID).

Is there any way of resetting a folder (and the messages in the folder)
UID? I know in Courier you used to be able to touch the directory.

Thanks,

Tom Clark

From tss at iki.fi Mon Jan 9 16:43:00 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 09 Jan 2012 16:43:00 +0200
Subject: [Dovecot] Postfix user map
Message-ID: <4F0AFCF4.1050506@iki.fi>

http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements "postmap"
type sockets, which follow Postfix's tcp_table(5) protocol. So you can
ask:

get user at domain

and Dovecot answers one of:

- 200 1
- 500 User not found
- 400 Internal failure

So you can use this with Postfix:

virtual_mailbox_maps = tcp:127.0.0.1:1234

With Dovecot you can enable it with:

service auth {
  inet_listener postmap {
    listen = 127.0.0.1
    port = 1234
  }
}

Anyone have ideas if this could be improved, or used for some other
purposes?

From tss at iki.fi Mon Jan 9 16:51:07 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 16:51:07 +0200
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <68fd4hi9kbv8@mids.svenhartge.de>
References: <68fd4hi9kbv8@mids.svenhartge.de>
Message-ID:

Too much text in the rest of this thread so I haven't read it, but:

On 8.1.2012, at 0.20, Sven Hartge wrote:

> Right now, I am pondering with using an additional server with just the
> shared folders on it and using NFS (or a cluster FS) to mount the shared
> folder filesystem to each backend storage server, so each user has
> potential access to a shared folders data.

With NFS you'll run into problems with caching
(http://wiki2.dovecot.org/NFS). Some cluster fs might work better.

The "proper" solution for this that I've been thinking about would be
to use v2.1's imapc backend with master users. So that when user A
wants to access user B's shared folder, Dovecot connects to B's IMAP
server using master user login, and accesses the mailbox via IMAP.
Probably wouldn't be a big job to implement, mainly I'd need to figure
out how this should be configured..
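Just to sketch what I'm imagining - the shared-namespace wiring below
doesn't actually exist yet, so treat this as purely hypothetical
(host and user names made up):

namespace {
  type = shared
  prefix = shared/%%u/
  location = imapc:~/imapc-shared
}
imapc_host = backend2.example.com
imapc_master_user = masteruser
imapc_password = master-secret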
From tss at iki.fi Mon Jan 9 16:57:02 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 16:57:02 +0200
Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well?
In-Reply-To: <20120109074057.GC22506@charite.de>
References: <20120109074057.GC22506@charite.de>
Message-ID: <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi>

On 9.1.2012, at 9.40, Ralf Hildebrandt wrote:

> Today I encoundered these errors:
>
> Jan 9 08:30:06 mail dovecot: lmtp(31174, backup at backup.invalid): Error: Log synchronization error at seq=858,offset=44672 for
> /home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID 282388, but next_uid = 282389

Any idea why this happened?

> After that, the SAVEDON date for all mails was reset to today:

Yeah. The "save date" is stored only in the index, and an index rebuild
drops all those fields. I guess this could/should be fixed in index
rebuild.

> Is there a way of restoring the SAVEDON info?

Not currently without extra code (and even then you could only restore
it to e.g. its received date).

From sven at svenhartge.de Mon Jan 9 16:58:50 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 15:58:50 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de>
Message-ID: <38fhk6pkv5v8@mids.svenhartge.de>

Timo Sirainen wrote:
> On 8.1.2012, at 0.20, Sven Hartge wrote:

>> Right now, I am pondering with using an additional server with just
>> the shared folders on it and using NFS (or a cluster FS) to mount the
>> shared folder filesystem to each backend storage server, so each user
>> has potential access to a shared folders data.

> With NFS you'll run into problems with caching
> (http://wiki2.dovecot.org/NFS). Some cluster fs might work better.

> The "proper" solution for this that I've been thinking about would be
> to use v2.1's imapc backend with master users. So that when user A
> wants to access user B's shared folder, Dovecot connects to B's IMAP
> server using master user login, and accesses the mailbox via IMAP.
> Probably wouldn't be a big job to implement, mainly I'd need to figure
> out how this should be configured..

Luckily, in my case, User A does not access anything from User B, but
instead both User A and User B access the same public folder, which is
different from any folder of User A and User B.

Grüße,
Sven.

-- 
Sigmentation fault. Core dumped.

From tss at iki.fi Mon Jan 9 17:00:21 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 17:00:21 +0200
Subject: [Dovecot] Solr plugin
In-Reply-To:
References: <4F04EDC8.6060809@amfes.com>
 <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com>
 <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com>
 <4F0A2980.7050003@amfes.com>
Message-ID:

On 9.1.2012, at 1.48, Daniel L. Miller wrote:

> On 1/8/2012 3:40 PM, Daniel L. Miller wrote:
>> On 1/6/2012 12:32 PM, Daniel L. Miller wrote:
>>> On 1/6/2012 9:36 AM, Timo Sirainen wrote:
>>>> On 6.1.2012, at 19.30, Daniel L. Miller wrote:
>>>>
>>
>> Jan 8 15:40:09 bubba dovecot: imap(user1 at domain.com): Error: fts_solr: Lookup failed: 400 undefined field CC
>> Jan 8 15:40:09 bubba dovecot: imap: Error:
>>
> Looking at the Solr output - looks like the CC parameter is being
> capitalized while all the other fieldnames are lowercase.

Did you look at the input? Looking at the code, it should be lowercased.
Maybe Solr just uppercases it for some reason. Are you using a Solr
schema that has a "cc" field?
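The lookups do query a "cc" field, so your schema.xml needs a matching
definition. As a sketch - assuming your schema defines a "text"
fieldType like the stock Solr example schema does - it would be
something like:

<field name="cc" type="text" indexed="true" stored="false" />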
From Ralf.Hildebrandt at charite.de Mon Jan 9 17:02:49 2012
From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt)
Date: Mon, 9 Jan 2012 16:02:49 +0100
Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well?
In-Reply-To: <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi>
References: <20120109074057.GC22506@charite.de>
 <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi>
Message-ID: <20120109150249.GH22506@charite.de>

* Timo Sirainen :

> On 9.1.2012, at 9.40, Ralf Hildebrandt wrote:
>
>> Today I encoundered these errors:
>>
>> Jan 9 08:30:06 mail dovecot: lmtp(31174, backup at backup.invalid): Error: Log synchronization error at seq=858,offset=44672 for
>> /home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID 282388, but next_uid = 282389
>
> Any idea why this happened?

I was running those commands:

# new style (dovecot)
vorgestern=`date -d "-2 day" +"%Y-%m-%d"`
doveadm expunge -u backup at backup.invalid mailbox INBOX SAVEDBEFORE $vorgestern
doveadm purge -u backup at backup.invalid

>> After that, the SAVEDON date for all mails was reset to today:
>
> Yeah. The "save date" is stored only in index. And index rebuild drops
> all those fields. I guess this could/should be fixed in index rebuild.

It's ok. Right now it only affects my expiry method.

-- 
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebrandt at charite.de | http://www.charite.de

From CMarcus at Media-Brokers.com Mon Jan 9 17:14:37 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Mon, 09 Jan 2012 10:14:37 -0500
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To:
References: <68fd4hi9kbv8@mids.svenhartge.de>
Message-ID: <4F0B045D.1010101@Media-Brokers.com>

On 2012-01-09 9:51 AM, Timo Sirainen wrote:
> The "proper" solution for this that I've been thinking about would be
> to use v2.1's imapc backend with master users. So that when user A
> wants to access user B's shared folder, Dovecot connects to B's IMAP
> server using master user login, and accesses the mailbox via IMAP.
> Probably wouldn't be a big job to implement, mainly I'd need to
> figure out how this should be configured.

Sounds interesting... would this be the new officially supported method
for sharing mailboxes in all cases? Or is this just for shared
mailboxes on NFS shares?

It sounds like this might be a proper (fully supported without kludges)
way to get what I had asked about before, with respect to expanding on
the concept of Master users for sharing an entire account with one or
more other users...

-- 
Best regards,

Charles

From noeldude at gmail.com Mon Jan 9 17:32:01 2012
From: noeldude at gmail.com (Noel)
Date: Mon, 09 Jan 2012 09:32:01 -0600
Subject: [Dovecot] Postfix user map
In-Reply-To: <4F0AFCF4.1050506@iki.fi>
References: <4F0AFCF4.1050506@iki.fi>
Message-ID: <4F0B0871.6040500@gmail.com>

On 1/9/2012 8:43 AM, Timo Sirainen wrote:
> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements
> "postmap" type sockets, which follow Postfix's tcp_table(5)
> protocol.
So you can ask: > > get user at domain > > and Dovecot answers one of: > > - 200 1 > - 500 User not found > - 400 Internal failure > > So you can use this with Postfix: > > virtual_mailbox_maps = tcp:127.0.0.1:1234 > > With Dovecot you can enable it with: > > service auth { > inet_listener postmap { > listen = 127.0.0.1 > port = 1234 > } > } > > Anyone have ideas if this could be improved, or used for some > other purposes? Cool. Does this just check for valid user existence, or can it also check for over-quota (and respond 500 overquota I suppose)? -- Noel Jones From robert at schetterer.org Mon Jan 9 17:37:32 2012 From: robert at schetterer.org (Robert Schetterer) Date: Mon, 09 Jan 2012 16:37:32 +0100 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0B0871.6040500@gmail.com> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> Message-ID: <4F0B09BC.3010300@schetterer.org> Am 09.01.2012 16:32, schrieb Noel: > On 1/9/2012 8:43 AM, Timo Sirainen wrote: >> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements >> "postmap" type sockets, which follow Postfix's tcp_table(5) >> protocol. So you can ask: >> >> get user at domain >> >> and Dovecot answers one of: >> >> - 200 1 >> - 500 User not found >> - 400 Internal failure >> >> So you can use this with Postfix: >> >> virtual_mailbox_maps = tcp:127.0.0.1:1234 >> >> With Dovecot you can enable it with: >> >> service auth { >> inet_listener postmap { >> listen = 127.0.0.1 >> port = 1234 >> } >> } >> >> Anyone have ideas if this could be improved, or used for some >> other purposes? > > > Cool. > Does this just check for valid user existence, or can it also check > for over-quota (and respond 500 overquota I suppose)? if you use dove lmtp with postfix it allready works "like that way" for over quota > > > -- Noel Jones -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From noeldude at gmail.com Mon Jan 9 17:46:44 2012 From: noeldude at gmail.com (Noel) Date: Mon, 09 Jan 2012 09:46:44 -0600 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0B09BC.3010300@schetterer.org> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <4F0B09BC.3010300@schetterer.org> Message-ID: <4F0B0BE4.8010907@gmail.com> On 1/9/2012 9:37 AM, Robert Schetterer wrote: > Am 09.01.2012 16:32, schrieb Noel: >> On 1/9/2012 8:43 AM, Timo Sirainen wrote: >>> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements >>> "postmap" type sockets, which follow Postfix's tcp_table(5) >>> protocol. So you can ask: >>> >>> get user at domain >>> >>> and Dovecot answers one of: >>> >>> - 200 1 >>> - 500 User not found >>> - 400 Internal failure >>> >>> So you can use this with Postfix: >>> >>> virtual_mailbox_maps = tcp:127.0.0.1:1234 >>> >>> With Dovecot you can enable it with: >>> >>> service auth { >>> inet_listener postmap { >>> listen = 127.0.0.1 >>> port = 1234 >>> } >>> } >>> >>> Anyone have ideas if this could be improved, or used for some >>> other purposes? >> >> Cool. >> Does this just check for valid user existence, or can it also check >> for over-quota (and respond 500 overquota I suppose)? > if you use dove lmtp with postfix it allready works "like that way" > for over quota That can reject over-quota users during the postfix SMTP conversation? 
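Something along these lines is what I have in mind on the Postfix side
(untested sketch; the port is made up):

smtpd_recipient_restrictions =
    permit_mynetworks
    reject_unauth_destination
    check_recipient_access tcp:127.0.0.1:1235
    ...

where the tcp lookup would return an access action like "REJECT user
over quota" or "DUNNO".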
-- Noel Jones From robert at schetterer.org Mon Jan 9 17:50:49 2012 From: robert at schetterer.org (Robert Schetterer) Date: Mon, 09 Jan 2012 16:50:49 +0100 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0B0BE4.8010907@gmail.com> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <4F0B09BC.3010300@schetterer.org> <4F0B0BE4.8010907@gmail.com> Message-ID: <4F0B0CD9.3090402@schetterer.org> Am 09.01.2012 16:46, schrieb Noel: > On 1/9/2012 9:37 AM, Robert Schetterer wrote: >> Am 09.01.2012 16:32, schrieb Noel: >>> On 1/9/2012 8:43 AM, Timo Sirainen wrote: >>>> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements >>>> "postmap" type sockets, which follow Postfix's tcp_table(5) >>>> protocol. So you can ask: >>>> >>>> get user at domain >>>> >>>> and Dovecot answers one of: >>>> >>>> - 200 1 >>>> - 500 User not found >>>> - 400 Internal failure >>>> >>>> So you can use this with Postfix: >>>> >>>> virtual_mailbox_maps = tcp:127.0.0.1:1234 >>>> >>>> With Dovecot you can enable it with: >>>> >>>> service auth { >>>> inet_listener postmap { >>>> listen = 127.0.0.1 >>>> port = 1234 >>>> } >>>> } >>>> >>>> Anyone have ideas if this could be improved, or used for some >>>> other purposes? >>> >>> Cool. >>> Does this just check for valid user existence, or can it also check >>> for over-quota (and respond 500 overquota I suppose)? >> if you use dove lmtp with postfix it allready works "like that way" >> for over quota > > > That can reject over-quota users during the postfix SMTP conversation? jep ,it does, i was glad having/testing this feature in dove 2 release, avoiding overquota backscatter etc > > > > -- Noel Jones -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From stan at hardwarefreak.com Mon Jan 9 17:56:36 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Mon, 09 Jan 2012 09:56:36 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <28fhgdvkv5v8@mids.svenhartge.de> References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AE804.5070002@hardwarefreak.com> <28fhgdvkv5v8@mids.svenhartge.de> Message-ID: <4F0B0E34.8080901@hardwarefreak.com> On 1/9/2012 8:08 AM, Sven Hartge wrote: > Stan Hoeppner wrote: > The quota for students is 1GiB here. If I provide each of my 4 nodes > with 500GiB of storage space, this gives me 2TiB now, which should be > sufficient. If a nodes fills, I increase its storage space. Only if it > fills too fast, I may have to rebalance users. That should work. > And I never wanted to place the users based on their current size. I > knew this was not going to work because of the reasons you mentioned. > > I just want to hash their username and use this as a function to > distribute the users, keeping it simple and stupid. My apologies Sven. I just re-read your first messages and you did mention this method. > Yes, I know. But right now, if I lose my one and only mail storage > servers, all users mailboxes will be offline, until I am either a) able > to repair the server, b) move the disks to my identical backup system (or > the backup system to the location of the failed one) or c) start the > backup system and lose all mails not rsynced since the last rsync-run. True. 3/4 of users remaining online is much better than none. :) > It is not easy designing a mail system without a SPoF which still > performs under load. And many other systems for that matter. 
> For example, once a time I had a DRDB (active/passive( setup between the > two storage systems. This would allow me to start my standby system > without losing (nearly) any mail. But this was awful slow and sluggish. Eric Rostetter at University of Texas at Austin has reported good performance with his twin Dovecot DRBD cluster. Though in his case he's doing active/active DRBD with GFS2 sitting on top, so there is no failover needed. DRBD is obviously not an option for your current needs. >> 3. You will consume more SAN volumes and LUNs. Most arrays have a >> fixed number of each. May or may not be an issue. > > Not really an issue here. The SAN is exclusive for the VMware cluster, > so most LUNs are quite big (1TiB to 2TiB) but there are not many of > them. I figured this wouldn't be a problem. I'm just trying to be thorough, mentioning anything I can think of that might be an issue. The more I think about your planned architecture the more it reminds me of a "shared nothing" database cluster--even a relatively small one can outrun a well tuned mainframe, especially doing decision support/data mining workloads (TPC-H). As long as you're prepared for the extra administration, which you obviously are, this setup will yield better performance than the NFS setup I recommended. Performance may not be quite as good as 4 physical hosts with local storage, but you haven't mentioned the details of your SAN storage nor the current load on it, so obviously I can't say with any certainty. If the controller currently has plenty of spare IOPS then the performance difference would be minimal. And using the SAN allows automatic restart of a VM if a physical node dies. As with Phil, I'm anxious to see how well it works in production. When you send an update please CC me directly as sometimes I don't read all the list mail. I hope my participation was helpful to you Sven, even if only to a small degree. Best of luck with the implementation. -- Stan From sven at svenhartge.de Mon Jan 9 18:16:14 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 17:16:14 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AE804.5070002@hardwarefreak.com> <28fhgdvkv5v8@mids.svenhartge.de> <4F0B0E34.8080901@hardwarefreak.com> Message-ID: <48fhoafkv5v8@mids.svenhartge.de> Stan Hoeppner wrote: > The more I think about your planned architecture the more it reminds > me of a "shared nothing" database cluster--even a relatively small one > can outrun a well tuned mainframe, especially doing decision > support/data mining workloads (TPC-H). > As long as you're prepared for the extra administration, which you > obviously are, this setup will yield better performance than the NFS > setup I recommended. Performance may not be quite as good as 4 > physical hosts with local storage, but you haven't mentioned the > details of your SAN storage nor the current load on it, so obviously I > can't say with any certainty. If the controller currently has plenty > of spare IOPS then the performance difference would be minimal. This is the beauty of the HP P4500: every node is a controller, load is automagically balanced between all nodes of a storage cluster. The more nodes (up to ten) you add, the more performance you get. 
So far, I have not been able to push our current SAN to its limits, even with totally artificial benchmarks, so I am quite confident in its performance for the given task. But if everything fails and the performance is not good, I can still go ahead and buy dedicated hardware for the mailsystem. The only thing left is the NFS problem with caching Timo mentioned, but since the accesses to a central public shared folder will be only a minor portion of a clients access, I am hoping the impact will be minimal. Only testing will tell. Gr??e, Sven. -- Sigmentation fault. Core dumped. From robert at schetterer.org Mon Jan 9 18:19:20 2012 From: robert at schetterer.org (Robert Schetterer) Date: Mon, 09 Jan 2012 17:19:20 +0100 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0B0CD9.3090402@schetterer.org> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <4F0B09BC.3010300@schetterer.org> <4F0B0BE4.8010907@gmail.com> <4F0B0CD9.3090402@schetterer.org> Message-ID: <4F0B1388.209@schetterer.org> Am 09.01.2012 16:50, schrieb Robert Schetterer: > Am 09.01.2012 16:46, schrieb Noel: >> On 1/9/2012 9:37 AM, Robert Schetterer wrote: >>> Am 09.01.2012 16:32, schrieb Noel: >>>> On 1/9/2012 8:43 AM, Timo Sirainen wrote: >>>>> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements >>>>> "postmap" type sockets, which follow Postfix's tcp_table(5) >>>>> protocol. So you can ask: >>>>> >>>>> get user at domain >>>>> >>>>> and Dovecot answers one of: >>>>> >>>>> - 200 1 >>>>> - 500 User not found >>>>> - 400 Internal failure >>>>> >>>>> So you can use this with Postfix: >>>>> >>>>> virtual_mailbox_maps = tcp:127.0.0.1:1234 >>>>> >>>>> With Dovecot you can enable it with: >>>>> >>>>> service auth { >>>>> inet_listener postmap { >>>>> listen = 127.0.0.1 >>>>> port = 1234 >>>>> } >>>>> } >>>>> >>>>> Anyone have ideas if this could be improved, or used for some >>>>> other purposes? >>>> >>>> Cool. >>>> Does this just check for valid user existence, or can it also check >>>> for over-quota (and respond 500 overquota I suppose)? >>> if you use dove lmtp with postfix it allready works "like that way" >>> for over quota >> >> >> That can reject over-quota users during the postfix SMTP conversation? 
> jep ,it does, i was glad having/testing this feature in dove 2
> release, avoiding overquota backscatter etc

I am afraid I wasn't totally correct here: in fact I haven't seen
overquota backscatter on my servers since using Dovecot LMTP with
Postfix, but I guess there may be cases left in which it could still
happen; you should ask Timo for the exact technical answer. The Postfix
answer always was to write some policy daemon for it (which I found
extremely complicated when I tried, and gave up). And I guess it is
always a problem to compare the size of a mail with the space left in
the mail store, e.g. with many recipients of one mail etc., whatever
technical solution is used. So I should have said: Dovecot LMTP is the
best/easiest solution for overquota that I know of at present, and my
problems with it are solved for now.

> > -- Noel Jones

-- 
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria

From tss at iki.fi Mon Jan 9 20:09:46 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 20:09:46 +0200
Subject: [Dovecot] Postfix user map
In-Reply-To: <4F0B0871.6040500@gmail.com>
References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com>
Message-ID: <8A109A75-164C-41B4-A13B-19C3F1D01E12@iki.fi>

On 9.1.2012, at 17.32, Noel wrote:

> On 1/9/2012 8:43 AM, Timo Sirainen wrote:
>> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements
>> "postmap" type sockets, which follow Postfix's tcp_table(5)
>> protocol. So you can ask:
>>
>> get user at domain
>>
>> and Dovecot answers one of:
>>
>> - 200 1
>> - 500 User not found
>> - 400 Internal failure
>>
>> Anyone have ideas if this could be improved, or used for some
>> other purposes?
>
> Cool.
> Does this just check for valid user existence, or can it also check
> for over-quota (and respond 500 overquota I suppose)?

Hmm. That looked potentially useful, but Postfix doesn't seem to
support it, at least not that way, since the message to the SMTP client
is the same regardless of what I add after the 500 reply. Also, that
would have required me to move the code somewhere else from the auth
process, since auth doesn't know the quota usage. And internally
Dovecot would still have had to do the auth lookup separately, so
there's really no benefit in doing this vs. having Postfix do two
lookups.

From tss at iki.fi Mon Jan 9 20:12:38 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 20:12:38 +0200
Subject: [Dovecot] Postfix user map
In-Reply-To: <4F0B1388.209@schetterer.org>
References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com>
 <4F0B09BC.3010300@schetterer.org> <4F0B0BE4.8010907@gmail.com>
 <4F0B0CD9.3090402@schetterer.org> <4F0B1388.209@schetterer.org>
Message-ID: <70A95B54-98CE-4B4D-8B76-CDA279353202@iki.fi>

On 9.1.2012, at 18.19, Robert Schetterer wrote:

> I am afraid I wasn't totally correct here: in fact I haven't seen
> overquota backscatter on my servers since using Dovecot LMTP with
> Postfix

LMTP shouldn't matter here. In most configs mails are put to the queue
first, and only from there they are sent to LMTP, and if LMTP rejects a
mail then backscatter is sent. Maybe the difference you're seeing is
that it's now Postfix sending the bounce (or perhaps skipping it?)
instead of dovecot-lda (unless you gave the -e parameter).
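For reference, a typical dovecot-lda pipe service in Postfix's
master.cf looks something like this (the vmail user and the binary
path are only examples, adjust to your installation):

dovecot   unix  -       n       n       -       -       pipe
  flags=DRhu user=vmail:vmail argv=/usr/libexec/dovecot/dovecot-lda
  -e -f ${sender} -d ${recipient}

With -e the rejection is reported back to Postfix; without it,
dovecot-lda generates the bounce itself.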
From tss at iki.fi Mon Jan 9 20:15:00 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:15:00 +0200 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <4F0B045D.1010101@Media-Brokers.com> References: <68fd4hi9kbv8@mids.svenhartge.de> <4F0B045D.1010101@Media-Brokers.com> Message-ID: On 9.1.2012, at 17.14, Charles Marcus wrote: > On 2012-01-09 9:51 AM, Timo Sirainen wrote: >> The "proper" solution for this that I've been thinking about would be >> to use v2.1's imapc backend with master users. So that when user A >> wants to access user B's shared folder, Dovecot connects to B's IMAP >> server using master user login, and accesses the mailbox via IMAP. >> Probably wouldn't be a big job to implement, mainly I'd need to >> figure out how this should be configured. > > Sounds interesting... would this be the new officially supported method for sharing mailboxes in all cases? Or is this just for shared mailboxes on NFS shares? Well, it would be one officially supported way to do it. It would also help when using multiple UIDs. From sven at svenhartge.de Mon Jan 9 20:25:55 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 19:25:55 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> Message-ID: <68fi0aakv5v8@mids.svenhartge.de> Timo Sirainen wrote: > On 8.1.2012, at 0.20, Sven Hartge wrote: >> Right now, I am pondering with using an additional server with just >> the shared folders on it and using NFS (or a cluster FS) to mount the >> shared folder filesystem to each backend storage server, so each user >> has potential access to a shared folders data. > With NFS you'll run into problems with caching > (http://wiki2.dovecot.org/NFS). Some cluster fs might work better. Can "mmap_disable = yes" and the other NFS options be set per namespace or only globally? Gr??e, Sven. -- Sigmentation fault. Core dumped. From tss at iki.fi Mon Jan 9 20:35:20 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:35:20 +0200 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <68fi0aakv5v8@mids.svenhartge.de> References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> Message-ID: On 9.1.2012, at 20.25, Sven Hartge wrote: > Timo Sirainen wrote: >> On 8.1.2012, at 0.20, Sven Hartge wrote: > >>> Right now, I am pondering with using an additional server with just >>> the shared folders on it and using NFS (or a cluster FS) to mount the >>> shared folder filesystem to each backend storage server, so each user >>> has potential access to a shared folders data. > >> With NFS you'll run into problems with caching >> (http://wiki2.dovecot.org/NFS). Some cluster fs might work better. > > Can "mmap_disable = yes" and the other NFS options be set per namespace > or only globally? Currently only globally. From tss at iki.fi Mon Jan 9 20:36:36 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:36:36 +0200 Subject: [Dovecot] Resetting a UID In-Reply-To: <025c01cccedc$d5ccd680$81668380$@elysium.ltd.uk> References: <025c01cccedc$d5ccd680$81668380$@elysium.ltd.uk> Message-ID: <002EEBD6-7E83-41EF-B2DC-BAA101FA92D5@iki.fi> On 9.1.2012, at 16.41, Tom Clark wrote: > We've got a client with a Blackberry that hass deleted his emails off his > Blackberry device. The BES won't re-download the messages as it believes it > has already downloaded them (apparently it matches on UID). 
You can delete dovecot.index* and dovecot-uidlist files. Assuming you're using maildir. > Is there any way of resetting a folder (and messages in the folder) UID? I > know in courier you used to be able to touch the directory. I doubt Courier would do that without deleting courierimapuiddb. From tss at iki.fi Mon Jan 9 20:40:01 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:40:01 +0200 Subject: [Dovecot] Proxy login failures In-Reply-To: <4F0ABF0A.1080404@enas.net> References: <4F0ABF0A.1080404@enas.net> Message-ID: <69796D8B-5CFE-48A2-A092-B1A32331BC1F@iki.fi> On 9.1.2012, at 12.18, Urban Loesch wrote: > I'm using two dovecot pop3/imap proxies in front of our dovecot servers. > Since some days I see many of the following errors in the logs of the two proxy-servers: > > dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0): user=, method=PLAIN, rip=remote-ip, lip=localip > > When this happens the Client gets the following error from the proxy: > -ERR [IN-USE] Account is temporarily unavailable. The connection to remote server dies before authentication finishes. The reason for why that happens should be logged by the backend server. Sounds like it crashes. Check for ANY error messages in backend servers. From tss at iki.fi Mon Jan 9 20:43:09 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:43:09 +0200 Subject: [Dovecot] Newbie: LDA Isn't Logging In-Reply-To: <1326087271.17295.YahooMailClassic@web125406.mail.ne1.yahoo.com> References: <1326087271.17295.YahooMailClassic@web125406.mail.ne1.yahoo.com> Message-ID: <98056774-CE97-4A39-AFEF-3FB22330D430@iki.fi> On 9.1.2012, at 7.34, Michael Papet wrote: > LDA logging worked. So, it could be something about my system. But, running /usr/lib/dovecot/deliver still doesn't return a value on the command line as documented on the wiki. > > I've attached strace files from both the malfunctioning Debian packages machine and the built from sources VM. Unfortunately, I'm a new strace user, so I don't know what it all means. The last line in the malfunctioning deliver: exit_group(67) = ? So Dovecot exits with value 67, which means EX_NOUSER. Looks like everything is working correctly. Are you maybe running a wrapper script that hides the exit code? Or in some other way checking it wrong.. From tss at iki.fi Mon Jan 9 20:44:07 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:44:07 +0200 Subject: [Dovecot] uid / gid and systemusers In-Reply-To: <1809497881.1135529.1326037030206.JavaMail.ngmail@webmail10.arcor-online.net> References: <1809497881.1135529.1326037030206.JavaMail.ngmail@webmail10.arcor-online.net> Message-ID: On 8.1.2012, at 17.37, xamiw at arcor.de wrote: > Jan 8 16:18:28 test dovecot: User q is missing UID (see mail_uid setting) > Jan 8 16:18:28 test dovecot: imap-login: Internal login failure (auth failed, 1 attempts): user=, method=PLAIN, rip=AAA.BBB.CCC.DDD, lip=EEE.FFF.GGG.HHH TLS <--- edited by me .. > auth default { > mechanisms = plain > passdb shadow { > } > } You have passdb, but no userdb. > /etc/passwd: > ... 
> g:x:1000:1000:test1,,,:/home/g:/bin/bash > q:x:1001:1001:test2,,,:/home/q:/bin/bash To use /etc/passwd as userdb, you need to add userdb passwd {} From sven at svenhartge.de Mon Jan 9 20:47:09 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 19:47:09 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> Message-ID: <78fi1g2kv5v8@mids.svenhartge.de> Timo Sirainen wrote: > On 9.1.2012, at 20.25, Sven Hartge wrote: >> Timo Sirainen wrote: >>> On 8.1.2012, at 0.20, Sven Hartge wrote: >>>> Right now, I am pondering with using an additional server with just >>>> the shared folders on it and using NFS (or a cluster FS) to mount >>>> the shared folder filesystem to each backend storage server, so >>>> each user has potential access to a shared folders data. >> >>> With NFS you'll run into problems with caching >>> (http://wiki2.dovecot.org/NFS). Some cluster fs might work better. >> >> Can "mmap_disable = yes" and the other NFS options be set per >> namespace or only globally? > Currently only globally. Ah, too bad. Back to the drawing board then. Implementing my idea in my environment using a cluster filesystem would be a very big pain in the lower back, so I need a different idea to share the shared folders with all nodes but still keeping the user specific mailboxes fixed and local to a node. The imapc backed namespace you mentioned sounds very interesting, but this is not implemented right now for shared folders, is it? Gr??e, Sven. -- Sigmentation fault. Core dumped. From tss at iki.fi Mon Jan 9 20:59:03 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:59:03 +0200 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <4F07BDBB.3060204@gmail.com> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> <4F07332B.70708@gmail.com> <4F07BDBB.3060204@gmail.com> Message-ID: <491E7C43-2C87-4FD6-8AC0-E79F22E9749F@iki.fi> On 7.1.2012, at 5.36, Yubao Liu wrote: > In old version, "auth->passdbs" contains all passdbs, this revision > changes "auth->passdbs" to only contain non-master passdbs. > > I'm not sure which fix is better or even my proposal is correct or fully: > a) in src/auth/auth.c:auth_passdb_preinit(), insert master passdb to > auth->passdbs too, and remove duplicate code for masterdbs > in auth_init() and auth_deinit(). Not a good idea. The master passdb needs to be treated specially, otherwise you might accidentally allow regular users logging in as other users. > b) add similar code for masterdbs in auth_passdb_list_have_verify_plain(), > auth_passdb_list_have_lookup_credentials(), auth_passdb_list_have_set_credentials(). Kind of annoying code duplication, but .. I guess it can't really be helped. Added: http://hg.dovecot.org/dovecot-2.0/rev/bed15faedfd4 > Another related question is "pass" option in master passdb, if I set it to "yes", > the authentication fails: .. > My normal passdb is a PAM passdb, it doesn't support credential lookups, that's > reasonable, Right. > but I feel the comment for "pass" option is confusing: > > # Unless you're using PAM, you probably still want the destination user to > # be looked up from passdb that it really exists. pass=yes does that. > pass = yes > } > > According the comment, it's to check whether the real user exists, why not > to check userdb but another passdb? Well.. 
It is going to check userdb eventually anyway, so it would still fail, just a bit later and maybe with different error message. > Even it must check against passdb, > in this case, it's obvious not necessary to lookup credentials, it's enough to > to lookup user name only. There's currently no passdb that supports "does user exist?" lookup, but doesn't support credentials lookup, so this is more of a theoretical issue. (I guess maybe PAM could be abused in some configurations to do the check, but that's rather ugly..) From noeldude at gmail.com Mon Jan 9 21:04:13 2012 From: noeldude at gmail.com (Noel) Date: Mon, 09 Jan 2012 13:04:13 -0600 Subject: [Dovecot] Postfix user map In-Reply-To: <8A109A75-164C-41B4-A13B-19C3F1D01E12@iki.fi> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <8A109A75-164C-41B4-A13B-19C3F1D01E12@iki.fi> Message-ID: <4F0B3A2D.4020301@gmail.com> On 1/9/2012 12:09 PM, Timo Sirainen wrote: > On 9.1.2012, at 17.32, Noel wrote: > > Cool. > Does this just check for valid user existence, or can it also check > for over-quota (and respond 500 overquota I suppose)? > Hmm. That looked potentially useful, but Postfix doesn't seem to support it at least that way, since the message to SMTP client is the same regardless of what I add after 500 reply. Also that would have required me to move the code somewhere else from auth process, since auth doesn't know the quota usage. And internally Dovecot would still have had to do auth lookup separately, so there's really no benefit in doing this vs. having Postfix do two lookups. How about a separate TCP lookup for quota status? This would be really useful for sites that don't have that information in a shared sql table (or no SQL in postfix), and get rid of kludgy policy services used to check quota status. This would be used with a check_recipient_access table, response would be something like: 200 DUNNO quota OK 200 REJECT user over quota 500 user not found -- Noel Jones From david at paperclipsystems.com Mon Jan 9 21:15:36 2012 From: david at paperclipsystems.com (David Egbert) Date: Mon, 09 Jan 2012 12:15:36 -0700 Subject: [Dovecot] failed: Too many levels of symbolic links In-Reply-To: <4A0E9695-E78A-487F-AE53-888D27981EF1@iki.fi> References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> <4F076A70.3090905@paperclipsystems.com> <4F077158.4000500@paperclipsystems.com> <4A0E9695-E78A-487F-AE53-888D27981EF1@iki.fi> Message-ID: <4F0B3CD8.8050501@paperclipsystems.com> On 1/6/2012 3:30 PM, Timo Sirainen wrote: > On 7.1.2012, at 0.10, David Egbert wrote: > >>> Anyway, readdir() is failing with ELOOP. Does it always fail with "Too many levels of symbolic links" or is it sometimes different? This sounds like a bug in Linux NFS client code. You can reproduce this always with this one user's Maildir? Can you do "ls" in the directory? >>> >> Sorry about the X's... it is a client directory. We support many domains and their privacy is paramount. You are correct it is in the /cur directory. I can LS all of directories without problems. This user has 10+Gb in his mail box spread across 352 subscribed folders. As for the logs it is always the directory, always the same error. > Try the attached test program. Run it as: ./readdir /path/to/Maildir/cur > > Does it also give non-zero error? > I ran it, and it returned: readdir() errno = 0 The user backed up their data and then removed the folder from the server. The error is now gone so I am assuming there was some corrupt file in the directory. 
Thanks for all of the help. David Egbert Paperclip Systems, LLC --- This message, its contents, and attachments are confidential and are only authorized for the intended recipient. Disclosure, re-distribution, or use of said information is strictly prohibited, and may be excluded from disclosure by applicable law. If you are not the intended recipient, or their intermediary, please notify the sender and delete this message. From tss at iki.fi Mon Jan 9 21:16:09 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 21:16:09 +0200 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <78fi1g2kv5v8@mids.svenhartge.de> References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> Message-ID: On 9.1.2012, at 20.47, Sven Hartge wrote: >>> Can "mmap_disable = yes" and the other NFS options be set per >>> namespace or only globally? > >> Currently only globally. > > Ah, too bad. > > Back to the drawing board then. mmap_disable=yes works pretty well even if you're only using it for local filesystems. It just spends some more memory when reading dovecot.index.cache files. > Implementing my idea in my environment using a cluster filesystem would > be a very big pain in the lower back, so I need a different idea to > share the shared folders with all nodes but still keeping the user > specific mailboxes fixed and local to a node. > > The imapc backed namespace you mentioned sounds very interesting, but > this is not implemented right now for shared folders, is it? Well.. If you don't need users sharing mailboxes to each others, then you can probably already do this with Dovecot v2.1: 1. Configure the user Dovecots: namespace { type = public prefix = Shared/ location = imapc:~/imapc-shared } imapc_host = sharedmails.example.com imapc_password = master-user-password # With latest v2.1 hg you can do: imapc_user = shareduser imapc_master_user = %u # With v2.1.rc2 and older you need to do: imapc_user = shareduser*%u auth_master_user_separator = * 2. Configure the shared Dovecot: You need master passdb that allows all existing users to log in as "shareduser" user. You can probably simply do (not tested): passdb { type = static args = user=shareduser master = yes } The "shareduser" owns all of the actual shared mailboxes and has the necessary ACLs set up for individual users. ACLs use the master username (= the real username in this case) to do the ACL checks. From tss at iki.fi Mon Jan 9 21:19:34 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 21:19:34 +0200 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> Message-ID: <08C9B341-1292-44F4-AB6B-D6D804ED60BE@iki.fi> On 9.1.2012, at 21.16, Timo Sirainen wrote: > passdb { > type = static > args = user=shareduser Of course you should also require a password: args = user=shareduser pass=master-user-password From tss at iki.fi Mon Jan 9 21:31:00 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 21:31:00 +0200 Subject: [Dovecot] change initial permissions on creation of mail folder In-Reply-To: <4F072342.1090901@ngong.de> References: <4F072342.1090901@ngong.de> Message-ID: <061F5BDF-A47F-40F5-8B86-E42C585B9EBB@iki.fi> On 6.1.2012, at 18.37, mailinglist wrote: > Installed dovcot from Debian .deb file. Creating a new account for system users sets permission for user-only. 
Where to change initial permissions on creation of mail folder and other subdirectories. Permissions for folders are taken from the mail root directory. http://wiki2.dovecot.org/SharedMailboxes/Permissions has details. Permissions for newly created mail root directory are always 0700. If you want something else, create the mail directory with wanted permissions at the same time as you create the user. > Installed dovecot using "apt-get install dovecot-imapd dovecot-pop3d". Any time when I create a new account in my mail client for a system user, Dovecot tries to create ~/mail/.imap/INBOX. The permissions for mail and .imap are set to 0700. By this permissions INBOX can not be created leading to an error message in log files. When I manualy change the permissions to 0770, INBOX is created I don't really understand why INBOX couldn't be created. 0700 should be enough for most installations. Unless you have a very good reason you shouldn't use 0770 for mails (sounds more like you've a weirdly configured mail setup). From tss at iki.fi Mon Jan 9 21:31:51 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 21:31:51 +0200 Subject: [Dovecot] FTS-Solr plugin In-Reply-To: References: Message-ID: <51D1C049-8E87-4AB7-9A20-6BDB0748A569@iki.fi> On 6.1.2012, at 19.35, Daniel L. Miller wrote: > Solr plugin appears to break when mailbox names have an ampersand in the name. The messages appear to indicate '&' gets translated to '&--'. What message? With fts=solr (not solr_old) the mailbox name isn't used in Solr at all. It uses mailbox GUIDs. From sven at svenhartge.de Mon Jan 9 21:31:58 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 20:31:58 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> Message-ID: <98fi3r4kv5v8@mids.svenhartge.de> Timo Sirainen wrote: > On 9.1.2012, at 20.47, Sven Hartge wrote: >>>> Can "mmap_disable = yes" and the other NFS options be set per >>>> namespace or only globally? >> >>> Currently only globally. >> >> Ah, too bad. >> >> Back to the drawing board then. > mmap_disable=yes works pretty well even if you're only using it for local filesystems. It just spends some more memory when reading dovecot.index.cache files. >> Implementing my idea in my environment using a cluster filesystem would >> be a very big pain in the lower back, so I need a different idea to >> share the shared folders with all nodes but still keeping the user >> specific mailboxes fixed and local to a node. >> >> The imapc backed namespace you mentioned sounds very interesting, but >> this is not implemented right now for shared folders, is it? > Well.. If you don't need users sharing mailboxes to each others, God heavens, no! If I allowed users to share their mailboxes with other users, hell would break loose. Nononono, just shared folders set up by the admin team, statically assigned to groups of users (for example, the central postmaster@ mail alias ends in such a shared folder). > then you can probably already do this with Dovecot v2.1: > 1. 
Configure the user Dovecots:
> namespace {
> type = public
> prefix = Shared/
> location = imapc:~/imapc-shared
> }
> imapc_host = sharedmails.example.com
> imapc_password = master-user-password
> # With latest v2.1 hg you can do:
> imapc_user = shareduser
> imapc_master_user = %u
> # With v2.1.rc2 and older you need to do:
> imapc_user = shareduser*%u
> auth_master_user_separator = *

So, in my case, this would look like this:

,----
| # User's private mail location
| mail_location = mdbox:~/mdbox
|
| # When creating any namespaces, you must also have a private namespace:
| namespace {
| type = private
| separator = .
| prefix = INBOX.
| #location defaults to mail_location.
| inbox = yes
| }
|
| namespace {
| type = public
| separator = .
| prefix = #shared.
| location = imapc:~/imapc-shared
| subscriptions = no
| }
|
| imapc_host = m-st-sh-01.foo.bar
| imapc_password = master-user-password
| imapc_user = shareduser
| imapc_master_user = %u
`----

Where do I add "list = children"? In the user-dovecots shared namespace
or on the shared-dovecots private namespace?

> 2. Configure the shared Dovecot:

> You need master passdb that allows all existing users to log in as
> "shareduser" user. You can probably simply do (not tested):

> passdb {
> type = static
> args = user=shareduser
> master = yes
> }

> The "shareduser" owns all of the actual shared mailboxes and has the
> necessary ACLs set up for individual users. ACLs use the master
> username (= the real username in this case) to do the ACL checks.

So this is kind of "backwards", since normally the imapc_master_user
would be the static user and imapc_user would be dynamic, right?

All in all, a _very_ interesting configuration.

Grüße,
Sven.

--
Sigmentation fault. Core dumped.

From tss at iki.fi Mon Jan 9 21:38:59 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 21:38:59 +0200
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <98fi3r4kv5v8@mids.svenhartge.de>
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de>
Message-ID: <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi>

On 9.1.2012, at 21.31, Sven Hartge wrote:

> ,----
> | # User's private mail location
> | mail_location = mdbox:~/mdbox
> |
> | # When creating any namespaces, you must also have a private namespace:
> | namespace {
> | type = private
> | separator = .
> | prefix = INBOX.
> | #location defaults to mail_location.
> | inbox = yes
> | }
> |
> | namespace {
> | type = public
> | separator = .
> | prefix = #shared.

I'd probably just use "Shared." as prefix, since it is visible to
users. Anyway if you want to use # you need to put the value in
"quotes" or it's treated as a comment.
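A minimal sketch of what I mean (untested; the rest of the namespace
stays as you had it, only the prefix value gets quoted):

namespace {
  type = public
  separator = .
  prefix = "#shared."
  location = imapc:~/imapc-shared
  subscriptions = no
}
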
> | location = imapc:~/imapc-shared
> | subscriptions = no

list = children here

> | }
> |
> | imapc_host = m-st-sh-01.foo.bar
> | imapc_password = master-user-password
> | imapc_user = shareduser
> | imapc_master_user = %u
> `----
>
> Where do I add "list = children"? In the user-dovecots shared namespace
> or on the shared-dovecots private namespace?

Shared-dovecot always has mailboxes (at least INBOX), so list=children
would equal list=yes.

>
>> 2. Configure the shared Dovecot:
>
>> You need master passdb that allows all existing users to log in as
>> "shareduser" user. You can probably simply do (not tested):
>
>> passdb {
>> type = static
>> args = user=shareduser pass=master-user-password
>> master = yes
>> }
>
>> The "shareduser" owns all of the actual shared mailboxes and has the
>> necessary ACLs set up for individual users. ACLs use the master
>> username (= the real username in this case) to do the ACL checks.
>
> So this is kind of "backwards", since normally the imapc_master_user would be
> the static user and imapc_user would be dynamic, right?

Right. Also in this Dovecot you want a regular namespace without prefix:

namespace inbox {
  separator = /
  list = yes
  inbox = yes
}

You might as well use the proper separator here in case you ever change it for users.

From sven at svenhartge.de Mon Jan 9 21:45:12 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 20:45:12 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi>
Message-ID:

Timo Sirainen wrote:
> On 9.1.2012, at 21.31, Sven Hartge wrote:

>> ,----
>> | # User's private mail location
>> | mail_location = mdbox:~/mdbox
>> |
>> | # When creating any namespaces, you must also have a private namespace:
>> | namespace {
>> | type = private
>> | separator = .
>> | prefix = INBOX.
>> | #location defaults to mail_location.
>> | inbox = yes
>> | }
>> |
>> | namespace {
>> | type = public
>> | separator = .
>> | prefix = #shared.

> I'd probably just use "Shared." as prefix, since it is visible to
> users. Anyway if you want to use # you need to put the value in
> "quotes" or it's treated as a comment.

I have to use "#shared.", because this is what Courier uses.
Unfortunately I have to stick to the prefixes and separators used
currently.

>> | location = imapc:~/imapc-shared

What is the syntax of this location? What does "imapc-shared" do in this
case?

>> | subscriptions = no

> list = children here

>> | }
>> |
>> | imapc_host = m-st-sh-01.foo.bar
>> | imapc_password = master-user-password
>> | imapc_user = shareduser
>> | imapc_master_user = %u
>> `----
>>
>> Where do I add "list = children"? In the user-dovecots shared namespace
>> or on the shared-dovecots private namespace?

> Shared-dovecot always has mailboxes (at least INBOX), so list=children would equal list=yes.

OK, seems logical.

>>
>>> 2. Configure the shared Dovecot:
>>
>>> You need master passdb that allows all existing users to log in as "shareduser" user. You can probably simply do (not tested):
>>
>>> passdb {
>>> type = static
>>> args = user=shareduser pass=master-user-password
>>> master = yes
>>> }
>>
>>> The "shareduser" owns all of the actual shared mailboxes and has the
>>> necessary ACLs set up for individual users. ACLs use the master
>>> username (= the real username in this case) to do the ACL checks.
>>
>> So this is kind of "backwards", since normally the imapc_master_user would be
>> the static user and imapc_user would be dynamic, right?

> Right. Also in this Dovecot you want a regular namespace without prefix:
> namespace inbox {
> separator = /
> list = yes
> inbox = yes
> }
> You might as well use the proper separator here in case you ever change it for users.

Is this separator converted to '.' on the frontend? The department
supporting our users will give me hell if anything visible changes in
the layout of the folders for the end user.

Grüße,
Sven.

--
Sigmentation fault. Core dumped.
From tss at iki.fi Mon Jan 9 22:05:48 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 22:05:48 +0200 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi> Message-ID: On 9.1.2012, at 21.45, Sven Hartge wrote: >>> | location = imapc:~/imapc-shared > > What is the syntax of this location? What does "imapc-shared" do in this > case? It's the directory for index files. The backend IMAP server is used as a rather dummy storage, so if for example you do a FETCH 1:* BODYSTRUCTURE command, all of the message bodies are downloaded to the user's Dovecot server which parses them. But with indexes this is done only once (same as with any other mailbox format). If you want SEARCH BODY to be fast, you'd also need to use some kind of full text search indexes. If your users share the same UID (or 0666 mode would probably work too), you could share the index files rather than make them per-user. Then you could use imapc:/shared/imapc or something. BTW. All message flags are shared between users. If you want per-user flags you'd need to modify the code. >> Right. Also in this Dovecot you want a regular namespace without prefix: > >> namespace inbox { >> separator = / >> list = yes >> inbox = yes >> } > >> You might as well use the proper separator here in case you ever change it for users. > > Is this seperator converted to '.' on the frontend? Yes, as long as you explicitly specify the separator setting to the public namespace. From sven at svenhartge.de Mon Jan 9 22:13:23 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 21:13:23 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi> Message-ID: Timo Sirainen wrote: > On 9.1.2012, at 21.45, Sven Hartge wrote: >>>> | location = imapc:~/imapc-shared >> >> What is the syntax of this location? What does "imapc-shared" do in this >> case? > It's the directory for index files. The backend IMAP server is used as > a rather dummy storage, so if for example you do a FETCH 1:* > BODYSTRUCTURE command, all of the message bodies are downloaded to the > user's Dovecot server which parses them. But with indexes this is done > only once (same as with any other mailbox format). If you want SEARCH > BODY to be fast, you'd also need to use some kind of full text search > indexes. The bodies are downloaded but not stored, right? Just the index files are stored locally. > If your users share the same UID (or 0666 mode would probably work > too), you could share the index files rather than make them per-user. > Then you could use imapc:/shared/imapc or something. Hmm. Yes, this is a fully virtual setup, every users mail is owned by the virtmail user. Does this sharing of index files have any security or privacy issues? Not every user sees every shared folder, so an information leak has to be avoided at all costs. > BTW. All message flags are shared between users. If you want per-user > flags you'd need to modify the code. No, I need shared message flags, as this is the reason we introduced shared folders, so one user can see, if a mail has already been read or replied to. >>> Right. 
Also in this Dovecot you want a regular namespace without prefix:
>>
>>> namespace inbox {
>>> separator = /
>>> list = yes
>>> inbox = yes
>>> }
>>
>>> You might as well use the proper separator here in case you ever change it for users.
>>
>> Is this separator converted to '.' on the frontend?

> Yes, as long as you explicitly specify the separator setting to the
> public namespace.

OK, good to know; one for my documentation with an '!' behind it.

Grüße,
Sven

--
Sigmentation fault. Core dumped.

From tss at iki.fi Mon Jan 9 22:20:44 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 22:20:44 +0200
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To:
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi>
Message-ID: <8002CDFC-88BB-47D2-96D2-8F7EFB26DD86@iki.fi>

On 9.1.2012, at 22.13, Sven Hartge wrote:
> Timo Sirainen wrote:
>> On 9.1.2012, at 21.45, Sven Hartge wrote:
>
>>>>> | location = imapc:~/imapc-shared
>>>
>>> What is the syntax of this location? What does "imapc-shared" do in this
>>> case?
>
>> It's the directory for index files. The backend IMAP server is used as
>> a rather dummy storage, so if for example you do a FETCH 1:*
>> BODYSTRUCTURE command, all of the message bodies are downloaded to the
>> user's Dovecot server which parses them. But with indexes this is done
>> only once (same as with any other mailbox format). If you want SEARCH
>> BODY to be fast, you'd also need to use some kind of full text search
>> indexes.
>
> The bodies are downloaded but not stored, right? Just the index files
> are stored locally.

Right.

>> If your users share the same UID (or 0666 mode would probably work
>> too), you could share the index files rather than make them per-user.
>> Then you could use imapc:/shared/imapc or something.
>
> Hmm. Yes, this is a fully virtual setup, every user's mail is owned by
> the virtmail user. Does this sharing of index files have any security or
> privacy issues?

There are no privacy issues, at least currently, since there is no per-user data. If you had wanted per-user flags this wouldn't have worked.

> Not every user sees every shared folder, so an information leak has to
> be avoided at all costs.

Oh, that reminds me, it doesn't actually work :) Because Dovecot deletes those directories it doesn't see on the remote server. You might be able to use imapc:~/imapc:INDEX=/shared/imapc though.
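I.e. roughly this kind of namespace on the user Dovecots (an untested
sketch; the /shared/imapc path is just an example):

namespace {
  type = public
  prefix = Shared/
  location = imapc:~/imapc:INDEX=/shared/imapc
  list = children
}
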
The nice thing about shared imapc indexes is that each user doesn't have to re-index the message.

From bind at enas.net Mon Jan 9 22:23:18 2012
From: bind at enas.net (Urban Loesch)
Date: Mon, 09 Jan 2012 21:23:18 +0100
Subject: [Dovecot] Proxy login failures
In-Reply-To: <69796D8B-5CFE-48A2-A092-B1A32331BC1F@iki.fi>
References: <4F0ABF0A.1080404@enas.net> <69796D8B-5CFE-48A2-A092-B1A32331BC1F@iki.fi>
Message-ID: <4F0B4CB6.2080703@enas.net>

On 09.01.2012 19:40, Timo Sirainen wrote:
> On 9.1.2012, at 12.18, Urban Loesch wrote:
>
>> I'm using two dovecot pop3/imap proxies in front of our dovecot servers.
>> Since some days I see many of the following errors in the logs of the two proxy-servers:
>>
>> dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0): user=, method=PLAIN, rip=remote-ip, lip=localip
>>
>> When this happens the Client gets the following error from the proxy:
>> -ERR [IN-USE] Account is temporarily unavailable.

> The connection to remote server dies before authentication finishes. The reason for why that happens should be logged by the backend server. Sounds like it crashes. Check for ANY error messages in backend servers.

I still did that, but I found nothing in the logs.

The only thing I could think of is that all 7 backend servers are
virtual servers (using technology from http://linux-vserver.org) and
they are all running on the same physical machine (DELL PER610 with
32GB RAM, RAID 10 SAS - load between 0.5 and 2.0, iowait about 1-5%).
So they are sharing the same kernel.

Also, all servers are connected to a mysql server running on a
different machine in the same subnet. Could it be that either the
kernel needs some tcp tuning, or perhaps the answers from the remote
mysql server could be too slow in some cases?

Now I switched 2 of the 7 backend servers to the backup mysql slave
server. Should be no problem because dovecot is only reading from it.
If it helps I will see tomorrow, and I'll let you know.

thanks
Urban

From sven at svenhartge.de Mon Jan 9 22:24:09 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 21:24:09 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi> <8002CDFC-88BB-47D2-96D2-8F7EFB26DD86@iki.fi>
Message-ID:

Timo Sirainen wrote:
> On 9.1.2012, at 22.13, Sven Hartge wrote:
>> Timo Sirainen wrote:
>>> On 9.1.2012, at 21.45, Sven Hartge wrote:

>>>>>> | location = imapc:~/imapc-shared
>>>>
>>>> What is the syntax of this location? What does "imapc-shared" do in
>>>> this case?
>>
>>> It's the directory for index files. The backend IMAP server is used
>>> as a rather dummy storage, so if for example you do a FETCH 1:*
>>> BODYSTRUCTURE command, all of the message bodies are downloaded to
>>> the user's Dovecot server which parses them. But with indexes this
>>> is done only once (same as with any other mailbox format). If you
>>> want SEARCH BODY to be fast, you'd also need to use some kind of
>>> full text search indexes.
>>
>>> If your users share the same UID (or 0666 mode would probably work
>>> too), you could share the index files rather than make them
>>> per-user. Then you could use imapc:/shared/imapc or something.
>>
>> Hmm. Yes, this is a fully virtual setup, every user's mail is owned by
>> the virtmail user. Does this sharing of index files have any security
>> or privacy issues?

> There are no privacy issues, at least currently, since there is no
> per-user data. If you had wanted per-user flags this wouldn't have
> worked.

OK. I think I will go with the per-user index files for now and pay the
extra in bandwidth and processing power needed. All in all, of 10,000
users, only about 100 use shared folders.

Grüße,
Sven.

--
Sigmentation fault. Core dumped.
From xamiw at arcor.de Tue Jan 10 00:30:21 2012 From: xamiw at arcor.de (xamiw at arcor.de) Date: Mon, 9 Jan 2012 23:30:21 +0100 (CET) Subject: [Dovecot] uid / gid and systemusers In-Reply-To: References: <1809497881.1135529.1326037030206.JavaMail.ngmail@webmail10.arcor-online.net> Message-ID: <778892216.47622.1326148221293.JavaMail.ngmail@webmail16.arcor-online.net> That's it, thanks a lot. ----- Original Nachricht ---- Von: Timo Sirainen An: xamiw at arcor.de Datum: 09.01.2012 19:44 Betreff: Re: [Dovecot] uid / gid and systemusers > On 8.1.2012, at 17.37, xamiw at arcor.de wrote: > > > Jan 8 16:18:28 test dovecot: User q is missing UID (see mail_uid > setting) > > Jan 8 16:18:28 test dovecot: imap-login: Internal login failure (auth > failed, 1 attempts): user=, method=PLAIN, rip=AAA.BBB.CCC.DDD, > lip=EEE.FFF.GGG.HHH TLS <--- edited by me > .. > > auth default { > > mechanisms = plain > > passdb shadow { > > } > > } > > You have passdb, but no userdb. > > > /etc/passwd: > > ... > > g:x:1000:1000:test1,,,:/home/g:/bin/bash > > q:x:1001:1001:test2,,,:/home/q:/bin/bash > > To use /etc/passwd as userdb, you need to add userdb passwd {} > > From tss at iki.fi Tue Jan 10 00:39:00 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 10 Jan 2012 00:39:00 +0200 Subject: [Dovecot] Proxy login failures In-Reply-To: <4F0B4CB6.2080703@enas.net> References: <4F0ABF0A.1080404@enas.net> <69796D8B-5CFE-48A2-A092-B1A32331BC1F@iki.fi> <4F0B4CB6.2080703@enas.net> Message-ID: <27646CE2-F912-4D61-9016-F6BBE0DA9C56@iki.fi> On 9.1.2012, at 22.23, Urban Loesch wrote: >>> I'm using two dovecot pop3/imap proxies in front of our dovecot servers. >>> Since some days I see many of the following errors in the logs of the two proxy-servers: >>> >>> dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0): user=, method=PLAIN, rip=remote-ip, lip=localip >>> >>> When this happens the Client gets the following error from the proxy: >>> -ERR [IN-USE] Account is temporarily unavailable. >> The connection to remote server dies before authentication finishes. The reason for why that happens should be logged by the backend server. Sounds like it crashes. Check for ANY error messages in backend servers. >> > > I still did that, but I found nothing in the logs. It's difficult to guess then. At the very least there should be an "Info" message about a new connection at the time when this failure happened. If there's not even that, then maybe the problem is network related. > The only thing I could think about is that all 7 backend servers are virtual servers (using technology from http://linux-vserver.org) and they all are running > on the same physical machine (DELL PER610 with 32GB RAM, RAID 10 SAS - load between 0.5 and 2.0, iowait about 1-5%). So they are sharing the same kernel. For testing, or what's the point in doing that? :) But the load is low enough that I doubt it has anything to do with it. > Also all servers are connected to a mysql server, running on a different machine in the same subnet. Could it be that either the kernel needs some tcp tuning ore perhaps the answers from the remote mysql server > could be to slow in some cases? MySQL server problem would show up with a different error message. TCP tuning is also unlikely to help, since the connection probably dies within a second. Actually it would be a good idea to log the duration. 
This patch adds it: http://hg.dovecot.org/dovecot-2.0/raw-rev/8438f66433a6 These are the only explanations that I can think of for the error: * Remote Dovecot crashes / kills the connection (it would log an error message) * Remote Dovecot server is full of handling existing connections (It would log a warning) * Network trouble, something in the middle disconnecting the connection * Source/destination OS trouble, disconnecting the connection * Some hang that results in eventual disconnection. The duration patch would show if this is the case. From dmiller at amfes.com Tue Jan 10 02:21:32 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Mon, 09 Jan 2012 16:21:32 -0800 Subject: [Dovecot] Solr plugin In-Reply-To: References: <4F04EDC8.6060809@amfes.com> <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com> <4F0A2980.7050003@amfes.com> <4F0A2B4D.2040106@amfes.com> Message-ID: On 1/9/2012 7:00 AM, Timo Sirainen wrote: > On 9.1.2012, at 1.48, Daniel L. Miller wrote: > >> On 1/8/2012 3:40 PM, Daniel L. Miller wrote: >>> On 1/6/2012 12:32 PM, Daniel L. Miller wrote: >>>> On 1/6/2012 9:36 AM, Timo Sirainen wrote: >>>>> On 6.1.2012, at 19.30, Daniel L. Miller wrote: >>>>> >>> Jan 8 15:40:09 bubba dovecot: imap(user1 at domain.com): Error: fts_solr: Lookup failed: 400 undefined field CC >>> Jan 8 15:40:09 bubba dovecot: imap: Error: >>> >>> >> Looking at the Solr output - looks like the CC parameter is being capitalized while all the other fieldnames are lowercase. > Did you look at the input? Looking at the code, it should be lowercased. Maybe Solr just uppercases it for some reason. Are you using a Solr schema that has "cc" field? > I see the following in a running Solr instance. This is generated from a Windoze Thunderbird 8.0 client: Jan 9, 2012 4:20:13 PM org.apache.solr.core.SolrCore execute INFO: [] webapp=/solr path=/select params={fl=uid,score&sort=uid+asc&fq=%2Bbox:c1af150abfc9df4d7f7a00003bc41c5f+%2Buser:"dmiller at amfes.com"&q=from:"test"+OR+to:"test"+OR+CC:"test"+OR+subject:"test"+OR+body:"test"&rows=9038} status=400 QTime=4 That's where I see the uppercased CC. -- Daniel From tss at iki.fi Tue Jan 10 02:28:46 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 10 Jan 2012 02:28:46 +0200 Subject: [Dovecot] Solr plugin In-Reply-To: References: <4F04EDC8.6060809@amfes.com> <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com> <4F0A2980.7050003@amfes.com> <4F0A2B4D.2040106@amfes.com> Message-ID: <78E1EDA2-62A8-4CD3-BA82-6239FDC975EB@iki.fi> On 10.1.2012, at 2.21, Daniel L. Miller wrote: >> Did you look at the input? Looking at the code, it should be lowercased. Maybe Solr just uppercases it for some reason. Are you using a Solr schema that has "cc" field? > > I see the following in a running Solr instance. This is generated from a Windoze Thunderbird 8.0 client: > > Jan 9, 2012 4:20:13 PM org.apache.solr.core.SolrCore execute > INFO: [] webapp=/solr path=/select params={fl=uid,score&sort=uid+asc&fq=%2Bbox:c1af150abfc9df4d7f7a00003bc41c5f+%2Buser:"dmiller at amfes.com"&q=from:"test"+OR+to:"test"+OR+CC:"test"+OR+subject:"test"+OR+body:"test"&rows=9038} status=400 QTime=4 Oh, you were talking about the searching part, not indexing. Yeah, there it wasn't necessarily lowercased. 
Fixed: http://hg.dovecot.org/dovecot-2.1/rev/075591a4b6a8 From stan at hardwarefreak.com Tue Jan 10 04:19:22 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Mon, 09 Jan 2012 20:19:22 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <08fhdkhkv5v8@mids.svenhartge.de> References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0ADD87.5080103@hardwarefreak.com> <08fhdkhkv5v8@mids.svenhartge.de> Message-ID: <4F0BA02A.3050405@hardwarefreak.com> On 1/9/2012 7:48 AM, Sven Hartge wrote: > It seems my initial idea was not so bad after all ;) Yeah, but you didn't know how "not so bad" it really was until you had me analyze it, flesh it out, and confirm it. ;) > Now I "just" need o > built a little test setup, put some dummy users on it and see, if > anything bad happens while accessing the shared folders and how the > reaction of the system is, should the shared folder server be down. It won't be down. Because instead of using NFS you're going to use GFS2 for the shared folder LUN so each user accesses the shared folders locally just as they do their mailbox. Pat yourself on the back Sven, you just eliminated a SPOF. ;) >> How many total cores per VMware node (all sockets)? > > 8 Fairly beefy. Dual socket quad core Xeons I'd guess. > Here the memory statistics an 14:30 o'clock: > > total used free shared buffers cached > Mem: 12046 11199 847 0 88 7926 > -/+ buffers/cache: 3185 8861 > Swap: 5718 10 5707 That doesn't look too bad. How many IMAP user connections at that time? Is that a high average or low for that day? The RAM numbers in isolation only paint a partial picture... > The SAN has plenty space. Over 70TiB at this time, with another 70TiB > having just arrived and waiting to be connected. 140TB of 15k storage. Wow, you're so under privileged. ;) > The iSCSI storage nodes (HP P4500) use 600GB SAS6 at 15k rpm with 12 > disks per node, configured in 2 RAID5 sets with 6 disks each. > > But this is internal to each storage node, which are kind of a blackbox > and have to be treated as such. I cringe every time I hear 'black box'... > The HP P4500 is a but unique, since it does not consist of a head node > which storage arrays connected to it, but of individual storage nodes > forming a self balancing iSCSI cluster. (The nodes consist of DL320s G2.) The 'black box' is Lefthand Networks SAN/iQ software stack. I wasn't that impressed with it when I read about it 8 or so years ago. IIRC, load balancing across cluster nodes is accomplished by resending host packets from a receiving node to another node after performing special sauce calculations regarding cluster load. Hence the need, apparently, for a full power, hot running, multi-core x86 CPU instead of an embedded low power/wattage type CPU such as MIPS, PPC, i960 descended IOP3xx, or even the Atom if they must stick with x86 binaries. If this choice was merely due to economy of scale of their server boards, they could have gone with a single socket board instead of the dual, which would have saved money. So this choice of a dual socket Xeon board wasn't strictly based on cost or ease of manufacture. Many/most purpose built SAN arrays on the market don't use full power x86 chips, but embedded RISC chips, to cut cost, power draw, and heat generation. 
These RISC chips are typically in order designs, don't have branch prediction or register renaming logic circuits and they have tiny caches. This is because block moving code handles streams of data and doesn't typically branch nor have many conditionals. For streaming apps, data caches simply get in the way, although an instruction cache is beneficial. HP's choice of full power CPUs that have such features suggests branching conditional code is used. Which makes sense when running algorithms that attempt to calculate the least busy node. Thus, this 'least busy node' calculation and packet shipping adds non trivial latency to host SCSI IO command completion, compared to traditional FC/iSCSI SAN arrays, or DAS, and thus has implications for high IOPS workloads and especially those making heavy use of FSYNC, such as SMTP and IMAP servers. FSYNC performance may not be an issue if the controller instantly acks FSYNC before data hits platter, but then you may run into bigger problems as you have no guarantee data hit the disk. Or, you may not run into perceptible performance issues at all given the number of P4500s you have and the proportionally light IO load of your 10K mail users. Sheer horsepower alone may prove sufficient. Just in case, it may prove beneficial to fire up ImapTest or some other synthetic mail workload generator to see if array response times are acceptable under heavy mail loads. > So far, I had no performance or other problems with this setup and it > scales quite nice, as you buy as you grow . I'm glad the Lefthand units are working well for you so far. Are you hitting the arrays with any high random IOPS workloads as of yet? > And again, price was also a factor, deploying a FC-SAN would have cost > us more than thrice the amount than the amount the deployment of an iSCSI > solution did, because the latter is "just" ethernet, while the former > would have needed a lot more totally new components. I guess that depends on the features you need, such as PIT backups, remote replication, etc. I expanded a small FC SAN about 5 years ago for the same cost as an iSCSI array, simply due to the fact that the least expensive _quality_ unit with a good reputation happened to have both iSCSI and FC ports included. It was a 1U 8x500GB Nexsan Satablade, their smallest unit (since discontinued). Ran about $8K USD IIRC. Nexsan continues to offer excellent products. For anyone interested in high density high performance FC+iSCSI SAN arrays at a midrange price, add Nexsan to your vendor research list: http://www.nexsan.com > No, at that time (2005/2006) nobody thought of a SAN. That is a fairly > "new" idea here, first implemented for the VMware cluster in 2008. You must have slower adoption on that side of the pond. As I just mentioned, I was expanding an already existing small FC SAN in 2006 that had been in place since 2004 IIRC. And this was at a small private 6-12 school with enrollment of about 500. iSCSI SANs took off like a rocket in the States around 06/07, in tandem with VMware ESX going viral here. > More space. The IMAP usage became more prominent which caused a steep > rise in space needed on the mail storage server. But 74GiB SCA drives > where expensive and 130GiB SCA drives where not available at that time. With 144TB of HP Lefthand 15K SAS drives it appears you're no longer having trouble funding storage purchases. ;) >>> And this is why I kind of hold this upgrade back until dovecot 2.1 is >>> released, as it has some optimizations here. 
>
>> Sounds like it's going to be a bit more than an 'upgrade'. ;)
>
> Well, yes. It is more a re-implementation than an upgrade.

It actually sounds like fun. To me anyway. ;) I love this stuff.

> Central IT here these days only uses x86-based systems. There were some Sun
> SPARC systems, but both have been decommissioned. New SPARC hardware is
> just too expensive for our scale. And if you want to use virtualization,
> you can either use only SPARC systems and partition them or use x86
> based systems. And then there is the need to virtualize Windows, so x86
> is the only option.

Definitely a trend for a while now.

> Most bigger Universities in Germany make nearly exclusive use of SPARC
> systems, but they had a central IT with big irons (IBM, HP, etc.) since
> back in the 1960's, so naturally they continue on that path.

Siemens/Fujitsu machines or SUN machines? I've been under the impression
that Fujitsu sold more SPARC boxen in Europe, or at least Germany, than
SUN did, due to the Siemens partnership. I could be wrong here.

--
Stan

From robert at schetterer.org Tue Jan 10 08:06:38 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Tue, 10 Jan 2012 07:06:38 +0100
Subject: [Dovecot] Postfix user map
In-Reply-To: <70A95B54-98CE-4B4D-8B76-CDA279353202@iki.fi>
References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <4F0B09BC.3010300@schetterer.org> <4F0B0BE4.8010907@gmail.com> <4F0B0CD9.3090402@schetterer.org> <4F0B1388.209@schetterer.org> <70A95B54-98CE-4B4D-8B76-CDA279353202@iki.fi>
Message-ID: <4F0BD56E.9090808@schetterer.org>

On 09.01.2012 19:12, Timo Sirainen wrote:
> On 9.1.2012, at 18.19, Robert Schetterer wrote:
>> I am afraid I wasn't totally correct here;
>> in fact I haven't seen backscatter on overquota on my servers
>> since using dovecot LMTP with postfix
>
> LMTP shouldn't matter here. In most configs mails are put to queue first, and only from there they are sent to LMTP, and if LMTP rejects a mail then backscatter is sent. Maybe the difference you're seeing is that it's now Postfix sending the bounce (or perhaps skipping it?) instead of dovecot-lda (unless you gave -e parameter).
>

Hi Timo, thanks for clearing that up. Anyway, backscatter with overquota
was always rare here, so no big problem.

--
Best Regards MfG Robert Schetterer
Germany/Munich/Bavaria

From yubao.liu at gmail.com Tue Jan 10 08:58:37 2012
From: yubao.liu at gmail.com (Liu Yubao)
Date: Tue, 10 Jan 2012 14:58:37 +0800
Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs
In-Reply-To: <491E7C43-2C87-4FD6-8AC0-E79F22E9749F@iki.fi>
References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> <4F07332B.70708@gmail.com> <4F07BDBB.3060204@gmail.com> <491E7C43-2C87-4FD6-8AC0-E79F22E9749F@iki.fi>
Message-ID:

On Tue, Jan 10, 2012 at 2:59 AM, Timo Sirainen wrote:
> On 7.1.2012, at 5.36, Yubao Liu wrote:
>
>> In the old version, "auth->passdbs" contains all passdbs; this revision
>> changes "auth->passdbs" to contain only non-master passdbs.
>>
>> I'm not sure which fix is better, or even whether my proposal is correct or complete:
>>  a) in src/auth/auth.c:auth_passdb_preinit(), insert master passdbs into
>>     auth->passdbs too, and remove the duplicate code for masterdbs
>>     in auth_init() and auth_deinit().
>
> Not a good idea. The master passdb needs to be treated specially, otherwise you might accidentally allow regular users logging in as other users.
>

Sorry, I don't quite understand.
This scheme adds all master dbs to auth->passdbs, auth->masterdbs are not changed and still contains only master users. I guess dovecot lookups auth->masterdbs for master users and auth->passdbs for regular users, regular users don't know master users' passwords so they can't login as other users. http://wiki2.dovecot.org/Authentication/MasterUsers The "Example configuration" already shows master user account can be added to auth->passdbs too. This scheme does bring unexpected issue, the master users can't have separate passwords for regular login as themselves(because masterdbs are also added to passdbs), the risk of password leak increases much, but I don't think it's a good practice to do regular login with master user account. Quoted from same wiki page(I really enjoy the wonderful Dovecot wiki, it's the most well organized and documented wiki in open source projects, thank you very much!): "If you want master users to be able to log in as themselves, you'll need to either add the user to the normal passdb or add the passdb to dovecot.conf twice, with and without master=yes. Note that if the passdbs point to different locations, the user can have a different password when logging in as other users than when logging in as himself. This is a good idea since it can avoid accidentally logging in as someone else. " Anyway, the scheme B is much less risky and much simple, just a little annoying code duplication:-) >> ?b) add similar code for masterdbs in auth_passdb_list_have_verify_plain(), >> ? ? ?auth_passdb_list_have_lookup_credentials(), auth_passdb_list_have_set_credentials(). > > Kind of annoying code duplication, but .. I guess it can't really be helped. Added: > http://hg.dovecot.org/dovecot-2.0/rev/bed15faedfd4 > Thank you very much, I don't have to maintain my private package:-) >> Another related question is "pass" option in master passdb, if I set it to "yes", >> the authentication fails: > .. >> My normal passdb is a PAM passdb, ?it doesn't support credential lookups, that's >> reasonable, > > Right. > >> but I feel the comment for "pass" option is confusing: >> >> ?# Unless you're using PAM, you probably still want the destination user to >> ?# be looked up from passdb that it really exists. pass=yes does that. >> ?pass = yes >> } >> >> According the comment, it's to check whether the real user exists, why not >> to check userdb but another passdb? > > Well.. It is going to check userdb eventually anyway, so it would still fail, just a bit later and maybe with different error message. If Dovecot doesn't check password for the real user against passdb (actually it doesn't have the password of real user because it's doing master user proxy authorization), it won't fail on userdb lookup because the userdb does contain the real user, in my case, the real user is system user and absolutely exists. > >> Even it must check against passdb, >> in this case, it's obvious not necessary to lookup credentials, it's enough to >> to lookup user name only. > > There's currently no passdb that supports "does user exist?" lookup, but doesn't support credentials lookup, so this is more of a theoretical issue. (I guess maybe PAM could be abused in some configurations to do the check, but that's rather ugly..) I don't understand why master user proxy authorization in Dovecot has to check real user against his credential, does that mean "user*master" has to authenticate twice? 
one for master, one for user, but often client can't provide two passwords in single login and the regular passdb such as PAM passdb doesn't support credentials lookup. So I feel it's better Dovecot checks only destination user names in passdbs or userdbs after master user authentication part succeeds to decide whether the destination user exists, just as the comment for "pass=yes" describes. This may not be a bug, IMHO just a confusing feature. Regards, Yubao Liu From l.chelchowski at slupsk.eurocar.pl Tue Jan 10 11:34:37 2012 From: l.chelchowski at slupsk.eurocar.pl (l.chelchowski) Date: Tue, 10 Jan 2012 10:34:37 +0100 Subject: [Dovecot] Quota-warning and setresgid Message-ID: Hi! Please help me with this. The problem exists when quota-warning is executing: LOG: Jan 10 10:15:06 lmtp(85973): Debug: none: root=, index=, control=, inbox=, alt= Jan 10 10:15:06 lmtp(85973): Info: Connect from local Jan 10 10:15:06 lmtp(85973): Debug: Loading modules from directory: /usr/local/lib/dovecot Jan 10 10:15:06 lmtp(85973): Debug: Module loaded: /usr/local/lib/dovecot/lib10_quota_plugin.so Jan 10 10:15:06 lmtp(85973): Debug: Module loaded: /usr/local/lib/dovecot/lib90_sieve_plugin.so Jan 10 10:15:06 lmtp(85973): Debug: auth input: tester at domain.eu home=/home/vmail/domain.eu/tester/ mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public uid=101 gid=12 quota_rule=*:storage=2097 acl_groups= Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: plugin/quota_rule=*:storage=2097 Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: plugin/acl_groups= Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Effective uid=101, gid=12, home=/home/vmail/domain.eu/tester/ Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota root: name=user backend=dict args=:proxy::quotadict Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: root=user mailbox=* bytes=2147328 messages=0 Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: root=user mailbox=Trash bytes=+429465 (20%) messages=0 Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: root=user mailbox=SPAM bytes=+429465 (20%) messages=0 Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: bytes=1717862 (80%) messages=0 reverse=no command=quota-warning 80 tester at domain.eu Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: bytes=1932595 (90%) messages=0 reverse=no command=quota-warning 90 tester at domain.eu Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: bytes=2039961 (95%) messages=0 reverse=no command=quota-warning 95 tester at domain.eu Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: dict quota: user=tester at domain.eu, uri=proxy::quotadict, noenforcing=0 Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : type=private, prefix=, sep=/, inbox=yes, hidden=no, list=yes, subscriptions=yes location=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: maildir++: root=/home/vmail/domain.eu/tester, index=/var/mail/vmail/domain.eu/tester at domain.eu/index/public, control=, inbox=/home/vmail/domain.eu/tester, alt= Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : 
type=public, prefix=Public/, sep=/, inbox=no, hidden=no, list=children, subscriptions=yes location=maildir:/home/vmail/public/:CONTROL=/var/mail/vmail/domain.eu/tester/control/public:INDEX=/var/mail/vmail/domain.eu/tester/index/public:LAYOUT=fs
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: fs: root=/home/vmail/public, index=/var/mail/vmail/domain.eu/tester/index/public, control=/var/mail/vmail/domain.eu/tester/control/public, inbox=, alt=
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : type=shared, prefix=Shared/%u/, sep=/, inbox=no, hidden=no, list=children, subscriptions=no location=maildir:%h/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/shared/%u
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: shared: root=/var/run/dovecot, index=, control=, inbox=, alt=
...
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: quota: Executing warning: quota-warning 95 tester at domain.eu
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Info: bLUfAJoBDE/VTwEA9hAjDg: sieve: msgid=<4F0C0180.3040704 at domain.eu>: stored mail into mailbox 'INBOX'
Jan 10 10:15:06 lmtp(85973): Info: Disconnect from local: Client quit (in reset)
Jan 10 10:15:06 lda: Debug: Loading modules from directory: /usr/local/lib/dovecot
Jan 10 10:15:06 lda: Debug: Module loaded: /usr/local/lib/dovecot/lib01_acl_plugin.so
Jan 10 10:15:06 lda: Debug: Module loaded: /usr/local/lib/dovecot/lib10_quota_plugin.so
Jan 10 10:15:06 lda: Debug: Module loaded: /usr/local/lib/dovecot/lib90_sieve_plugin.so
Jan 10 10:15:06 lda: Debug: auth input: tester at domain.eu home=/home/vmail/domain.eu/tester/ mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public uid=101 gid=12 quota_rule=*:storage=2097 acl_groups=
Jan 10 10:15:06 lda: Debug: Added userdb setting: mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public
Jan 10 10:15:06 lda: Debug: Added userdb setting: plugin/quota_rule=*:storage=2097
Jan 10 10:15:06 lda: Debug: Added userdb setting: plugin/acl_groups=
Jan 10 10:15:06 lda(tester at domain.eu): Fatal: setresgid(12(mail),12(mail),101(vmail)) failed with euid=101(vmail): Operation not permitted
Jan 10 10:15:06 master: Error: service(quota-warning): child 85974 returned error 75

dovecot -n
# 2.0.16: /usr/local/etc/dovecot/dovecot.conf
# OS: FreeBSD 8.2-RELEASE-p3 amd64
auth_master_user_separator = *
auth_mechanisms = plain login cram-md5
auth_username_format = %Lu
dict {
  quotadict = mysql:/usr/local/etc/dovecot/dovecot-dict-sql.conf
}
disable_plaintext_auth = no
first_valid_gid = 12
first_valid_uid = 101
log_path = /var/log/dovecot.log
mail_debug = yes
mail_gid = vmail
mail_plugins = " quota acl"
mail_privileged_group = vmail
mail_uid = vmail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date
namespace {
  inbox = yes
  location =
  prefix =
  separator = /
  type = private
}
namespace {
  list = children
  location = maildir:/home/vmail/public/:CONTROL=/var/mail/vmail/%d/%n/control/public:INDEX=/var/mail/vmail/%d/%n/index/public:LAYOUT=fs
  prefix = Public/
  separator = /
  subscriptions = yes
  type = public
}
namespace {
  list = children
  location = maildir:%%h/:INDEX=/var/mail/vmail/%d/%u/index/shared/%%u
  prefix = Shared/%%u/
  separator = /
  subscriptions = no
  type = shared
}
passdb {
  args = /usr/local/etc/dovecot/dovecot-sql.conf
  driver = sql
}
passdb {
  args = /usr/local/etc/dovecot/passwd.masterusers
  driver = passwd-file
  master = yes
  pass = yes
}
plugin {
  acl = vfile:/usr/local/etc/dovecot/acls
  acl_shared_dict = file:/usr/local/etc/dovecot/shared/shared-mailboxes.db
  autocreate = Trash
  autocreate2 = Junk
  autocreate3 = Sent
  autocreate4 = Drafts
  autocreate5 = Archives
  autosubscribe = Trash
  autosubscribe2 = Junk
  autosubscribe3 = Sent
  autosubscribe4 = Drafts
  autosubscribe5 = Public/Poczta
  autosubscribe6 = Archives
  fts = squat
  fts_squat = partial=4 full=10
  quota = dict:user::proxy::quotadict
  quota_rule2 = Trash:storage=+20%%
  quota_rule3 = SPAM:storage=+20%%
  quota_warning = storage=80%% quota-warning 80 %u
  quota_warning2 = storage=90%% quota-warning 90 %u
  quota_warning3 = storage=95%% quota-warning 95 %u
  sieve = ~/.dovecot.sieve
  sieve_before = /usr/local/etc/dovecot/sieve/default.sieve
  sieve_dir = ~/sieve
  sieve_global_dir = /usr/local/etc/dovecot/sieve
  sieve_global_path = /usr/local/etc/dovecot/sieve/default.sieve
}
protocols = imap pop3 sieve lmtp
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = mail
    mode = 0660
    user = postfix
  }
  unix_listener auth-userdb {
    group = mail
    mode = 0660
    user = vmail
  }
}
service dict {
  unix_listener dict {
    mode = 0600
    user = vmail
  }
}
service imap {
  executable = imap postlogin
}
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0660
    user = postfix
  }
}
service managesieve {
  drop_priv_before_exec = yes
}
service pop3 {
  drop_priv_before_exec = yes
}
service postlogin {
  executable = script-login rawlog
}
service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  unix_listener quota-warning {
    user = vmail
  }
  user = vmail
}
ssl = no
userdb {
  args = /usr/local/etc/dovecot/dovecot-sql.conf
  driver = sql
}
verbose_proctitle = yes
protocol imap {
  imap_client_workarounds = delay-newmail tb-extra-mailbox-sep
  mail_plugins = " acl imap_acl autocreate fts fts_squat quota imap_quota"
}
protocol lmtp {
  mail_plugins = quota sieve
}
protocol pop3 {
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
  pop3_uidl_format = %08Xu%08Xv
}
protocol lda {
  deliver_log_format = msgid=%m: %$
  mail_plugins = sieve acl quota
  postmaster_address = postmaster at domain.eu
  sendmail_path = /usr/sbin/sendmail
}
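The quota-warning.sh itself is just the usual wrapper around
dovecot-lda (reproduced here from memory; the lda path is as installed
on this box, and the exact -o quota override for the dict backend is my
best guess):

#!/bin/sh
PERCENT=$1
USER=$2
# Deliver the warning through dovecot-lda with quota enforcement
# disabled, so the warning mail itself is never rejected as over quota.
cat << EOF | /usr/local/libexec/dovecot/dovecot-lda -d "$USER" \
  -o "plugin/quota=dict:user::noenforcing:proxy::quotadict"
From: postmaster@domain.eu
Subject: Quota warning

Your mailbox is now $PERCENT% full.
EOF
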
--
Łukasz

From bind at enas.net Tue Jan 10 14:22:27 2012
From: bind at enas.net (Urban Loesch)
Date: Tue, 10 Jan 2012 13:22:27 +0100
Subject: [Dovecot] Proxy login failures
In-Reply-To: <27646CE2-F912-4D61-9016-F6BBE0DA9C56@iki.fi>
References: <4F0ABF0A.1080404@enas.net> <69796D8B-5CFE-48A2-A092-B1A32331BC1F@iki.fi> <4F0B4CB6.2080703@enas.net> <27646CE2-F912-4D61-9016-F6BBE0DA9C56@iki.fi>
Message-ID: <4F0C2D83.4010108@enas.net>

On 09.01.2012 23:39, Timo Sirainen wrote:
> On 9.1.2012, at 22.23, Urban Loesch wrote:
>
>>>> I'm using two dovecot pop3/imap proxies in front of our dovecot servers.
>>>> Since some days I see many of the following errors in the logs of the two proxy-servers:
>>>>
>>>> dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0): user=, method=PLAIN, rip=remote-ip, lip=localip
>>>>
>>>> When this happens the Client gets the following error from the proxy:
>>>> -ERR [IN-USE] Account is temporarily unavailable.
>>> The connection to remote server dies before authentication finishes. The reason for why that happens should be logged by the backend server. Sounds like it crashes.
>>> Check for ANY error messages in backend servers.
>>>
>>
>> I still did that, but I found nothing in the logs.
>
> It's difficult to guess then. At the very least there should be an "Info" message about a new connection at the time when this failure happened. If there's not even that, then maybe the problem is network related.

No, there is nothing.

>
>> The only thing I could think of is that all 7 backend servers are virtual servers (using technology from http://linux-vserver.org) and they are all running
>> on the same physical machine (DELL PER610 with 32GB RAM, RAID 10 SAS - load between 0.5 and 2.0, iowait about 1-5%). So they are sharing the same kernel.
>
> For testing, or what's the point in doing that? :) But the load is low enough that I doubt it has anything to do with it.

This is because the hardware is fast enough to handle about 40,000 mail
accounts (both IMAP and POP). That tells me that dovecot is a really
good piece of software. Very performant in my eyes.

>
>> Also, all servers are connected to a mysql server running on a different machine in the same subnet. Could it be that either the kernel needs some tcp tuning, or perhaps the answers from the remote mysql server
>> could be too slow in some cases?
>
> MySQL server problem would show up with a different error message. TCP tuning is also unlikely to help, since the connection probably dies within a second. Actually it would be a good idea to log the duration. This patch adds it:
> http://hg.dovecot.org/dovecot-2.0/raw-rev/8438f66433a6
>

I installed the patch on my proxies and I got this:

...
Jan 10 09:30:45 imap2 dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0, duration=0s): user=, method=PLAIN, rip=remote-ip, lip=local-ip
Jan 10 09:45:21 imap2 dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0, duration=1s): user=, method=PLAIN, rip=remote-ip, lip=local-ip
...

As you can see, the duration is between 0 and 1 seconds. During these
errors a tcpdump was running on proxy #2 (imap2 in the above logs).
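(The capture was started roughly like this - interface and capture
file name are from memory, not the exact command:)

tcpdump -ni eth0 -s 0 -w pop3.pcap port 110
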
No.     Source               Time                        Destination          Protocol  Info
101235  IPv6-Proxy-Server    2012-01-10 09:29:38.015073  IPv6-Backend-Server  TCP  35341 > pop3 [SYN] Seq=0 Win=14400 Len=0 MSS=1440 SACK_PERM=1 TSV=1925901864 TSER=0 WS=7
101236  IPv6-Backend-Server  2012-01-10 09:29:38.015157  IPv6-Proxy-Server    TCP  pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309225565 TSER=1925901864 WS=7
101248  IPv6-Proxy-Server    2012-01-10 09:29:38.233046  IPv6-Backend-Server  POP  [TCP ACKed lost segment] [TCP Previous segment lost] C: UIDL
101249  IPv6-Backend-Server  2012-01-10 09:29:38.233312  IPv6-Proxy-Server    POP  S: +OK
101250  IPv6-Proxy-Server    2012-01-10 09:29:38.233328  IPv6-Backend-Server  TCP  35341 > pop3 [ACK] Seq=57 Ack=50 Win=14464 Len=0 TSV=1925901886 TSER=309225587
101263  IPv6-Proxy-Server    2012-01-10 09:29:38.452210  IPv6-Backend-Server  POP  C: LIST
101264  IPv6-Backend-Server  2012-01-10 09:29:38.452403  IPv6-Proxy-Server    POP  S: +OK 0 messages:
101265  IPv6-Proxy-Server    2012-01-10 09:29:38.452426  IPv6-Backend-Server  TCP  35341 > pop3 [ACK] Seq=63 Ack=70 Win=14464 Len=0 TSV=1925901908 TSER=309225609
101324  IPv6-Proxy-Server    2012-01-10 09:29:38.671209  IPv6-Backend-Server  POP  C: QUIT
101325  IPv6-Backend-Server  2012-01-10 09:29:38.671566  IPv6-Proxy-Server    POP  S: +OK Logging out.
101326  IPv6-Proxy-Server    2012-01-10 09:29:38.671678  IPv6-Backend-Server  TCP  35341 > pop3 [FIN, ACK] Seq=69 Ack=89 Win=14464 Len=0 TSV=1925901930 TSER=309225631
101327  IPv6-Backend-Server  2012-01-10 09:29:38.671759  IPv6-Proxy-Server    TCP  pop3 > 35341 [ACK] Seq=89 Ack=70 Win=14336 Len=0 TSV=309225631 TSER=1925901930
134205  IPv6-Proxy-Server    2012-01-10 09:30:45.477314  IPv6-Backend-Server  TCP  [TCP Port numbers reused] 35341 > pop3 [SYN] Seq=0 Win=14400 Len=0 MSS=1440 SACK_PERM=1 TSV=1925908610 TSER=0 WS=7
134206  IPv6-Backend-Server  2012-01-10 09:30:45.477458  IPv6-Proxy-Server    TCP  pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309232311 TSER=1925908610 WS=7
134207  IPv6-Proxy-Server    2012-01-10 09:30:45.477499  IPv6-Backend-Server  TCP  35341 > pop3 [ACK] Seq=1 Ack=1 Win=14464 Len=0 TSV=1925908610 TSER=309232311
134208  IPv6-Backend-Server  2012-01-10 09:30:45.477589  IPv6-Proxy-Server    TCP  pop3 > 35341 [RST] Seq=1 Win=0 Len=0
136052  IPv6-Backend-Server  2012-01-10 09:30:49.477950  IPv6-Proxy-Server    TCP  pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309232712 TSER=1925908610 WS=7
136053  IPv6-Proxy-Server    2012-01-10 09:30:49.477978  IPv6-Backend-Server  TCP  35341 > pop3 [RST] Seq=1 Win=0 Len=0
138363  IPv6-Backend-Server  2012-01-10 09:30:55.877899  IPv6-Proxy-Server    TCP  pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309233352 TSER=1925908610 WS=7
138364  IPv6-Proxy-Server    2012-01-10 09:30:55.877925  IPv6-Backend-Server  TCP  35341 > pop3 [RST] Seq=1 Win=0 Len=0
143154  IPv6-Backend-Server  2012-01-10 09:31:08.678005  IPv6-Proxy-Server    TCP  pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309234632 TSER=1925908610 WS=7
152353  IPv6-Backend-Server  2012-01-10 09:31:32.678103  IPv6-Proxy-Server    TCP  pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309237032 TSER=1925908610 WS=7
165891  IPv6-Backend-Server  2012-01-10 09:32:20.688324  IPv6-Proxy-Server    TCP  pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309241833 TSER=1925908610 WS=7

From frames 101235 - 101327 the session looks OK to me. But at frame 134205 Wireshark tells me that TCP source port 35341 is being reused, and at frame 134208 (after the TCP session has been established correctly - see frames 134205 to 134207) the backend server sends an RST packet for the session, and the proxy logs the error message that the connection has been reset by the peer. I have no idea whether dovecot is sending the TCP reset or the kernel itself.

About 1.5 hours ago I changed the kernel flag "/proc/sys/net/ipv4/tcp_tw_recycle" to "1" on the physical backend machine. Since then I have seen no more error messages on the proxies. Changing the default values of "tcp_fin_timeout" or "tcp_tw_reuse" had no effect. Only "tcp_tw_recycle" seems to help.

Thanks
Urban

> These are the only explanations that I can think of for the error:
>
> * Remote Dovecot crashes / kills the connection (it would log an error message)
> * Remote Dovecot server is too busy handling existing connections (it would log a warning)
> * Network trouble, something in the middle disconnecting the connection
> * Source/destination OS trouble, disconnecting the connection
> * Some hang that results in eventual disconnection. The duration patch would show if this is the case.
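[A sketch of the sysctl change Urban describes, for readers following along - note that tcp_tw_recycle is known to misbehave with clients behind NAT (and was removed from Linux entirely in 4.12), so treat it as a workaround rather than a fix:

  # on the physical backend machine, as root:
  sysctl -w net.ipv4.tcp_tw_recycle=1
  # equivalent to:
  echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
]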
From Ralf.Hildebrandt at charite.de Tue Jan 10 15:06:48 2012
From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt)
Date: Tue, 10 Jan 2012 14:06:48 +0100
Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well?
In-Reply-To: <20120109150249.GH22506@charite.de>
References: <20120109074057.GC22506@charite.de> <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi> <20120109150249.GH22506@charite.de>
Message-ID: <20120110130648.GD6686@charite.de>

* Ralf Hildebrandt :
> * Timo Sirainen :
> > On 9.1.2012, at 9.40, Ralf Hildebrandt wrote:
> >
> > > Today I encountered these errors:
> > >
> > > Jan 9 08:30:06 mail dovecot: lmtp(31174, backup at backup.invalid): Error: Log synchronization error at seq=858,offset=44672 for
> > > /home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID 282388, but next_uid = 282389
> >
> > Any idea why this happened?
> > I was running those commands: > > # new style (dovecot) > vorgestern=`date -d "-2 day" +"%Y-%m-%d"` > doveadm expunge -u backup at backup.invalid mailbox INBOX SAVEDBEFORE $vorgestern > doveadm purge -u backup at backup.invalid So today: # doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-08 | wc -l 0 # doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-09 | wc -l 0 # doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-10 | wc -l 45724 # doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-11 | wc -l 0 Then: doveadm expunge -u backup at backup.invalid mailbox INBOX SAVEDBEFORE 2012-01-08 && \ doveadm purge -u backup at backup.invalid resulted in: doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3f/4d/3f4d8043d87e248a2e97f87be1f604301573be49-72e4a90683d70a4fc47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3f/4d/3f4d8043d87e248a2e97f87be1f604301573be49-afef6f1bf1d40a4f6773000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/32/7f/327f6d3cccc7aceb42da69ee7f3baea3267d631f-f4f5b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/21/f4/21f48fad649f1b7249f9aab98b7c079b6ac19b5b-9a4fcb1e83d70a4fcd7e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/21/f4) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-fe508a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-a04fcb1e83d70a4fcd7e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-beba543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-52c15a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-c4ba543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/00/04/00048d4ec98f654ad681a97b07d2e806a09c1641-22a9531683d70a4fc97e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/00/04) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c4/ae/c4aebf70927db7997eb8755c61a490581aff94a6-27bb543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bb/91/bb913960266ce20c2fea64ceaed1fb29eab868ce-4ba9531683d70a4fc97e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/34/0b/340b8ae1e2c6ccbfba161475440b172caaff92b3-1d518a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/4c/1e/4c1e264df5d168ed4e676267a4dcf38cd82e9797-1e518a2195d30a4fb86f000063bdf393) failed: No such file or directory 
doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/4c/1e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/ca/a7/caa75263442d125e08493b237c332351604b651a-1f518a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/ca/a7) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c7/75/c775a5736e1800e3c654291b42f942ebebc6e343-c2327907cad70a4fd47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c7/75) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/1b/da/1bdaede5f6b4175e577fa4148a1d2c75b6291047-c3327907cad70a4fd47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/1b/da) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/e4/83/e4838792800058921c4dce395f5c038e3072f053-c4327907cad70a4fd47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/e4/83) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f6/bc/f6bc6d4a0127e275a61e0f8c3c56407240547bd6-4850cb1e83d70a4fcd7e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f6/bc) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7f/9d/7f9dcd43a8a04aa0a0e438d1568129baf6d66105-c104472383d70a4fd17e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f7/4d/f74df38ff421889090e995383b5c81912c15879b-db04472383d70a4fd17e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f7/4d) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bf/ef/bfef000b86fd483daefce6472bec6e1694aaac94-9a416f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/bf/ef) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bf/ef/bfef000b86fd483daefce6472bec6e1694aaac94-55bb543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/bf/ef) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-5bbb543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-61bb543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-62f6b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-ec327907cad70a4fd47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-b9a9531683d70a4fc97e000063bdf393) failed: 
No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/24/c4/24c4692ab968bfd94cf1ca62fb46a88b7dcd78f1-df71181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/24/c4) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9f/86/9f862a2ed2f9c8f9cffbfea60883da513abf390d-67c35a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/9f/86) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/31/bf/31bf583bd7db531f5b634f6f2220eb8c803f720d-50bd543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/31/bf) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/41/97/4197a3de49f40e5f6c641be08b4e710c02a8a9f4-28e78c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/41/97) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/fc/f7/fcf791a4e521548aceae0a62b3924b075f1c7b31-63ab531683d70a4fc97e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bd/18/bd18eecdcc9e9f17b851a1742c7ca6f8f7badfe7-2872181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bb/1d/bb1da62d3688f09d4223188d0e16986a57458b91-2e72181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/bb/1d) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/eb/6e/eb6ebe01f3e1feaa1f5635cef5b8286e375dfdb1-3de78c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/eb/6e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f1/94/f1943d0e581f54fe68f3ae052e3d2eba75ff3822-76c45a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f1/94) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/38/27/3827505dc4412178b87c758a4f5d697644260e9e-0ee88c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/38/27) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/ca/4e/ca4eec1630af1e986c66593bce522c32db4060cb-2b2f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/ca/4e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/1b/46/1b4691175805ffb373f5c8406f33f79b41dceed2-c772181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/1b/46) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/78/94/789426ef58857e12e448e590201cf81acda1d3f0-af528a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at 
backup.invalid): Error: unlink(/path/to/attachments/b2/01/b201c8727f5d286fd3f99f61710e900aaae42bcf-0f446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7a/58/7a58d847b53f9980365be256607df90bd4885152-10446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/83/21/8321d504845fb7fa518ffbbe9e65ba79357dc40d-11446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2b/90/2b90a014dadfb625859720a930743d76ff1dc960-1ad49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/2b/90) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/26/bb/26bb6ef66a1c9374cde9dd4ee40c03d52a37a078-1bd49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/26/bb) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7e/b6/7eb6cf6ecf3375708e922879cb3695c45c943650-6e73181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/7e/b6) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3e/56/3e56af1afce990d2633c722e5c0b241064be0908-6f73181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/3e/56) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9e/bd/9ebdb50383e3f1166b2aa753e78b855fae505528-21d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/9e/bd) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/24/c4/24c4692ab968bfd94cf1ca62fb46a88b7dcd78f1-88e88c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/24/c4) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/24/22/2422e76785185795d11ff19cd23c10af2df4aee3-9373181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/09/3b/093bbae088c17039975e55fe49f683ab5ac79f89-0112a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/09/3b) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/5a/30/5a30cbae4a3900fdb2bb20e405db5d00ab93ffe3-0c12a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/5a/30) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/1b/7f/1b7f03005f41026e42e354cb3a8dd805d793720e-5ed49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/1b/7f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/fd/51/fd51d22ba92f2e018f842851149fffb81f1f1264-64d49606d5d90a4f9f06000063bdf393) failed: No such file or 
directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/fd/51) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/70/5c/705c1daf8f35ca7bf9f7bbf2fdf1b29de33766f8-65d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/70/5c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/e4/e0/e4e08863ae910339a3809ea51ddefb0a4db9c646-66d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/e4/e0) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6c/cc/6ccc5e659c6de92852315bfe977cab24b6238dc9-67d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/6c/cc) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7f/9c/7f9c841c810561bde8a5e3b3e51c55de53620f47-68d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/7f/9c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/d3/7e/d37eb19d8379eb292971bb931068e34ece403f1f-aac55a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/d3/7e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/70/6b/706b010991ced768476e9921efd1d8baef6af103-abc55a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/70/6b) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/98/4e/984e36f32eb16f85349bafe5ef5b7c2367a30d45-bc73181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/98/4e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/60/8d/608da5e9d2b4eb43705b62a7605068c886bc486b-8cd49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/60/8d) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f9/e7/f9e75a76c6aacb9259e3055f5dffee9b7b37179d-eb73181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f9/e7) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c2/68/c268bd0abe0e334e64cf40e3b4e571eae6415c40-bbc55a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c2/68) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c5/aa/c5aac7e0a301a798a4eedc0140bd3f71329046df-ffbd543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/26/f82660e313a8a22e0152bf457231cb5a535eebcb-cbe88c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f8/26) failed: No such file or directory 
doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9d/8f/9d8f0e6d58d86e5672876a4a5ac0d626f01b2653-6812a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/9d/8f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f7/b5/f7b51d6500a594ea870151e1f6845ae1ca4dfa88-73538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/43/73/43737f46f935ea8a2077ebb3c4bc356572bb07ff-7e538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/43/73) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/4e/29/4e29fdbf309d66faba520a4a3e3f87ead728c7af-3774181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/4e/29) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/12/34/123406129b4eb0c148093a81dc07d63f01d6d409-8712a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/12/34) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/78/8c/788c84e6f0aa744ba996e3adad4912547b85860d-90446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/78/8c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9f/68/9f688dfa87bd86e0302544a6f4051be4c0ebe9f3-2cf9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/19/86/198610c4bc9753908fcfe6c1bd6b330d8df7f7af-2df9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3b/58/3b58ae30de03e06cba4520328dcaf8461321361f-4274181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/3b/58) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2f/ec/2fecc3dfa406921e622a638d382f123826008e68-c12f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/08/e6/08e61656d670261693ada0e71514a17c523dc239-c22f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/d4/9d/d49dccea098551240dcae8a6454738a103d050eb-cd2f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/d4/9d) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/26/a8/26a8bd4bfa69aa7e699d9c9749443ed2b72bbbd9-96446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/26/a8) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6d/f5/6df5a5c2dee317fa2c9d2aa2de9730ef0e086912-47f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: 
unlink(/path/to/attachments/ba/f0/baf0274f01c12252224fc0d67a7018a6323127ff-48f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/26/f82660e313a8a22e0152bf457231cb5a535eebcb-4ef9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f8/26) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/5f/91/5f91067608300c30a80b6441b4bfe5d2e7ac3ab5-9c446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/5f/91) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/8d/d4/8dd4d6fd05df137fe72ae6bea359c4c740b41bdb-9712a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c8/97/c8979042d406fa3db0f0d5ab9d8e40fc5087c116-4d74181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6b/24/6b2434e0486a4e033a5057e323c7fa76baced4aa-a7538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/6b/24) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/05/16/051673f94b6dbc17a265e6cb8a0f189f5a47518d-a8538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9c/c1/9cc1bebd4a576aa28bf4a54f54206bf447c1e31c-87be543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/9c/c1) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/04/00/04008a41d5d6d43a67ab5b823f9d80852b6f828e-f32f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/04/00) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/d0/b1/d0b1d7351d2a174541124b35f5471e57dd480795-fe2f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/d0/b1) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3e/76/3e7645ee60d67e21c96d79ad0d961c7d9f5ca074-68f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/3e/76) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/5e/5e/5e5e0db722cbdf59d7b23a50ab9613cd41585861-04e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/5e/5e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/1c/32/1c32564e82ebe7acc876ea5a1fd7e9f00a695c97-52ce1105c0db0a4f3c0c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/1c/32) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7a/bf/7abfe0943de9fa8e6cc468a3017705d1e44c9af4-15e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup 
at backup.invalid): Error: rmdir(/path/to/attachments/7a/bf) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/ca/22/ca22fb0b943c243bae14f43c08454765679805a5-25e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/ca/22) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/44/3a/443ad09efc00aec5fd5579d1bfa741efcc54625c-9974181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/24/c4/24c4692ab968bfd94cf1ca62fb46a88b7dcd78f1-d4538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/24/c4) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/00/b3/00b3c0397b0f24dcc44410e60984147f1f6dbd4c-76ce1105c0db0a4f3c0c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/00/b3) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2d/e5/2de509ec0507087ceec4d16dc39260f4b369886f-86ce1105c0db0a4f3c0c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/2d/e5) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/66/63/666325747133a0e84525bc402bb2984339037e31-c912a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/66/63) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/22/f9/22f9241435febe0df6556f7d681792f1cc5b1637-ca12a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/fa/1c/fa1c7138221dc54ab7b6d833970b6d230304fff3-91f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/b5/44/b54476a0510cb83fd0144397169d603a72b3d8db-54e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/b5/44) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/61/41/6141377ddbeaaa2d2f641392993723ac09dab7af-201cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2e/a3/2ea3fffc796512c7bf235885f3b37b0ec9c4c620-211cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/85/cd/85cd1f6f08d3becd97d64653129bb71e513aa265-b0f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/85/cd) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bf/3c/bf3c51578bfee97ad56b9f0b1f6f74bbc8b30316-311cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/bf/3c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c3/15/c315fdf651d9bc65d468634598ea1a8a5ef2f0dc-321cb50cb6db0a4f370c000063bdf393) failed: No such file or 
directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c3/15) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c9/6f/c96ff3f27a51bb19282196cf416a51a079c5f75e-2d30152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c9/6f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/61/f86119c665d89911b256e68c77354502389180eb-2e30152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/a3/7c/a37c6f0966bc866dcb44b5304610604675bbb81b-2f30152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/a3/7c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/72/20/7220923da82b479c5381bdc7efe8a392c890b09d-02bf543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/72/20) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c9/6f/c96ff3f27a51bb19282196cf416a51a079c5f75e-5975181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c9/6f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/61/f86119c665d89911b256e68c77354502389180eb-5a75181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/a3/7c/a37c6f0966bc866dcb44b5304610604675bbb81b-5b75181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/a3/7c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c1/47/c14767dfeaadbd2a8205767e9a274e2854c1b97c-451cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c1/47) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6b/33/6b33d1b0fb794a97b1185158797bf360b96d3e62-461cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/6b/33) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c9/6f/c96ff3f27a51bb19282196cf416a51a079c5f75e-91e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c9/6f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/61/f86119c665d89911b256e68c77354502389180eb-92e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/a3/7c/a37c6f0966bc866dcb44b5304610604675bbb81b-93e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/a3/7c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/21/57/2157e48b0f4d145909a05dc88dd9f4ab5eacba92-7215b60eb6db0a4f380c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): 
Error: rmdir(/path/to/attachments/21/57) failed: No such file or directory
doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2a/3a/2a3aade76b96474ff4d625ed2ecd9261dde5098e-e412a307d5d90a4fa006000063bdf393) failed: No such file or directory
doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6e/26/6e26032f9b2e9bb3eb6d042710b3a593a0ef4a6d-5b1cb50cb6db0a4f370c000063bdf393) failed: No such file or directory
doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/6e/26) failed: No such file or directory
doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7a/7f/7a7fce2fad3b3046d2744f271e676f84b7bc931e-611cb50cb6db0a4f370c000063bdf393) failed: No such file or directory
doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/7a/7f) failed: No such file or directory
doveadm(backup at backup.invalid): Error: Corrupted dbox file /path/to/mdbox/storage/m.434 (around offset=61672172): purging found mismatched offsets (61672142 vs 61665615, 7661/10801)
doveadm(backup at backup.invalid): Warning: mdbox /path/to/mdbox/storage: rebuilding indexes
doveadm(backup at backup.invalid): Error: Purging namespace '' failed: Internal error occurred. Refer to server log for more information. [2012-01-10 13:59:52]

After that:

# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-08 | wc -l
0
# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-09 | wc -l
0
# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-10 | wc -l
189
# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-11 | wc -l
0
# fgrep dovecot: /var/log/mail.log | grep -v "dovecot: lmtp"
# Nothing!

--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebrandt at charite.de | http://www.charite.de

From ath at b-one.net Tue Jan 10 16:05:14 2012
From: ath at b-one.net (Anders)
Date: Tue, 10 Jan 2012 15:05:14 +0100
Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH
Message-ID: <20120110140514.57EB0E51BC96F@bmail-n01.one.com>

Hi,

I have been looking at search and sorting with dovecot and have run into some things. The first one I think may be a minor bug, because this set of commands results in the socket connection being closed without warning:

UID SEARCH RETURN (SAVE COUNT) CHARSET UTF-8 (UNDELETED TEXT "foo")
UID SEARCH RETURN (COUNT MIN) CHARSET UTF-8 () $

The empty parentheses before the reference to the previous search result ($) are not legal IMAP, but I don't think they should cause the socket to be closed.

Then I have a question about RFC 5267 and the announcement of CONTEXT=SEARCH in the capabilities. I think this RFC is supported by dovecot, or maybe just part of the RFC is supported? At least when I include the CONTEXT ADDTO or REMOVEFROM keywords I get an error, but UPDATE and CANCELUPDATE seem to be supported. The RFC has been updated by the RFC describing the NOTIFY extension to IMAP, so maybe it has been decided not to add these keywords until a later time?

I am using dovecot version 2.0.15 (with patches from Apple).

Best Regards
Anders
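[For comparison, a presumably legal form of the second command simply drops the empty parenthesis list and references the saved result via $ alone - a sketch per RFC 5182/RFC 5267, not tested against this server:

  UID SEARCH RETURN (COUNT MIN) CHARSET UTF-8 $
]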
From bschmidt at cms.hu-berlin.de Tue Jan 10 16:16:14 2012
From: bschmidt at cms.hu-berlin.de (Burckhard Schmidt)
Date: Tue, 10 Jan 2012 15:16:14 +0100
Subject: [Dovecot] rewriting mail_location
Message-ID: <4F0C482E.7000900@cms.hu-berlin.de>

Hello,

I have LDAP as userdb. The entries contain the attributes mail=alias.user1 at some.domain.de and uid=user1. Mail to alias.user1 at some.domain.de gets delivered by the lda into /datatest/user/alias.user1 instead of /datatest/user/user1.

I have

userdb {
  args = /usr/dovecot/etc/ldapuser.conf
  driver = ldap
}

with a ldapuser.conf:

hosts ...
base ...
user_filter = (&(|(mail=%u)(mail=%n at some.domain) (uid=%u))(objectClass=posixAccount))
user_attrs = uid=mail_location=maildir:/datatest/user/%$, uidNumber=29,gidNumber=133

I hoped the local part of the mail attribute could be replaced by the uid for local delivery with dovecot's lda. Any hints on how to do that? (With postfix I could rewrite the address to uid at host and use local_transport = dovecot.) postfix has virtual_transport = dovecot.

LDAP entry:

mail: alias.user1 at some.domain.de
uid: user1
homeDirectory: /dev/null
uidNumber: 464
gidNumber: 100

Mail to alias.user1 at some.domain.de:

Jan 10 14:03:24 ubu1004 postfix/qmgr[25221]: C434D1EE: from=, size=239, nrcpt=1 (queue active)
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Loading modules from directory: /usr/dovecot/lib/dovecot
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Module loaded: /usr/dovecot/lib/dovecot/lib20_autocreate_plugin.so
Jan 10 14:03:24 ubu1004 dovecot: auth: Debug: master in: USER#0111#011alias.user1#011service=lda
Jan 10 14:03:24 ubu1004 dovecot: auth: Debug: ldap(alias.user1): user search: base=ou=users,ou=...,c=de scope=subtree filter=(&(|(mail=alias.user1)(mail=alias.user1 at some.domain.de)(uid=alias.user1))(objectClass=posixAccount)) fields=uid,uidNumber,gidNumber

Some substitutions are visible:

Jan 10 14:03:24 ubu1004 dovecot: auth: Debug: ldap(alias.user1): result: uid(location=maildir:/datatest/user/%$/maildir)=user1 gidNumber(133)=100 uidNumber(29)=464
Jan 10 14:03:24 ubu1004 dovecot: auth: Debug: master out: USER#0111#011alias.user1#011location=maildir:/datatest/user/user1/maildir#011133=100#01129=464
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: auth input: alias.user1 location=maildir:/datatest/user/user1/maildir 133=100 29=464
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Added userdb setting: plugin/location=maildir:/datatest/user/user1/maildir

but the alias "alias.user1" is still used for delivery:

Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Added userdb setting: plugin/133=100
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Added userdb setting: plugin/29=464
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: Effective uid=29, gid=133, home=
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: Namespace : type=private, prefix=, sep=/, inbox=yes, hidden=no, list=yes, subscriptions=yes location=maildir:/datatest/user/alias.user1/maildir:INDEX=/datatest/addons/index/alias.user1:CONTROL=/datatest/user/alias.user1/control:LAYOUT=fs
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: fs: root=/datatest/user/alias.user1/maildir, index=/datatest/addons/index/alias.user1, control=/datatest/user/alias.user1/control, inbox=/datatest/user/alias.user1/maildir, alt=
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: Namespace : Using permissions from /datatest/user/alias.user1/maildir: mode=0700 gid=-1
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: none: root=, index=, control=, inbox=, alt=
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: Destination address: alias.user1 at ubu1004 (source: user at hostname)
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): msgid=unspecified: saved mail to INBOX
Jan 10 14:03:24 ubu1004 postfix/pipe[25226]: C434D1EE: to=, relay=dovecot, delay=14, delays=14/0.01/0/0.02, dsn=2.0.0, status=sent (delivered via dovecot service)
Jan 10 14:03:24 ubu1004 postfix/qmgr[25221]: C434D1EE: removed

dovecot -n:

# 2.0.17 (684381041dc4+): /usr/dovecot/etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-34-generic-pae i686 Ubuntu 10.04.3 LTS ext4
mail_gid = sysdov
mail_location = maildir:/datatest/user/%n/maildir:INDEX=/datatest/addons/index/%n:CONTROL=/datatest/user/%n/control:LAYOUT=fs
mail_plugins = autocreate
mail_uid = sysdov
passdb {
  args = failure_show_msg=yes imap
  driver = pam
}
service auth {
  client_limit = 30000
  unix_listener auth-userdb {
    group = sysdov # effective 133
    mode = 01204
    user = sysdov # effective 29
  }
}
userdb {
  args = /usr/dovecot/etc/ldapuser.conf
  driver = ldap
}
protocol lda {
  mail_plugins = autocreate
}

and ldapuser.conf:

hosts ...
base ...
user_filter = (&(|(mail=%u)(mail=%n at some.domain) (uid=%u))(objectClass=posixAccount))
user_attrs = uid=mail_location=maildir:/datatest/user/%$, uidNumber=29,gidNumber=133

The local part of mail should be replaced by uid for local delivery.

--
Regards --- Burckhard Schmidt

From tss at iki.fi Tue Jan 10 16:16:51 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 10 Jan 2012 16:16:51 +0200
Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH
In-Reply-To: <20120110140514.57EB0E51BC96F@bmail-n01.one.com>
References: <20120110140514.57EB0E51BC96F@bmail-n01.one.com>
Message-ID: <1326205011.6987.90.camel@innu>

On Tue, 2012-01-10 at 15:05 +0100, Anders wrote:
> the socket connection being closed without warning:
>
> UID SEARCH RETURN (SAVE COUNT) CHARSET UTF-8 (UNDELETED TEXT "foo")

You mean it closes with the above also? It works fine for me.

> UID SEARCH RETURN (COUNT MIN) CHARSET UTF-8 () $

This was fixed in v2.0.17.

> Then I have a question about RFC 5267 and the announcement of CONTEXT=SEARCH
> in the capabilities. I think this RFC is supported by dovecot, or maybe
> just part of the RFC is supported?

All of it is supported, as far as I know.

> At least when I include the CONTEXT ADDTO or REMOVEFROM keywords I get
> an error,

These are server notifications. Clients aren't supposed to send them.

From divizio at exentrica.it Tue Jan 10 17:16:17 2012
From: divizio at exentrica.it (Luca Di Vizio)
Date: Tue, 10 Jan 2012 16:16:17 +0100
Subject: [Dovecot] little bug with Director in 2.1?
Message-ID:

Hi,

in 2.1rc3 the "director_servers" setting does not accept hostnames as documented (with IPs there are no problems). It works correctly in 2.0.17.

Greetings,
Luca

From Juergen.Obermann at hrz.uni-giessen.de Tue Jan 10 17:32:07 2012
From: Juergen.Obermann at hrz.uni-giessen.de (Jürgen Obermann)
Date: Tue, 10 Jan 2012 16:32:07 +0100
Subject: [Dovecot] Panic: file mbox-sync.c: line 1348: assertion failed
Message-ID: <20120110163207.182538xtgzoxjg8w@webmail.hrz.uni-giessen.de>

Hello,

I have the following problem with doveadm:

# gdb --args /opt/local/bin/doveadm -v mailbox status -u userxy/g029 'messages' "Software-alle/AK-Software-Tagung"
GNU gdb 5.3
Copyright 2002 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "sparc-sun-solaris2.8"...
(gdb) run
Starting program: /opt/local/bin/doveadm -v mailbox status -u g029 messages Software-alle/AK-Software-Tagung
warning: Lowest section in /lib/libthread.so.1 is .dynamic at 00000074
warning: Lowest section in /lib/libdl.so.1 is .hash at 000000b4
doveadm(g029): Panic: file mbox-sync.c: line 1348: assertion failed: (file_size >= sync_ctx->expunged_space + trailer_size)
doveadm(g029): Error: Raw backtrace: 0xff1cbc30 -> 0xff319544 -> 0xff319fa8 -> 0xff31add8 -> 0xff31b278 -> 0xff2a69b0 -> 0xff2a6bac -> 0x16808 -> 0x1b8fc -> 0x16ba0 -> 0x177cc -> 0x17944 -> 0x17a50 -> 0x204e8 -> 0x165c8

Program received signal SIGABRT, Aborted.
0xfe94dcdc in _lwp_kill () from /lib/libc.so.1
(gdb) bt full
#0  0xfe94dcdc in _lwp_kill () from /lib/libc.so.1
No symbol table info available.
#1  0xfe8e6fb4 in raise () from /lib/libc.so.1
No symbol table info available.
#2  0xfe8c2078 in abort () from /lib/libc.so.1
No symbol table info available.
#3  0xff1cb984 in default_fatal_finish () from /opt/local/lib/dovecot/libdovecot.so.0
No symbol table info available.
#4  0xff1cbc38 in i_panic () from /opt/local/lib/dovecot/libdovecot.so.0
No symbol table info available.
#5  0xff31954c in mbox_sync_handle_eof_updates () from /opt/local/lib/dovecot/libdovecot-storage.so.0
No symbol table info available.
#6  0xff319fb0 in mbox_sync_do () from /opt/local/lib/dovecot/libdovecot-storage.so.0
No symbol table info available.
#7  0xff31ade0 in mbox_sync_int () from /opt/local/lib/dovecot/libdovecot-storage.so.0
No symbol table info available.
#8  0xff31b280 in mbox_storage_sync_init () from /opt/local/lib/dovecot/libdovecot-storage.so.0
No symbol table info available.
#9  0xff2a69b8 in mailbox_sync_init () from /opt/local/lib/dovecot/libdovecot-storage.so.0
No symbol table info available.
#10 0xff2a6bb4 in mailbox_sync () from /opt/local/lib/dovecot/libdovecot-storage.so.0
No symbol table info available.
#11 0x00016810 in doveadm_mailbox_find_and_sync ()
No symbol table info available.
#12 0x0001b904 in cmd_mailbox_status_run ()
No symbol table info available.
#13 0x00016ba8 in doveadm_mail_next_user ()
No symbol table info available.
#14 0x000177d4 in doveadm_mail_cmd ()
No symbol table info available.
#15 0x0001794c in doveadm_mail_try_run_multi_word ()
No symbol table info available.
#16 0x00017a58 in doveadm_mail_try_run ()
No symbol table info available.
#17 0x000204f0 in main ()
No symbol table info available.
(gdb) quit
The program is running. Exit anyway?
(y or n) y

My configuration is as follows:

# /opt/local/bin/doveconf -n
# 2.0.16: /opt/local/etc/dovecot/dovecot.conf
# OS: SunOS 5.10 sun4v
auth_verbose = yes
disable_plaintext_auth = no
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes
listen = imap.hrz.uni-giessen.de localhost
mail_location = mbox:~/Mail:INBOX=/var/mail/%u
mail_plugins = mail_log notify zlib
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
mdbox_rotate_interval = 1 days
mdbox_rotate_size = 16 M
namespace {
  inbox = yes
  location =
  prefix =
  separator = /
  type = private
}
namespace {
  hidden = yes
  list = no
  location =
  prefix = Mail/
  separator = /
  subscriptions = yes
  type = private
}
passdb {
  driver = pam
}
passdb {
  args = /opt/local/etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
plugin {
  autocreate = Trash
  autocreate2 = caughtspam
  autocreate3 = Sent
  autocreate4 = Drafts
  autosubscribe = Trash
  autosubscribe2 = caughtspam
  autosubscribe3 = Sent
  autosubscribe4 = Drafts
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  zlib_save = gz
  zlib_save_level = 3
}
postmaster_address = postmaster at hrz.uni-giessen.de
quota_full_tempfail = yes
sendmail_path = /usr/lib/sendmail
service auth {
  client_limit = 11120
}
service imap-login {
  process_min_avail = 16
  service_count = 0
  vsz_limit = 640 M
}
service imap {
  process_limit = 4096
  vsz_limit = 1 G
}
ssl_cert =

References: <3a8f9df5e523c0391c41964ae3d09d1b@imapproxy.hrz> <677F82FE-850B-43EC-86C1-6B99ED74642A@iki.fi>
Message-ID:

Am 20.12.2011 06:45, schrieb Timo Sirainen:
> On 16.12.2011, at 0.00, Jürgen Obermann wrote:
>
>> Hello,
>> when I try to convert from mbox to mdbox with dsync with one user it
>> always panics:
>>
>> # /opt/local/bin/dsync -v -u userxy backup ssh root at minerva1 /opt/local/bin/dsync -v -u userxy
>> dsync-remote(userxy): Panic: Trying to allocate 2147483648 bytes
>
> Well, this is clearly the problem.. But it's difficult to guess where
> it's allocating that. I'd need a gdb backtrace. Does it write a core
> file to userxy's home dir? If not, try replacing dsync with a script
> that runs "ulimit -c unlimited" first and then execs dsync.
> http://dovecot.org/bugreport.html tells what to do with the core once you
> have it.
>
> Alternative idea: Does it crash also when dsyncing locally?
> gdb --args dsync -u userxy backup mdbox:/tmp/foobar
> run
> bt full

Sorry, this problem is gone - I cannot reproduce it any more, neither locally nor with remote dsync. I found out that the user has one huge mail in his drafts folder with a 1GB video attachment, but he surely could never have sent this mail because the mail size is limited to 50MB.

Greetings,
Jürgen

--
Jürgen Obermann
Hochschulrechenzentrum der Justus-Liebig-Universität Gießen
Heinrich-Buff-Ring 44
Tel. 0641-9913054
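[The wrapper Timo suggests above could look like the sketch below: rename the real dsync binary and drop the script in its place. The dsync.real path is illustrative, not from the thread.

  #!/bin/sh
  # enable core dumps, then exec the real dsync with the original arguments
  ulimit -c unlimited
  exec /opt/local/bin/dsync.real "$@"
]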
From mark at msapiro.net Tue Jan 10 18:34:20 2012
From: mark at msapiro.net (Mark Sapiro)
Date: Tue, 10 Jan 2012 08:34:20 -0800
Subject: [Dovecot] Clients show .subscriptions folder
Message-ID:

Since upgrading from dovecot-2.1.rc1 to dovecot-2.1.rc3, some clients are showing a .subscriptions file in the user's mbox path as a folder. Some clients, such as T'bird on Mac OS X, create this file listing subscribed mbox files. Other clients, such as T'bird on Windows XP, show this file as a folder in the folder list even though it cannot be accessed as a folder (dovecot returns "CANNOT Mailbox is not a valid mbox file").

I think this may be a result of uncommenting the inbox namespace in conf.d/10-mail.conf. Is there a way to suppress exposing this file to clients that don't use it?

# dovecot -n
# 2.1.rc3: /usr/local/etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-8.1.14.el5 i686 CentOS release 5 (Final)
auth_mechanisms = plain apop login
auth_worker_max_count = 5
mail_location = mbox:~/Mail:INBOX=/var/spool/mail/%u
mail_privileged_group = mail
mbox_write_locks = fcntl dotlock
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    special_use = \Trash
  }
  prefix =
}
passdb {
  args = /usr/local/etc/dovecot.passwd
  driver = passwd-file
}
passdb {
  driver = pam
}
protocols = imap pop3
service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
  }
}
ssl_cert =

The highway is for gamblers, San Francisco Bay Area, California
better use your sense - B. Dylan

From mark.davenport at yahoo.co.uk Tue Jan 10 21:12:04 2012
From: mark.davenport at yahoo.co.uk (sparkietm)
Date: Tue, 10 Jan 2012 11:12:04 -0800 (PST)
Subject: [Dovecot] Dovecot under Virtual Host environment??
Message-ID: <33114381.post@talk.nabble.com>

Hi all,

Could anyone point me to a walk-through of setting up Dovecot for multiple domains under a Virtual Host environment? I'd like to offer clients their own email domain for each virtual host, e.g. john at client_1.com, sue at client_2.com. I'm guessing this is a fairly common set-up, but a search on "multiple domains" didn't bring much back.

Also, would it be possible to offer each client webmail?

Many thanks in advance
Mark

--
View this message in context: http://old.nabble.com/Dovecot-under-Virtual-Host-environment---tp33114381p33114381.html
Sent from the Dovecot mailing list archive at Nabble.com.

From robert at schetterer.org Tue Jan 10 23:00:11 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Tue, 10 Jan 2012 22:00:11 +0100
Subject: [Dovecot] Dovecot under Virtual Host environment??
In-Reply-To: <33114381.post@talk.nabble.com>
References: <33114381.post@talk.nabble.com>
Message-ID: <4F0CA6DB.4010809@schetterer.org>

Am 10.01.2012 20:12, schrieb sparkietm:
> Hi all,
> Could anyone point me to a walk-through of setting up Dovecot for multiple
> domains under a Virtual Host environment? I'd like to offer clients their
> own email domain for each virtual host, e.g. john at client_1.com,
> sue at client_2.com. I'm guessing this is a fairly common set-up, but a search
> on "multiple domains" didn't bring much back.
>
> Also, would it be possible to offer each client webmail?
>
> Many thanks in advance
> Mark

look here for examples; for webmail, e.g. Roundcube, Horde and SquirrelMail are widely used:
http://wiki.dovecot.org/HowTo
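[A minimal sketch of such a multi-domain setup using flat files - the paths and file names are illustrative, and production setups usually keep users in SQL or LDAP as the HowTos describe:

  # %d is the domain, %n the local part: john at client_1.com -> /var/vmail/client_1.com/john
  mail_location = maildir:/var/vmail/%d/%n/Maildir
  passdb {
    driver = passwd-file
    args = /etc/dovecot/passwd.%d      # one passwd-file per domain
  }
  userdb {
    driver = static
    args = uid=vmail gid=vmail home=/var/vmail/%d/%n
  }

Any IMAP-capable webmail can then be pointed at this server, with each client logging in as user at domain.]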
From joseba.torre at ehu.es Wed Jan 11 13:12:16 2012
From: joseba.torre at ehu.es (Joseba Torre)
Date: Wed, 11 Jan 2012 12:12:16 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <4F0AF0B9.7030406@turmel.org>
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AEDCC.10109@hardwarefreak.com> <4F0AF0B9.7030406@turmel.org>
Message-ID: <4F0D6E90.5010603@ehu.es>

El 09/01/12 14:50, Phil Turmel escribió:
> I've been following this thread with great interest, but no advice to offer.
> The content is entirely appropriate, and appreciated. Don't be embarrassed
> by your enthusiasm, Stan.

+1

From sven at svenhartge.de Wed Jan 11 14:50:54 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Wed, 11 Jan 2012 13:50:54 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de>
Message-ID:

Sven Hartge wrote:
> I am currently in the planning stage for a "new and improved" mail
> system at my university.

OK, executive summary of the design ideas so far:

- deployment of X (starting with 4, but easily scalable) virtual servers on VMware ESX
- storage will be backed by a RDM on our iSCSI SAN:
  + main mailbox storage will be on 15k SAS6 600GB disks
  + backup rsnapshot storage will be on 7.2k SAS6 2TB disks
- XFS filesystem on LVM, allowing easy local snapshots for rsnapshot
- sharing folders from one user to another is not needed
- central public shared folders reside on their own storage server and are accessed through the imapc backend configured for the "#shared." namespace (needs dovecot 2.1~rc3 or higher)
- mdbox with compression (23h lifetime, 50MB max size)
- quota in MySQL, allowing my MXes to check the quota for a user _before_ accepting any mail for him (see the sketch after this message). This is a much needed feature, currently not possible and thus leading to backscatter right now.
- backup:
  + bacula for file-level backup every 24 hours (120 days retention)
  + rsnapshot to node-local backup space for easier access (14 days retention)
  + possibly SAN-based remote snapshots to a different storage tier

Because sharing a RDM (or VMDK) with multiple VMs pins the VM to an ESX server and prohibits HA and DRS in the ESX cluster, and because of my bad experience with cluster FSes, I want to avoid one and use only local storage for the personal mailboxes of the users.

Each user is fixed to one server; routing/redirecting of IMAP/POP3 connections happens via perdition (popmap feature via LDAP lookup) on a frontend server (this component has already been working for some 3-ish years). So each node is isolated from the other nodes, knows only its users and does not care about users on other nodes. This prevents usage of the dovecot director, which only works if all nodes are able to access all mailboxes (correct?)

I am aware this creates a SPoF for a 1/X portion of my users in the case of a VM failure, but this is deemed acceptable, since the use of VMs will allow me to quickly deploy a new one and reattach the RDM. (And if my whole iSCSI storage or ESX cluster fails, I have other, bigger problems than a non-functional mail system.)

Comments?

Grüße,
Sven.

--
Sigmentation fault. Core dumped.
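[A sketch of the SQL-backed quota dict such a design might use, so the MXes can read a user's usage with a plain SELECT before accepting mail - table and column names are illustrative; the pattern follows the dict-sql quota examples in the Dovecot wiki:

  # dovecot.conf
  mail_plugins = $mail_plugins quota
  plugin {
    quota = dict:User quota::proxy::sqlquota
  }
  dict {
    sqlquota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
  }

  # /etc/dovecot/dovecot-dict-sql.conf.ext
  connect = host=dbhost dbname=mails user=dovecot password=secret
  map {
    pattern = priv/quota/storage
    table = quota
    username_field = username
    value_field = bytes
  }
  map {
    pattern = priv/quota/messages
    table = quota
    username_field = username
    value_field = messages
  }

An MX would then check, e.g.: SELECT bytes FROM quota WHERE username = 'user at example.com';]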
From forumer at smartmobili.com Wed Jan 11 16:11:12 2012 From: forumer at smartmobili.com (forumer at smartmobili.com) Date: Wed, 11 Jan 2012 15:11:12 +0100 Subject: [Dovecot] Log imap commands Message-ID: Hi, I am trying to optimize an imap library and I am comparing it with some existing webmail clients; for instance, from Roundcube I can log the imap commands in the following format : [11-Jan-2012 14:22:55 +0100]: [DBD1] S: * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN AUTH=DIGEST-MD5 AUTH=CRAM-MD5] Dovecot ready. [11-Jan-2012 14:22:55 +0100]: [DBD1] C: A0001 ID ("name" "Roundcube Webmail" "version" "0.6" "php" "5.3.5-1ubuntu7.4" "os" "Linux" "command" "/") [11-Jan-2012 14:22:55 +0100]: [DBD1] S: * ID NIL [11-Jan-2012 14:22:55 +0100]: [DBD1] S: A0001 OK ID completed. [11-Jan-2012 14:22:55 +0100]: [DBD1] C: A0002 AUTHENTICATE CRAM-MD5 [11-Jan-2012 14:22:55 +0100]: [DBD1] S: + RDM1MTE1NjkxOTQzODE4NDEuMTMyNjI4ODE3NUBzZC0zMDYzNT4= [11-Jan-2012 14:22:55 +0100]: [DBD1] C: d2ViZ3Vlc3RAc21hcnRtb2JpbGkuY29tIDczODMxNjUzZmVlYzdjNDVlNzRkYTg1YjIwMjk2NWM0 [11-Jan-2012 14:22:55 +0100]: [DBD1] S: A0002 OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS FUZZY] Logged in [11-Jan-2012 14:22:55 +0100]: [DBD1] C: A0003 NAMESPACE [11-Jan-2012 14:22:55 +0100]: [DBD1] S: * NAMESPACE (("" ".")) NIL NIL [11-Jan-2012 14:22:55 +0100]: [DBD1] S: A0003 OK Namespace completed. [11-Jan-2012 14:22:55 +0100]: [DBD1] C: A0004 LOGOUT [11-Jan-2012 14:22:55 +0100]: [DBD1] S: * BYE Logging out ... And now I would like to do the same from my imap library, so I have started wireshark, but it's a bit messy and difficult to compare. I was wondering if dovecot allows to log imap communications ? Thanks From wgillespie+dovecot at es2eng.com Wed Jan 11 16:19:44 2012 From: wgillespie+dovecot at es2eng.com (Willie Gillespie) Date: Wed, 11 Jan 2012 07:19:44 -0700 Subject: [Dovecot] Log imap commands In-Reply-To: References: Message-ID: <4F0D9A80.2020707@es2eng.com> On 1/11/2012 7:11 AM, forumer at smartmobili.com wrote: > I was wondering if dovecot allows to log imap communications ? You could look at Rawlog http://wiki.dovecot.org/Debugging/Rawlog http://wiki2.dovecot.org/Debugging/Rawlog From gerv at esrf.fr Wed Jan 11 17:04:06 2012 From: gerv at esrf.fr (Didier Gervaise) Date: Wed, 11 Jan 2012 16:04:06 +0100 Subject: [Dovecot] How to solve a "Connection queue full problem" - dovecot version 2.0.16 Message-ID: <4F0DA4E6.8040205@esrf.fr> Hello, I put dovecot in production yesterday. After an hour, nobody could log in ("Max number of imap connection" error message on Thunderbird). Afterward, I found these messages in the logs: Jan 10 09:21:20 mailsrv dovecot: [ID 583609 mail.info] imap-login: Disconnected: Connection queue full (no auth attempts): rip=xxx.xxx.xxx.xxx, lip=xxx.xxx.xxx.xxx In a panic, I changed these values in /usr/local/etc/dovecot/conf.d/10-master.conf default_process_limit = 20000 default_client_limit = 20000 This apparently solved the problem but now I have these messages when I start dovecot: Jan 11 14:41:08 mailsrvspare dovecot: [ID 583609 mail.info] master: Dovecot v2.0.15 starting up Jan 11 14:41:08 mailsrvspare dovecot: [ID 583609 mail.warning] config: Warning: service auth { client_limit=4096 } is lower than required under max.
load (103024) Jan 11 14:41:08 mailsrvspare dovecot: [ID 583609 mail.warning] config: Warning: service anvil { client_limit=20000 } is lower than required under max. load (60003) What should I do ? - adding "service_count = 0" in service imap-login { ... } and removing the modifications I did in 10-master.conf ? or - should I configure default_process_limit and default_client_limit differently ? It is a small site (about 1000 users). Currently I have 666 imap processes and 136 imap-login processes. Additional info: the server is a Solaris 10 Sun X4540 with 32GB RAM mailsrv:~ % /usr/local/sbin/dovecot -n # 2.0.16: /usr/local/etc/dovecot/dovecot.conf doveconf: Warning: service auth { client_limit=4096 } is lower than required under max. load (103024) doveconf: Warning: service anvil { client_limit=20000 } is lower than required under max. load (60003) # OS: SunOS 5.10 i86pc default_client_limit = 20000 default_process_limit = 20000 disable_plaintext_auth = no first_valid_uid = 100 mail_debug = yes mail_plugins = " quota" mail_privileged_group = mail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave passdb { driver = pam } plugin { quota = maildir:User quota quota_rule = ?:storage=4G quota_rule2 = Trash:storage=+100M quota_warning = storage=95%% quota-warning 95 %u quota_warning2 = storage=90%% quota-warning 90 %u quota_warning3 = storage=80%% quota-warning 80 %u sieve = ~/.dovecot.sieve sieve_dir = ~/ } postmaster_address = postmaster at esrf.fr protocols = imap pop3 lmtp sieve service imap-login { inet_listener imap { port = 143 } inet_listener imaps { port = 993 } } service imap { process_limit = 2000 } service managesieve-login { inet_listener sieve { port = 4190 } } service pop3-login { inet_listener pop3 { port = 110 } inet_listener pop3s { port = 995 } } service quota-warning { executable = script /usr/local/bin/quota-warning.sh unix_listener quota-warning { user = dovecot } user = dovecot } ssl_cert = References: <4F0D9A80.2020707@es2eng.com> Message-ID: <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com> Le 11.01.2012 15:19, Willie Gillespie a écrit : > On 1/11/2012 7:11 AM, forumer at smartmobili.com wrote: >> I was wondering if dovecot allows to log imap communications ? > > You could look at Rawlog > http://wiki.dovecot.org/Debugging/Rawlog > http://wiki2.dovecot.org/Debugging/Rawlog Ok so I suppose I need to rebuild dovecot with the --with-rawlog option but I am under Ubuntu and I was using some dovecot-2.x source package hosted here : http://xi.rename-it.nl/debian/ But now it seems to be dead, any idea where I could find a deb-src for dovecot 2.x ? From CMarcus at Media-Brokers.com Wed Jan 11 17:23:55 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Wed, 11 Jan 2012 10:23:55 -0500 Subject: [Dovecot] Log imap commands In-Reply-To: <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com> References: <4F0D9A80.2020707@es2eng.com> <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com> Message-ID: <4F0DA98B.4080705@Media-Brokers.com> On 2012-01-11 10:09 AM, forumer at smartmobili.com wrote: > Le 11.01.2012 15:19, Willie Gillespie a écrit : >> On 1/11/2012 7:11 AM, forumer at smartmobili.com wrote: >>> I was wondering if dovecot allows to log imap communications ?
>> >> You could look at Rawlog >> http://wiki.dovecot.org/Debugging/Rawlog >> http://wiki2.dovecot.org/Debugging/Rawlog > > Ok so I suppose I need to rebuild dovecot with the --with-rawlog option > but I am under Ubuntu > and I was using some dovecot-2.x source package hosted here : > http://xi.rename-it.nl/debian/ > But now it seems to be dead, any idea where I could find a deb-src for > dovecot 2.x ? Another option that shouldn't require recompiling might be the MailLog plugin: http://wiki2.dovecot.org/Plugins/MailLog -- Best regards, Charles From Frank.Post at pallas.com Wed Jan 11 17:35:42 2012 From: Frank.Post at pallas.com (Frank Post) Date: Wed, 11 Jan 2012 16:35:42 +0100 Subject: [Dovecot] sieve under lmtp using wrong homedir ? Message-ID: Hi, I have a problem with dovecot-2.0.15. All is working well except lmtp. Sieve scripts are correctly saved under /var/vmail/test.com/test/sieve, but under lmtp sieve will use /var/vmail//testuser/ Uid testuser has mail=test at test.com configured in ldap. As I could see in the debug logs, there is a difference between the auth "master out" lines, but why ? working if managesieve stores scripts: Jan 11 15:02:42 auth: Debug: master in: REQUEST 3533701121 23001 1 7ec31d3c65cb934785e8eb0f33a182ae Jan 11 15:02:42 auth: Debug: ldap(test at test.com,10.234.201.4): result: mail(user)=test at test.com Jan 11 15:02:42 auth: Debug: master out: USER 3533701121 test at test.com home=/var/vmail/test.com/test uid=5000 gid=5000 Jan 11 15:02:42 managesieve(test at test.com): Debug: Effective uid=5000, gid=5000, home=/var/vmail/test.com/test but under lmtp not: Jan 11 14:39:53 auth: Debug: master in: USER 1 testuser service=lmtp lip=10.234.201.9 rip=10.234.201.4 Jan 11 14:39:53 auth: Debug: auth(testuser,10.234.201.4): username changed testuser -> test at test.com Jan 11 14:39:53 auth: Debug: ldap(test at test.com,10.234.201.4): result: mail(user)=test at test.com Jan 11 14:39:53 auth: Debug: master out: USER 1 test at test.com home=/var/vmail//testuser uid=5000 gid=5000 Jan 11 14:39:53 lmtp(8499): Debug: auth input: test at test.com home=/var/vmail//testuser uid=5000 gid=5000 Jan 11 14:39:53 lmtp(8499): Debug: changed username to test at test.com Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Effective uid=5000, gid=5000, home=/var/vmail//testuser Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Quota root: name=User quota backend=maildir args= Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Quota rule: root=User quota mailbox=* bytes=2147483648 messages=0 Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Quota warning: bytes=1932735283 (90%) messages=0 reverse=no command=quota-warning 90 test at test.com Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: maildir++: root=/var/vmail/test.com/test/Maildir, index=/var/dovecot/indexes/test.com/test, control=, inbox=/var/vmail/test.com/test/Maildir, alt= Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: trash: No trash setting - plugin disabled Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: sieve: include: sieve_global_dir is not set; it is currently not possible to include `:global' scripts.
Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: pla8CymRDU8zIQAAFrfQGQ: sieve: user's script path /var/vmail//testuser/.dovecot.sieve doesn't exist (using global script path in stead) Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: pla8CymRDU8zIQAAFrfQGQ: sieve: user has no valid personal script Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: pla8CymRDU8zIQAAFrfQGQ: sieve: no scripts to execute: reverting to default delivery. Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Namespace : Using permissions from /var/vmail/test.com/test/Maildir: mode=0700 gid=-1 Thanks for your help. Frank -------------- next part -------------- A non-text attachment was scrubbed... Name: dovecot-front.conf Type: application/octet-stream Size: 4126 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: dovecot-back.conf Type: application/octet-stream Size: 3394 bytes Desc: not available URL: From ath at b-one.net Wed Jan 11 17:57:46 2012 From: ath at b-one.net (Anders) Date: Wed, 11 Jan 2012 16:57:46 +0100 Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH Message-ID: <20120111155746.BD7BDDA030B2B@bmail06.one.com> On Tue, 2012-01-10 at 15:05 +0100, Anders wrote: > > the socket connection being closed without warning: > > UID SEARCH RETURN (SAVE COUNT) CHARSET UTF-8 (UNDELETED TEXT "foo") > > You mean it closes with above also? It works fine with me. No, that also works fine here :-) > > UID SEARCH RETURN (COUNT MIN) CHARSET UTF-8 () $ > > This was fixed in v2.0.17. Great, thanks! > > Then I have question about RFC5267 and the announcement of > > CONTEXT=SEARCH > > in the capabilities. I think this RFC is supported by dovecot, or maybe > > just part of the RFC is supported? > > All of it is supported, as far as I know. > > At least when I include the CONTEXT ADDTO or REMOVEFROM keywords I get > > an error, > These are server notifications. Clients aren't supposed to send them. Sorry, apparently I was a bit too fast there. ADDTO and REMOVEFROM should not be sent by a client, but I think that a client can send CONTEXT as a hint to the server, see http://tools.ietf.org/html/rfc5267#section-4.2 Thanks! Regards Anders From forumer at smartmobili.com Wed Jan 11 19:19:28 2012 From: forumer at smartmobili.com (forumer at smartmobili.com) Date: Wed, 11 Jan 2012 18:19:28 +0100 Subject: [Dovecot] Log imap commands In-Reply-To: <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com> References: <4F0D9A80.2020707@es2eng.com> <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com> Message-ID: <1f53a15cea08527fb79bd71037fa161f@smartmobili.com> Le 11.01.2012 16:09, forumer at smartmobili.com a écrit : > Le 11.01.2012 15:19, Willie Gillespie a écrit : >> On 1/11/2012 7:11 AM, forumer at smartmobili.com wrote: >>> I was wondering if dovecot allows to log imap communications ? >> >> You could look at Rawlog >> http://wiki.dovecot.org/Debugging/Rawlog >> http://wiki2.dovecot.org/Debugging/Rawlog > > Ok so I suppose I need to rebuild dovecot with the --with-rawlog > option but I am under Ubuntu > and I was using some dovecot-2.x source package hosted here : > http://xi.rename-it.nl/debian/ > But now it seems to be dead, any idea where I could find a deb-src > for dovecot 2.x ? Actually I finally found that repository; it is still working. From adrian.minta at gmail.com Wed Jan 11 19:30:47 2012 From: adrian.minta at gmail.com (Adrian Minta) Date: Wed, 11 Jan 2012 19:30:47 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ?
In-Reply-To: <4F06F0E7.904@gmail.com> References: <4F06D5D9.20001@gmail.com> <4F06DFF5.40707@hardwarefreak.com> <4F06F0E7.904@gmail.com> Message-ID: <4F0DC747.4070505@gmail.com> Hello, I tested with "mail_location = whatever-you-have-now:INDEX=MEMORY" and it seems to help, but in the meantime I found another option, completely undocumented, that seems to do exactly what I wanted: protocol lda { mailbox_list_index_disable = yes } Does anyone know exactly what "mailbox_list_index_disable" does and if it is still available in the 2.0 and 2.1 branches ? From kadafax at gmail.com Wed Jan 11 20:00:37 2012 From: kadafax at gmail.com (huret deffgok) Date: Wed, 11 Jan 2012 19:00:37 +0100 Subject: [Dovecot] Dovecot LDA and address extensions - folders flood Message-ID: Hi list, This post is slightly OT, I hope no one will take offense. I was following the wiki on using dovecot LDA with postfix and implemented, for our future mail server, the address extensions mechanism: an email sent to "validUser+foldername at mydomain.com" will have dovecot-lda automagically create and subscribe the "foldername" folder. With some basic scripting I was able to create hundreds of folders in a few seconds. So my question is how do you implement this great feature in a secure way so that funny random people out there can't flood your mailbox with gigatons of folders. Thanks, kfx From CMarcus at Media-Brokers.com Wed Jan 11 20:04:49 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Wed, 11 Jan 2012 13:04:49 -0500 Subject: [Dovecot] Dovecot LDA and address extensions - folders flood In-Reply-To: References: Message-ID: <4F0DCF41.7040204@Media-Brokers.com> On 2012-01-11 1:00 PM, huret deffgok wrote: > Hi list, > > This post is slightly OT, I hope no one will take offense. > I was following the wiki on using dovecot LDA with postfix and implemented, > for our future mail server, the address extensions mechanism: an email sent > to "validUser+foldername at mydomain.com" > will have dovecot-lda automagically > create and subscribe the "foldername" folder. With some basic scripting I > was able to create hundreds of folders in a few seconds. So my question is > how do you implement this great feature in a secure way so that funny > random people out there can't flood your mailbox with gigatons of folders. Don't have it autocreate the folder... Seriously, there is no way to provide that functionality and have the system determine when it is *you* doing it or someone else... But I think it is a non problem... how often do you receive plus-addressed spam?? -- Best regards, Charles From forumer at smartmobili.com Wed Jan 11 20:29:26 2012 From: forumer at smartmobili.com (forumer at smartmobili.com) Date: Wed, 11 Jan 2012 19:29:26 +0100 Subject: [Dovecot] Log imap commands In-Reply-To: <1f53a15cea08527fb79bd71037fa161f@smartmobili.com> References: <4F0D9A80.2020707@es2eng.com> <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com> <1f53a15cea08527fb79bd71037fa161f@smartmobili.com> Message-ID: I have added the following lines to the dovecot configuration (/etc/dovecot/conf.d/10-master.conf) : ... service pop3 { # Max. number of POP3 processes (connections) #process_limit = 1024 } service postlogin { executable = script-login -d rawlog unix_listener postlogin { } } ... and I have created a folder dovecot.rawlog as shown below : root at vf-12345:/home/vmail/smartmobili.com/webguest# ls -la ...
drwxrwxrwx 2 vmail vmail 4096 2012-01-11 19:11 dovecot.rawlog/ -rw------- 1 vmail vmail 19002 2011-12-27 13:01 dovecot-uidlist -rw------- 1 vmail vmail 8 2012-01-11 12:52 dovecot-uidvalidity ... And after that I have restarted dovecot and logged in with the webguest account but cannot see any logs. What am I doing wrong ? From geoffb at corp.sonic.net Wed Jan 11 20:53:30 2012 From: geoffb at corp.sonic.net (Geoffrey Broadwell) Date: Wed, 11 Jan 2012 10:53:30 -0800 Subject: [Dovecot] (no subject) Message-ID: <1326308010.2329.47.camel@rover> I'm working on a Dovecot plugin, but I'm pretty new to Dovecot, so there's a LOT to learn about the code base and it's pretty slow going. I've got a few things coded so far, but I want to make sure I'm headed down the right path and get some advice before I go too much further. A couple years ago, I wrote some code for our Courier implementation that sent a magic UDP packet to a small server each time a user modified their voicemail IMAP folder. That UDP server would then connect back to Courier via IMAP again and check whether the folder had any unread messages left in it. Finally, it would contact our phone switches to modify the state of the message waiting indicator (MWI) on that user's phone line appropriately. Fast forward to now, and we want to migrate wholesale to Dovecot 2.x. The servers are all in place, they've been well tested and burned in (with Dovecot 2.0.15 I believe), and the final migration is pretty much waiting on a port to Dovecot of the MWI update functionality. The good news is that I originally spent some effort to isolate the UDP packet generation and delivery, and I used purely standard portable code as per APUE2, so I think that chunk of code should be reusable with only minor modifications. I'm aware that internally Dovecot has its own memory, buffer, and string management functions, but it doesn't feel like a win to try to convert the existing code. It's small, completely isolated, and well reviewed -- I'd be more afraid of using the new (to me) Dovecot API incorrectly than I am that the existing code has bugs in buffer handling. By cribbing from other plugins and editing appropriately, I've also created the skeleton for my plugin: Makefile, docs, conf snippet, .spec (I'll be deploying the plugin as an RPM), and so on. I've got the beginnings of the .h and .c written, just enough to init and deinit the plugin by calling mail_storage_hooks_{add,remove}() with some stub hook functions. This all seems good so far; test builds are error-free and seem sane. So now the hard part is writing the piece that I can't just crib from elsewhere -- making sure that I hook every place in Dovecot that the user's voicemail folder can be changed in a way that would change it between having one or more unread messages, and not having any unread messages at all (or vice-versa, of course). At the same time, I want to minimize the performance impact to Dovecot (and the load on the UDP server) by only hooking the places I need to, filtering out as many false positives as I can without introducing massive complexity, and only pinging the UDP server when it's most likely to notice a change in the state of that user's voicemail server. It seems to me that I need to at least capture mailbox_allocated from the mail_storage hooks, for a couple reasons: 1. The state of the voicemail folder could be changed because the entire folder is created, destroyed, or renamed. 2. I want to only do further checks when I'm sure I'm looking at the voicemail folder. 
There's no reason to do work when the user is working with any other folder. So now the questions: Does all of the above seem sane so far? Do I need to hook mail_allocated as well, or will I be able to see any change I need to monitor just from the mailbox? Finally, I'm lost about what operations on the mailbox and the mails within it I need to check. Can anyone offer some advice (or doc pointers) on this? Thank you! -'f From geoffb at corp.sonic.net Wed Jan 11 20:57:11 2012 From: geoffb at corp.sonic.net (Geoffrey Broadwell) Date: Wed, 11 Jan 2012 10:57:11 -0800 Subject: [Dovecot] Need help with details for new Dovecot plugin In-Reply-To: <1326308010.2329.47.camel@rover> References: <1326308010.2329.47.camel@rover> Message-ID: <1326308231.2329.50.camel@rover> My sincere apologies for the subjectless email (my MUA should have caught that!); the above is the corrected subject line. -'f On Wed, 2012-01-11 at 10:53 -0800, Geoffrey Broadwell wrote: > I'm working on a Dovecot plugin, but I'm pretty new to Dovecot, so > there's a LOT to learn about the code base and it's pretty slow going. > I've got a few things coded so far, but I want to make sure I'm headed > down the right path and get some advice before I go too much further. > > A couple years ago, I wrote some code for our Courier implementation > that sent a magic UDP packet to a small server each time a user modified > their voicemail IMAP folder. That UDP server would then connect back to > Courier via IMAP again and check whether the folder had any unread > messages left in it. Finally, it would contact our phone switches to > modify the state of the message waiting indicator (MWI) on that user's > phone line appropriately. > > Fast forward to now, and we want to migrate wholesale to Dovecot 2.x. > The servers are all in place, they've been well tested and burned in > (with Dovecot 2.0.15 I believe), and the final migration is pretty much > waiting on a port to Dovecot of the MWI update functionality. > > The good news is that I originally spent some effort to isolate the UDP > packet generation and delivery, and I used purely standard portable code > as per APUE2, so I think that chunk of code should be reusable with only > minor modifications. I'm aware that internally Dovecot has its own > memory, buffer, and string management functions, but it doesn't feel > like a win to try to convert the existing code. It's small, completely > isolated, and well reviewed -- I'd be more afraid of using the new (to > me) Dovecot API incorrectly than I am that the existing code has bugs in > buffer handling. > > By cribbing from other plugins and editing appropriately, I've also > created the skeleton for my plugin: Makefile, docs, conf snippet, .spec > (I'll be deploying the plugin as an RPM), and so on. I've got the > beginnings of the .h and .c written, just enough to init and deinit the > plugin by calling mail_storage_hooks_{add,remove}() with some stub hook > functions. This all seems good so far; test builds are error-free and > seem sane. > > So now the hard part is writing the piece that I can't just crib from > elsewhere -- making sure that I hook every place in Dovecot that the > user's voicemail folder can be changed in a way that would change it > between having one or more unread messages, and not having any unread > messages at all (or vice-versa, of course). 
> At the same time, I want to > minimize the performance impact to Dovecot (and the load on the UDP > server) by only hooking the places I need to, filtering out as many > false positives as I can without introducing massive complexity, and > only pinging the UDP server when it's most likely to notice a change in > the state of that user's voicemail server. > > It seems to me that I need to at least capture mailbox_allocated from > the mail_storage hooks, for a couple reasons: > > 1. The state of the voicemail folder could be changed because > the entire folder is created, destroyed, or renamed. > > 2. I want to only do further checks when I'm sure I'm looking at > the voicemail folder. There's no reason to do work when the > user is working with any other folder. > > So now the questions: > > Does all of the above seem sane so far? > > Do I need to hook mail_allocated as well, or will I be able to see any > change I need to monitor just from the mailbox? > > Finally, I'm lost about what operations on the mailbox and the mails > within it I need to check. Can anyone offer some advice (or doc > pointers) on this? > > Thank you! > > > -'f > > From nicolas.kowalski at gmail.com Wed Jan 11 21:01:18 2012 From: nicolas.kowalski at gmail.com (Nicolas KOWALSKI) Date: Wed, 11 Jan 2012 20:01:18 +0100 Subject: [Dovecot] proxy, managesieve and ssl? Message-ID: <20120111190118.GD14492@petole.demisel.net> Hello, On a dovecot 2.0.14 proxy, I found that proxying managesieve works well when using 'starttls' option in pass_attrs, but does not work when using 'ssl' option. The backend server is also dovecot 2.0.14; when using the ssl option, it reports "no auth attempts" in the logs about managesieve-login, and meanwhile the MUA, Thunderbird with sieve plugin, reports [TRYLATER] account is temporary disabled; no problem when using starttls option on the proxy, all works well. I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to backend, and have Managesieve still working. Is this supported? Thanks, -- Nicolas From kadafax at gmail.com Wed Jan 11 21:05:43 2012 From: kadafax at gmail.com (huret deffgok) Date: Wed, 11 Jan 2012 20:05:43 +0100 Subject: [Dovecot] Dovecot LDA and address extensions - folders flood In-Reply-To: <4F0DCF41.7040204@Media-Brokers.com> References: <4F0DCF41.7040204@Media-Brokers.com> Message-ID: On Wed, Jan 11, 2012 at 7:04 PM, Charles Marcus wrote: > On 2012-01-11 1:00 PM, huret deffgok wrote: > >> Hi list, >> >> This post is slightly OT, I hope no one will take offense. >> I was following the wiki on using dovecot LDA with postfix and >> implemented, >> for our future mail server, the address extensions mechanism: an email >> sent >> to "validUser+foldername at mydomain.com" >> will have dovecot-lda automagically >> create and subscribe the "foldername" folder. With some basic scripting I >> was able to create hundreds of folders in a few seconds. So my question is >> how do you implement this great feature in a secure way so that funny >> random people out there can't flood your mailbox with gigatons of folders. >> > > Don't have it autocreate the folder... > > Seriously, there is no way to provide that functionality and have the > system determine when it is *you* doing it or someone else... > > But I think it is a non problem... how often do you receive plus-addressed > spam?? None so far. But I was thinking about something like malice rather than spamming. For me it's an open door to DOS the service.
What about a functionality that would throttle the rate of creation of folders from one IP address, with a ban in case of abuse ? Or maybe I should look at the file system level. From CMarcus at Media-Brokers.com Wed Jan 11 21:25:24 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Wed, 11 Jan 2012 14:25:24 -0500 Subject: [Dovecot] Dovecot LDA and address extensions - folders flood In-Reply-To: References: <4F0DCF41.7040204@Media-Brokers.com> Message-ID: <4F0DE224.8000900@Media-Brokers.com> On 2012-01-11 2:05 PM, huret deffgok wrote: > On Wed, Jan 11, 2012 at 7:04 PM, Charles Marcus wrote: >> On 2012-01-11 1:00 PM, huret deffgok wrote: >>> This post is slightly OT, I hope no one will take offense. I was >>> following the wiki on using dovecot LDA with postfix and >>> implemented, for our future mail server, the address extensions >>> mechanism: an email sent to >>> "validUser+foldername at mydomain.com" >>> will have dovecot-lda automagically create and subscribe the >>> "foldername" folder. With some basic scripting I was able to >>> create hundreds of folders in a few seconds. So my question is >>> how do you implement this great feature in a secure way so that >>> funny random people out there can't flood your mailbox with >>> gigatons of folders. >> Don't have it autocreate the folder... >> >> Seriously, there is no way to provide that functionality and have the >> system determine when it is *you* doing it or someone else... >> >> But I think it is a non problem... how often do you receive plus-addressed >> spam?? > None so far. But I was thinking about something like malice rather than > spamming. For me it's an open door to DOS the service. > What about a functionality that would throttle the rate of creation of > folders from one IP address, with a ban in case of abuse ? Or maybe I should > look at the file system level. Again - and no offense - but I think you are tilting at windmills... If you get hit by this, you will not only have thousands or millions of folders, you'll have one email for each folder. So, the question is, how do you prevent being flooded with spam... and the answer is, decent anti-spam measures. I prefer ASSP, but I just wish you could use it as an after queue content filter (for its most excellent content filtering and more importantly quarantine management/block reporting features/functionality). That said, postfix, with sane anti-spam measures, along with the most excellent new postscreen (available in 2.8+ I believe) is good enough to stop most anything like this that you may be worried about. Like I said, set up postfix (or your smtp server) right, and this is a non-issue. -- Best regards, Charles From tss at iki.fi Wed Jan 11 22:34:33 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 11 Jan 2012 22:34:33 +0200 Subject: [Dovecot] proxy, managesieve and ssl? In-Reply-To: <20120111190118.GD14492@petole.demisel.net> References: <20120111190118.GD14492@petole.demisel.net> Message-ID: <95F23E50-BD64-4844-8838-04E5BB9033A7@iki.fi> On 11.1.2012, at 21.01, Nicolas KOWALSKI wrote: > On a dovecot 2.0.14 proxy, I found that proxying managesieve works well > when using 'starttls' option in pass_attrs, but does not work when using > 'ssl' option.
> The backend server is also dovecot 2.0.14; when using the > ssl option, it reports "no auth attempts" in the logs about > managesieve-login, and meanwhile the MUA, Thunderbird with sieve plugin, > reports [TRYLATER] account is temporary disabled; no problem when using > starttls option on the proxy, all works well. > > I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to > backend, and have Managesieve still working. Is this supported? You'll need to kludge it a little bit. I guess you're using LDAP, since you mentioned pass_attrs? protocol sieve { passdb { args = ldap-with-starttls.conf } } protocol !sieve { passdb { args = ldap-with-ssl.conf } } From stephan at rename-it.nl Wed Jan 11 23:06:51 2012 From: stephan at rename-it.nl (Stephan Bosch) Date: Wed, 11 Jan 2012 22:06:51 +0100 Subject: [Dovecot] proxy, managesieve and ssl? In-Reply-To: <20120111190118.GD14492@petole.demisel.net> References: <20120111190118.GD14492@petole.demisel.net> Message-ID: <4F0DF9EB.50605@rename-it.nl> On 1/11/2012 8:01 PM, Nicolas KOWALSKI wrote: > Hello, > > On a dovecot 2.0.14 proxy, I found that proxying managesieve works well > when using 'starttls' option in pass_attrs, but does not work when using > 'ssl' option. The backend server is also dovecot 2.0.14; when using the > ssl option, it reports "no auth attempts" in the logs about > managesieve-login, and meanwhile the MUA, Thunderbird with sieve plugin, > reports [TRYLATER] account is temporary disabled; no problem when using > starttls option on the proxy, all works well. > > I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to > backend, and have Managesieve still working. Is this supported? Although there is no such thing as a standard sieveS protocol, you can make Dovecot v2.x talk SSL from the start at a ManageSieve socket. Since normally people will not use something like this, it is not available by default. In conf.d/20-managesieve.conf you can adjust the service definition of ManageSieve as follows: service managesieve-login { inet_listener sieve { port = 4190 } inet_listener sieves { port = 5190 ssl = yes } } This starts the normal protocol on port 4190 and the direct-SSL version on an alternative port. You can also put the ssl=yes directly in the port 4190 listener, as long as no client will have to connect to this server directly (no client will support it). Regards, Stephan. From michael.abbott at apple.com Thu Jan 12 01:09:17 2012 From: michael.abbott at apple.com (Mike Abbott) Date: Wed, 11 Jan 2012 17:09:17 -0600 Subject: [Dovecot] MASTER_AUTH_MAX_DATA_SIZE Message-ID: <1BCAD28D-8120-45C9-BAA2-B6597C34545A@apple.com> In 2.0.17 you increased LOGIN_MAX_INBUF_SIZE from 1024 to 4096. Should you also have increased MASTER_AUTH_MAX_DATA_SIZE from (1024*2) to (4096*2)? /* This should be kept in sync with LOGIN_MAX_INBUF_SIZE. Multiply it by two to make sure there's space to transfer the command tag */ From dlie76 at yahoo.com.au Thu Jan 12 04:30:49 2012 From: dlie76 at yahoo.com.au (Daminto Lie) Date: Wed, 11 Jan 2012 18:30:49 -0800 (PST) Subject: [Dovecot] could not start dovecot - unknown section type Message-ID: <1326335449.87714.YahooMailNeo@web113411.mail.gq1.yahoo.com> Hi, I was wondering if I could get some help with the following error when trying to start dovecot service on Ubuntu Server 10.04. The error message is as follows * Starting IMAP/POP3 mail server dovecot
Error: Error in configuration file /usr/local/etc/dovecot/dovecot.conf line 15: Unknown section type Fatal: Invalid configuration in /usr/local/etc/dovecot/dovecot.conf [fail] I have just managed to upgrade it from 1.2.19 to 2.0.17. Then, I tried to start dovecot by running the command $ sudo /etc/init.d/dovecot start And I received the above message. Below is the configuration for dovecot.conf # 2.0.17 (684381041dc4+): /usr/local/etc/dovecot/dovecot.conf # OS: Linux 2.6.32-37-generic-pae i686 Ubuntu 10.04.3 LTS ext4 auth_debug = yes auth_debug_passwords = yes auth_mechanisms = plain login auth_username_format = %Lu auth_verbose = yes base_dir = /var/run/dovecot disable_plaintext_auth = no first_valid_uid = 1001 last_valid_uid = 2000 log_timestamp = "%Y-%m-%d %H:%M:%S " mail_location = maildir:/home/vmail/%u/Maildir mail_privileged_group = mail passdb { driver = pam } passdb { args = /usr/local/etc/dovecot/dovecot-ldap.conf driver = ldap } plugin { quota = maildir quota_rule = *:storage=3GB quota_rule2 = Trash:storage=20%% quota_rule3 = Spam:storage=10%% quota_warning = storage=95%% /usr/local/bin/quota-warning.sh 95 quota_warning2 = storage=80%% /usr/local/bin/quota-warning.sh 80 } protocols = imap service auth { unix_listener /var/run/dovecot-auth-master { group = vmail mode = 0660 user = vmail } unix_listener /var/spool/postfix/private/auth { group = mail mode = 0660 user = postfix } user = root } service imap-login { chroot = login executable = /usr/lib/dovecot/imap-login inet_listener imap { address = * port = 143 } user = dovecot } service imap { executable = /usr/lib/dovecot/imap } service pop3-login { chroot = login user = dovecot } ssl = no userdb { driver = passwd } userdb { args = uid=1001 gid=1001 home=/home/vmail/%u allow_all_users=yes driver = static } verbose_proctitle = yes protocol imap { imap_client_workarounds = delay-newmail mail_plugins = quota imap_quota } protocol pop3 { pop3_uidl_format = %08Xu%08Xv } protocol lda { auth_socket_path = /var/run/dovecot-auth-master mail_plugins = quota postmaster_address = postmaster at example.com rejection_reason = Your message to <%t> was automatically rejected:%n%r sendmail_path = /usr/lib/sendmail } Any help would be greatly appreciated. Thank you From rob0 at gmx.co.uk Thu Jan 12 05:12:38 2012 From: rob0 at gmx.co.uk (/dev/rob0) Date: Wed, 11 Jan 2012 21:12:38 -0600 Subject: [Dovecot] could not start dovecot - unknown section type In-Reply-To: <1326335449.87714.YahooMailNeo@web113411.mail.gq1.yahoo.com> References: <1326335449.87714.YahooMailNeo@web113411.mail.gq1.yahoo.com> Message-ID: <201201112112.39736@harrier.slackbuilds.org> On Wednesday 11 January 2012 20:30:49 Daminto Lie wrote: > I was wondering if I could get some help with the following error > when trying to start dovecot service on Ubuntu Server 10.04. > > The error message is as follows > > * Starting IMAP/POP3 mail server > dovecot > > Error: Error in configuration file > /usr/local/etc/dovecot/dovecot.conf line 15: Unknown section type > Fatal: Invalid configuration in > /usr/local/etc/dovecot/dovecot.conf [fail] > > > I have just managed to upgrade it from 1.2.19 to 2.0.17. Then, I > tried to start dovecot by running the command > > > $ sudo /etc/init.d/dovecot start > > And I received the above message. It would seem that you did not upgrade the init script, and the old one is reading the config file and expecting a different format.
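A quick way to confirm that, assuming the default source-install prefix of /usr/local (just a sketch -- adjust the paths to wherever your build landed):

/usr/local/sbin/dovecot -n     # the new binary should parse the new config
/usr/local/sbin/dovecot        # start it directly, bypassing the init script
/usr/local/sbin/dovecot stop   # and shut it down again

If those run cleanly, the init script is the only thing left to fix.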
You used source to upgrade, which means you did not "upgrade" in the conventional sense -- you installed new software. Either fix the script or run without it: dovecot start See: http://wiki2.dovecot.org/CompilingSource http://wiki2.dovecot.org/RunningDovecot > Below is the configuration for dovecot.conf snip -- http://rob0.nodns4.us/ -- system administration and consulting Offlist GMX mail is seen only if "/dev/rob0" is in the Subject: From dlie76 at yahoo.com.au Thu Jan 12 07:19:42 2012 From: dlie76 at yahoo.com.au (Daminto Lie) Date: Wed, 11 Jan 2012 21:19:42 -0800 (PST) Subject: [Dovecot] could not start dovecot - unknown section type In-Reply-To: <201201112112.39736@harrier.slackbuilds.org> References: <1326335449.87714.YahooMailNeo@web113411.mail.gq1.yahoo.com> <201201112112.39736@harrier.slackbuilds.org> Message-ID: <1326345582.91512.YahooMailNeo@web113409.mail.gq1.yahoo.com> Thank you for your reply. Yes, you're right. I should not have called it an upgrade since I actually removed dovecot 1.2.9 completely and installed the dovecot 2.0.17 from the source. Later, I mucked up the init file because I still used the one from the old version. I'm sorry about this. I remember I tried to upgrade by running doveconf -n -c dovecot.conf > dovecot-2.conf, but I got an error message saying doveconf: command not found. Then, I tried to google it to find solutions but to no avail. This is why I decided to install it from scratch. Thank you for your help ________________________________ From: /dev/rob0 To: dovecot at dovecot.org Sent: Thursday, 12 January 2012 2:12 PM Subject: Re: [Dovecot] could not start dovecot - unknown section type On Wednesday 11 January 2012 20:30:49 Daminto Lie wrote: > I was wondering if I could get some help with the following error > when trying to start dovecot service on Ubuntu Server 10.04. > > The error message is as follows > > * Starting IMAP/POP3 mail server > dovecot > > Error: Error in configuration file > /usr/local/etc/dovecot/dovecot.conf line 15: Unknown section type > Fatal: Invalid configuration in > /usr/local/etc/dovecot/dovecot.conf [fail] > > > I have just managed to upgrade it from 1.2.19 to 2.0.17. Then, I > tried to start dovecot by running the command > > > $ sudo /etc/init.d/dovecot start > > And I received the above message. It would seem that you did not upgrade the init script, and the old one is reading the config file and expecting a different format. You used source to upgrade, which means you did not "upgrade" in the conventional sense -- you installed new software. Either fix the script or run without it: dovecot start See: http://wiki2.dovecot.org/CompilingSource http://wiki2.dovecot.org/RunningDovecot > Below is the configuration for dovecot.conf snip -- http://rob0.nodns4.us/ -- system administration and consulting Offlist GMX mail is seen only if "/dev/rob0" is in the Subject:
In-Reply-To: <95F23E50-BD64-4844-8838-04E5BB9033A7@iki.fi> References: <20120111190118.GD14492@petole.demisel.net> <95F23E50-BD64-4844-8838-04E5BB9033A7@iki.fi> Message-ID: <20120112084707.GE14492@petole.demisel.net> On Wed, Jan 11, 2012 at 10:34:33PM +0200, Timo Sirainen wrote: > On 11.1.2012, at 21.01, Nicolas KOWALSKI wrote: > > > I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to > > backend, and have Managesieve still working. Is this supported? > > You'll need to kludge it a little bit. I guess you're using LDAP, since you mentioned pass_attrs? Yes, I am using LDAP. > protocol sieve { > passdb { > args = ldap-with-starttls.conf > } > } When just adding the above, it works perfectly, Thanks! > protocol !sieve { > passdb { > args = ldap-with-ssl.conf > } > } Is this really needed? It looks like it works without it. When using it, I get this error: Jan 12 09:40:59 imap1 dovecot: auth: Fatal: No passdbs specified in configuration file. PLAIN mechanism needs one Jan 12 09:40:59 imap1 dovecot: master: Error: service(auth): command startup failed, throttling -- Nicolas From nicolas.kowalski at gmail.com Thu Jan 12 10:58:13 2012 From: nicolas.kowalski at gmail.com (Nicolas KOWALSKI) Date: Thu, 12 Jan 2012 09:58:13 +0100 Subject: [Dovecot] proxy, managesieve and ssl? In-Reply-To: <4F0DF9EB.50605@rename-it.nl> References: <20120111190118.GD14492@petole.demisel.net> <4F0DF9EB.50605@rename-it.nl> Message-ID: <20120112085813.GF14492@petole.demisel.net> On Wed, Jan 11, 2012 at 10:06:51PM +0100, Stephan Bosch wrote: > On 1/11/2012 8:01 PM, Nicolas KOWALSKI wrote: > > > >I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to > >backend, and have Managesieve still working. Is this supported? > > Although there is no such thing as a standard sieveS protocol, you > can make Dovecot v2.x talk SSL from the start at a ManageSieve > socket. Since normally people will not use something like this, it > is not available by default. > > In conf.d/20-managesieve.conf you can adjust the service definition > of ManageSieve as follows: > > service managesieve-login { > inet_listener sieve { > port = 4190 > } > > inet_listener sieves { > port = 5190 > ssl = yes > } > } This works well, when using (as Timo wrote) a different ldap pass_attrs for sieve, specifying this specific 5190 port. Thanks for your suggestion. > This starts the normal protocol on port 4190 and the direct-SSL > version on an alternative port. You can also put the ssl=yes > directly in the port 4190 listener, as long as no client will have > to connect to this server directly (no client will support it). Well, as this is non-standard, I guess I will not use it. I much prefer to stick with what has been RFCed. -- Nicolas From kjonca at o2.pl Thu Jan 12 12:39:06 2012 From: kjonca at o2.pl (Kamil =?iso-8859-2?Q?Jo=F1ca?=) Date: Thu, 12 Jan 2012 11:39:06 +0100 Subject: [Dovecot] compressed mboxes very slow References: <87iptnoans.fsf@alfa.kjonca> Message-ID: <8739blw6gl.fsf@alfa.kjonca> kjonca at o2.pl (Kamil Jońca) writes: > I have some archive mails in gzipped mboxes. I could use them with > dovecot 1.x without problems. > But recently I have installed dovecot 2.0.12, and they are slow. very > slow.
Recently I had to read some compressed mboxes again, and no progress :( I took 2.0.17 sources and put some i_debug ("#kjonca["__FILE__",%d,%s] %d", __LINE__,__func__,...some parameters ...); lines into istream-bzlib.c, istream-raw-mbox.c and istream-limit.c and found that: in istream-limit.c in function around lines 40-45: --8<---------------cut here---------------start------------->8--- i_stream_seek(stream->parent, lstream->istream.parent_start_offset + stream->istream.v_offset); stream->pos -= stream->skip; stream->skip = 0; --8<---------------cut here---------------end--------------->8--- seeks the stream (calling i_stream_raw_mbox_seek in istream-raw-mbox.c) and then (line 50) --8<---------------cut here---------------start------------->8--- if ((ret = i_stream_read(stream->parent)) == -2) return -2; --8<---------------cut here---------------end--------------->8--- tries to read some data earlier in the stream, and with compressed mboxes it causes a reread of the file from the beginning. Then I commented out (just for testing) lines 40-45 from istream-limit.c and a bzipped mbox can be opened in reasonable time. (Moreover I can read some randomly picked mails without problems) Unfortunately, the meaning of the fields in the istream* structures (especially skip, pos and offset) is too unclear to me to write proper code myself. KJ -- http://sporothrix.wordpress.com/2011/01/16/usa-sie-krztusza-kto-nastepny/ Jak ktoś ma pecha, to złamie ząb podczas seksu oralnego (S.Sokół) From info_postfix at gmx.ch Thu Jan 12 12:00:52 2012 From: info_postfix at gmx.ch (maximus12) Date: Thu, 12 Jan 2012 02:00:52 -0800 (PST) Subject: [Dovecot] Server Time 45min ahead Message-ID: <33126760.post@talk.nabble.com> Hi, I have the issue that my server clock is 45min fast. Therefore I would like to install ntp. I read a lot on the internet about dovecot and ntp. My issue is that 45 min is a lot and I would like to minimize mail server downtimes as much as possible. I don't care if the time corrections with ntp take more than a few months. Does anyone know how I should proceed (e.g. how I have to set up ntp -> no time jump during installation and afterwards). Thanks a lot for your help! -- View this message in context: http://old.nabble.com/Server-Time-45min-ahead-tp33126760p33126760.html Sent from the Dovecot mailing list archive at Nabble.com. From Ralf.Hildebrandt at charite.de Thu Jan 12 13:43:50 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Thu, 12 Jan 2012 12:43:50 +0100 Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <33126760.post@talk.nabble.com> References: <33126760.post@talk.nabble.com> Message-ID: <20120112114350.GQ1341@charite.de> * maximus12 : > > Hi, > > I have the issue that my server clock is 45min fast. Therefore I would like > to install ntp. > I read a lot on the internet about dovecot and ntp. > My issue is that 45 min is a lot and I would like to minimize mail server > downtimes as much as possible. > I don't care if the time corrections with ntp take more than a few months. > > Does anyone know how I should proceed (e.g. how I have to set up ntp -> no > time jump during installation and afterwards). stop dovecot & postfix ntpdate timeserver start dovecot & postfix start ntpd the time jump is only really critical when the programs are running. -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel.
+49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From info_postfix at gmx.ch Thu Jan 12 13:49:15 2012 From: info_postfix at gmx.ch (maximus12) Date: Thu, 12 Jan 2012 03:49:15 -0800 (PST) Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <20120112114350.GQ1341@charite.de> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> Message-ID: <33127262.post@talk.nabble.com> Thanks a lot for your quick response. I thought that dovecot won't start until the server time reaches the time before the "time jump". From your point of view dovecot will start normally if I adjust the time when dovecot is stopped? Thanks a lot for clarification. Ralf Hildebrandt wrote: > > * maximus12 : >> >> Hi, >> >> I have the issue that my server clock is 45min fast. Therefore I would >> like >> to install ntp. >> I read a lot on the internet about dovecot and ntp. >> My issue is that 45 min is a lot and I would like to minimize mail server >> downtimes as much as possible. >> I don't care if the time corrections with ntp take more than a few >> months. >> >> Does anyone know how I should proceed (e.g. how I have to set up ntp -> no >> time jump during installation and afterwards). > > stop dovecot & postfix > ntpdate timeserver > start dovecot & postfix > start ntpd > > the time jump is only really critical when the programs are running. > > -- > Ralf Hildebrandt > Geschäftsbereich IT | Abteilung Netzwerk > Charité - Universitätsmedizin Berlin > Campus Benjamin Franklin > Hindenburgdamm 30 | D-12203 Berlin > Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 > ralf.hildebrandt at charite.de | http://www.charite.de > > > -- View this message in context: http://old.nabble.com/Server-Time-45min-ahead-tp33127262p33127262.html Sent from the Dovecot mailing list archive at Nabble.com. From Ralf.Hildebrandt at charite.de Thu Jan 12 13:57:05 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Thu, 12 Jan 2012 12:57:05 +0100 Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <33127262.post@talk.nabble.com> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> <33127262.post@talk.nabble.com> Message-ID: <20120112115705.GS1341@charite.de> * maximus12 : > > Thanks a lot for your quick response. > > I thought that dovecot won't start until the server time reaches the time > before the "time jump". > > From your point of view dovecot will start normally if I adjust the time > when dovecot is stopped? Don't take my word for it, but I think the behaviour is this: * dovecot is running, time jumps backwards -> dovecot exits * dovecot is not running, time jumps backwards -> dovecot can be started It also depends on your dovecot version, see http://wiki.dovecot.org/TimeMovedBackwards -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel.
+49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From Harlan.Stenn at pfcs.com Thu Jan 12 14:47:31 2012 From: Harlan.Stenn at pfcs.com (Harlan Stenn) Date: Thu, 12 Jan 2012 07:47:31 -0500 Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <20120112114350.GQ1341@charite.de> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> Message-ID: <20120112124731.42C752842A@gwc.pfcs.com> Ralf wrote: > stop dovecot & postfix > ntpdate timeserver > start dovecot & postfix > start ntpd Speaking as stenn at ntp.org, I recommend: - run 'ntpd -gN' as early as possible in the startup sequence (no need for ntpdate) then as late as possible in the startup sequence, run: - ntp-wait -v -s 1 ; start dovecot and postfix (and database servers) H From moseleymark at gmail.com Thu Jan 12 20:32:16 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Thu, 12 Jan 2012 10:32:16 -0800 Subject: [Dovecot] moving mail out of alt storage In-Reply-To: <87obylafsw.fsf_-_@algae.riseup.net> References: <87sjnya3z5.fsf@algae.riseup.net> <1316077133.12936.18.camel@hurina> <87obylafsw.fsf_-_@algae.riseup.net> Message-ID: On Thu, Sep 15, 2011 at 10:14 AM, Micah Anderson wrote: > Timo Sirainen writes: > >> On Wed, 2011-09-14 at 23:17 -0400, Micah Anderson wrote: >>> I moved some mail into the alt storage: >>> >>> doveadm altmove -u johnd at example.com seen savedbefore 1w >>> >>> and now I want to move it back to the regular INBOX, but I can't see how >>> I can do that with either 'altmove' or 'mailbox move'. >> >> Is this sdbox or mdbox? With sdbox you could simply "mv" the files. Or >> apply patch: http://hg.dovecot.org/dovecot-2.0/rev/1910c76a6cc9 > > This is mdbox, which is why I am not sure how to operate because I am > used to individual files as is with maildir. > > micah > I'm curious about this too. Is moving the m.# file out of the ALT path's storage/ directory into the non-ALT storage/ directory sufficient? Or will that cause odd issues? From tss at iki.fi Thu Jan 12 22:20:06 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 12 Jan 2012 22:20:06 +0200 Subject: [Dovecot] MASTER_AUTH_MAX_DATA_SIZE In-Reply-To: <1BCAD28D-8120-45C9-BAA2-B6597C34545A@apple.com> References: <1BCAD28D-8120-45C9-BAA2-B6597C34545A@apple.com> Message-ID: <09EF3E7A-15A2-45EE-91BD-6EEFD1FD8049@iki.fi> On 12.1.2012, at 1.09, Mike Abbott wrote: > In 2.0.17 you increased LOGIN_MAX_INBUF_SIZE from 1024 to 4096. > Should you also have increased MASTER_AUTH_MAX_DATA_SIZE from (1024*2) to (4096*2)? > /* This should be kept in sync with LOGIN_MAX_INBUF_SIZE. Multiply it by two > to make sure there's space to transfer the command tag */ Well, yes.. Although I'd rather not do that. 1. Command tag length needs to be restricted to something reasonable, maybe 100 chars, so it won't have to be multiplied by 2 but just added the 100 (+1 for NUL). 2. Maybe I can change the LOGIN_MAX_INBUF_SIZE back to its original size and change the AUTHENTICATE command handling to read the SASL initial response to a separate buffer. I'll try doing those next week. 
From mcbdovecot at robuust.nl Fri Jan 13 01:10:59 2012 From: mcbdovecot at robuust.nl (Maarten Bezemer) Date: Fri, 13 Jan 2012 00:10:59 +0100 (CET) Subject: [Dovecot] Need help with details for new Dovecot plugin In-Reply-To: <1326308231.2329.50.camel@rover> References: <1326308010.2329.47.camel@rover> <1326308231.2329.50.camel@rover> Message-ID: >> A couple years ago, I wrote some code for our Courier implementation >> that sent a magic UDP packet to a small server each time a user modified >> their voicemail IMAP folder. That UDP server would then connect back to >> Courier via IMAP again and check whether the folder had any unread >> messages left in it. Finally, it would contact our phone switches to >> modify the state of the message waiting indicator (MWI) on that user's >> phone line appropriately. Using a Dovecot plugin for this would require mail delivery to go through Dovecot as well as all mail access. So, no postfix or exim or whatever doing mail delivery by itself (mbox/maildir), and no MUAs accessing mail locally. With Courier, you probably had everything going through Courier, but with Dovecot, that need not always be the case. So, using a dovecot-plugin for this may not even catch everything. Of course I don't know anything about the details of the project (number of users, requirements for speed of MWI updates, mail storage type, etc.) but if it's not a very large setup and mail storage is mbox or maildir, I'd probably go for cron-based external monitoring using find and stuff like that. Maybe even with login scripting for extra triggering. HTH... -- Maarten From tss at iki.fi Fri Jan 13 01:17:47 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 13 Jan 2012 01:17:47 +0200 Subject: [Dovecot] (no subject) In-Reply-To: <1326308010.2329.47.camel@rover> References: <1326308010.2329.47.camel@rover> Message-ID: On 11.1.2012, at 20.53, Geoffrey Broadwell wrote: > So now the hard part is writing the piece that I can't just crib from > elsewhere -- making sure that I hook every place in Dovecot that the > user's voicemail folder can be changed in a way that would change it > between having one or more unread messages, and not having any unread > messages at all (or vice-versa, of course). At the same time, I want to > minimize the performance impact to Dovecot (and the load on the UDP > server) by only hooking the places I need to, filtering out as many > false positives as I can without introducing massive complexity, and > only pinging the UDP server when it's most likely to notice a change in > the state of that user's voicemail server. I think notify plugin would help you do this the easiest way. See mail_log plugin for an example of how to use it. From noel.butler at ausics.net Fri Jan 13 03:15:13 2012 From: noel.butler at ausics.net (Noel Butler) Date: Fri, 13 Jan 2012 11:15:13 +1000 Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <20120112124731.42C752842A@gwc.pfcs.com> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> <20120112124731.42C752842A@gwc.pfcs.com> Message-ID: <1326417313.5785.3.camel@tardis> On Thu, 2012-01-12 at 07:47 -0500, Harlan Stenn wrote: > > then as late as possible in the startup sequence, run: > > - ntp-wait -v -s 1 ; start dovecot and postfix (and database servers) I'll +1 that advice, I introduced ntp-wait some time ago when dovecot kept bitching, not a single glitch since. -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: This is a digitally signed message part URL: From user+dovecot at localhost.localdomain.org Fri Jan 13 03:25:41 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Fri, 13 Jan 2012 02:25:41 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): doveadm mailbox list withholds child mailboxes Message-ID: <4F0F8815.8070609@localhost.localdomain.org> Probably I've overlooked something. But a quick search in `hg log -k doveadm` didn't show appropriate information. doveadm mailbox list -u user at example.com doesn't show child mailboxes. mailbox = N/A || \*: Sent Trash INBOX Drafts Junk-E-Mail Supplier mailbox = Supplier*: Supplier mailbox = Supplier/*: Supplier/Dell Supplier/VMware Supplier/? The same problem exists in `doveadm mailbox status` Regards, Pascal -- The trapper recommends today: defaced.1201301 at localdomain.org From moseleymark at gmail.com Fri Jan 13 04:00:08 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Thu, 12 Jan 2012 18:00:08 -0800 Subject: [Dovecot] MySQL server has gone away Message-ID: I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL server has gone away" errors, despite having multiple hosts defined in my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing the same thing with 2.0.16 on Debian Squeeze 64-bit. E.g.: Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying: MySQL server has gone away Our mail mysql servers are busy enough that wait_timeout is set to a whopping 30 seconds. On my regular boxes, I see a good deal of these in the logs. I've been doing a lot of mucking with doveadm/dsync (working on maildir->mdbox migration finally, yay!) on test boxes (same dovecot package & version) and when I get this error, despite the log saying it's retrying, it doesn't seem to be. Instead I get: dsync(root): Error: user ...: Auth USER lookup failed dsync(root): Fatal: User lookup failed: Internal error occurred. Refer to server log for more information. Watching tcpdump at the same time, it looks like it's going through some of the mysql servers, but all of them have by now disconnected and are in CLOSE_WAIT. Here's an (edited) example after doing a dsync that completes without errors, with tcpdump running in the background: # sleep 30; netstat -ant | grep 3306; dsync -C^ -u mailbox at test.com backup mdbox:~/mdbox tcp 1 0 10.1.15.129:57436 10.1.52.48:3306 CLOSE_WAIT tcp 1 0 10.1.15.129:49917 10.1.52.49:3306 CLOSE_WAIT tcp 1 0 10.1.15.129:35904 10.1.52.47:3306 CLOSE_WAIT 20:49:59.725005 IP 10.1.15.129.35904 > 10.1.52.47.3306: F 1126:1126(0) ack 807 win 1004 20:49:59.725459 IP 10.1.52.47.3306 > 10.1.15.129.35904: . ack 1127 win 123 20:49:59.725568 IP 10.1.15.129.57436 > 10.1.52.48.3306: F 1126:1126(0) ack 807 win 1004 20:49:59.725779 IP 10.1.52.48.3306 > 10.1.15.129.57436: . ack 1127 win 123 dsync(root): Error: user mailbox at test.com: Auth USER lookup failed dsync(root): Fatal: User lookup failed: Internal error occurred. Refer to server log for more information. 10.1.15.129 in this case is the dovecot server, and the 10.1.52.0/24 boxes are mysql servers. That's the same pattern I've seen almost every time. Just a FIN packet to two of the servers (ack'd by the mysql server) and then it fails. Is the retry mechanism supposed to transparently start a new connection, or is this how it works? 
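For context on the setup being described: multiple MySQL hosts are given as repeated host parameters in the SQL config's connect string, which Dovecot uses for load balancing/failover across the listed servers. A sketch using the server addresses from the tcpdump above (database name and user are illustrative assumptions):

# /opt/dovecot/etc/lmtp/sql.conf (excerpt; dbname/user assumed)
driver = mysql
connect = host=10.1.52.47 host=10.1.52.48 host=10.1.52.49 dbname=mail user=dovecot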
In connecting remotely to these same servers (which aren't getting production traffic, so I'm the only person connecting to them), I get seemingly random disconnects via IMAP, always coinciding with a "MySQL server has gone away" error in the logs. This is non-production, so I'm happy to turn on whatever debugging would be useful. Here's doveconf -n from the box the tcpdump was on. This box is just configured for lmtp (but have seen the same thing on one configured for IMAP/POP as well), so it's pretty small, config-wise: # 2.0.17: /etc/dovecot/dovecot/dovecot.conf # OS: Linux 3.0.9-nx i686 Debian 5.0.9 auth_cache_negative_ttl = 0 auth_cache_ttl = 0 auth_debug = yes auth_failure_delay = 0 base_dir = /var/run/dovecot/ debug_log_path = /var/log/dovecot/debug.log default_client_limit = 3005 default_internal_user = doveauth default_process_limit = 1500 deliver_log_format = M=%m, F=%f, S="%s" => %$ disable_plaintext_auth = no first_valid_uid = 199 last_valid_uid = 201 lda_mailbox_autocreate = yes listen = * log_path = /var/log/dovecot/mail.log mail_debug = yes mail_fsync = always mail_location = maildir:~/Maildir:INDEX=/var/cache/dovecot/%2Mu/%2.2Mu/%u mail_nfs_index = yes mail_nfs_storage = yes mail_plugins = zlib quota mail_privileged_group = mail mail_uid = 200 managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave mdbox_rotate_interval = 1 days mmap_disable = yes namespace { hidden = no inbox = yes list = yes location = prefix = INBOX. separator = . subscriptions = yes type = private } passdb { args = /opt/dovecot/etc/lmtp/sql.conf driver = sql } plugin { info_log_path = /var/log/dovecot/dovecot-deliver.log log_path = /var/log/dovecot/dovecot-deliver.log quota = maildir:User quota quota_rule = *:bytes=25M quota_rule2 = INBOX.Trash:bytes=+10%% quota_rule3 = *:messages=3000 sieve = ~/sieve/dovecot.sieve sieve_before = /etc/dovecot/scripts/spam.sieve sieve_dir = ~/sieve/ zlib_save = gz zlib_save_level = 3 } protocols = lmtp sieve service auth-worker { unix_listener auth-worker { mode = 0666 } user = doveauth } service auth { client_limit = 8000 unix_listener login/auth { mode = 0666 } user = doveauth } service lmtp { executable = lmtp -L process_min_avail = 10 unix_listener lmtp { mode = 0666 } } ssl = no userdb { driver = prefetch } userdb { args = /opt/dovecot/etc/lmtp/sql.conf driver = sql } verbose_proctitle = yes protocol lmtp { mail_plugins = zlib quota sieve } Thanks! From henson at acm.org Fri Jan 13 04:51:29 2012 From: henson at acm.org (Paul B. Henson) Date: Thu, 12 Jan 2012 18:51:29 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <4F0F9C31.8070009@acm.org> On 1/12/2012 6:00 PM, Mark Moseley wrote: > Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying: > MySQL server has gone away I've actually been meaning to send a similar message for the last couple of months :). We run dovecot solely as a sasl authentication provider to postfix for smtp authentication. We're currently running 2.0.15 with a handful of patches from a few months ago when Timo fixed mysql failover. 
We also see sporadic messages like that in the logs:

Jan 11 01:00:57 sparky dovecot: auth-worker: Error: mysql: Query failed, retrying: MySQL server has gone away

We do have a timeout on the mysql servers, so I don't necessarily mind this message, except we also see some number of these:

Jan 11 01:00:57 sparky dovecot: auth-worker: Error: sql(clgeurts,108.38.64.98): Password query failed: MySQL server has gone away

The mysql servers have never been down or unresponsive, if it retries, it should succeed. I'm not sure what's happening here, perhaps it tries the query on one mysql server connection (we have two configured) which has timed out, and then tries the other one, and if the other one has also timed out just fails?

I also see some auth timeouts:

Jan 11 22:06:02 sparky dovecot: auth: CRAM-MD5(?,200.37.175.14): Request 10232.28 timeouted after 150 secs, state=2

I'm not sure if they're related to the mysql timeouts. There are also some postfix auth errors:

Jan 11 23:55:41 sparky postfix/smtpd[20994]: warning: unknown[200.37.175.14]: SASL CRAM-MD5 authentication failed: Connection lost to authentication server

Which I think happen when dovecot takes too long to respond. I haven't had time to dig into it or get any debugging info, but just thought I'd pipe up when I saw your similar question :).

-- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768

From user+dovecot at localhost.localdomain.org Fri Jan 13 05:10:31 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Fri, 13 Jan 2012 04:10:31 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb):dsync umlaut problems Message-ID: <4F0FA0A7.10909@localhost.localdomain.org>

All umlauts in mailbox names are lost after converting mbox/Maildir mailboxes to mdbox.

[location2 scp-ed from the old server]
# ls -d /srv/import/Maildir/.Gel\&APY-schte\ Elemente/
/srv/import/Maildir/.Gel&APY-schte Elemente/
# dsync -u jane at example.com -v mirror maildir:/srv/import/Maildir/
...
dsync(jane at example.com): Info: Gelöschte Elemente: only in dest
...
...
# doveadm mailbox list -u jane at example.com Gel*
Gel__schte_Elemente
# ls -d mdbox/mailboxes/Gel*
mdbox/mailboxes/Gel__schte_Elemente

Regards, Pascal -- The trapper recommends today: cafefeed.1201303 at localdomain.org

From mark at msapiro.net Fri Jan 13 06:37:24 2012 From: mark at msapiro.net (Mark Sapiro) Date: Thu, 12 Jan 2012 20:37:24 -0800 Subject: [Dovecot] Clients show .subscriptions folder In-Reply-To: References: Message-ID: <4F0FB504.5070802@msapiro.net>

Mark Sapiro wrote:
> Since upgrading from dovecot-2.1.rc1 to dovecot-2.1.rc3, some clients
> are showing a .subscriptions file in the user's mbox path as a folder.
>
> Some clients such as T'bird on Mac OS X create this file listing
> subscribed mbox files. Other clients such as T'bird on Windows XP show
> this file as a folder in the folder list even though it cannot be
> accessed as a folder (dovecot returns CANNOT Mailbox is not a valid
> mbox file).
>
> I think this may be a result of uncommenting the inbox namespace in
> conf.d/10-mail.conf
> .
>
> Is there a way to suppress exposing this file to clients that don't use
> it?

I worked around this by setting the client to show only subscribed folders.

-- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B.
Dylan

From kjonca at o2.pl Fri Jan 13 08:20:13 2012 From: kjonca at o2.pl (Kamil =?iso-8859-2?Q?Jo=F1ca?=) Date: Fri, 13 Jan 2012 07:20:13 +0100 Subject: [Dovecot] dovecot 2.0.15 - purge errors Message-ID: <87hb00run6.fsf@alfa.kjonca>

Dovecot 2.0.15, Debian package. Have I lost some mails? How can I check what is in the *.broken file?

--8<---------------cut here---------------start------------->8---
$ doveadm -v purge
doveadm(kjonca): Error: Corrupted dbox file /home/kjonca/Mail/0/storage/m.6469 (around offset=291530): purging found mismatched offsets (291500 vs 299692, 60/215)
doveadm(kjonca): Warning: mdbox /home/kjonca/Mail/0/storage: rebuilding indexes
doveadm(kjonca): Error: Corrupted dbox file /home/kjonca/Mail/0/storage/m.6469 (around offset=599914): metadata header has bad magic value
doveadm(kjonca): Warning: dbox: Copy of the broken file saved to /home/kjonca/Mail/0/storage/m.6469.broken
doveadm(kjonca): Warning: Transaction log file /home/kjonca/Mail/0/storage/dovecot.map.index.log was locked for 211 seconds
doveadm(kjonca): Error: Purging namespace '' failed: Internal error occurred. Refer to server log for more information. [2012-01-13 06:45:07]
--8<---------------cut here---------------end--------------->8---

doveconf -n
--8<---------------cut here---------------start------------->8---
# 2.0.15: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.38+3-64 x86_64 Debian wheezy/sid
auth_debug = yes
auth_mechanisms = digest-md5 cram-md5 login plain
auth_verbose = yes
listen = alfa
log_path = /var/log/dovecot
log_timestamp = "%Y-%m-%d %H:%M:%S "
mail_debug = yes
mail_location = mdbox:~/Mail/0
mail_log_prefix = "%Us(%u): "
mail_plugins = zlib notify acl
mail_privileged_group = mail
namespace { hidden = no inbox = yes list = yes location = prefix = separator = / subscriptions = yes type = private }
namespace { hidden = no inbox = no list = yes location = mbox:~/Mail/Old:CONTROL=~/Mail/.dovecot/control/Old:INDEX=~/Mail/.dovecot/index/Old prefix = "#Old/" separator = / subscriptions = yes type = private }
passdb { args = scheme=PLAIN /etc/security/dovecot.pwd driver = passwd-file }
plugin { acl = vfile mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename mail_log_fields = uid box msgid size zlib_save = bz2 zlib_save_level = 9 }
protocols = imap
service auth { user = root }
service imap-login { process_limit = 2 process_min_avail = 1 }
service imap { vsz_limit = 512 M }
service pop3-login { process_limit = 2 process_min_avail = 1 }
service pop3 { vsz_limit = 512 M }
ssl = no
userdb { driver = passwd }
verbose_proctitle = yes
protocol imap { mail_max_userip_connections = 20 mail_plugins = zlib imap_zlib mail_log notify acl }
protocol pop3 { pop3_uidl_format = %08Xu%08Xv }
protocol lda { deliver_log_format = msgid=%m: %$ log_path = ~/log/deliver.log postmaster_address = root at localhost }
--8<---------------cut here---------------end--------------->8---

-- If anyone has a spare Toshiba G450, I will gladly take it ;)
----------------
Biology teaches that if something has bitten you, it is almost certain that it was a female.

From goetz.reinicke at filmakademie.de Fri Jan 13 11:01:05 2012 From: goetz.reinicke at filmakademie.de (=?ISO-8859-15?Q?G=F6tz_Reinicke?=) Date: Fri, 13 Jan 2012 10:01:05 +0100 Subject: [Dovecot] more than 200 imap processes for one user Message-ID: <4F0FF2D1.4040909@filmakademie.de>

Hi, recently I noticed that our dovecot server (RH EL 5.7 dovecot-1.0.7-7.el5_7.1) 'fires' up a lot of imap processes only for one user.
I counted 214 :-) Most of them are in the 'S' state and were started nearly at the same time, within 5 minutes. Usually users have about 4 to 10 ....

Does anyone have an idea what could be the cause?

Thanks for any suggestion and best regards. Götz

-- Götz Reinicke IT-Koordinator Tel. +49 7141 969 420 Fax +49 7141 969 55 420 E-Mail goetz.reinicke at filmakademie.de Filmakademie Baden-Württemberg GmbH Akademiehof 10 71638 Ludwigsburg www.filmakademie.de Eintragung Amtsgericht Stuttgart HRB 205016 Vorsitzender des Aufsichtsrats: Jürgen Walter MdL Staatssekretär im Ministerium für Wissenschaft, Forschung und Kunst Baden-Württemberg Geschäftsführer: Prof. Thomas Schadt

-------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5161 bytes Desc: S/MIME Kryptografische Unterschrift URL:

From tss at iki.fi Fri Jan 13 11:36:38 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 13 Jan 2012 11:36:38 +0200 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID:

On 13.1.2012, at 4.00, Mark Moseley wrote:

> I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL
> server has gone away" errors, despite having multiple hosts defined in
> my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing
> the same thing with 2.0.16 on Debian Squeeze 64-bit.
>
> E.g.:
>
> Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying:
> MySQL server has gone away
>
> Our mail mysql servers are busy enough that wait_timeout is set to a
> whopping 30 seconds. On my regular boxes, I see a good deal of these
> in the logs. I've been doing a lot of mucking with doveadm/dsync
> (working on maildir->mdbox migration finally, yay!) on test boxes
> (same dovecot package & version) and when I get this error, despite
> the log saying it's retrying, it doesn't seem to be. Instead I get:
>
> dsync(root): Error: user ...: Auth USER lookup failed

Try with only one host in the "connect" string? My guess: Both the connections have timed out, and the retrying fails as well (there is only one retry). Although if the retrying lookup fails, there should be an error logged about it also (you don't see one?)

Also another idea to avoid them in the first place:

service auth-worker {
  idle_kill = 20
}

From tss at iki.fi Fri Jan 13 11:40:02 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 13 Jan 2012 11:40:02 +0200 Subject: [Dovecot] more than 200 imap processes for one user In-Reply-To: <4F0FF2D1.4040909@filmakademie.de> References: <4F0FF2D1.4040909@filmakademie.de> Message-ID:

On 13.1.2012, at 11.01, Götz Reinicke wrote:

> recently I noticed that our dovecot server (RH EL 5.7
> dovecot-1.0.7-7.el5_7.1) 'fires' up a lot of imap processes only for one
> user.

v1.1+ limits this to 10 processes by default.

> Does anyone have an idea what could be the cause?

Some client gone crazy.
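On v1.1+/v2.x the limit Timo mentions is also tunable; a minimal sketch using the setting that already appears in doveconf output earlier in this thread (it does not exist on the 1.0.7 server above):

protocol imap {
  # cap concurrent IMAP connections per user+IP pair
  mail_max_userip_connections = 10
}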
From janfrode at tanso.net Fri Jan 13 12:26:56 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Fri, 13 Jan 2012 11:26:56 +0100 Subject: [Dovecot] dsync conversion and ldap attributes Message-ID: <20120113102656.GA12031@dibs.tanso.net> I have: mail_home = /srv/mailstore/%256RHu/%d/%n mail_location = maildir:~/:INDEX=/indexes/%1u/%1.1u/%u userdb { args = /etc/dovecot/dovecot-ldap.conf.ext driver = ldap } and the dovecot-ldap.conf.ext specifies: user_attrs = mailMessageStore=home, mailLocation=mail, mailQuota=quota_rule=*:storage=%$ Now I want to convert individual users to mdbox using dsync, but how to I tell location2 to not fetch "home" and "mail" from ldap and use different mail_location (mdbox:~/mdbox) ? I.e. I want converted accounts stored in mail_location mdbox:/srv/mailstore/%256RHu/%d/%n/mdbox. -jf From CMarcus at Media-Brokers.com Fri Jan 13 13:38:01 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Fri, 13 Jan 2012 06:38:01 -0500 Subject: [Dovecot] Need help with details for new Dovecot plugin In-Reply-To: References: <1326308010.2329.47.camel@rover> <1326308231.2329.50.camel@rover> Message-ID: <4F101799.6040002@Media-Brokers.com> On 2012-01-12 6:10 PM, Maarten Bezemer wrote: > Of course I don't know anything about the details of the project (number > of users, requirements for speed of MWI updates, mail storage type, > etc.) but if it's not a very large setup and mail storage is mbox or > maildir, I'd probably go for cron-based external monitoring using find > and stuff like that. Maybe even with login scripting for extra triggering. I know that dovecot supports inotify (not sure how or in what way, and ianap, so may be totally off base), so maybe that could be leveraged? -- Best regards, Charles From CMarcus at Media-Brokers.com Fri Jan 13 13:41:51 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Fri, 13 Jan 2012 06:41:51 -0500 Subject: [Dovecot] Need help with details for new Dovecot plugin - was: Re: (no subject) In-Reply-To: References: <1326308010.2329.47.camel@rover> Message-ID: <4F10187F.3010507@Media-Brokers.com> On 2012-01-12 6:17 PM, Timo Sirainen wrote: > On 11.1.2012, at 20.53, Geoffrey Broadwell wrote: >> So now the hard part is writing the piece that I can't just crib from >> elsewhere -- making sure that I hook every place in Dovecot that the >> user's voicemail folder can be changed in a way that would change it >> between having one or more unread messages, and not having any unread >> messages at all (or vice-versa, of course). At the same time, I want to >> minimize the performance impact to Dovecot (and the load on the UDP >> server) by only hooking the places I need to, filtering out as many >> false positives as I can without introducing massive complexity, and >> only pinging the UDP server when it's most likely to notice a change in >> the state of that user's voicemail server. > I think notify plugin would help you do this the easiest way. See > mail_log plugin for an example of how to use it. Oops, should have read all messages before replying (I usually skip messages with (no subject), but I try to read everything on some lists (dovecot is one of them)... Timo - searching on 'inotify' or 'notify' on both wiki1 and wiki2 has 'no results'... maybe the search indexes need to be updated? Or, is it just that there really is no documentation of inotify on either of the wikis? 
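For the archives, a minimal sketch of wiring up the notify-based mail_log plugin that Timo points to; the event and field names below are taken from doveconf output earlier in this thread, so trim the lists to taste:

mail_plugins = notify mail_log

plugin {
  mail_log_events = save copy delete undelete expunge mailbox_rename
  mail_log_fields = uid box msgid size
}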
-- Best regards, Charles

From joseba.torre at ehu.es Fri Jan 13 15:59:25 2012 From: joseba.torre at ehu.es (Joseba Torre) Date: Fri, 13 Jan 2012 14:59:25 +0100 Subject: [Dovecot] Dsync and compressed mailboxes Message-ID: <4F1038BD.1010605@ehu.es>

Hi,

I will begin two migrations next week, and in both cases I plan to use compressed mailboxes with mdbox format. But at the last minute a doubt has appeared: is dsync aware of compressed mailboxes? I'm not sure if

dsync -u $USER mirror mdbox:compressed_mdbox_path

works, or if I have to use something else (I guess that with a running dovecot dsync backup should work).

Thanks.

From ivo at crm.walltopia.com Fri Jan 13 19:11:30 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Fri, 13 Jan 2012 19:11:30 +0200 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation Message-ID:

Hello to all members. I have been using Dovecot for 5 years, but this is my first post here.

I am aware of the various autoresponder scripts for vacation autoreplies (I am using Virtual Vacation 3.1 by Mischa Peters). I have an issue with auto-replies - they are vulnerable to spamming with forged email addresses. Forging can be prevented with several Postfix settings, which I did in the past - but was forced to remove, because our company occasionally has clients with improper configurations, and those settings prevented us from receiving their legitimate mail (and this of course is not good for the business).

So I have thought about another idea. Since I use Dovecot-auth to verify mailbox existence - I just wonder whether it is possible to somehow indicate a specific error code (and hopefully descriptive text also) to Postfix (e.g. 450 or some other temporary failure) when the owner of the mailbox is currently on vacation?

Best wishes, IVO GELOV

From CMarcus at Media-Brokers.com Fri Jan 13 20:03:36 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Fri, 13 Jan 2012 13:03:36 -0500 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: References: Message-ID: <4F1071F8.4080202@Media-Brokers.com>

On 2012-01-13 12:11 PM, IVO GELOV (CRM) wrote:
> I am aware of the various autoresponder scripts for vacation autoreplies
> (I am using Virtual Vacation 3.1 by Mischa Peters).
> I have an issue with auto-replies - they are vulnerable to spamming with
> forged email addresses.

I think you are using an extremely old/outdated version...

The latest version would not suffer this problem, because it has a lot of message types that it will *not* respond to, including messages appearing to be from yourself...

Get the latest version from the postfixadmin package.

However, I don't know how to use it without also using postfixadmin (it creates databases for storing the vacation message, etc)...

-- Best regards, Charles

From moseleymark at gmail.com Fri Jan 13 20:29:45 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Fri, 13 Jan 2012 10:29:45 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID:

On Fri, Jan 13, 2012 at 1:36 AM, Timo Sirainen wrote:
> On 13.1.2012, at 4.00, Mark Moseley wrote:
>
>> I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL
>> server has gone away" errors, despite having multiple hosts defined in
>> my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing
>> the same thing with 2.0.16 on Debian Squeeze 64-bit.
>> >> E.g.:
>> >>
>> >> Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying:
>> >> MySQL server has gone away
>> >>
>> >> Our mail mysql servers are busy enough that wait_timeout is set to a
>> >> whopping 30 seconds. On my regular boxes, I see a good deal of these
>> >> in the logs. I've been doing a lot of mucking with doveadm/dsync
>> >> (working on maildir->mdbox migration finally, yay!) on test boxes
>> >> (same dovecot package & version) and when I get this error, despite
>> >> the log saying it's retrying, it doesn't seem to be. Instead I get:
>> >>
>> >> dsync(root): Error: user ...: Auth USER lookup failed
> >
> > Try with only one host in the "connect" string? My guess: Both the connections have timed out, and the retrying fails as well (there is only one retry). Although if the retrying lookup fails, there should be an error logged about it also (you don't see one?)
> >
> > Also another idea to avoid them in the first place:
> >
> > service auth-worker {
> >   idle_kill = 20
> > }
>

With just one 'connect' host, it seems to reconnect just fine (using the same tests as above) and I'm not seeing the same error. It worked every time that I tried, with no complaints of "MySQL server has gone away".

If there are multiple hosts, it seems like the most robust thing to do would be to exhaust the existing connections and if none of those succeed, then start a new connection to one of them. It will probably result in much more convoluted logic but it'd probably match better what people expect from a retry.

Alternatively, since in all my tests, the mysql server has closed the connection prior to this, is the auth worker not recognizing its connection is already half-closed (in which case, it probably shouldn't even consider it a legitimate connection and just automatically reconnect, i.e. try #1, not the retry, which would happen after another failure).

I'll give the idle_kill a try too. I kind of like the idea of idle_kill for auth processes anyway, just to free up some connections on the mysql server.

From ghandidrivesahumvee at rocketfish.com Fri Jan 13 20:59:02 2012 From: ghandidrivesahumvee at rocketfish.com (Dovecot-GDH) Date: Fri, 13 Jan 2012 10:59:02 -0800 Subject: [Dovecot] Dsync and compressed mailboxes In-Reply-To: <4F1038BD.1010605@ehu.es> References: <4F1038BD.1010605@ehu.es> Message-ID: <01D2B152-D1C3-4A89-8CE7-608357ADCBC2@rocketfish.com>

The dsync process will be aware of whatever configuration file it refers to. The best thing to do is to set up a separate instance of Dovecot with compression enabled (really not that hard to do) and point dsync to that separate instance's configuration. Mailboxes written by dsync will be compressed.

On Jan 13, 2012, at 5:59 AM, Joseba Torre wrote:

> Hi,
>
> I will begin two migrations next week, and in both cases I plan to use compressed mailboxes with mdbox format. But at the last minute a doubt has appeared: is dsync aware of compressed mailboxes? I'm not sure if
>
> dsync -u $USER mirror mdbox:compressed_mdbox_path
>
> works, or if I have to use something else (I guess that with a running dovecot dsync backup should work).
>
> Thanks.
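Concretely, the separate-instance approach might look like this; the config path is an assumed name, the zlib settings mirror those seen in a doveconf dump elsewhere in this thread, and -c simply points dsync at the alternate configuration:

# /etc/dovecot/dovecot-compress.conf (copy of the live config, plus:)
mail_plugins = zlib
plugin {
  zlib_save = gz         # write new mails gzip-compressed
  zlib_save_level = 6
}

# run the conversion against that instance's config:
dsync -c /etc/dovecot/dovecot-compress.conf -u $USER mirror mdbox:compressed_mdbox_path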
From robert at schetterer.org Fri Jan 13 21:38:28 2012 From: robert at schetterer.org (Robert Schetterer) Date: Fri, 13 Jan 2012 20:38:28 +0100 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <4F108834.60709@schetterer.org> Am 13.01.2012 19:29, schrieb Mark Moseley: > On Fri, Jan 13, 2012 at 1:36 AM, Timo Sirainen wrote: >> On 13.1.2012, at 4.00, Mark Moseley wrote: >> >>> I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL >>> server has gone away" errors, despite having multiple hosts defined in >>> my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing >>> the same thing with 2.0.16 on Debian Squeeze 64-bit. >>> >>> E.g.: >>> >>> Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying: >>> MySQL server has gone away >>> >>> Our mail mysql servers are busy enough that wait_timeout is set to a >>> whopping 30 seconds. On my regular boxes, I see a good deal of these >>> in the logs. I've been doing a lot of mucking with doveadm/dsync >>> (working on maildir->mdbox migration finally, yay!) on test boxes >>> (same dovecot package & version) and when I get this error, despite >>> the log saying it's retrying, it doesn't seem to be. Instead I get: >>> >>> dsync(root): Error: user ...: Auth USER lookup failed >> >> Try with only one host in the "connect" string? My guess: Both the connections have timed out, and the retrying fails as well (there is only one retry). Although if the retrying lookup fails, there should be an error logged about it also (you don't see one?) >> >> Also another idea to avoid them in the first place: >> >> service auth-worker { >> idle_kill = 20 >> } >> > > With just one 'connect' host, it seems to reconnect just fine (using > the same tests as above) and I'm not seeing the same error. It worked > every time that I tried, with no complaints of "MySQL server has gone > away". > > If there are multiple hosts, it seems like the most robust thing to do > would be to exhaust the existing connections and if none of those > succeed, then start a new connection to one of them. It will probably > result in much more convoluted logic but it'd probably match better > what people expect from a retry. > > Alternatively, since in all my tests, the mysql server has closed the > connection prior to this, is the auth worker not recognizing its > connection is already half-closed (in which case, it probably > shouldn't even consider it a legitimate connection and just > automatically reconnect, i.e. try #1, not the retry, which would > happen after another failure). > > I'll give the idle_kill a try too. I kind of like the idea of > idle_kill for auth processes anyway, just to free up some connections > on the mysql server. by the way , if you use sql for auth have you tried auth caching ? http://wiki.dovecot.org/Authentication/Caching i.e. # Authentication cache size (e.g. 10M). 0 means it's disabled. Note that # bsdauth, PAM and vpopmail require cache_key to be set for caching to be used. auth_cache_size = 10M # Time to live for cached data. After TTL expires the cached record is no # longer used, *except* if the main database lookup returns internal failure. # We also try to handle password changes automatically: If user's previous # authentication was successful, but this one wasn't, the cache isn't used. # For now this works only with plaintext authentication. auth_cache_ttl = 1 hour # TTL for negative hits (user not found, password mismatch). # 0 disables caching them completely. 
auth_cache_negative_ttl = 0

-- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria

From moseleymark at gmail.com Fri Jan 13 22:45:03 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Fri, 13 Jan 2012 12:45:03 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <4F108834.60709@schetterer.org> References: <4F108834.60709@schetterer.org> Message-ID:

On Fri, Jan 13, 2012 at 11:38 AM, Robert Schetterer wrote:
> Am 13.01.2012 19:29, schrieb Mark Moseley:
>> On Fri, Jan 13, 2012 at 1:36 AM, Timo Sirainen wrote:
>>> On 13.1.2012, at 4.00, Mark Moseley wrote:
>>>
>>>> I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL
>>>> server has gone away" errors, despite having multiple hosts defined in
>>>> my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing
>>>> the same thing with 2.0.16 on Debian Squeeze 64-bit.
>>>>
>>>> E.g.:
>>>>
>>>> Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying:
>>>> MySQL server has gone away
>>>>
>>>> Our mail mysql servers are busy enough that wait_timeout is set to a
>>>> whopping 30 seconds. On my regular boxes, I see a good deal of these
>>>> in the logs. I've been doing a lot of mucking with doveadm/dsync
>>>> (working on maildir->mdbox migration finally, yay!) on test boxes
>>>> (same dovecot package & version) and when I get this error, despite
>>>> the log saying it's retrying, it doesn't seem to be. Instead I get:
>>>>
>>>> dsync(root): Error: user ...: Auth USER lookup failed
>>>
>>> Try with only one host in the "connect" string? My guess: Both the connections have timed out, and the retrying fails as well (there is only one retry). Although if the retrying lookup fails, there should be an error logged about it also (you don't see one?)
>>>
>>> Also another idea to avoid them in the first place:
>>>
>>> service auth-worker {
>>>   idle_kill = 20
>>> }
>>>
>>
>> With just one 'connect' host, it seems to reconnect just fine (using
>> the same tests as above) and I'm not seeing the same error. It worked
>> every time that I tried, with no complaints of "MySQL server has gone
>> away".
>>
>> If there are multiple hosts, it seems like the most robust thing to do
>> would be to exhaust the existing connections and if none of those
>> succeed, then start a new connection to one of them. It will probably
>> result in much more convoluted logic but it'd probably match better
>> what people expect from a retry.
>>
>> Alternatively, since in all my tests, the mysql server has closed the
>> connection prior to this, is the auth worker not recognizing its
>> connection is already half-closed (in which case, it probably
>> shouldn't even consider it a legitimate connection and just
>> automatically reconnect, i.e. try #1, not the retry, which would
>> happen after another failure).
>>
>> I'll give the idle_kill a try too. I kind of like the idea of
>> idle_kill for auth processes anyway, just to free up some connections
>> on the mysql server.
>
> by the way , if you use sql for auth have you tried auth caching ?
>
> http://wiki.dovecot.org/Authentication/Caching
>
> i.e.
>
> # Authentication cache size (e.g. 10M). 0 means it's disabled. Note that
> # bsdauth, PAM and vpopmail require cache_key to be set for caching to
> be used.
>
> auth_cache_size = 10M
>
> # Time to live for cached data. After TTL expires the cached record is no
> # longer used, *except* if the main database lookup returns internal
> failure.
> # We also try to handle password changes automatically: If user's previous
> # authentication was successful, but this one wasn't, the cache isn't used.
> # For now this works only with plaintext authentication.
>
> auth_cache_ttl = 1 hour
>
> # TTL for negative hits (user not found, password mismatch).
> # 0 disables caching them completely.
>
> auth_cache_negative_ttl = 0

Yup, we have caching turned on for our production boxes. On this particular box, I'd just shut off caching so that I could work on a script for converting from maildir->mdbox and run it repeatedly on the same mailbox. I got tired of restarting dovecot between each test :)

From user+dovecot at localhost.localdomain.org Fri Jan 13 23:04:12 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Fri, 13 Jan 2012 22:04:12 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb):dsync umlaut problems In-Reply-To: <4F0FA0A7.10909@localhost.localdomain.org> References: <4F0FA0A7.10909@localhost.localdomain.org> Message-ID: <4F109C4C.5050402@localhost.localdomain.org>

On 01/13/2012 04:10 AM Pascal Volk wrote:
> All umlauts in mailbox names are lost after converting mbox/Maildir
> mailboxes to mdbox.
>
> # ls -d /srv/import/Maildir/.Gel\&APY-schte\ Elemente/
> /srv/import/Maildir/.Gel&APY-schte Elemente/
> ...
> # doveadm mailbox list -u jane at example.com Gel*
> Gel__schte_Elemente

Oh, and child mailboxes with umlauts become top-level mailboxes:

# ls -d /srv/import/Maildir/.INBOX.Projekte.K\&APY-ln
/srv/import/Maildir/.INBOX.Projekte.K&APY-ln
# ls -d mdbox/mailboxes/INBOX_Projekte_K__ln
mdbox/mailboxes/INBOX_Projekte_K__ln

Regards, Pascal -- The trapper recommends today: f007ba11.1201305 at localdomain.org

From user+dovecot at localhost.localdomain.org Sat Jan 14 00:04:33 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Fri, 13 Jan 2012 23:04:33 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): Panic: file ostream.c: line 173 (o_stream_sendv): assertion failed: (stream->stream_errno != 0) Message-ID: <4F10AA71.6030901@localhost.localdomain.org>

Hi Timo, today some imap processes crashed.

Regards, Pascal -- The trapper recommends today: f007ba11.1201322 at localdomain.org

-------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: core.imap.1326475521-24777_bt.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: doveconf-n.txt URL:

From info_postfix at gmx.ch Sat Jan 14 00:15:02 2012 From: info_postfix at gmx.ch (maximus12) Date: Fri, 13 Jan 2012 14:15:02 -0800 (PST) Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <20120112115705.GS1341@charite.de> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> <33127262.post@talk.nabble.com> <20120112115705.GS1341@charite.de> Message-ID: <33137241.post@talk.nabble.com>

Hi Ralf,

Thanks for your help.

Dovecot stop
Change the server time
Dovecot start

Got a warning but it worked! Thanks a lot for your help. (With dovecot 1.x)

-- View this message in context: http://old.nabble.com/Server-Time-45min-ahead-tp33126760p33137241.html Sent from the Dovecot mailing list archive at Nabble.com.

From henson at acm.org Sat Jan 14 00:46:08 2012 From: henson at acm.org (Paul B.
Henson) Date: Fri, 13 Jan 2012 14:46:08 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <20120113224607.GS4844@bender.csupomona.edu>

On Fri, Jan 13, 2012 at 01:36:38AM -0800, Timo Sirainen wrote:
> Also another idea to avoid them in the first place:
>
> service auth-worker {
>   idle_kill = 20
> }

Ah, set the auth-worker timeout to less than the mysql timeout to prevent a stale mysql connection from ever being used. I'll try that, thanks.

-- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768

From moseleymark at gmail.com Sat Jan 14 01:19:28 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Fri, 13 Jan 2012 15:19:28 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <20120113224607.GS4844@bender.csupomona.edu> References: <20120113224607.GS4844@bender.csupomona.edu> Message-ID:

On Fri, Jan 13, 2012 at 2:46 PM, Paul B. Henson wrote:
> On Fri, Jan 13, 2012 at 01:36:38AM -0800, Timo Sirainen wrote:
>
>> Also another idea to avoid them in the first place:
>>
>> service auth-worker {
>>   idle_kill = 20
>> }
>
> Ah, set the auth-worker timeout to less than the mysql timeout to
> prevent a stale mysql connection from ever being used. I'll try that,
> thanks.

I gave that a try. Sometimes it seems to kill off the auth-worker but not till after a minute or so (with idle_kill = 20). Other times, the worker stays around for more like 5 minutes (I gave up watching), despite being idle -- and I'm the only person connecting to it, so it's definitely idle. Does auth-worker perhaps only wake up every so often to check its idle status?

To test, I kicked off a dsync, then grabbed a netstat:

tcp 0 0 10.1.15.129:40070 10.1.52.47:3306 ESTABLISHED 29146/auth worker [
tcp 0 0 10.1.15.129:33369 10.1.52.48:3306 ESTABLISHED 29146/auth worker [
tcp 0 0 10.1.15.129:54083 10.1.52.49:3306 ESTABLISHED 29146/auth worker [

then kicked off this loop:

# while true; do date; ps p 29146 |tail -n1; sleep 1; done
Fri Jan 13 18:05:14 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb]
Fri Jan 13 18:05:15 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb]
.... More lines of the loop ...
Fri Jan 13 18:05:35 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb]
18:05:36.252976 IP 10.1.52.48.3306 > 10.1.15.129.33369: F 77:77(0) ack 92 win 91
18:05:36.288549 IP 10.1.15.129.33369 > 10.1.52.48.3306: . ack 78 win 913
Fri Jan 13 18:05:36 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb]
18:05:37.196204 IP 10.1.52.49.3306 > 10.1.15.129.54083: F 806:806(0) ack 1126 win 123
18:05:37.228594 IP 10.1.15.129.54083 > 10.1.52.49.3306: . ack 807 win 1004
18:05:37.411955 IP 10.1.52.47.3306 > 10.1.15.129.40070: F 806:806(0) ack 1126 win 123
18:05:37.448573 IP 10.1.15.129.40070 > 10.1.52.47.3306: . ack 807 win 1004
Fri Jan 13 18:05:37 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb]
... more lines of the loop ...
Fri Jan 13 18:10:13 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb]
Fri Jan 13 18:10:14 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb]
^C

at which point I bailed out. Looking again a couple of minutes later, it was gone. Nothing else was going on and the logs don't show any activity between 18:05:07 and 18:10:44.
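The rule of thumb being tested in this subthread: keep idle_kill below MySQL's wait_timeout so an idle worker is retired before the server drops its connection. With the values from this thread:

# MySQL side (my.cnf): idle connections are dropped after 30 seconds
wait_timeout = 30

# Dovecot side: retire idle auth workers first
service auth-worker {
  idle_kill = 20
}

As the test above shows, idle_kill is apparently not enforced the instant the timer expires, so treat it as a lower bound rather than a guarantee.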
From henson at acm.org Sat Jan 14 02:19:12 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 13 Jan 2012 16:19:12 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <4F108834.60709@schetterer.org> References: <4F108834.60709@schetterer.org> Message-ID: <20120114001912.GZ4844@bender.csupomona.edu>

On Fri, Jan 13, 2012 at 11:38:28AM -0800, Robert Schetterer wrote:
> by the way , if you use sql for auth have you tried auth caching ?
>
> http://wiki.dovecot.org/Authentication/Caching

Hmm, hadn't tried that, but flipped it on to see how it might work out. The only tradeoff is a potential delay between when an account is disabled and when it can stop authenticating. I set the timeout to 10 minutes for now, with an hour timeout for negative caching.

That page says you can send a USR2 signal to the auth process for cache stats? That doesn't seem to work. OTOH, that page is for version 1, not 2; is there some other way to generate cache stats in version 2?

Thanks...

-- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768

From henson at acm.org Sat Jan 14 03:54:29 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 13 Jan 2012 17:54:29 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <4F10E055.4030303@acm.org>

On 1/13/2012 10:29 AM, Mark Moseley wrote:
> connection prior to this, is the auth worker not recognizing its
> connection is already half-closed (in which case, it probably
> shouldn't even consider it a legitimate connection and just
> automatically reconnect, i.e. try #1, not the retry, which would
> happen after another failure).

I don't think there's any way to tell from the mysql api that the server has closed the connection short of trying to use it and getting that specific error. I suppose that specific error could be special cased as an immediate "try again with no penalty" rather than considered a failure.

-- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768

From robert at schetterer.org Sat Jan 14 10:01:12 2012 From: robert at schetterer.org (Robert Schetterer) Date: Sat, 14 Jan 2012 09:01:12 +0100 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <20120114001912.GZ4844@bender.csupomona.edu> References: <4F108834.60709@schetterer.org> <20120114001912.GZ4844@bender.csupomona.edu> Message-ID: <4F113648.2000902@schetterer.org>

Am 14.01.2012 01:19, schrieb Paul B. Henson:
> On Fri, Jan 13, 2012 at 11:38:28AM -0800, Robert Schetterer wrote:
>
>> by the way , if you use sql for auth have you tried auth caching ?
>>
>> http://wiki.dovecot.org/Authentication/Caching
>
> Hmm, hadn't tried that, but flipped it on to see how it might work out.
> The only tradeoff is a potential delay between when an account is
> disabled and when it can stop authenticating. I set the timeout to 10
> minutes for now, with an hour timeout for negative caching.

I don't know if I understand you right; perhaps this is what you mean. I use this with/because of fail2ban:

# TTL for negative hits (user not found, password mismatch).
# 0 disables caching them completely.
OTOH, that page is for version 1, not
> 2; is there some other way to generate cache stats in version 2?

Auth cache works with Dovecot 2; no idea about Dovecot 1, didn't test, but I guess it does.

> > Thanks... >

-- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria

From yubao.liu at gmail.com Sat Jan 14 15:49:31 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Sat, 14 Jan 2012 21:49:31 +0800 Subject: [Dovecot] [PATCH] support master user to login as other users by DIGEST-MD5 SASL proxy authorization Message-ID: <4F1187EB.5070002@gmail.com>

Hi Timo,

As http://wiki2.dovecot.org/Authentication/MasterUsers states, currently the first way for master users to log in as other users only supports the PLAIN SASL mechanism, and because DIGEST-MD5 uses the user name to calculate the MD5 digest, the second way can't support DIGEST-MD5.

I enhanced the code to support DIGEST-MD5 too for the first way; please review the attached patch against dovecot-2.0 HG tip. The patch also contains a little fix to the "nonce-count" string; RFC 2831 shows it should be "nc".

I tested it on Debian Wheezy, and it seems OK. Below are my verification steps. (Debian packaged 2.0.15 + http://hg.dovecot.org/dovecot-2.0/rev/bed15faedfd4 + attached patch)

$ doveconf -n
# 2.0.15: /etc/dovecot/dovecot.conf
# OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid
auth_default_realm = corp.example.com
auth_krb5_keytab = /etc/dovecot.keytab
auth_master_user_separator = *
auth_mechanisms = gssapi digest-md5 cram-md5
auth_realms = corp.example.com
auth_username_format = %n
first_valid_gid = 1000
first_valid_uid = 1000
mail_location = mdbox:/srv/mail/%u/Mail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
passdb { args = /etc/dovecot/master-users driver = passwd-file master = yes }
passdb { driver = pam }
plugin { sieve = /srv/mail/%u/.dovecot.sieve sieve_dir = /srv/mail/%u/sieve }
protocols = " imap lmtp sieve"
service auth { unix_listener auth-client { group = Debian-exim mode = 0660 } }
ssl_cert = , method=DIGEST-MD5, rip=127.0.0.1, lip=127.0.1.1, mpid=15974, TLS
Jan 14 20:35:32 gold dovecot: imap: Debug: Added userdb setting: plugin/master_user=webmail2
Jan 14 20:35:32 gold dovecot: imap(dieken): Debug: Effective uid=1000, gid=1000, home=/srv/mail/dieken
Jan 14 20:35:32 gold dovecot: imap(dieken): Debug: fs: root=/srv/mail/dieken/Mail, index=, control=, inbox=, alt=
Jan 14 20:35:32 gold dovecot: imap(dieken): Debug: Namespace : Using permissions from /srv/mail/dieken/Mail: mode=0700 gid=-1
Jan 14 20:35:34 gold dovecot: imap(dieken): Disconnected: Logged out bytes=8/329
Jan 14 20:35:34 gold dovecot: imap-login: Warning: SSL alert: where=0x4008, ret=256: warning close notify [127.0.0.1]
Jan 14 21:04:50 gold dovecot: imap(dieken): Disconnected: Logged out bytes=131/533
Jan 14 21:33:59 gold dovecot: imap-login: Login: user=, method=DIGEST-MD5, rip=127.0.0.1, lip=127.0.1.1, mpid=16114, TLS
Jan 14 21:34:03 gold dovecot: imap(dieken): Disconnected: Logged out bytes=8/329
Jan 14 21:36:56 gold dovecot: imap-login: Disconnected (no auth attempts): rip=127.0.0.1, lip=127.0.1.1
Jan 14 21:36:56 gold dovecot: imap-login: Disconnected (no auth attempts): rip=127.0.0.1, lip=127.0.1.1
Jan 14 21:36:58 gold dovecot: imap-login: Login: user=, method=DIGEST-MD5, rip=127.0.0.1, lip=127.0.1.1, mpid=16135, TLS
Jan 14 21:37:00 gold dovecot: imap(dieken): Disconnected: Logged
out bytes=10/377

Regards, Yubao Liu

-------------- next part -------------- A non-text attachment was scrubbed... Name: digest-md5-sasl-proxy-authorization.patch Type: text/x-patch Size: 2322 bytes Desc: not available URL:

From AxelLuttgens at swing.be Sat Jan 14 19:03:22 2012 From: AxelLuttgens at swing.be (Axel Luttgens) Date: Sat, 14 Jan 2012 18:03:22 +0100 Subject: [Dovecot] v2.x services documentation In-Reply-To: <04D662E7-2A0A-448B-BA21-1E337A400CA6@iki.fi> References: <04D662E7-2A0A-448B-BA21-1E337A400CA6@iki.fi> Message-ID: <92A86804-CEEE-4EB6-9EE7-FC8B7905AA2C@swing.be>

Le 7 déc. 2011 à 15:22, Timo Sirainen a écrit :

> If you've ever wanted to know everything about the service {} blocks, this should be quite helpful: http://wiki2.dovecot.org/Services

Hello Timo,

I know, I'm quite late at reading the messages, and this is really a nice and useful one; thanks!

Up to now, I only had the opportunity to quickly read the wiki page, and have a small question; one may read:

process_min_avail
    Minimum number of processes that always should be available to accept more client connections. For service_limit=1 processes this decreases the latency for handling new connections. For service_limit!=1 processes it could be set to the number of CPU cores on the system to balance the load among them.

What's that service_limit setting?

TIA, Axel

From ivo at crm.walltopia.com Sat Jan 14 19:23:58 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Sat, 14 Jan 2012 19:23:58 +0200 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F1071F8.4080202@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> Message-ID:

On Fri, 13 Jan 2012 20:03:36 +0200, Charles Marcus wrote:

> On 2012-01-13 12:11 PM, IVO GELOV (CRM) wrote:
>> I am aware of the various autoresponder scripts for vacation autoreplies
>> (I am using Virtual Vacation 3.1 by Mischa Peters).
>> I have an issue with auto-replies - they are vulnerable to spamming with
>> forged email addresses.
>
> I think you are using an extremely old/outdated version...
>
> The latest version would not suffer this problem, because it has a lot
> of message types that it will *not* respond to, including messages
> appearing to be from yourself...
>
> Get the latest version from the postfixadmin package.
>
> However, I don't know how to use it without also using postfixadmin (it
> creates databases for storing the vacation message, etc)...
>

I have downloaded the latest version 4.0 - but it seems there is no way to prevent spammers from using forged email addresses. I decided to remove the vacation feature from our corporate mail server, because it actually opens a backdoor (even though
From robert at schetterer.org Sat Jan 14 21:24:39 2012 From: robert at schetterer.org (Robert Schetterer) Date: Sat, 14 Jan 2012 20:24:39 +0100 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: References: <4F1071F8.4080202@Media-Brokers.com> Message-ID: <4F11D677.2040706@schetterer.org> Am 14.01.2012 18:23, schrieb IVO GELOV (CRM): > On Fri, 13 Jan 2012 20:03:36 +0200, Charles Marcus > wrote: > >> On 2012-01-13 12:11 PM, IVO GELOV (CRM) wrote: >>> I am aware of the various autoresponder scripts for vacation autoreplies >>> (I am using Virtual Vacation 3.1 by Mischa Peters). >>> I have an issue with auto-replies - it is vulnerable to spamming with >>> forged email address. >> >> I think you are using an extremely old/outdated version... >> >> The latest version would not suffer this problem, because it has a lot >> of message types that it will *not* respond to, including messages >> appearing to be from yourself... >> >> Get the latest version fro the postfixadmin package. >> >> However, I don't know how to use it without also using postfixadmin (it >> creates databases for storing the vacation message, etc)... >> > > I have downloaded the latest version 4.0 - but it seems there is no way > to prevent > spammers to use forged email addresses. I decided to remove the vacation > feature > from our corporate mail server, because it actually opens a backdoor > (even though > only when someone decides to activate his vacation auto-reply) for > spammers and > puts a risk on the company (our server can be blacklisted). > > I still think that my idea with custom error codes is more useful - if > the user is > on vacation, the message is rejected immediately (no auto-reply is sent) > and sender > can see (hopefully, because most users just ignore error messages) the > reason why > the messages was rejected. > > Probably Dovecot-auth does not offer such flexibility right now - but it > worths > considering. your right there is no way make perfekt sure that someone not uses your emailaddress "from and to" for spamming ( dkim and spf may help little ) now i hope i understand your problem right a good way is to use dove lmtp with sieve also good antispam in postfix, perhaps a before global antispam sieve filter rule, that catched spam is sorted in some special junk folder , and so its not handled by incomming in mailbox inbox with what userdefined sieve rule ( i.e Vacation ) ever look here http://wiki.dovecot.org/LDA/Sieve for ideas anyway if you use other vacation tecs, make sure allready flagged spam by i.e clamav, amavis, spamassassin etc in postfix stage is not handled by your vacation service , script etc. 
As far as I remember, I contributed a patch to the postfixadmin vacation script doing exactly this. There is no ultimate way to avoid answering spammers with vacation or other auto-reply scripts, but if you do it right, the problem shrinks to nearly nothing. The risk of being blacklisted by a third party always exists when, for example, forwarding (redirecting) mail to the outside (so an antispam filter is a "must have" there); a plain vacation message by itself carries little or no risk, as long as it does not include any part of the real spam message. Also, vacation should only answer once per time period, which should protect against loops and against flooding others.

The direct answer to your subject line: if you simply want Postfix to reject mail for some address with whatever error code you like while the address owner is away, use a Postfix reject table - if you like, backed by MySQL with some GUI (e.g. PHP) so the address owner can edit the table himself.

Anyway, I personally don't use vacation anymore, for many reasons, but others consider it essential.

-- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria

From mail at kinesis.me Sat Jan 14 22:17:58 2012 From: mail at kinesis.me (Charles Thompson) Date: Sat, 14 Jan 2012 12:17:58 -0800 Subject: [Dovecot] IMAP maillog error: file lib.c: line 37 (nearest_power): assertion failed: (num <= ((size_t)1 << (BITS_IN_SIZE_T-1))) Message-ID:

Dear Mailing List,

What does this error mean and how do I fix it? I am on CentOS 4.9.

From /var/log/maillog:
Jan 14 11:54:51 hostname imap(username): file lib.c: line 37 (nearest_power): assertion failed: (num <= ((size_t)1 << (BITS_IN_SIZE_T-1)))

Version information:
root at hostname[/etc/rc.d/rc3.d]# dovecot --version ; dovecot -n ; cat /etc/*release*
0.99.11
Usage: dovecot [-F] [-c ]
Fatal: Unknown argument: -n
CentOS release 4.9 (Final)
root at hostname[/etc/rc.d/rc3.d]#

Thank you.

-- Sincerely, Charles Thompson *UNIX & Linux Administrator* Tel* : *(650) 906-9156 Web : www.kinesis.me Mail: mail at kinesis.me

From user+dovecot at localhost.localdomain.org Sat Jan 14 22:45:29 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Sat, 14 Jan 2012 21:45:29 +0100 Subject: [Dovecot] IMAP maillog error: file lib.c: line 37 (nearest_power): assertion failed: (num <= ((size_t)1 << (BITS_IN_SIZE_T-1))) In-Reply-To: References: Message-ID: <4F11E969.2000909@localhost.localdomain.org>

On 01/14/2012 09:17 PM Charles Thompson wrote:
> Dear Mailing List,
>
> What does this error mean and how do I fix it? I am on CentOS 4.9.
>
> From /var/log/maillog:
> Jan 14 11:54:51 hostname imap(username): file lib.c: line 37
> (nearest_power): assertion failed: (num <= ((size_t)1 <<
> (BITS_IN_SIZE_T-1)))
>
> Version information:
> root at hostname[/etc/rc.d/rc3.d]# dovecot --version ; dovecot -n ; cat
> /etc/*release*
> 0.99.11
> Usage: dovecot [-F] [-c ]
> Fatal: Unknown argument: -n
> CentOS release 4.9 (Final)
> root at hostname[/etc/rc.d/rc3.d]#
>
> Thank you.
To make it short: Upgrade

Regards, Pascal -- The trapper recommends today: cafefeed.1201421 at localdomain.org

From CMarcus at Media-Brokers.com Sun Jan 15 14:33:24 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 15 Jan 2012 07:33:24 -0500 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: References: <4F1071F8.4080202@Media-Brokers.com> Message-ID: <4F12C794.6070609@Media-Brokers.com>

On 2012-01-14 12:23 PM, IVO GELOV (CRM) wrote:
> I have downloaded the latest version 4.0 - but it seems there is no
> way to prevent spammers from using forged email addresses. I decided to
> remove the vacation feature from our corporate mail server, because
> it actually opens a backdoor (even though only when someone decides
> to activate his vacation auto-reply) for spammers and puts a risk on
> the company (our server can be blacklisted).

Sorry, I misread your message... However, (I *think*) there *is* a simple solution to your problem, if I now understand it correctly...

Simply disallow anyone with an email address in your domain from sending without SASL_AUTHing... The way I do this is: in main.cf (I put all of my restrictions in smtpd_recipient_restrictions) add:

check_sender_access ${hash}/nospoof,

somewhere after reject_unauth_destination (but before any RBL checks), where nospoof contains:

# Prevent spoofing from domains that we own
allowed_address1 at example.com OK
allowed_address2 at example.com OK
example.com REJECT You must use sasl_auth to send from one of our example.com email addresses...

and of course be sure to postmap the nospoof database after making any changes...

-- Best regards, Charles

From CMarcus at Media-Brokers.com Sun Jan 15 14:40:05 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 15 Jan 2012 07:40:05 -0500 Subject: [Dovecot] IMAP maillog error: file lib.c: line 37 (nearest_power): assertion failed: (num <= ((size_t)1 << (BITS_IN_SIZE_T-1))) In-Reply-To: References: Message-ID: <4F12C925.4030008@Media-Brokers.com>

On 2012-01-14 3:17 PM, Charles Thompson wrote:
> Version information:
> root at hostname[/etc/rc.d/rc3.d]# dovecot --version ; dovecot -n ; cat
> /etc/*release*
> 0.99.11

0.99 is simply way, way, *way* too old to waste any time helping you. The short answer is - *upgrade* to a more recent version (at *least* the latest 1.2.x series, but preferably 2.0.16)...

Be sure to read all of the docs on upgrading, because you *will* have some reconfiguring to do...

*Then*, if you have any questions/issues, by all means come back and ask...
-- Best regards, Charles From CMarcus at Media-Brokers.com Sun Jan 15 14:50:00 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 15 Jan 2012 07:50:00 -0500 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F12C794.6070609@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> Message-ID: <4F12CB78.6020602@Media-Brokers.com> On 2012-01-15 7:33 AM, Charles Marcus wrote: > check_sender_access ${hash}/nospoof, Oh - if you aren't using variables for the maps paths, just use: check_sender_access hash:/path/to/map/nospoof, -- Best regards, Charles From user+dovecot at localhost.localdomain.org Sun Jan 15 15:11:05 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Sun, 15 Jan 2012 14:11:05 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): doveadm mailbox list -> Segmentation fault Message-ID: <4F12D069.9060102@localhost.localdomain.org> Oops, I did it again. -- The trapper recommends today: c01dcofe.1201514 at localdomain.org -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: core.doveadm.1326628435-21046_bt.txt URL: From CMarcus at Media-Brokers.com Sun Jan 15 19:03:42 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 15 Jan 2012 12:03:42 -0500 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F12CB78.6020602@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F12CB78.6020602@Media-Brokers.com> Message-ID: <4F1306EE.3050907@Media-Brokers.com> On 2012-01-15 7:50 AM, Charles Marcus wrote: > On 2012-01-15 7:33 AM, Charles Marcus wrote: >> check_sender_access ${hash}/nospoof, > > Oh - if you aren't using variables for the maps paths, just use: > > check_sender_access hash:/path/to/map/nospoof, One last thing - this obviously requires one or both of: permit_sasl_authenticated permit_mynetworks *before* the check_sender_access check... -- Best regards, Charles From CMarcus at Media-Brokers.com Sun Jan 15 19:10:31 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 15 Jan 2012 12:10:31 -0500 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F1306EE.3050907@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F12CB78.6020602@Media-Brokers.com> <4F1306EE.3050907@Media-Brokers.com> Message-ID: <4F130887.1020304@Media-Brokers.com> On 2012-01-15 12:03 PM, Charles Marcus wrote: > On 2012-01-15 7:50 AM, Charles Marcus wrote: >> On 2012-01-15 7:33 AM, Charles Marcus wrote: >>> check_sender_access ${hash}/nospoof, >> Oh - if you aren't using variables for the maps paths, just use: >> >> check_sender_access hash:/path/to/map/nospoof, > One last thing - this obviously requires one or both of: > > permit_sasl_authenticated > permit_mynetworks > > *before* the check_sender_access check... spoke too soon... one more 'last thing'... This also obviously requires you to enforce a policy that all users must either sasl_auth or be on a system whose IP is included in my_networks... -- Best regards, Charles From henson at acm.org Sun Jan 15 23:20:29 2012 From: henson at acm.org (Paul B. 
Henson) Date: Sun, 15 Jan 2012 13:20:29 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <4F113648.2000902@schetterer.org> References: <4F108834.60709@schetterer.org> <20120114001912.GZ4844@bender.csupomona.edu> <4F113648.2000902@schetterer.org> Message-ID: <20120115212029.GC21623@bender.csupomona.edu> On Sat, Jan 14, 2012 at 12:01:12AM -0800, Robert Schetterer wrote: > > Hmm, hadn't tried that, but flipped it on to see how it might work out. > > The only tradeoff is a potential delay between when an account is > > disabled and when it can stop authenticating. I set the timeout to 10 > > minutes for now, with an hour timeout for negative caching. > > don't know if I understand you right Before I turned on auth caching, every attempted authentication hit our mysql database, which in addition to the password itself contains a flag indicating whether or not the account is enabled. So if somebody was abusing smtp authentication, our helpdesk could disable their account, and it would *immediately* stop working. Whereas with authentication caching enabled, there is a window the size of the ttl where an account that has been disabled can continue to successfully authenticate. > > That page says you can send a USR2 signal to the auth process for cache > > stats? That doesn't seem to work. OTOH, that page is for version 1, not > > 2; is there some other way to generate cache stats in version 2? > > auth cache works with Dovecot 2, no idea about Dovecot 1, didn't test, but I > guess it does I'm using dovecot 2; my question was that the documentation for dovecot 1 describes a way to make dovecot dump the authentication cache statistics which doesn't seem to work for dovecot 2, and whether there is some other way to get the cache statistics in dovecot 2. Thanks...

From mark at msapiro.net Sun Jan 15 23:36:48 2012 From: mark at msapiro.net (Mark Sapiro) Date: Sun, 15 Jan 2012 13:36:48 -0800 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: References: Message-ID: <4F1346F0.6020908@msapiro.net> IVO GELOV (CRM) wrote: > I still think that my idea with custom error codes is more useful - if the user is > on vacation, the message is rejected immediately (no auto-reply is sent) and sender > can see (hopefully, because most users just ignore error messages) the reason why > the message was rejected. A 4xx status will not do this. It should just cause the sending MTA to keep the message queued and keep retrying. Depending on the sending MTA's retry and notification policies, the sender may see no error or delay notification for several days. If you really want the sender to immediately see a rejection, you have to use a 5xx status. -- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan

From mark at msapiro.net Sun Jan 15 23:50:02 2012 From: mark at msapiro.net (Mark Sapiro) Date: Sun, 15 Jan 2012 13:50:02 -0800 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F12C794.6070609@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> Message-ID: <4F134A0A.70804@msapiro.net> On 11:59 AM, Charles Marcus wrote: > On 2012-01-14 12:23 PM, IVO GELOV (CRM) wrote: >> I have downloaded the latest version 4.0 - but it seems there is no >> way to prevent spammers to use forged email addresses.
I decided to >> remove the vacation feature from our corporate mail server, because >> it actually opens a backdoor (even though only when someone decides >> to activate his vacation auto-reply) for spammers and puts a risk on >> the company (our server can be blacklisted). > > Sorry, I misread your message... > > However, (I *think*) there *is* a simple solution to your problem, if I > now understand it correctly... > > Simply disallow anyone sending from an email address in your domain from > sending without SASL_AUTHing... I don't see how this will help. The scenario the OP is concerned about is spammer at foreign.domain sends a message with forged From: and maybe envelope sender victim at other.foreign.domain to his user on vacation. The vacation program sends an autoresponse to the victim. However, why worry about this minimal backscatter? A good vacation program will not send more than one autoresponse per long time (a week?) for a given sender/recipient and won't include the original spam payload. So, even though a spammer might use this backdoor to cause your server to send messages to multiple recipients, the messages should not have spam payloads and shouldn't be sent more than once to a given end recipient. -- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan

From phessler at theapt.org Mon Jan 16 11:15:21 2012 From: phessler at theapt.org (Peter Hessler) Date: Mon, 16 Jan 2012 10:15:21 +0100 Subject: [Dovecot] per-user limit? Message-ID: <20120116091521.GA10944@gir.theapt.org> I am seeing a problem where users are limited to 6 imap logins total. One of my users has a bunch of phones and computers, and wants them all on at the same time. I'm looking through my configuration, and I cannot see a limit on how many times a single user can connect. He is connecting from different IPs. Any ideas? My logs show the following error when they attempt to auth for a 7th time: dovecot: imap-login: Disconnected (no auth attempts): rip=111.yy.zz.xx, lip=81.209.183.113, TLS $ dovecot -n # 2.0.16: /etc/dovecot/dovecot.conf # OS: OpenBSD 5.1 amd64 ffs auth_mechanisms = plain login base_dir = /var/dovecot/ listen = *, [::] mail_location = maildir:/usr/home/%u/Maildir managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave mbox_write_locks = fcntl passdb { driver = bsdauth } service auth { unix_listener /var/run/dovecot/auth-master { mode = 0600 } unix_listener /var/spool/postfix/private/auth { group = wheel mode = 0660 user = _postfix } user = root } service imap-login { process_limit = 128 process_min_avail = 6 service_count = 1 user = _dovecot } service pop3-login { process_limit = 64 process_min_avail = 6 service_count = 1 user = _dovecot } ssl_cert = <...

From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F1346F0.6020908@msapiro.net> References: <4F1346F0.6020908@msapiro.net> Message-ID: On Sun, 15 Jan 2012 23:36:48 +0200, Mark Sapiro wrote: > IVO GELOV (CRM) wrote: > >> I still think that my idea with custom error codes is more useful - if the user is >> on vacation, the message is rejected immediately (no auto-reply is sent) and sender >> can see (hopefully, because most users just ignore error messages) the reason why >> the message was rejected. > > A 4xx status will not do this. It should just cause the sending MTA to > keep the message queued and keep retrying.
Depending on the sending > MTA's retry and notification policies, the sender may see no error or > delay notification for several days. > > If you really want the sender to immediately see a rejection, you have > to use a 5xx status. > Yes, you are right. The error code is the smallest difficulty :) From ivo at crm.walltopia.com Mon Jan 16 11:38:01 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Mon, 16 Jan 2012 11:38:01 +0200 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F134A0A.70804@msapiro.net> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F134A0A.70804@msapiro.net> Message-ID: On Sun, 15 Jan 2012 23:50:02 +0200, Mark Sapiro wrote: > On 11:59 AM, Charles Marcus wrote: >> On 2012-01-14 12:23 PM, IVO GELOV (CRM) wrote: >>> I have downloaded the latest version 4.0 - but it seems there is no >>> way to prevent spammers to use forged email addresses. I decided to >>> remove the vacation feature from our corporate mail server, because >>> it actually opens a backdoor (even though only when someone decides >>> to activate his vacation auto-reply) for spammers and puts a risk on >>> the company (our server can be blacklisted). >> >> Sorry, I misread your message... >> >> However, (I *think*) there *is* a simple solution to your problem, if I >> now understand it correctly... >> >> Simply disallow anyone sending from an email address in your domain from >> sending without SASL_AUTHing... > > > I don't see how this will help. The scenario the OP is concerned about > is spammer at foreign.domain sends a message with forged From: and maybe > envelope sender victim at other.foreign.domain to his user on vacation. The > vacation program sends an autoresponse to the victim. > > However, why worry about this minimal backscatter? A good vacation > program will not send more that one autoresponse per long time (a week?) > for a given sender/recipient and won't include the original spam > payload. So, even though a spammer might use this backdoor to cause your > server to send messages to multiple recipients, the messages should not > have spam payloads and shouldn't be sent more that once to a given end > recipient. > The limitation of 1 message per week for any unique combination of sender/recipient does not stop backscatter - because each message can come with a new forged FROM address, and from different compromised mail servers. The spammer does not have control over the body of the auto-replies (which is something like "I am not at the office, please write to my colleagues"), but it still may cause the victims to take some measures. From ivo at crm.walltopia.com Mon Jan 16 11:48:11 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Mon, 16 Jan 2012 11:48:11 +0200 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F12C794.6070609@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> Message-ID: On Sun, 15 Jan 2012 14:33:24 +0200, Charles Marcus wrote: > On 2012-01-14 12:23 PM, IVO GELOV (CRM) wrote: >> I have downloaded the latest version 4.0 - but it seems there is no >> way to prevent spammers to use forged email addresses. 
I decided to >> remove the vacation feature from our corporate mail server, because >> it actually opens a backdoor (even though only when someone decides >> to activate his vacation auto-reply) for spammers and puts a risk on >> the company (our server can be blacklisted). > > Sorry, I misread your message... > > However, (I *think*) there *is* a simple solution to your problem, if I > now understand it correctly... > > Simply disallow anyone sending from an email address in your domain from > sending without SASL_AUTHing... > > The way I do this is: > > in main.cf (I put all of my restrictions in > smtpd_recipient_restrictions) add: > > check_sender_access ${hash}/nospoof, > > somewhere after reject_unauth_destination (but before any RBL checks), > > where nospoof contains: > > # Prevent spoofing from domains that we own > allowed_address1 at example.com OK > allowed_address2 at example.com OK > example.com REJECT You must use sasl_auth to send from one of our > example.com email addresses... > > and of course be sure to postmap the nospoof database after making any > changes... > These are the restrictions I apply (or had been applying for some time). Anyway, for now I simply disabled the vacation plugin.

smtpd_client_restrictions = permit_mynetworks, check_client_access mysql:/etc/postfix/sender_ip, permit_sasl_authenticated, reject_unknown_client
#reject_rhsbl_client blackhole.securitysage.com, reject_rbl_client opm.blitzed.org,
#smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, check_client_access mysql:/etc/postfix/client_sql, reject_rbl_client sbl.spamhaus.org, reject_rbl_client list.dsbl.org, reject_rbl_client cbl.abuseat.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client dnsbl.ahbl.org, permit
#smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, check_client_access mysql:/etc/postfix/client_ok, reject_rbl_client sbl.spamhaus.org, reject_rbl_client list.dsbl.org, reject_rbl_client cbl.abuseat.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client dnsbl.ahbl.org, reject_unknown_client
###, check_policy_service inet:127.0.0.1:10040, reject_rbl_client sbl.spamhaus.org, reject_rbl_client cbl.abuseat.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client dnsbl.ahbl.org
#,reject_rbl_client opm.blitzed.org, reject_rbl_client relays.ordb.org, reject_rbl_client dun.dnsrbl.net
# REJECT_NON_FQDN_HOSTNAME - checks whether HELO is a fully qualified domain name (with suffix)
#smtpd_helo_restrictions = check_helo_access hash:/etc/postfix/helo_access, reject_invalid_hostname, reject_non_fqdn_hostname
smtpd_helo_restrictions = reject_invalid_hostname
smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_rhsbl_sender rhsbl.ahbl.org, reject_rhsbl_sender rhsbl.sorbs.net, reject_rhsbl_sender multi.surbl.org
#reject_rhsbl_sender blackhole.securitysage.com, reject_rhsbl_sender opm.blitzed.org,
#smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, check_sender_access mysql:/etc/postfix/sender_sql, reject_non_fqdn_sender, reject_unknown_sender_domain, reject_rhsbl_sender rhsbl.ahbl.org, reject_rhsbl_sender block.rhs.mailpolice.com, reject_rhsbl_sender rhsbl.sorbs.net, reject_rhsbl_sender multi.surbl.org, reject_rhsbl_sender dsn.rfc-ignorant.org, permit
#, reject_rhsbl_sender dsn.rfc-ignorant.org, reject_rhsbl_sender relays.ordb.org, reject_rhsbl_sender dun.dnsrbl.net
#smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination, reject_unauth_pipelining,
  check_recipient_access regexp:/etc/postfix/dspam_incoming
smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination, reject_unauth_pipelining
smtpd_data_restrictions = reject_unauth_pipelining

From joseba.torre at ehu.es Mon Jan 16 11:50:49 2012 From: joseba.torre at ehu.es (Joseba Torre) Date: Mon, 16 Jan 2012 10:50:49 +0100 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F11D677.2040706@schetterer.org> References: <4F1071F8.4080202@Media-Brokers.com> <4F11D677.2040706@schetterer.org> Message-ID: <4F13F2F9.2070008@ehu.es> > anyway if you use other vacation techniques, make sure spam already flagged > by e.g. clamav, amavis, spamassassin etc. at the postfix stage is not handled > by your vacation service, script etc. > as far as I remember, I contributed a patch to the postfixadmin vacation script > that does exactly this If you're using any antispam software that gives every mail a spam score (like spamassassin does), you can use a stricter rule for vacation replies (like "only messages with a spam score under 5 are allowed, but only those under 3 may have a vacation reply").

From rasca at miamammausalinux.org Mon Jan 16 12:42:08 2012 From: rasca at miamammausalinux.org (RaSca) Date: Mon, 16 Jan 2012 11:42:08 +0100 Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2) Message-ID: <4F13FF00.1050108@miamammausalinux.org> Hi all, I'm trying to make quota work in Squeeze (Dovecot 1.2.15-7). The quota module is correctly loaded and, when receiving a message, from the log I see these messages:

Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Loading modules from directory: /usr/lib/dovecot/modules/lda
Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Module loaded: /usr/lib/dovecot/modules/lda/lib10_quota_plugin.so
Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Module loaded: /usr/lib/dovecot/modules/lda/lib90_sieve_plugin.so
Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): auth input: uid=5000
Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): auth input: gid=5000
Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): auth input: home=/mail/mailboxes//testquota
Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Quota root: name=/mail/mailboxes//testquota backend=maildir args=
Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): maildir: data=/mail/mailboxes//testquota@
Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): maildir++: root=/mail/mailboxes//testquota@, index=, control=, inbox=/mail/mailboxes//testquota@
Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): sieve: user's script path /mail/mailboxes//testquota/.dovecot.sieve doesn't exist (using global script path in stead)
Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): sieve: using sieve path for user's script: /mail/sieve/globalsieverc
Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): sieve: opening script /mail/sieve/globalsieverc
Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): sieve: executing compiled script /mail/sieve/globalsieverc
Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Namespace : Using permissions from /mail/mailboxes//testquota@: mode=0700 gid=-1
Jan 16 11:20:05 mail-1 dovecot: deliver(testquota@): sieve: msgid=<4F13F996.4000501 at seat.it>: stored mail into mailbox 'INBOX'

Now, since I've got a message like this: Quota root: name=/mail/mailboxes//testquota@ backend=maildir args= it seems that something is checked, but even if this directory is over quota, nothing
happens. This is my dovecot conf:

protocols = imap pop3 disable_plaintext_auth = no log_timestamp = "%Y-%m-%d %H:%M:%S " mail_location = maildir:/mail/mailboxes/%d/%n@%d mail_privileged_group = mail mail_debug = yes mail_nfs_storage = yes mmap_disable=yes fsync_disable=no mail_nfs_index = yes protocol imap { mail_plugins = quota imap_quota } protocol pop3 { pop3_uidl_format = %08Xu%08Xv mail_plugins = quota } protocol managesieve { } protocol lda { auth_socket_path = /var/run/dovecot/auth-master postmaster_address = postmaster@ mail_plugins = sieve quota quota_full_tempfail = no log_path = } auth default { mechanisms = plain passdb sql { args = /etc/dovecot/dovecot-sql.conf } userdb passwd { } userdb static { args = uid=5000 gid=5000 home=/mail/mailboxes/%d/%n@%d allow_all_users=yes } user = root socket listen { master { path = /var/run/dovecot/auth-master mode = 0600 user = vmail } client { path = /var/spool/postfix/private/auth mode = 0660 user = postfix group = postfix } } } plugin { quota = maildir:/mail/mailboxes/%d/%n@%d sieve_global_path = /mail/sieve/globalsieverc }

The db connection works, this is /etc/dovecot/dovecot-sql.conf: driver = mysql connect = host= dbname=mail user= password= default_pass_scheme = CRYPT password_query = SELECT username, password FROM mailbox WHERE username='%u' user_query = SELECT username AS user, maildir AS home, CONCAT('*:storage=', quota , 'B') AS quota_rule FROM mailbox WHERE username = '%u' AND active = '1' and for the user testquota the user_query results in this:

+-------------------+----------------------------+--------------------+
| user              | home                       | quota_rule         |
+-------------------+----------------------------+--------------------+
| testquota@        | /testquota@/               | *:storage=1024000B |
+-------------------+----------------------------+--------------------+

everything else is ok, for example I'm using sieve for the spam filter, and the SPAM is correctly put in the .SPAM dir. I turned on debug on dovecot, but I can't see if the query in some way fails. Can you please help me understand what I am doing wrong? -- RaSca Mia Mamma Usa Linux: Niente è impossibile da capire, se lo spieghi bene! rasca at miamammausalinux.org http://www.miamammausalinux.org

From jsxmoney at gmail.com Mon Jan 16 14:38:44 2012 From: jsxmoney at gmail.com (Jason X, Maney) Date: Mon, 16 Jan 2012 14:38:44 +0200 Subject: [Dovecot] Dovecot unable to locate mailbox Message-ID: Dear all, I hope someone can point me in the right direction here. I have set up my Dovecot v2.0.13 on Ubuntu 11.10. The logs tell me that the mail location has failed as follows: ========= Jan 16 14:18:16 myservername dovecot: pop3-login: Login: user=, method=PLAIN, rip=aaa.bbb.ccc.ddd, lip=www.xxx.yyy.zzz, mpid=1360, TLS Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: user molla: Initialization failed: mail_location not set and autodetection failed: Mail storage autodetection failed with home=/home/userA Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: Invalid user settings. Refer to server log for more information. ========= Yet my config also comes out strangely, as below: ========= root at guyana:~# dovecot -n # 2.0.13: /etc/dovecot/dovecot.conf # OS: Linux 3.0.0-12-server x86_64 Ubuntu 11.10 passdb { driver = pam } protocols = " imap pop3" ssl_cert = <...

From: CMarcus at Media-Brokers.com (Charles Marcus) Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F134A0A.70804@msapiro.net> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F134A0A.70804@msapiro.net> Message-ID: <4F141D93.30406@Media-Brokers.com> On 2012-01-15 4:50 PM, Mark Sapiro wrote: > I don't see how this will help.
The scenario the OP is concerned about > is spammer at foreign.domain sends a message with forged From: and maybe > envelope sender victim at other.foreign.domain to his user on vacation. Guess I should read more carefully... for some reason I thought I remembered him being worried about forged senders in his own domain(s)... Sorry for the noise... -- Best regards, Charles

From kirill at shutemov.name Mon Jan 16 17:05:05 2012 From: kirill at shutemov.name (Kirill A. Shutemov) Date: Mon, 16 Jan 2012 17:05:05 +0200 Subject: [Dovecot] v2.1.rc3 released In-Reply-To: <1325878845.17774.38.camel@hurina> References: <1325878845.17774.38.camel@hurina> Message-ID: <20120116150504.GA28883@shutemov.name> On Fri, Jan 06, 2012 at 09:40:44PM +0200, Timo Sirainen wrote: > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc3.tar.gz > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc3.tar.gz.sig > > Whoops, rc2 was missing a file. I always run "make distcheck", which > should catch these, but recently it has always failed due to clang > static checking giving one "error" that I didn't really want to fix. > Because of that the distcheck didn't finish and didn't check for the > missing file. > > So, anyway, I've made clang happy again, and now that I see how bad an idea > it is to just ignore the failed distcheck, I won't do that again in > future. :) ./autogen.sh failed:

$ ./autogen.sh
libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.in and
libtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree.
libtoolize: Consider adding `-I m4' to ACLOCAL_AMFLAGS in Makefile.am.
src/plugins/fts/Makefile.am:52: `pkglibexecdir' is not a legitimate directory for `SCRIPTS'
Makefile.am:24: `pkglibdir' is not a legitimate directory for `DATA'
autoreconf: automake failed with exit status: 1
$ automake --version | head -1
automake (GNU automake) 1.11.2

-- Kirill A. Shutemov

From info at simonecaruso.com Mon Jan 16 17:40:59 2012 From: info at simonecaruso.com (Simone Caruso) Date: Mon, 16 Jan 2012 16:40:59 +0100 Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2) In-Reply-To: <4F13FF00.1050108@miamammausalinux.org> References: <4F13FF00.1050108@miamammausalinux.org> Message-ID: <4F14450B.8000903@simonecaruso.com> On 16/01/2012 11:42, RaSca wrote: > Hi all, > I'm trying to make quota work in Squeeze (Dovecot 1.2.15-7). try "auth_debug = yes" -- Simone Caruso IT Consultant +39 349 65 90 805

From thomas at koch.ro Mon Jan 16 17:51:45 2012 From: thomas at koch.ro (Thomas Koch) Date: Mon, 16 Jan 2012 16:51:45 +0100 Subject: [Dovecot] Trying to get metadata plugin working Message-ID: <201201161651.46232.thomas@koch.ro> Hi, I'm working on a Kolab related project and wanted to use dovecot on my dev machine. However I'm stuck with the metadata-plugin. I "solved" the permissions problems but now I get dict: Error: file dict commit: file_dotlock_open(~/Maildir/shared-metadata) failed: No such file or directory Before that, I had dict { metadata = file:/var/lib/dovecot/shared-metadata but got problems since my normal user had no permission to access /var/lib/dovecot. I compiled the plugin from the most recent commit. My dovecot runs in a chroot. I can log in with KMail and can create Groupware (annotated) folders, but the metadata file dict won't get created and I also can't set/get metadata via telnet.
doveconf -N # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 3.1.0-1-amd64 x86_64 Debian 6.0.3 auth_mechanisms = plain dict { metadata = file:~/Maildir/shared-metadata } mail_access_groups = dovecot mail_location = maildir:~/Maildir mail_plugins = " metadata" passdb { driver = pam } plugin { metadata_dict = proxy::metadata } protocols = " imap" service dict { unix_listener dict { group = dovecot mode = 0666 } } ssl_cert = <...

From: mark at msapiro.net (Mark Sapiro) Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F134A0A.70804@msapiro.net> Message-ID: <4F144E57.9060802@msapiro.net> On 11:59 AM, IVO GELOV (CRM) wrote: > > The limitation of 1 message per week for any unique combination of > sender/recipient > does not stop backscatter - because each message can come with a new > forged FROM address, > and from different compromised mail servers. > The spammer does not have control over the body of the auto-replies > (which is something > like "I am not at the office, please write to my colleagues"), but it still > may cause the victims to take some measures. All true, but the sender in the sender/recipient combination is the forged From: that ultimately receives the backscatter and the recipient is your local user who set the vacation autoresponse. If you only have one or two local users on vacation at a time, any given backscatter recipient could receive at most one or two backscatter messages per week regardless of how many compromised servers the spammer sends from. And this assumes the spam is initially sent to multiple local users on vacation and gets past your local spam filtering. I don't know about you, but I have more significant potential backscatter sources to worry about. -- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan

From rasca at miamammausalinux.org Mon Jan 16 18:28:58 2012 From: rasca at miamammausalinux.org (RaSca) Date: Mon, 16 Jan 2012 17:28:58 +0100 Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2) In-Reply-To: <4F14450B.8000903@simonecaruso.com> References: <4F13FF00.1050108@miamammausalinux.org> <4F14450B.8000903@simonecaruso.com> Message-ID: <4F14504A.9010302@miamammausalinux.org> On Mon 16 Jan 2012 16:40:59 CET, Simone Caruso wrote: > On 16/01/2012 11:42, RaSca wrote: >> Hi all, >> I'm trying to make quota work in Squeeze (Dovecot 1.2.15-7). > try "auth_debug = yes" > In fact, enabling auth_debug gives me this:

Jan 16 17:21:06 mail-2 dovecot: auth(default): master in: USER#0111#011testquota@#011service=deliver
Jan 16 17:21:06 mail-2 dovecot: auth(default): passwd(testquota@): lookup
Jan 16 17:21:06 mail-2 dovecot: auth(default): passwd(testquota@): unknown user
Jan 16 17:21:06 mail-2 dovecot: auth(default): master out: USER#0111#011testquota@#011uid=5000#011gid=5000#011home=/mail/mailboxes//testquota@

But what I don't understand is that manually running the password_query and user_query works. So why do I receive "unknown user"? Is there something else to set? -- RaSca Mia Mamma Usa Linux: Niente è impossibile da capire, se lo spieghi bene! rasca at miamammausalinux.org http://www.miamammausalinux.org

From greve at kolabsys.com Mon Jan 16 18:13:14 2012 From: greve at kolabsys.com (Georg C. F.
Greve) Date: Mon, 16 Jan 2012 17:13:14 +0100 Subject: [Dovecot] [Kolab-devel] Trying to get metadata plugin working In-Reply-To: <201201161651.46232.thomas@koch.ro> References: <201201161651.46232.thomas@koch.ro> Message-ID: <2001652.RYW7Y0I4zo@katana.lair> On Monday 16 January 2012 16.51:45 Thomas Koch wrote: > I'm working on a Kolab related project and wanted to use dovecot on my dev > machine. Very interesting. Please document your findings in wiki.kolab.org once you're done. > dict: Error: file dict commit: file_dotlock_open(~/Maildir/shared-metadata) > failed: No such file or directory Can't really help with that one, I'm afraid. Best regards, Georg -- Georg C. F. Greve Chief Executive Officer Kolab Systems AG Zürich, Switzerland e: greve at kolabsys.com t: +41 78 904 43 33 w: http://kolabsys.com pgp: 86574ACA Georg C. F. Greve -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 308 bytes Desc: This is a digitally signed message part. URL:

From tss at iki.fi Mon Jan 16 19:16:57 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 16 Jan 2012 19:16:57 +0200 Subject: [Dovecot] Trying to get metadata plugin working In-Reply-To: <201201161651.46232.thomas@koch.ro> References: <201201161651.46232.thomas@koch.ro> Message-ID: <23312B5E-14CF-42D9-8A18-F995EDA874C4@iki.fi> On 16.1.2012, at 17.51, Thomas Koch wrote: > dict: Error: file dict commit: file_dotlock_open(~/Maildir/shared-metadata) > failed: No such file or directory It's not expanding ~/ > dict { > metadata = file:~/Maildir/shared-metadata Use %h/ instead of ~/

From thomas at koch.ro Mon Jan 16 20:26:12 2012 From: thomas at koch.ro (Thomas Koch) Date: Mon, 16 Jan 2012 19:26:12 +0100 Subject: [Dovecot] Trying to get metadata plugin working In-Reply-To: <23312B5E-14CF-42D9-8A18-F995EDA874C4@iki.fi> References: <201201161651.46232.thomas@koch.ro> <23312B5E-14CF-42D9-8A18-F995EDA874C4@iki.fi> Message-ID: <201201161926.12309.thomas@koch.ro> Timo Sirainen: > Use %h/ instead of ~/ Hi Timo, it doesn't expand either %h nor %%h. When I hardcode the path to my dev user's homedir I get a permission error. After hardcoding it to /tmp/shared-metadata the file gets at least written, but the content looks strange: shared/mailbox/7c2ae515102e144f172d0000d1887b74/shared//vendor/kolab/folder- test true Best regards, Thomas Koch, http://www.koch.ro

From tss at iki.fi Mon Jan 16 20:33:44 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 16 Jan 2012 20:33:44 +0200 Subject: [Dovecot] Trying to get metadata plugin working In-Reply-To: <201201161926.12309.thomas@koch.ro> References: <201201161651.46232.thomas@koch.ro> <23312B5E-14CF-42D9-8A18-F995EDA874C4@iki.fi> <201201161926.12309.thomas@koch.ro> Message-ID: On 16.1.2012, at 20.26, Thomas Koch wrote: > Timo Sirainen: >> Use %h/ instead of ~/ > > Hi Timo, > > it doesn't expand either %h nor %%h. Oh, right, wrong place. If you make it go through proxy, it doesn't do any expansion. It's then accessed by the "dict" process (which probably runs as "dovecot" user). You could instead use something like: metadata_dict = file:%h/Maildir/shared-metadata > When I hardcode the path to my dev user's > homedir I get a permission error. After hardcoding it to /tmp/shared-metadata > the file gets at least written, but the content looks strange: > > shared/mailbox/7c2ae515102e144f172d0000d1887b74/shared//vendor/kolab/folder- test > true I haven't really looked at what the metadata plugin actually does..
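To put the two arrangements from this thread side by side, here is a minimal sketch assembled from the settings quoted above (the paths are the ones Thomas and Timo mention, not a tested Kolab setup):

    # Variant 1: go through the shared dict process. The path must be
    # absolute, because the dict server has no user context and cannot
    # expand %h (or ~/).
    dict {
      metadata = file:/var/lib/dovecot/shared-metadata
    }
    plugin {
      metadata_dict = proxy::metadata
    }

    # Variant 2: per-user file, as Timo suggests. No proxy involved, so
    # %h (the user's home directory) is expanded by the mail process.
    plugin {
      metadata_dict = file:%h/Maildir/shared-metadata
    }

Variant 2 should also sidestep the permission problem with /var/lib/dovecot, since each imap process only writes under its own user's home.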
From buchholz at easystreet.net Tue Jan 17 00:41:46 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Mon, 16 Jan 2012 14:41:46 -0800 Subject: [Dovecot] imap-login process_limit reached Message-ID: <4F14A7AA.8010507@easystreet.net> I've been having some problems with IMAP user connections to the Dovecot (v2.0.8) server. The following message is being logged. Jan 16 10:51:36 postal dovecot: master: Warning: service(imap-login): process_limit reached, client connections are being dropped The server is running Red Hat Enterprise Linux release 4 (update 6). Dovecot is v2.0.8. We have only 29 user accounts in /etc/dovecot/users. There were 196 "dovecot/imap" processes and 6 other dovecot processes, for a total of 202 "dovecot" processes, listed in the 'ps aux' output when problems were being experienced. Stopping and restarting the Dovecot system fixes the problem -- for a while. The 'doveconf -n' output is attached. I have not set any "process_limit" values, and I don't think I'm getting anywhere close to the 1024 default, so I'm pretty confused as to what might be wrong. Any suggestions on what to do next are appreciated. Thanks, - Don ------------------------------------------------------------------------ -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: doveconf-n.txt URL:

From lists at wildgooses.com Tue Jan 17 02:22:35 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 17 Jan 2012 00:22:35 +0000 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F04FAA9.3020908@localhost.localdomain.org> References: <4F0367A2.1000807@Media-Brokers.com> <4F04FAA9.3020908@localhost.localdomain.org> Message-ID: <4F14BF4B.5060804@wildgooses.com> On 05/01/2012 01:19, Pascal Volk wrote: > On 01/03/2012 09:40 PM Charles Marcus wrote: >> Hi everyone, >> >> Was just perusing this article about how trivial it is to decrypt >> passwords that are stored using most (standard) encryption methods (like >> MD5), and was wondering - is it possible to use bcrypt with >> dovecot+postfix+mysql (or postgres)? > Yes it is possible to use bcrypt with dovecot. Currently you only have > to write your password scheme plugin. The bcrypt algorithm is described > at http://en.wikipedia.org/wiki/Bcrypt. > > If you are using Dovecot >= 2.0 'doveadm pw' supports the schemes: > *BSD: Blowfish-Crypt > *Linux (since glibc 2.7): SHA-256-Crypt and SHA-512-Crypt > Some distributions have also added support for Blowfish-Crypt > See also: doveadm-pw(1) > > If you are using Dovecot < 2.0 you can also use any of the algorithms > supported by your system's libc. But then you have to prefix the hashes > with {CRYPT} - not {{BLF,SHA256,SHA512}-CRYPT}. > I'm a bit late, but the above is absolutely correct. Basically the simplest solution is to pick a glibc which natively supports bcrypt (and the equivalent algorithm, but using SHA-256/512). Then you can effectively use any of these hashes in your /etc/{passwd,shadow} file. With the hash testing native in your glibc, a bunch of applications automatically acquire the ability to test passwords stored in these hash formats, dovecot being one of them. To generate the hashes in that format, choose an appropriate library for your web interface or whatever generates the hashes for you. There are even command line utilities (mkpasswd) to do this for you.
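As a concrete illustration of the preceding paragraph (output shortened to placeholders; the exact hash differs on every run because of the random salt, and mkpasswd here is assumed to be the utility shipped in Debian's whois package):

    $ doveadm pw -s SHA512-CRYPT
    Enter new password:
    Retype new password:
    {SHA512-CRYPT}$6$...

    $ mkpasswd -m sha-512
    Password:
    $6$...

A value carrying the {SHA512-CRYPT} prefix, like the first one, can be stored directly in a passdb that Dovecot reads.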
I forget the config knobs (/etc/logins.def ?), but it's entirely possible to also have all your normal /etc/shadow hashes generated in this format going forward if you wish. I posted some patches for uclibc recently for bcrypt and I think sha-256/512 already made it in. I believe several of the big names have similar patches for glibc. Just to attack some of the myths here:

- Salting passwords basically means adding some random garbage at the front of the password before hashing.
- Salting passwords prevents you using a big lookup table to cheat and instantly reverse the password.
- Salting has very little ability to stop you bruteforcing the password, i.e. it takes around the same time to figure out the SHA or blowfish hash of every word in some dictionary, regardless of whether you use the raw word or the word with some garbage in front of it.
- Using an iterated hash algorithm gives you a linear increase in difficulty in bruteforcing passwords. So if you do a million iterations on each password, then it takes a million times longer to bruteforce (probably there are shortcuts to be discovered, assume that this is best case, but it's still a good improvement).
- Bear in mind that off the shelf GPU crackers will do on the order of 100-300 million hashes per second!! http://www.cryptohaze.com/multiforcer.php

The last statistic should be scary to someone who has some small knowledge of the number of unique words in the [English] language, even multiplying up for trivial permutations with numbers or punctuation... So in conclusion: everyone who stores passwords in hash form should make their way in an orderly fashion towards the door if they don't currently use an iterated hash function. No need to run, but it definitely should be on the todo list to apply where feasible. BCrypt is very common and widely implemented, but it would seem logical to consider SHA-256/512 (iterated) options where there is application support. Note I personally believe there are valid reasons to store plaintext passwords - this seems to cause huge criticism due to the ensuing disaster which can happen if the database is pinched, but it does allow for enhanced security in the password exchange, so ultimately it depends on where your biggest risk lies... Good luck Ed W

From lists at wildgooses.com Tue Jan 17 02:28:32 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 17 Jan 2012 00:28:32 +0000 Subject: [Dovecot] compressed mboxes very slow In-Reply-To: <8739blw6gl.fsf@alfa.kjonca> References: <87iptnoans.fsf@alfa.kjonca> <8739blw6gl.fsf@alfa.kjonca> Message-ID: <4F14C0B0.9020709@wildgooses.com> On 12/01/2012 10:39, Kamil Jońca wrote: > kjonca at o2.pl (Kamil Jońca) writes: > >> I have some archive mails in gzipped mboxes. I could use them with >> dovecot 1.x without problems. >> But recently I have installed dovecot 2.0.12, and they are slow. very slow.
> > Recently I have to read some compressed mboxes again, and no progress :( > I took 2.0.17 sources and put some > i_debug ("#kjonca["__FILE__",%d,%s] %d", __LINE__,__func__,...some parameters ...); > > lines into istream-bzlib.c, istream-raw-mbox.c and istream-limit.c > and found that: > > in istream-limit.c in function around lines 40-45: > --8<---------------cut here---------------start------------->8--- > i_stream_seek(stream->parent, lstream->istream.parent_start_offset + > stream->istream.v_offset); > stream->pos -= stream->skip; > stream->skip = 0; > --8<---------------cut here---------------end--------------->8--- > seeks stream, (calling i_stream_raw_mbox_seek in file istream-raw-mbox.c ) > > and then (line 50 ) > --8<---------------cut here---------------start------------->8--- > if ((ret = i_stream_read(stream->parent)) == -2) > return -2; > --8<---------------cut here---------------end--------------->8--- > > tries to read some data earlier in stream, and with compressed mboxes it > cause reread file from the beginning. > Just wanted to bump this since it seems interesting. Timo do you have a comment? I definitely see your point that skipping backwards in a compressed stream is going to be very CPU intensive. Ed W From moseleymark at gmail.com Tue Jan 17 03:17:26 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 16 Jan 2012 17:17:26 -0800 Subject: [Dovecot] LMTP Logging Message-ID: Just had a minor suggestion, with no clue how hard/easy it would be to implement: The %f flag in deliver_log_format seems to pick up the From: header, instead of the "MAIL FROM:<...>" arg. It'd be handy to have a %F that shows the "MAIL FROM" arg instead. I'm looking at tracking emails through logs from Exim to Dovecot easily. I know Message-ID can be used for correlation but it adds some complexity to searching, i.e. I can't just grep for the sender (as logged by Exim), unless I assume "MAIL FROM" always == From: From janfrode at tanso.net Tue Jan 17 10:36:19 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Tue, 17 Jan 2012 09:36:19 +0100 Subject: [Dovecot] resolve mail_home ? Message-ID: <20120117083619.GA21186@dibs.tanso.net> I now have "mail_home = /srv/mailstore/%256RHu/%d/%n". Is there any way of asking dovecot where a user's home directory is? It's not in "doveadm user": $ doveadm user -f home janfrode at lyse.net $ doveadm user janfrode at tanso.net userdb: janfrode at tanso.net mail : mdbox:~/mdbox quota_rule: *:storage=1048576 Alternatively, is there an easy way to calculate the %256RHu hash ? -jf From ivo at crm.walltopia.com Tue Jan 17 10:52:40 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Tue, 17 Jan 2012 10:52:40 +0200 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F144E57.9060802@msapiro.net> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F134A0A.70804@msapiro.net> <4F144E57.9060802@msapiro.net> Message-ID: On Mon, 16 Jan 2012 18:20:39 +0200, Mark Sapiro wrote: > On 11:59 AM, IVO GELOV (CRM) wrote: >> >> The limitation of 1 message per week for any unique combination of >> sender/recipient >> does not stop backscatter - because each message can come with a new >> forged FROM address, >> and from different compromised mail servers. 
>> The spammer does not have control over the body of the auto-replies >> (which is something >> like "I am not at the office, please write to my colleagues"), but it still >> may cause the victims to take some measures. > > All true, but the sender in the sender/recipient combination is the > forged From: that ultimately receives the backscatter and the recipient > is your local user who set the vacation autoresponse. If you only have > one or two local users on vacation at a time, any given backscatter > recipient could receive at most one or two backscatter messages per week > regardless of how many compromised servers the spammer sends from. And > this assumes the spam is initially sent to multiple local users on > vacation and gets past your local spam filtering. > > I don't know about you, but I have more significant potential > backscatter sources to worry about. > I see your point and I agree with you this is a minor problem. Thanks for your time, Mark. Best wishes, Ivo Gelov

From ivo at crm.walltopia.com Tue Jan 17 11:59:14 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Tue, 17 Jan 2012 11:59:14 +0200 Subject: [Dovecot] Dovecot unable to locate mailbox In-Reply-To: References: Message-ID: On Mon, 16 Jan 2012 14:38:44 +0200, Jason X, Maney wrote: > Dear all, > > I hope someone can point me in the right direction here. I have set up my > Dovecot v2.0.13 on Ubuntu 11.10. The logs tell me that the mail location > has failed as follows: > > ========= > Jan 16 14:18:16 myservername dovecot: pop3-login: Login: user=, > method=PLAIN, rip=aaa.bbb.ccc.ddd, lip=www.xxx.yyy.zzz, mpid=1360, TLS > Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: user molla: > Initialization failed: mail_location not set and autodetection failed: Mail > storage autodetection failed with home=/home/userA > Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: Invalid user > settings. Refer to server log for more information. > ========= > > Yet my config also comes out strangely, as below: > > # path given in the mail_location setting. > # mail_location = maildir:~/Maildir > # mail_location = mbox:~/mail:INBOX=/var/mail/%u > # mail_location = mbox:/var/mail/%d/%1n/%n:INDEX=/var/indexes/%d/%1n/%n > mail_location = maildir:~/Maildir > # explicitly, ie. mail_location does nothing unless you have a namespace > # mail_location, which is also the default for it. Hi, Jason. I will describe my configuration and probably you will find some useful information. I am using Postfix as the MTA and have configured Dovecot to be the LDA. I have several domains, so I am using the following folder schema: /var/mail/vhosts = the root of the mail storage /var/mail/vhosts/domain_1 = first domain /var/mail/vhosts/domain_1/user_1 = first mailbox in this domain .... /var/mail/vhosts/domain_2 = another domain /var/mail/vhosts/domain_2/user_1 = first mailbox in the other domain This is achieved with the following setting in mail.conf: mail_location = maildir:/var/mail/vhosts/%d/%n But since I do not want to manually go and create the corresponding folders each time I add a new user (I manage accounts through a MySQL table), I also use the following settings in lda.conf: lda_mailbox_autocreate = yes lda_mailbox_autosubscribe = yes Perhaps you only need to add the latter settings in lda.conf and everything should run fine.
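To illustrate how those %-variables expand (with a hypothetical user, not one from the original message): given

    mail_location = maildir:/var/mail/vhosts/%d/%n

a login as jane@example.org yields %d = example.org and %n = jane, so mail is delivered to /var/mail/vhosts/example.org/jane. With lda_mailbox_autocreate = yes the maildir is created there on first delivery, and with lda_mailbox_autosubscribe = yes the autocreated mailbox is subscribed as well.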
Best wishes, IVO GELOV

From interfasys at gmail.com Tue Jan 17 13:07:28 2012 From: interfasys at gmail.com (interfaSys sàrl) Date: Tue, 17 Jan 2012 11:07:28 +0000 Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1 Message-ID: <4F155670.6010905@gmail.com> Here is what I get when I try to compile the antispam plugin against Dovecot 2.1

**************
mailbox.c: In function 'antispam_save_begin':
mailbox.c:138:12: error: 'struct mail_save_context' has no member named 'copying'
mailbox.c: In function 'antispam_save_finish':
mailbox.c:174:12: error: 'struct mail_save_context' has no member named 'copying'
Failed to compile mailbox.c (plugin)!
gmake[3]: *** [mailbox.plugin.o] Error 1
**************

The other objects compile fine. Cheers, Olivier

From CMarcus at Media-Brokers.com Tue Jan 17 13:26:39 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 17 Jan 2012 06:26:39 -0500 Subject: [Dovecot] per-user limit? In-Reply-To: <20120116091521.GA10944@gir.theapt.org> References: <20120116091521.GA10944@gir.theapt.org> Message-ID: <4F155AEF.3080105@Media-Brokers.com> On 2012-01-16 4:15 AM, Peter Hessler wrote: > I'm looking through my configuration, and I cannot see a limit on how > many times a single user can connect. He is connecting from different > IPs. I think you're needing: http://wiki2.dovecot.org/Services#Service_limits -- Best regards, Charles

From tss at iki.fi Tue Jan 17 16:20:13 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 17 Jan 2012 16:20:13 +0200 Subject: [Dovecot] little bug with Director in 2.1? In-Reply-To: References: Message-ID: <1326810013.11500.1.camel@innu> Hi, On Tue, 2012-01-10 at 16:16 +0100, Luca Di Vizio wrote: > in 2.1rc3 the "director_servers" setting does not accept hostnames as > documented (with ip no problems). > It works correctly in 2.0.17. The problem most likely was that v2.1 chroots the director process by default, but it did it a bit too early so hostname lookups failed. http://hg.dovecot.org/dovecot-2.1/rev/1d54d2963392 should fix it.

From michael at orlitzky.com Tue Jan 17 16:23:47 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Tue, 17 Jan 2012 09:23:47 -0500 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F14A7AA.8010507@easystreet.net> References: <4F14A7AA.8010507@easystreet.net> Message-ID: <4F158473.1000901@orlitzky.com> First of all, feature request:

doveconf -d
    show the default value of all settings

On 01/16/12 17:41, Don Buchholz wrote: > > The 'doveconf -n' output is attached. I have not set any > "process_limit" values, and I don't think I'm getting anywhere close to > the 1024 default, so I'm pretty confused as to what might be wrong. > > Any suggestions on what to do next are appreciated. What makes you think 1024 is the default? We had to increase it. It shows up in doveconf -n output, so I don't think that's the default. # doveconf -n | grep limit default_process_limit = 1024

From phessler at theapt.org Tue Jan 17 16:27:31 2012 From: phessler at theapt.org (Peter Hessler) Date: Tue, 17 Jan 2012 15:27:31 +0100 Subject: [Dovecot] per-user limit? In-Reply-To: <4F155AEF.3080105@Media-Brokers.com> References: <20120116091521.GA10944@gir.theapt.org> <4F155AEF.3080105@Media-Brokers.com> Message-ID: <20120117142731.GF24394@gir.theapt.org> On 2012 Jan 17 (Tue) at 06:26:39 -0500 (-0500), Charles Marcus wrote: :On 2012-01-16 4:15 AM, Peter Hessler wrote: :>I'm looking through my configuration, and I cannot see a limit on how :>many times a single user can connect.
He is connecting from different :>IPs. : :I think you're needing: : :http://wiki2.dovecot.org/Services#Service_limits : Thanks for the pointer. However, this doesn't seem to help me. When I do "doveconf | grep [foo]" I find that the limits are either '0' or '1'. Except in "service imap-login { process_limit = 128 }". I had bumped that up from 64, and now it is at 1024. I don't have many users (about 6 that use imap), and nobody can use more than 6. I also double checked my process limits, and they are either unlimited, or measured in the ten-thousands. -- Osborn's Law: Variables won't; constants aren't.

From duihi77 at gmail.com Tue Jan 17 16:31:01 2012 From: duihi77 at gmail.com (Duane Hill) Date: Tue, 17 Jan 2012 14:31:01 +0000 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F158473.1000901@orlitzky.com> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> Message-ID: <716809841.20120117143101@gmail.com> On Tuesday, January 17, 2012 at 14:23:47 UTC, michael at orlitzky.com confabulated: > First of all, feature request: > doveconf -d > show the default value of all settings You mean like doveconf(1) ? OPTIONS -a Show all settings with their currently configured values. -- If at first you don't succeed... ...so much for skydiving.

From michael at orlitzky.com Tue Jan 17 16:58:04 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Tue, 17 Jan 2012 09:58:04 -0500 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <716809841.20120117143101@gmail.com> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <716809841.20120117143101@gmail.com> Message-ID: <4F158C7C.4070209@orlitzky.com> On 01/17/12 09:31, Duane Hill wrote: > On Tuesday, January 17, 2012 at 14:23:47 UTC, michael at orlitzky.com confabulated: > >> First of all, feature request: > >> doveconf -d >> show the default value of all settings > > You mean like doveconf(1) ? > > OPTIONS > -a Show all settings with their currently configured values. > Using -a shows you all settings, as they're running in your installation. That's the defaults, except where they're overwritten by your config. I was asking for the defaults regardless of what's in my config file, so that I don't have to deduce them from the combined doveconf output & my config file.
In other words, I don't want to have to do this:

mail2 ~ # touch empty-config.conf
mail2 ~ # doveconf -a -c empty-config.conf | grep limit | head
doveconf: Error: ssl enabled, but ssl_cert not set
doveconf: Error: ssl enabled, but ssl_cert not set
doveconf: Fatal: Error in configuration file empty-config.conf: ssl enabled, but ssl_cert not set
default_client_limit = 1000
default_process_limit = 100
default_vsz_limit = 256 M
recipient_delimiter = +
client_limit = 0
process_limit = 1
vsz_limit = 18446744073709551615 B
client_limit = 1
process_limit = 0
vsz_limit = 18446744073709551615 B

to find out that the default process limit isn't 1000.

From tss at iki.fi Tue Jan 17 17:27:15 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 17 Jan 2012 17:27:15 +0200 Subject: [Dovecot] resolve mail_home ? In-Reply-To: <20120117083619.GA21186@dibs.tanso.net> References: <20120117083619.GA21186@dibs.tanso.net> Message-ID: <1326814035.11500.9.camel@innu> On Tue, 2012-01-17 at 09:36 +0100, Jan-Frode Myklebust wrote: > I now have "mail_home = /srv/mailstore/%256RHu/%d/%n". Is there any way > of asking dovecot where a user's home directory is? No.. > It's not in "doveadm user": > > $ doveadm user -f home janfrode at lyse.net > $ doveadm user janfrode at tanso.net > userdb: janfrode at tanso.net > mail : mdbox:~/mdbox > quota_rule: *:storage=1048576 Right, because it's a default setting, not something that comes from a userdb lookup. > Alternatively, is there an easy way to calculate the %256RHu hash ? Nope.. Maybe a new command, or maybe a parameter to doveadm user that would show mail_uid/gid/home. Or maybe something that dumps config output with %vars expanded to the given user. Hmm.

From tss at iki.fi Tue Jan 17 17:30:11 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 17 Jan 2012 17:30:11 +0200 Subject: [Dovecot] per-user limit? In-Reply-To: <20120116091521.GA10944@gir.theapt.org> References: <20120116091521.GA10944@gir.theapt.org> Message-ID: <1326814211.11500.11.camel@innu> On Mon, 2012-01-16 at 10:15 +0100, Peter Hessler wrote: > I am seeing a problem where users are limited to 6 imap logins total. > One of my users has a bunch of phones and computers, and wants them all > on at the same time. > > I'm looking through my configuration, and I cannot see a limit on how > many times a single user can connect. He is connecting from different > IPs. > > Any ideas? My logs show the following error when they attempt to auth > for a 7th time: > > dovecot: imap-login: Disconnected (no auth attempts): rip=111.yy.zz.xx, lip=81.209.183.113, TLS This means that the client simply didn't try to log in. If Dovecot reaches some kind of a limit, it logs about that. If there isn't anything else logged, I don't think the problem is in Dovecot itself. Can you reproduce this yourself by logging in with e.g. telnet?

From javierdemiguel at us.es Tue Jan 17 17:35:17 2012 From: javierdemiguel at us.es (Javier Miguel Rodríguez) Date: Tue, 17 Jan 2012 16:35:17 +0100 Subject: [Dovecot] resolve mail_home ? In-Reply-To: <1326814035.11500.9.camel@innu> References: <20120117083619.GA21186@dibs.tanso.net> <1326814035.11500.9.camel@innu> Message-ID: <4F159535.1070701@us.es> That command/parameter would be great for our backup scripts in our hashed mdboxes tree, we are now using slocate... Regards Javier > Nope.. > > Maybe a new command, or maybe a parameter to doveadm user that would > show mail_uid/gid/home. Or maybe something that dumps config output with > %vars expanded to the given user. Hmm.
From CMarcus at Media-Brokers.com Tue Jan 17 18:13:44 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Tue, 17 Jan 2012 11:13:44 -0500
Subject: [Dovecot] imap-login process_limit reached
In-Reply-To: <4F158C7C.4070209@orlitzky.com>
References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <716809841.20120117143101@gmail.com> <4F158C7C.4070209@orlitzky.com>
Message-ID: <4F159E38.5020802@Media-Brokers.com>

On 2012-01-17 9:58 AM, Michael Orlitzky wrote:
> Using -a shows you all settings, as they're running in your
> installation. That's the defaults, except where they're overwritten by
> your config.
>
> I was asking for the defaults regardless of what's in my config file, so
> that I don't have to deduce them from the combined doveconf output & my
> config file.

Yeah, I had suggested this to Timo a long time ago when I suggested
doveconf -n (the way postfix does it), but I don't think he ever did
the -d option... maybe it got lost in the shuffle...

--
Best regards,

Charles

From buchholz at easystreet.net Tue Jan 17 20:15:28 2012
From: buchholz at easystreet.net (Don Buchholz)
Date: Tue, 17 Jan 2012 10:15:28 -0800
Subject: [Dovecot] imap-login process_limit reached
In-Reply-To: <4F158473.1000901@orlitzky.com>
References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com>
Message-ID: <4F15BAC0.3060003@easystreet.net>

Michael Orlitzky wrote:
> First of all, feature request:
>
> doveconf -d
> show the default value of all settings
>
> On 01/16/12 17:41, Don Buchholz wrote:
>
>> The 'doveconf -n' output is attached. I have not set any
>> "process_limit" values, and I don't think I'm getting anywhere close to
>> the 1024 default, so I'm pretty confused as to what might be wrong.
>>
>> Any suggestions on what to do next are appreciated.
>
> What makes you think 1024 is the default? We had to increase it. It
> shows up in doveconf -n output, so I don't think that's the default.
>
> # doveconf -n | grep limit
> default_process_limit = 1024

What makes me think 1024 is the default?
The documentation:
--> http://wiki2.dovecot.org/Services?highlight=%28process_limit%29#imap.2C_pop3.2C_managesieve

From michael at orlitzky.com Tue Jan 17 20:30:02 2012
From: michael at orlitzky.com (Michael Orlitzky)
Date: Tue, 17 Jan 2012 13:30:02 -0500
Subject: [Dovecot] imap-login process_limit reached
In-Reply-To: <4F15BAC0.3060003@easystreet.net>
References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net>
Message-ID: <4F15BE2A.6010605@orlitzky.com>

On 01/17/12 13:15, Don Buchholz wrote:
>
> What makes me think 1024 is the default?
> The documentation:
> -->
> http://wiki2.dovecot.org/Services?highlight=%28process_limit%29#imap.2C_pop3.2C_managesieve
>

That's only for those three services (imap, pop3, managesieve), not for
imap-login unfortunately. Check here for more info,

  http://wiki2.dovecot.org/LoginProcess

but the good part is,

  Since one login process can handle only one connection, the
  service's process_limit setting limits the number of users that can
  be logging in at the same time (defaults to
  default_process_limit=100).
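A quick way to see how close a server is to that ceiling is to compare
the service's effective process_limit with the number of live login
processes. A minimal sketch, assuming Dovecot v2.x and its standard
process names; a process_limit of 0 in the output means the service
falls back to default_process_limit:

  # effective limit for the imap-login service; doveconf -a prints the
  # whole service block, and the top-level closing brace ends the range
  doveconf -a | sed -n '/^service imap-login {/,/^}/p' | grep process_limit

  # live imap-login processes right now ([i] keeps grep from matching itself)
  ps ax | grep -c '[i]map-login'

If the second number routinely approaches the first, raising the
service's process_limit is the fix.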
From buchholz at easystreet.net Tue Jan 17 21:02:37 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Tue, 17 Jan 2012 11:02:37 -0800 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15BAC0.3060003@easystreet.net> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> Message-ID: <4F15C5CD.80904@easystreet.net> Don Buchholz wrote: > Michael Orlitzky wrote: >> First of all, feature request: >> >> doveconf -d >> show the default value of all settings >> >> >> On 01/16/12 17:41, Don Buchholz wrote: >> >>> The 'doveconf -n' output is attached. I have not set any >>> "process_limit" values, and I don't think I'm getting anywhere close to >>> the 1024 default, so I'm pretty confused as to what might be wrong. >>> >>> Any suggestions on what to do next are appreciated. >>> >> >> >> What makes you think 1024 is the default? We had to increase it. It >> shows up in doveconf -n output, so I don't think that's the default. >> >> # doveconf -n | grep limit >> default_process_limit = 1024 >> > What makes me think 1024 is the default? > The documentation: > --> > http://wiki2.dovecot.org/Services?highlight=%28process_limit%29#imap.2C_pop3.2C_managesieve > > But, Michael's right, documentation can be wrong. So, I dumped the entire configuration. Here are the values found on the running system. Both imap and pop3 services have "process_limit = 1024". | [root at postal ~]# doveconf -a | # 2.0.8: /etc/dovecot/dovecot.conf | # OS: Linux 2.6.9-67.0.1.ELsmp i686 Red Hat Enterprise Linux WS release 4 (Nahant Update 6) ext3 | ... | default_process_limit = 100 | ... | service anvil { | ... | process_limit = 1 | ... | } | service auth-worker { | ... | process_limit = 0 | ... | } | service auth { | ... | process_limit = 1 | ... | } | service config { | ... | process_limit = 0 | ... | } | service dict { | ... | process_limit = 0 | ... | } | service director { | ... | process_limit = 1 | ... | } | service dns_client { | ... | process_limit = 0 | ... | } | service doveadm { | ... | process_limit = 0 | ... | } | service imap-login { | ... | process_limit = 0 | ... | } | service imap { | ... | process_limit = 1024 | ... | } | service lmtp { | ... | process_limit = 0 | ... | } | service log { | ... | process_limit = 1 | ... | } | service pop3-login { | ... | process_limit = 0 | ... | } | service pop3 { | ... | process_limit = 1024 | ... | } | service ssl-params { | ... | process_limit = 0 | ... | } From michael at orlitzky.com Tue Jan 17 21:12:55 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Tue, 17 Jan 2012 14:12:55 -0500 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15C5CD.80904@easystreet.net> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> <4F15C5CD.80904@easystreet.net> Message-ID: <4F15C837.2020002@orlitzky.com> On 01/17/12 14:02, Don Buchholz wrote: >> > But, Michael's right, documentation can be wrong. So, I dumped the > entire configuration. Here are the values found on the running system. > Both imap and pop3 services have "process_limit = 1024". > You probably just posted this while my last message was in-flight, but just in case, 'imap' and 'imap-login' are different, and have different process limits. As the title of the thread suggests, you're out of imap-login processes, not imap ones. 
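The LoginProcess wiki page linked above also documents a
high-performance mode that sidesteps the one-process-per-connection
design entirely, by letting a login process serve many connections. A
hedged sketch; the numbers are illustrative, not recommendations:

  service imap-login {
    service_count = 0        # 0 = process is reused for unlimited connections
    process_min_avail = 4    # e.g. one process per CPU core
    vsz_limit = 1G           # long-lived login processes need more headroom
  }

The documented trade-off is weaker isolation, since many users'
pre-login sessions then share the same process.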
From buchholz at easystreet.net Tue Jan 17 21:48:29 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Tue, 17 Jan 2012 11:48:29 -0800 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15BE2A.6010605@orlitzky.com> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> <4F15BE2A.6010605@orlitzky.com> Message-ID: <4F15D08D.4070209@easystreet.net> Michael Orlitzky wrote: > On 01/17/12 13:15, Don Buchholz wrote: > >>> >>> >> What makes me think 1024 is the default? >> The documentation: >> --> >> http://wiki2.dovecot.org/Services?highlight=%28process_limit%29#imap.2C_pop3.2C_managesieve >> >> > > That's only for those three services (imap, pop3, managesieve), not for > imap-login unfortunately. Check here for more info, > > http://wiki2.dovecot.org/LoginProcess > > but the good part is, > > Since one login process can handle only one connection, the > service's process_limit setting limits the number of users that can > be logging in at the same time (defaults to > default_process_limit=100). > Doh! Thanks, Michael. I wasn't looking at the original error message closely enough. I scanned too quickly and saw "service(imap)" and not "service(imap-login)". Now the failure when there are only ~200 (total) dovecot processes makes sense (because about half of the processes here are dovecot/imap-login). I've added the following to our configuration: service imap-login { process_limit = 500 process_min_avail = 2 } Thanks for your help ... and patience. - Don From buchholz at easystreet.net Tue Jan 17 21:52:19 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Tue, 17 Jan 2012 11:52:19 -0800 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15C837.2020002@orlitzky.com> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> <4F15C5CD.80904@easystreet.net> <4F15C837.2020002@orlitzky.com> Message-ID: <4F15D173.9090103@easystreet.net> Michael Orlitzky wrote: > On 01/17/12 14:02, Don Buchholz wrote: > >> But, Michael's right, documentation can be wrong. So, I dumped the >> entire configuration. Here are the values found on the running system. >> Both imap and pop3 services have "process_limit = 1024". >> >> > > You probably just posted this while my last message was in-flight, but > just in case, 'imap' and 'imap-login' are different, and have different > process limits. > > As the title of the thread suggests, you're out of imap-login processes, > not imap ones. > Yup! ... see reply on other branch in this thread. Thanks again! - Don From michael at orlitzky.com Tue Jan 17 22:13:22 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Tue, 17 Jan 2012 15:13:22 -0500 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15D08D.4070209@easystreet.net> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> <4F15BE2A.6010605@orlitzky.com> <4F15D08D.4070209@easystreet.net> Message-ID: <4F15D662.9030601@orlitzky.com> On 01/17/12 14:48, Don Buchholz wrote: >> > Doh! Thanks, Michael. I wasn't looking at the original error message > closely enough. I scanned too quickly and saw "service(imap)" and not > "service(imap-login)". Now the failure when there are only ~200 (total) > dovecot processes makes sense (because about half of the processes here > are dovecot/imap-login). > > ... > > Thanks for your help ... and patience. 
>

No problem, I went through the exact same process when we hit the limit.

From interfasys at gmail.com Wed Jan 18 03:03:46 2012
From: interfasys at gmail.com (=?UTF-8?B?aW50ZXJmYVN5cyBzw6BybA==?=)
Date: Wed, 18 Jan 2012 01:03:46 +0000
Subject: [Dovecot] [Dovecot 2.1] ACL plugin makes imap service crash when using some clients
Message-ID: <4F161A72.8030400@gmail.com>

Hello,

I've just noticed that when Horde is connecting to Dovecot 2.1, it
crashes the imap service if Dovecot is configured to use the ACL plugin.
I'm not sure what's so special about the command Horde sends, but it
shouldn't make Dovecot crash. Everything is fine when using Thunderbird.

Here is the message in Dovecot's logs:
"Fatal: master: service(imap): child 89974 killed with signal 11 (core
not dumped)"

The message says that the core is not dumped, even though I did add
drop_priv_before_exec=yes to my config file.

I've tried connecting to the pid using gdb, but the process just hangs
as soon as I'm connected.

Cheers,

Olivier

From user+dovecot at localhost.localdomain.org Wed Jan 18 03:33:19 2012
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Date: Wed, 18 Jan 2012 02:33:19 +0100
Subject: [Dovecot] [Dovecot 2.1] ACL plugin makes imap service crash when using some clients
In-Reply-To: <4F161A72.8030400@gmail.com>
References: <4F161A72.8030400@gmail.com>
Message-ID: <4F16215F.5000909@localhost.localdomain.org>

On 01/18/2012 02:03 AM interfaSys sàrl wrote:
> Hello,
>
> I've just noticed that when Horde is connecting to Dovecot 2.1, it
> crashes the imap service if Dovecot is configured to use the ACL plugin.
> I'm not sure what's so special about the command Horde sends, but it
> shouldn't make Dovecot crash. Everything is fine when using Thunderbird.
>
> Here is the message in Dovecot's logs
> "Fatal: master: service(imap): child 89974 killed with signal 11 (core
> not dumped)"
>
> The message says that the core is not dumped, even though I did add
> drop_priv_before_exec=yes to my config file.

dovecot stop
ulimit -c unlimited
dovecot

Now connect with Horde and let it crash.

> I've tried connecting to the pid using gdb, but the process just hangs
> as soon as I'm connected.

continue
[wait for the crash]
bt full
detach
quit

Regards,
Pascal
--
The trapper recommends today: cafefeed.1201802 at localdomain.org

From gordon.grubert+lists at uni-greifswald.de Wed Jan 18 14:02:58 2012
From: gordon.grubert+lists at uni-greifswald.de (Gordon Grubert)
Date: Wed, 18 Jan 2012 13:02:58 +0100
Subject: [Dovecot] Dovecot crashes totally - SOLVED
In-Reply-To: <4EB6D845.7040208@uni-greifswald.de>
References: <4EA317B5.3090209@uni-greifswald.de> <1320435812.21919.150.camel@hurina> <4EB6D845.7040208@uni-greifswald.de>
Message-ID: <4F16B4F2.5050107@uni-greifswald.de>

On 11/06/2011 07:56 PM, Gordon Grubert wrote:
> On 11/04/2011 08:43 PM, Timo Sirainen wrote:
>> On Sat, 2011-10-22 at 21:21 +0200, Gordon Grubert wrote:
>>> Hello,
>>>
>>> our dovecot server crashes totally without any really useful
>>> log messages. The error log can be found in the attachment.
>>> The only way to get dovecot running again is a complete
>>> system restart.
>>
>> How often does it break? If really a "complete system restart" is needed
>> to fix it, it doesn't sound like a Dovecot problem. Check if it's enough
>> to stop dovecot and then make sure there aren't any dovecot processes
>> lying around afterwards.
>
> Currently, the problem occurred three times. The last time was some
> days ago.
The last "crash" was in the night and, therefore, we used the > chance for a detailed debugging of the system. > > You could be right, that it's not a dovecot problem. Next to dovecot, > we found other processes hanging and could not be killed by "kill -9". > Additionally, we found a commonness of all of these processes: They > hanged while trying to access the mailbox volume. Therefore, we repaired > the filesystem. Now, we're watching the system ... > >>> Oct 11 09:55:23 mailserver2 dovecot: master: Error: service(imap): >>> Initial status notification not received in 30 seconds, killing the >>> process >>> Oct 11 09:56:23 mailserver2 dovecot: imap-login: Error: master(imap): >>> Auth request timed out (received 0/12 bytes) >> >> Kind of looks like auth process is hanging. You could see if stracing it >> shows anything useful. Also are any errors logged about LDAP? Is LDAP >> running on the same server? > Dovecot authenticates against postfix and postfix has an LDAP > connection. The LDAP is running on an external cluster. Here, > no errors are reported. > > We hope, that the filesystem error was the reason for the problem > and, that the problem is fixed by repairing it. During the last two month, no error occurred. Therefore, the problem in the filesystem seems to be the reason for the dovecot crash. Thx and best regards, Gordon -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5396 bytes Desc: S/MIME Cryptographic Signature URL: From tss at iki.fi Wed Jan 18 14:23:00 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 14:23:00 +0200 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F14A7AA.8010507@easystreet.net> References: <4F14A7AA.8010507@easystreet.net> Message-ID: <1326889380.11500.16.camel@innu> On Mon, 2012-01-16 at 14:41 -0800, Don Buchholz wrote: > I've been having some problems with IMAP user connections to the Dovecot > (v2.0.8) server. The following message is being logged. > > Jan 16 10:51:36 postal dovecot: master: Warning: > service(imap-login): process_limit reached, client connections are > being dropped Maybe this will help some in future: http://hg.dovecot.org/dovecot-2.1/rev/a4e61c99c7eb The new error message is: service(imap-login): process_limit (100) reached, client connections are being dropped From lee at standen.id.au Wed Jan 18 14:44:35 2012 From: lee at standen.id.au (Lee Standen) Date: Wed, 18 Jan 2012 20:44:35 +0800 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox Message-ID: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> Hi Guys, I've been desperately trying to find some comparative performance information about the different mailbox formats supported by Dovecot in order to make an assessment on which format is right for our environment. This is a brand new build, with customer mailboxes to be migrated in over the course of 3-4 months. 
Some details on our new environment:

* Approximately 1.6M+ mailboxes once all legacy systems are combined

* NetApp FAS6280 storage w/ 120TB usable for mail storage, 1TB of FlashCache
in each controller

* All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames)

* Postfix will feed new email to Dovecot via LMTP

* Dovecot servers have been split based on their role

  - Dovecot LDA Servers (running LMTP protocol)

  - Dovecot POP/IMAP servers (running POP/IMAP protocols)

  - LDA & POP/IMAP servers are segmented into geographically split groups
    (so no server sees every single mailbox)

  - Nginx proxy used to terminate customer connections, connections are
    redirected to the appropriate geographic servers

* Apache Lucene indexes will be used to accelerate IMAP search for users

Our closest current live configuration (Qmail SMTP, Courier IMAP, Maildir)
has 600K mailboxes and pushes ~ 35,000 NFS operations per second at peak.

Some of the things I would like to know:

* Are we likely to see a reduction in IOPS/User by using Maildir alone under
Dovecot?

* What kind of IOPS/User reduction could we expect to see under mdbox?

* If someone can give some technical reasoning behind why mdbox does less
IOPS than Maildir?

I understand some of the reasons for the mdbox IOPS question, but I need
some more information so we can discuss internally and make a decision as to
whether we're comfortable going with mdbox from day one. We're very
familiar with Maildir, and there's just some uneasiness internally around
going to a new mail storage format.

Thanks!

From tss at iki.fi Wed Jan 18 14:58:15 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 14:58:15 +0200
Subject: [Dovecot] Dovecot Solutions company update
Message-ID: <1326891495.11500.32.camel@innu>

Hi,

A small update: My Dovecot support company finally has web pages:
http://www.dovecot.fi/

We've also started providing 24/7 support.

From robert at schetterer.org Wed Jan 18 15:05:57 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Wed, 18 Jan 2012 14:05:57 +0100
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
Message-ID: <4F16C3B5.80404@schetterer.org>

Am 18.01.2012 13:44, schrieb Lee Standen:
> Hi Guys,
>
> I've been desperately trying to find some comparative performance
> information about the different mailbox formats supported by Dovecot in
> order to make an assessment on which format is right for our environment.
>
> This is a brand new build, with customer mailboxes to be migrated in over
> the course of 3-4 months.
>
> Some details on our new environment:
>
> * Approximately 1.6M+ mailboxes once all legacy systems are combined
>
> * NetApp FAS6280 storage w/ 120TB usable for mail storage, 1TB of FlashCache
> in each controller
>
> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames)

NFS may not be optimal; a cluster filesystem might be better, but that
is a big, separate discussion.

>
> * Postfix will feed new email to Dovecot via LMTP

Perfect.

>
> * Dovecot servers have been split based on their role
>
> - Dovecot LDA Servers (running LMTP protocol)
>
> - Dovecot POP/IMAP servers (running POP/IMAP protocols)
>
> - LDA & POP/IMAP servers are segmented into geographically split groups
> (so no server sees every single mailbox)
>
> - Nginx proxy used to terminate customer connections, connections are
> redirected to the appropriate geographic servers
>
> * Apache Lucene indexes will be used to accelerate IMAP search for users

Sounds ok.

>
> Our closest current live configuration (Qmail SMTP, Courier IMAP, Maildir)
> has 600K mailboxes and pushes ~ 35,000 NFS operations per second at peak

Wow, that's big.

>
> Some of the things I would like to know:
>
> * Are we likely to see a reduction in IOPS/User by using Maildir alone under
> Dovecot?
>
> * What kind of IOPS/User reduction could we expect to see under mdbox?

There should be people on the list who know this from migrations they
have done.

>
> * If someone can give some technical reasoning behind why mdbox does less
> IOPS than Maildir?

As far as I remember, mdbox takes 8 mails per file (I am not using it
currently, so I didn't investigate it); better wait for a more
qualified answer. Anyway, mdbox seems recommended in your case. In our
last plans, for about 25k mailboxes, we decided on mdbox, as far as I
remember...

>
> I understand some of the reasons for the mdbox IOPS question, but I need
> some more information so we can discuss internally and make a decision as to
> whether we're comfortable going with mdbox from day one. We're very
> familiar with Maildir, and there's just some uneasiness internally around
> going to a new mail storage format.
>
> Thanks!

From my personal knowledge, storage I/O has the most influence on
performance, assuming all the other parts of the setup are solved
optimally. Wait a little; I guess more fitting answers will come up.
After all, you can hire someone, perhaps Timo, if you get stuck on
something.

--
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria

From javierdemiguel at us.es Wed Jan 18 15:27:52 2012
From: javierdemiguel at us.es (=?ISO-8859-1?Q?Javier_Miguel_Rodr=EDguez?=)
Date: Wed, 18 Jan 2012 14:27:52 +0100
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
Message-ID: <4F16C8D8.1090804@us.es>

Spanish edu site here: 80k users, 4.5 TB of email, 6,000 iops (indexes)
+ 9,000 iops (mdboxes) during working hours. We evaluated mdbox against
Maildir and we found that with these settings dovecot 2 performs better
than Maildir:

mdbox_rotate_interval = 1d
mdbox_rotate_size = 60m
zlib_save_level = 9 # 1..9
zlib_save = gz # or bz2

We detected 40% less iops with this setup *in working hours (more info
below)*. Zlib saved some writes (15-30%). With mdbox, deletion of a
message is written to the indexes (use SSD for this), and a nightly
cronjob deletes the real message from the mdbox; this saves us some
iops during working hours.
Also, backup software is MUCH happier handling hundreds of thousands of
files (mdbox) versus tens of millions (maildir).

Mdbox also has drawbacks: you have to be VERY careful with your indexes;
they contain data that cannot be rebuilt from the mdbox files. The
nightly cronjob "purging" the mdboxes hammers the SAN. Full backup time
is reduced, but incremental backup space & time increase: if you delete
a message, then after "purging" it from the mdbox the mdbox file changes
(size and date), so the incremental backup has to copy it again.

Regards

Javier

From email at randpoger.org Wed Jan 18 15:29:31 2012
From: email at randpoger.org (email at randpoger.org)
Date: Wed, 18 Jan 2012 14:29:31 +0100
Subject: [Dovecot] Dovecot did not accept Login from Host
Message-ID: <192f7dbb6b6c9e71bd44c41f08097a92-EhVcX1lATAFfWEQABwoYZ1dfaANWUkNeXEJbAVo1WEdQS1oIXkF3CEtXWV4wQEYAWVJQQ1tSWQ==-webmailer2@server06.webmailer.hosteurope.de>

Hi!

My Dovecot is running and I can connect + login through telnet:

--------------------------------------------
>> telnet localhost 143
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE STARTTLS AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
>> 1 login user passwort
1 OK [...] Logged in
--------------------------------------------

But through my domain I can only connect; then I get an error:

--------------------------------------------
>> telnet domain.de 143
Trying xx.xxx.xxx.xx...
Connected to domain.de.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE STARTTLS AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
>> 1 login user passwort
1 NO [AUTHENTICATIONFAILED] Authentication failed.
--------------------------------------------

My dovecot.conf:

--------------------------------------------
protocols = imap imaps
ssl_cert_file = /etc/ssl/certs/dovecot.pem
ssl_key_file = /etc/ssl/private/dovecot.pem
mail_location = /var/mail/%u
log_path = /var/log/dovecot.log
log_timestamp = "%Y-%m-%d %H:%M:%S "
auth_verbose = yes
auth_debug = yes
protocol imap {
}
auth default {
  mechanisms = plain login
  passdb pam {
  }
  userdb passwd {
  }
  user = root
}
--------------------------------------------

If I try to connect+login through the domain, dovecot writes NOTHING
into the log.

Any ideas about this?

From tss at iki.fi Wed Jan 18 15:54:31 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 15:54:31 +0200
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
Message-ID: <1326894871.11500.45.camel@innu>

On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote:
> I've been desperately trying to find some comparative performance
> information about the different mailbox formats supported by Dovecot in
> order to make an assessment on which format is right for our environment.

Unfortunately there aren't really any. Everyone who seems to switch to
sdbox/mdbox usually also changes their hardware at the same time, so
there aren't really any before/after metrics. I have, of course, some
unrealistic synthetic benchmarks, but I don't think they are very
useful.

So, I would also be very interested in seeing some before/after graphs
of disk IO, CPU and memory usage for a Maildir -> dbox switch on the
same hardware.

Maildir anyway definitely performs worse than sdbox or mdbox.
mdbox also uses fewer NFS operations, but I don't know how much faster
(if any) it is with NetApps.

> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames)
>
> * Postfix will feed new email to Dovecot via LMTP
>
> * Dovecot servers have been split based on their role
>
> - Dovecot LDA Servers (running LMTP protocol)
>
> - Dovecot POP/IMAP servers (running POP/IMAP protocols)

You're going to run into NFS caching troubles with the above split
setup. I don't recommend it. You will see error messages about index
corruption with it, and with dbox it can cause metadata loss.
http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director

> - LDA & POP/IMAP servers are segmented into geographically split groups
> (so no server sees every single mailbox)
>
> - Nginx proxy used to terminate customer connections, connections are
> redirected to the appropriate geographic servers

Can the same mailbox still be accessed via multiple geographic servers?
I've had some plans for doing this kind of access/replication using
dsync..

> * Apache Lucene indexes will be used to accelerate IMAP search for users

Dovecot's fts-solr or fts-lucene?

> Our closest current live configuration (Qmail SMTP, Courier IMAP, Maildir)
> has 600K mailboxes and pushes ~ 35,000 NFS operations per second at peak
>
> Some of the things I would like to know:
>
> * Are we likely to see a reduction in IOPS/User by using Maildir alone under
> Dovecot?

If you have webmail type of clients, definitely. For Outlook/Thunderbird
you should still see improvement, but not necessarily as much.

You didn't mention POP3. That isn't Dovecot's strong point. Its
performance should be about the same as Courier-POP3, but could be less
than QMail-POP3. Although if many of your POP3 users keep a lot of mails
on server it

> * If someone can give some technical reasoning behind why mdbox does less
> IOPS than Maildir?

Maildir renames files a lot: from new/ to cur/, and then every time a
message flag changes. That's why sdbox is faster. mdbox should be faster
than sdbox because it puts (or should put) mail data physically closer
together on disk, making reads faster.

> I understand some of the reasons for the mdbox IOPS question, but I need
> some more information so we can discuss internally and make a decision as to
> whether we're comfortable going with mdbox from day one. We're very
> familiar with Maildir, and there's just some uneasiness internally around
> going to a new mail storage format.

It's at least safer to first switch to Dovecot+Maildir to make sure that
any problems you might find aren't related to the mailbox format..

From ebroch at whitehorsetc.com Wed Jan 18 16:20:31 2012
From: ebroch at whitehorsetc.com (Eric Broch)
Date: Wed, 18 Jan 2012 07:20:31 -0700
Subject: [Dovecot] shared folder files not displaying in thunderbird
Message-ID: <4F16D52F.2040907@whitehorsetc.com>

Hello,

I have dovecot installed with the configuration below. One of the
subfolders created (using the email client) under the
'/home/vpopmail/domains/mydomain.com/shared/projects' share no longer
displays the files located in it (it used to). There are about 150
folders under that share, all of which display the files located in
them; the one mentioned used to display its contents but no longer
does.

What would be the reason that this one folder no longer displays
existing files in the email client (Thunderbird) while the other
folders do? And how do I fix this?
I've already tried unsubscribing and resubscribing the folder. This did
not work. Would it now be simply a matter of unsubscribing the folder,
deleting the dovecot files, and resubscribing to the folder?

Eric

# 2.0.11: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-238.19.1.el5 i686 CentOS release 5.7 (Final)
auth_cache_size = 32 M
auth_mechanisms = plain login digest-md5 cram-md5
auth_username_format = %Lu
disable_plaintext_auth = no
first_valid_uid = 89
log_path = /var/log/dovecot.log
login_greeting = Dovecot toaster ready.
namespace {
  inbox = yes
  location =
  prefix = INBOX.
  separator = .
  type = private
}
namespace {
  location = maildir:/home/vpopmail/domains/mydomain.com/shared/projects
  prefix = projects.
  separator = .
  type = public
}
passdb {
  args = cache_key=%u webmail=127.0.0.1
  driver = vpopmail
}
plugin/quota = maildir
protocols = imap
ssl_cert =

References: <1326891495.11500.32.camel@innu>
Message-ID: <4F16D607.5030800@schetterer.org>

Am 18.01.2012 13:58, schrieb Timo Sirainen:
> Hi,
>
> A small update: My Dovecot support company finally has web pages:
> http://www.dovecot.fi/
>
> We've also started providing 24/7 support.

Hi Timo, very cool!

--
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria

From tss at iki.fi Wed Jan 18 16:32:36 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 16:32:36 +0200
Subject: [Dovecot] Dovecot unable to locate mailbox
In-Reply-To: 
References: 
Message-ID: <1326897156.11500.51.camel@innu>

On Mon, 2012-01-16 at 14:38 +0200, Jason X, Maney wrote:
> Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: user molla:
> Initialization failed: mail_location not set and autodetection failed: Mail
> storage autodetection failed with home=/home/userA

As it says.

> Yet my config also comes out strangely, as below:
>
> =========
> root at guyana:~# dovecot -n
> # 2.0.13: /etc/dovecot/dovecot.conf
> # OS: Linux 3.0.0-12-server x86_64 Ubuntu 11.10
> passdb {
>   driver = pam
> }
> protocols = " imap pop3"
> ssl_cert = ssl_key = userdb {
>   driver = passwd
> }
> root at guyana:~#
> =========

There is no mail_location above. This is the configuration Dovecot sees.

> My mailbox location setting is as follows:
>
> =========
> cat conf.d/10-mail.conf |grep mail_location

Look at /etc/dovecot/dovecot.conf file. Do you see !include conf.d/*.conf
in there? Probably not, so those files aren't being read.

From tss at iki.fi Wed Jan 18 16:34:18 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 16:34:18 +0200
Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1
In-Reply-To: <4F155670.6010905@gmail.com>
References: <4F155670.6010905@gmail.com>
Message-ID: <1326897258.11500.53.camel@innu>

On Tue, 2012-01-17 at 11:07 +0000, interfaSys sàrl wrote:
> Here is what I get when I try to compile the antispam plugin against
> Dovecot 2.1
>
> **************
> mailbox.c: In function 'antispam_save_begin':
> mailbox.c:138:12: error: 'struct mail_save_context' has no member named
> 'copying'

The "copying" should be changed to "copying_via_save".
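Until an updated plugin release catches up, the build can be fixed
locally; the rename is the whole change. A sketch of the one-line edit
in the plugin's mailbox.c, where the surrounding condition is
paraphrased from the compiler error above, not copied from the plugin
source:

	/* antispam mailbox.c, near line 138 */
-	if (ctx->copying)              /* compiles against dovecot 2.0 */
+	if (ctx->copying_via_save)     /* compiles against dovecot 2.1 */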
From lee at standen.id.au Wed Jan 18 16:36:45 2012 From: lee at standen.id.au (Lee Standen) Date: Wed, 18 Jan 2012 22:36:45 +0800 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox Message-ID: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> On 18.01.2012 21:54, Timo Sirainen wrote: > On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote: > >> I've been desperately trying to find some comparative performance >> information about the different mailbox formats supported by Dovecot >> in >> order to make an assessment on which format is right for our >> environment. > > Unfortunately there aren't really any. Everyone who seems to switch > to > sdbox/mdbox usually also change their hardware at the same time, so > there aren't really any before/after metrics. I've of course some > unrealistic synthetic benchmarks, but I don't think they are very > useful. > > So, I would also be very interested in seeing some before/after > graphs > of disk IO, CPU and memory usage of Maildir -> dbox switch in same > hardware. > > Maildir is anyway definitely worse performance then sdbox or mdbox. > mdbox also uses less NFS operations, but I don't know how much faster > (if any) it is with Netapps. We have bought new hardware for this project too, so we might not be able to help out massively on that front... we do have NFS operations monitored though so we should at least be able to compare that metric since the underlying storage operating system is the same. All NetApp hardware runs their Data ONTAP operating system, so the metrics are assured to be the same :) How about this... are there any tools available (that you know of) to capture real live customer POP3/IMAP traffic and replay it against a separate system? That might be a feasible option for doing a like-for-like comparison in our environment? We could probably get something in place to simulate the load if we can do something like that... >> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo >> Frames) >> >> * Postfix will feed new email to Dovecot via LMTP >> >> * Dovecot servers have been split based on their role >> >> - Dovecot LDA Servers (running LMTP protocol) >> >> - Dovecot POP/IMAP servers (running POP/IMAP protocols) > > You're going to run into NFS caching troubles with the above split > setup. I don't recommend it. You will see error messages about index > corruption with it, and with dbox it can cause metadata loss. > http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director That might be the one thing (unfortunately) which prevents us from going with the dbox format. I understand the same issue can actually occur on Dovecot Maildir as well, but because Maildir works without these index files, we were willing to just go with it. I will raise it again, but there has been a lot of push back about introducing a single point of failure, even though this is a perceived one. The biggest challenge I have at the moment if I try to sell the dbox format is providing some kind of data on the expected gains from this. If it's only a 10% reduction in NFS operations for the typical user, then it's probably not worth our while. > >> - LDA & POP/IMAP servers are segmented into geographically split >> groups >> (so no server sees every single mailbox) >> >> - Nginx proxy used to terminate customer connections, connections >> are >> redirected to the appropriate geographic servers > > Can the same mailbox still be accessed via multiple geographic > servers? 
> I've had some plans for doing this kind of access/replication using > dsync.. No, we're using the nginx proxy layer to ensure that if a user in Sydney (for example) tries to access a Perth mailbox, their connection is redirected (by nginx) to the Perth POP/IMAP servers. Postfix configuration is handling the same thing on the LMTP side. The requirement here is for all users to have the same settings regardless of location, but still be able to locate the email servers and data close to the customer. > >> * Apache Lucene indexes will be used to accelerate IMAP search for >> users > > Dovecot's fts-solr or fts-lucene? fts-solr. I've been using Lucene/Solr interchangeably when discussing this project with my peers :) > >> Our closest current live configuration (Qmail SMTP, Courier IMAP, >> Maildir) >> has 600K mailboxes and pushes ~ 35,000 NFS operations per second at >> peak >> >> Some of the things I would like to know: >> >> * Are we likely to see a reduction in IOPS/User by using Maildir >> alone under >> Dovecot? > > If you have webmail type of clients, definitely. For > Outlook/Thunderbird > you should still see improvement, but not necessarily as much. > > You didn't mention POP3. That isn't Dovecot's strong point. Its > performance should be about the same as Courier-POP3, but could be > less > than QMail-POP3. Although if many of your POP3 users keep a lot of > mails > on server it > Our existing systems run with about 21K concurrent IMAP connections at any one point in time, not counting Webmail POP3 runs at about 3600 concurrent connections, but since those are not long lived it's not particularly indicative of customer numbers. Vague recollection is something like 25% IMAP, 55-60% POP3, rest < 20% Webmail. I'd have to go back and check the breakdown again. >> * If someone can give some technical reasoning behind why mdbox does >> less >> IOPS than Maildir? > > Maildir renames files a lot. From new/ -> to cur/ and then every time > message flag changes. That's why sdbox is faster. Why mdbox should be > faster than sdbox is because mdbox puts (or should put) more mail > data > physically closer in disks to make reading it faster. > >> I understand some of the reasons for the mdbox IOPS question, but I >> need >> some more information so we can discuss internally and make a >> decision as to >> whether we're comfortable going with mdbox from day one. We're very >> familiar with Maidlir, and there's just some uneasiness internally >> around >> going to a new mail storage format. > > It's at least safer to first switch to Dovecot+Maildir to make sure > that > any problems you might find aren't related to the mailbox format.. Yep, I'm considering that. The flip side is that it's actually going to be difficult for us to change mail format once we've migrated into this system, but we have an opportunity for (literally) a month long testing phase beginning in Feb/March which will let us test as many possibilities as we can. From tss at iki.fi Wed Jan 18 16:52:58 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 16:52:58 +0200 Subject: [Dovecot] LMTP Logging In-Reply-To: References: Message-ID: <1326898378.11500.54.camel@innu> On Mon, 2012-01-16 at 17:17 -0800, Mark Moseley wrote: > Just had a minor suggestion, with no clue how hard/easy it would be to > implement: > > The %f flag in deliver_log_format seems to pick up the From: header, > instead of the "MAIL FROM:<...>" arg. It'd be handy to have a %F that > shows the "MAIL FROM" arg instead. 
I'm looking at tracking emails > through logs from Exim to Dovecot easily. I know Message-ID can be > used for correlation but it adds some complexity to searching, i.e. I > can't just grep for the sender (as logged by Exim), unless I assume > "MAIL FROM" always == From: Added to v2.1: http://hg.dovecot.org/dovecot-2.1/rev/7ee2cfbcae2e http://hg.dovecot.org/dovecot-2.1/rev/08cc9d2a79e6 From tss at iki.fi Wed Jan 18 16:56:41 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 16:56:41 +0200 Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2) In-Reply-To: <4F13FF00.1050108@miamammausalinux.org> References: <4F13FF00.1050108@miamammausalinux.org> Message-ID: <1326898601.11500.56.camel@innu> On Mon, 2012-01-16 at 11:42 +0100, RaSca wrote: > passdb sql { > args = /etc/dovecot/dovecot-sql.conf > } > userdb passwd { > } > userdb static { > args = uid=5000 gid=5000 home=/mail/mailboxes/%d/%n@%d > allow_all_users=yes > } You're using SQL only for passdb lookup. > plugin { > quota = maildir:/mail/mailboxes/%d/%n@%d The above path probably doesn't do what you intended. It's only the user-visible quota root name. It could just as well be "User quota" or something. > The db connection works, this is /etc/dovecot/dovecot-sql.conf: > > driver = mysql > connect = host= dbname=mail user= password= > default_pass_scheme = CRYPT > password_query = SELECT username, password FROM mailbox WHERE username='%u' > user_query = SELECT username AS user, maildir AS home, > CONCAT('*:storage=', quota , 'B') AS quota_rule FROM mailbox WHERE > username = '%u' AND active = '1' user_query isn't used, because you aren't using userdb sql. From tss at iki.fi Wed Jan 18 17:06:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 17:06:49 +0200 Subject: [Dovecot] v2.1.rc3 released In-Reply-To: <20120116150504.GA28883@shutemov.name> References: <1325878845.17774.38.camel@hurina> <20120116150504.GA28883@shutemov.name> Message-ID: <1326899209.11500.58.camel@innu> On Mon, 2012-01-16 at 17:05 +0200, Kirill A. Shutemov wrote: > ./autogen failed: > > $ ./autogen.sh > libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.in and > libtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree. > libtoolize: Consider adding `-I m4' to ACLOCAL_AMFLAGS in Makefile.am. > src/plugins/fts/Makefile.am:52: `pkglibexecdir' is not a legitimate directory for `SCRIPTS' > Makefile.am:24: `pkglibdir' is not a legitimate directory for `DATA' > autoreconf: automake failed with exit status: 1 > $ automake --version | head -1 > automake (GNU automake) 1.11.2 Looks like automake bug: http://old.nabble.com/Re%3A-Scripts-in-pkglibexecdir--p33070266.html From lee at standen.id.au Wed Jan 18 17:21:33 2012 From: lee at standen.id.au (Lee Standen) Date: Wed, 18 Jan 2012 23:21:33 +0800 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <1326894871.11500.45.camel@innu> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> Message-ID: Out of interest, has the NFS issue been tested on NFS4? My understanding is that NFS4 has a lot of fixes for the locking/caching problems that plague NFS3, and we were planning to use NFS4 from day one. If this hasn't been tested, is there some kind of load simulator that we could run to see if the issue does occur in our environment? 
On 18.01.2012 21:54, Timo Sirainen wrote: > On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote: > >> I've been desperately trying to find some comparative performance >> information about the different mailbox formats supported by Dovecot >> in >> order to make an assessment on which format is right for our >> environment. > > Unfortunately there aren't really any. Everyone who seems to switch > to > sdbox/mdbox usually also change their hardware at the same time, so > there aren't really any before/after metrics. I've of course some > unrealistic synthetic benchmarks, but I don't think they are very > useful. > > So, I would also be very interested in seeing some before/after > graphs > of disk IO, CPU and memory usage of Maildir -> dbox switch in same > hardware. > > Maildir is anyway definitely worse performance then sdbox or mdbox. > mdbox also uses less NFS operations, but I don't know how much faster > (if any) it is with Netapps. > >> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo >> Frames) >> >> * Postfix will feed new email to Dovecot via LMTP >> >> * Dovecot servers have been split based on their role >> >> - Dovecot LDA Servers (running LMTP protocol) >> >> - Dovecot POP/IMAP servers (running POP/IMAP protocols) > > You're going to run into NFS caching troubles with the above split > setup. I don't recommend it. You will see error messages about index > corruption with it, and with dbox it can cause metadata loss. > http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director > >> - LDA & POP/IMAP servers are segmented into geographically split >> groups >> (so no server sees every single mailbox) >> >> - Nginx proxy used to terminate customer connections, connections >> are >> redirected to the appropriate geographic servers > > Can the same mailbox still be accessed via multiple geographic > servers? > I've had some plans for doing this kind of access/replication using > dsync.. > >> * Apache Lucene indexes will be used to accelerate IMAP search for >> users > > Dovecot's fts-solr or fts-lucene? > >> Our closest current live configuration (Qmail SMTP, Courier IMAP, >> Maildir) >> has 600K mailboxes and pushes ~ 35,000 NFS operations per second at >> peak >> >> Some of the things I would like to know: >> >> * Are we likely to see a reduction in IOPS/User by using Maildir >> alone under >> Dovecot? > > If you have webmail type of clients, definitely. For > Outlook/Thunderbird > you should still see improvement, but not necessarily as much. > > You didn't mention POP3. That isn't Dovecot's strong point. Its > performance should be about the same as Courier-POP3, but could be > less > than QMail-POP3. Although if many of your POP3 users keep a lot of > mails > on server it > >> * If someone can give some technical reasoning behind why mdbox does >> less >> IOPS than Maildir? > > Maildir renames files a lot. From new/ -> to cur/ and then every time > message flag changes. That's why sdbox is faster. Why mdbox should be > faster than sdbox is because mdbox puts (or should put) more mail > data > physically closer in disks to make reading it faster. > >> I understand some of the reasons for the mdbox IOPS question, but I >> need >> some more information so we can discuss internally and make a >> decision as to >> whether we're comfortable going with mdbox from day one. We're very >> familiar with Maidlir, and there's just some uneasiness internally >> around >> going to a new mail storage format. 
>
> It's at least safer to first switch to Dovecot+Maildir to make sure that
> any problems you might find aren't related to the mailbox format..

From tss at iki.fi Wed Jan 18 17:28:36 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 17:28:36 +0200
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <7c413ccbddc8e25584311c55672a51e5@standen.id.au>
References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au>
Message-ID: <1326900516.11500.71.camel@innu>

On Wed, 2012-01-18 at 22:36 +0800, Lee Standen wrote:
> How about this... are there any tools available (that you know of) to
> capture real live customer POP3/IMAP traffic and replay it against a
> separate system? That might be a feasible option for doing a
> like-for-like comparison in our environment? We could probably get
> something in place to simulate the load if we can do something like
> that...

I've thought about that too before, but with IMAP traffic it doesn't
work very well. Even if the storages were 100% synchronized at startup,
the session states could easily become desynced. For example, if a
client does a NOOP at the same time that two mails are being delivered
to the mailbox, serverA might show only one of them while serverB would
show two of them, because it was executed a tiny bit later. All of the
client's future commands could then be affected by this desync.

(OK, I wrote the above thinking about a real-time system where you could
redirect the client's traffic to two systems, but basically the same
problems exist for offline replays too. Although it would be easier to
fix the replays to handle this.)

> > You're going to run into NFS caching troubles with the above split
> > setup. I don't recommend it. You will see error messages about index
> > corruption with it, and with dbox it can cause metadata loss.
> > http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director
>
> That might be the one thing (unfortunately) which prevents us from
> going with the dbox format. I understand the same issue can actually
> occur on Dovecot Maildir as well, but because Maildir works without
> these index files, we were willing to just go with it.

Are you planning on also redirecting POP3/IMAP connections somewhat
randomly to the different servers? I really don't recommend that, even
with Maildir.. Some of the errors will be user visible, even if no
actual data loss happens. Users may get disconnected, and sometimes
might have to clean their client's cache.

> I will raise it
> again, but there has been a lot of push back about introducing a single
> point of failure, even though this is a perceived one.

What is a single point of failure there?

> > It's at least safer to first switch to Dovecot+Maildir to make sure
> > that
> > any problems you might find aren't related to the mailbox format..
>
> Yep, I'm considering that. The flip side is that it's actually going
> to be difficult for us to change mail format once we've migrated into
> this system, but we have an opportunity for (literally) a month long
> testing phase beginning in Feb/March which will let us test as many
> possibilities as we can.

The mailbox format switching can be done one user at a time with zero
downtime with dsync.
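A rough sketch of that per-user switch, assuming the old mail_location
is still the live maildir, mdbox:~/mdbox is the target, and
jane.doe@example.com is a stand-in user (see the dsync wiki pages for
the authoritative procedure):

  # convert the user's mail to mdbox while the maildir is still live
  dsync -u jane.doe@example.com mirror mdbox:~/mdbox

  # after flipping that user's mail_location to mdbox:~/mdbox, mirror
  # once more against the old maildir to pick up anything delivered
  # during the switch
  dsync -u jane.doe@example.com mirror maildir:~/Maildir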
From tss at iki.fi Wed Jan 18 17:34:54 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 17:34:54 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> Message-ID: <1326900894.11500.74.camel@innu> On Wed, 2012-01-18 at 23:21 +0800, Lee Standen wrote: > Out of interest, has the NFS issue been tested on NFS4? My > understanding is that NFS4 has a lot of fixes for the locking/caching > problems that plague NFS3, and we were planning to use NFS4 from day > one. I've tried with Linux NFS4 server+client a few years ago. It seemed to have all the same caching problems as NFS3. > If this hasn't been tested, is there some kind of load simulator that > we could run to see if the issue does occur in our environment? http://imapwiki.org/ImapTest should easily trigger it. Just run it against two servers, both hammering the same mailbox. From tss at iki.fi Wed Jan 18 17:59:39 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 17:59:39 +0200 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): doveadm mailbox list -> Segmentation fault In-Reply-To: <4F12D069.9060102@localhost.localdomain.org> References: <4F12D069.9060102@localhost.localdomain.org> Message-ID: <1326902379.11500.81.camel@innu> On Sun, 2012-01-15 at 14:11 +0100, Pascal Volk wrote: > Core was generated by `doveadm mailbox list -u > jane.roe at example.com /*'. Finally fixed: http://hg.dovecot.org/dovecot-2.1/rev/99ea6da7dc99 From tss at iki.fi Wed Jan 18 18:04:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 18:04:49 +0200 Subject: [Dovecot] v2.x services documentation In-Reply-To: <92A86804-CEEE-4EB6-9EE7-FC8B7905AA2C@swing.be> References: <04D662E7-2A0A-448B-BA21-1E337A400CA6@iki.fi> <92A86804-CEEE-4EB6-9EE7-FC8B7905AA2C@swing.be> Message-ID: <1326902689.11500.82.camel@innu> On Sat, 2012-01-14 at 18:03 +0100, Axel Luttgens wrote: > Up to now, I only had the opportunity to quickly read the wiki page, and have a small question; one may read: > > process_min_avail > Minimum number of processes that always should be available to accept more client connections. For service_limit=1 processes this decreases the latency for handling new connections. For service_limit!=1 processes it could be set to the number of CPU cores on the system to balance the load among them. > > What's that service_limit setting? Thanks, fixed. Was supposed to be service_count. From eugene at raptor.kiev.ua Wed Jan 18 18:19:58 2012 From: eugene at raptor.kiev.ua (Eugene Paskevich) Date: Wed, 18 Jan 2012 18:19:58 +0200 Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1 In-Reply-To: <1326897258.11500.53.camel@innu> References: <4F155670.6010905@gmail.com> <1326897258.11500.53.camel@innu> Message-ID: On Wed, 18 Jan 2012 16:34:18 +0200, Timo Sirainen wrote: > On Tue, 2012-01-17 at 11:07 +0000, interfaSys s?rl wrote: >> Here is what I get when I try to compile the antispam plugin agaisnt >> Dovecot 2.1 >> >> ************** >> mailbox.c: In function 'antispam_save_begin': >> mailbox.c:138:12: error: 'struct mail_save_context' has no member named >> 'copying' > > The "copying" should be changed to "copying_via_save". Thank you, Timo. Would #if DOVECOT_IS_GE(2,1) suffice or do I need anything more specific? 
-- Eugene Paskevich | *==)----------- | Plug me into eugene at raptor.kiev.ua | -----------(==* | The Matrix From tss at iki.fi Wed Jan 18 18:31:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 18:31:49 +0200 Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1 In-Reply-To: References: <4F155670.6010905@gmail.com> <1326897258.11500.53.camel@innu> Message-ID: <1326904309.11500.83.camel@innu> On Wed, 2012-01-18 at 18:19 +0200, Eugene Paskevich wrote: > >> mailbox.c: In function 'antispam_save_begin': > >> mailbox.c:138:12: error: 'struct mail_save_context' has no member named > >> 'copying' > > > > The "copying" should be changed to "copying_via_save". > > Thank you, Timo. > Would #if DOVECOT_IS_GE(2,1) suffice or do I need anything more specific? Where do you expect to find such macro? ;) Hm. Perhaps I should try to add one. From eugene at raptor.kiev.ua Wed Jan 18 18:41:39 2012 From: eugene at raptor.kiev.ua (Eugene Paskevich) Date: Wed, 18 Jan 2012 18:41:39 +0200 Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1 In-Reply-To: <1326904309.11500.83.camel@innu> References: <4F155670.6010905@gmail.com> <1326897258.11500.53.camel@innu> <1326904309.11500.83.camel@innu> Message-ID: On Wed, 18 Jan 2012 18:31:49 +0200, Timo Sirainen wrote: > On Wed, 2012-01-18 at 18:19 +0200, Eugene Paskevich wrote: >> >> mailbox.c: In function 'antispam_save_begin': >> >> mailbox.c:138:12: error: 'struct mail_save_context' has no member >> named >> >> 'copying' >> > >> > The "copying" should be changed to "copying_via_save". >> >> Thank you, Timo. >> Would #if DOVECOT_IS_GE(2,1) suffice or do I need anything more >> specific? > > Where do you expect to find such macro? ;) Hm. Perhaps I should try to > add one. Heh. That's Johannes' package private macro... :) -- Eugene Paskevich | *==)----------- | Plug me into eugene at raptor.kiev.ua | -----------(==* | The Matrix From moseleymark at gmail.com Wed Jan 18 19:17:40 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Wed, 18 Jan 2012 09:17:40 -0800 Subject: [Dovecot] LMTP Logging In-Reply-To: <1326898378.11500.54.camel@innu> References: <1326898378.11500.54.camel@innu> Message-ID: On Wed, Jan 18, 2012 at 6:52 AM, Timo Sirainen wrote: > On Mon, 2012-01-16 at 17:17 -0800, Mark Moseley wrote: >> Just had a minor suggestion, with no clue how hard/easy it would be to >> implement: >> >> The %f flag in deliver_log_format seems to pick up the From: header, >> instead of the "MAIL FROM:<...>" arg. It'd be handy to have a %F that >> shows the "MAIL FROM" arg instead. I'm looking at tracking emails >> through logs from Exim to Dovecot easily. I know Message-ID can be >> used for correlation but it adds some complexity to searching, i.e. I >> can't just grep for the sender (as logged by Exim), unless I assume >> "MAIL FROM" always == From: > > Added to v2.1: http://hg.dovecot.org/dovecot-2.1/rev/7ee2cfbcae2e > http://hg.dovecot.org/dovecot-2.1/rev/08cc9d2a79e6 > > You're awesome, thanks! 
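With those changesets applied, the new escape slots straight into
deliver_log_format; a sketch, assuming %F expands to the envelope MAIL
FROM as the commits describe (the trailing %$ is the stock default):

  protocol lmtp {
    # log the message-id plus the envelope sender, so Exim and Dovecot
    # log lines can be joined on the same address
    deliver_log_format = msgid=%m, from=<%F>: %$
  }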
From moseleymark at gmail.com Wed Jan 18 19:54:15 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Wed, 18 Jan 2012 09:54:15 -0800 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> Message-ID: >>> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames) >>> >>> * Postfix will feed new email to Dovecot via LMTP >>> >>> * Dovecot servers have been split based on their role >>> >>> ?- Dovecot LDA Servers (running LMTP protocol) >>> >>> ?- Dovecot POP/IMAP servers (running POP/IMAP protocols) >> >> >> You're going to run into NFS caching troubles with the above split >> setup. I don't recommend it. You will see error messages about index >> corruption with it, and with dbox it can cause metadata loss. >> http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director > > > That might be the one thing (unfortunately) which prevents us from going > with the dbox format. ?I understand the same issue can actually occur on > Dovecot Maildir as well, but because Maildir works without these index > files, we were willing to just go with it. ?I will raise it again, but there > has been a lot of push back about introducing a single point of failure, > even though this is a perceived one. I'm in the middle of working on a Maildir->mdbox migration as well, and likewise, over NFS (all Netapps but moving to Sun), and likewise with split LDA and IMAP/POP servers (and both of those served out of pools). I was hoping doing things like setting "mail_nfs_index = yes" and "mmap_disable = yes" and "mail_fsync = always/optimized" would mitigate most of the risks of index corruption, as well as probably turning indexing off on the LDA side of things--i.e. all the suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not the case? Is there anything else (beyond moving to a director-based architecture) that can mitigate the risk of index corruption? In our case, incoming IMAP/POP are 'stuck' to servers based on IP persistence for a given amount of time, but incoming LDA is randomly distributed. From tss at iki.fi Wed Jan 18 19:58:31 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 19:58:31 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> Message-ID: <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> On 18.1.2012, at 19.54, Mark Moseley wrote: > I'm in the middle of working on a Maildir->mdbox migration as well, > and likewise, over NFS (all Netapps but moving to Sun), and likewise > with split LDA and IMAP/POP servers (and both of those served out of > pools). I was hoping doing things like setting "mail_nfs_index = yes" > and "mmap_disable = yes" and "mail_fsync = always/optimized" would > mitigate most of the risks of index corruption, They help, but aren't 100% effective and they also make the performance worse. > as well as probably > turning indexing off on the LDA side of things You can't turn off indexing with dbox. > --i.e. all the > suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not > the case? Is there anything else (beyond moving to a director-based > architecture) that can mitigate the risk of index corruption? In our > case, incoming IMAP/POP are 'stuck' to servers based on IP persistence > for a given amount of time, but incoming LDA is randomly distributed. What's the problem with director-based architecture? 
From buchholz at easystreet.net Wed Jan 18 20:25:40 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Wed, 18 Jan 2012 10:25:40 -0800 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <1326889380.11500.16.camel@innu> References: <4F14A7AA.8010507@easystreet.net> <1326889380.11500.16.camel@innu> Message-ID: <4F170EA4.20909@easystreet.net> Timo Sirainen wrote: > On Mon, 2012-01-16 at 14:41 -0800, Don Buchholz wrote: > >> I've been having some problems with IMAP user connections to the Dovecot >> (v2.0.8) server. The following message is being logged. >> >> Jan 16 10:51:36 postal dovecot: master: Warning: >> service(imap-login): process_limit reached, client connections are >> being dropped >> > > Maybe this will help some in future: > http://hg.dovecot.org/dovecot-2.1/rev/a4e61c99c7eb > > The new error message is: > > service(imap-login): process_limit (100) reached, client connections are being dropped > Great idea! Thanks, Timo. - Don From janfrode at tanso.net Wed Jan 18 20:51:38 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 18 Jan 2012 19:51:38 +0100 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> Message-ID: <20120118185137.GA21945@dibs.tanso.net> On Wed, Jan 18, 2012 at 07:58:31PM +0200, Timo Sirainen wrote: > > > --i.e. all the > > suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not > > the case? Is there anything else (beyond moving to a director-based > > architecture) that can mitigate the risk of index corruption? In our > > case, incoming IMAP/POP are 'stuck' to servers based on IP persistence > > for a given amount of time, but incoming LDA is randomly distributed. > > What's the problem with director-based architecture? It hasn't been working reliably for lmtp in v2.0. To quote yourself: ----8<----8<----8<-----8<-----8<-----8<----8<-----8<----8<----8<-- I think the way I originally planned LMTP proxying to work is simply too complex to work reliably, perhaps even if the code was bug-free. So instead of reading+writing DATA at the same time, this patch changes the DATA to be first read into memory or temp file, and then from there read and sent to the LMTP backends: http://hg.dovecot.org/dovecot-2.1/raw-rev/51d87deb5c26 ----8<----8<----8<-----8<-----8<-----8<----8<-----8<----8<----8<-- unfortunately I haven't tested that patch, so I have no idea if it fixed the issues or not... -jf From tss at iki.fi Wed Jan 18 21:03:18 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 21:03:18 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <20120118185137.GA21945@dibs.tanso.net> References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> <20120118185137.GA21945@dibs.tanso.net> Message-ID: <23FFD99C-7D70-40BE-A4F3-FD259FFC62E9@iki.fi> On 18.1.2012, at 20.51, Jan-Frode Myklebust wrote: > On Wed, Jan 18, 2012 at 07:58:31PM +0200, Timo Sirainen wrote: >> >>> --i.e. all the >>> suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not >>> the case? Is there anything else (beyond moving to a director-based >>> architecture) that can mitigate the risk of index corruption? In our >>> case, incoming IMAP/POP are 'stuck' to servers based on IP persistence >>> for a given amount of time, but incoming LDA is randomly distributed. 
>> >> What's the problem with director-based architecture? > > It hasn't been working reliably for lmtp in v2.0. Yes, besides that :) > To quote yourself: > > ----8<----8<----8<-----8<-----8<-----8<----8<-----8<----8<----8<-- > > I think the way I originally planned LMTP proxying to work is simply too > complex to work reliably, perhaps even if the code was bug-free. So > instead of reading+writing DATA at the same time, this patch changes the > DATA to be first read into memory or temp file, and then from there read > and sent to the LMTP backends: > > http://hg.dovecot.org/dovecot-2.1/raw-rev/51d87deb5c26 > > ----8<----8<----8<-----8<-----8<-----8<----8<-----8<----8<----8<-- > > unfortunately I haven't tested that patch, so I have no idea if it > fixed the issues or not... I'm not sure if that patch is useful or not. The important patch to fix it is http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c From moseleymark at gmail.com Wed Jan 18 21:49:59 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Wed, 18 Jan 2012 11:49:59 -0800 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> Message-ID: On Wed, Jan 18, 2012 at 9:58 AM, Timo Sirainen wrote: > On 18.1.2012, at 19.54, Mark Moseley wrote: > >> I'm in the middle of working on a Maildir->mdbox migration as well, >> and likewise, over NFS (all Netapps but moving to Sun), and likewise >> with split LDA and IMAP/POP servers (and both of those served out of >> pools). I was hoping doing things like setting "mail_nfs_index = yes" >> and "mmap_disable = yes" and "mail_fsync = always/optimized" would >> mitigate most of the risks of index corruption, > > They help, but aren't 100% effective and they also make the performance worse. In testing, it seemed very much like the benefits of reducing IOPS by up to a couple orders of magnitude outweighed having to use those settings. Both in scripted testing and just using a mail UI, with the NFS-ish settings, I didn't notice any lag and doing things like checking a good-sized mailbox were at least as quick as Maildir. And I'm hoping that reducing IOPS across the entire set of NFS servers will compound the benefits quite a bit. >> as well as probably >> turning indexing off on the LDA side of things > > You can't turn off indexing with dbox. Ah, too bad. I was hoping I could get away with the LDA not updating the index but just dropping the message into storage/m.# but it'd still be seen on the IMAP/POP side--but hadn't tested that. Guess that's not the case. >> --i.e. all the >> suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not >> the case? Is there anything else (beyond moving to a director-based >> architecture) that can mitigate the risk of index corruption? In our >> case, incoming IMAP/POP are 'stuck' to servers based on IP persistence >> for a given amount of time, but incoming LDA is randomly distributed. > > What's the problem with director-based architecture? Nothing, per se. It's just that migrating to mdbox *and* to a director architecture is quite a bit more added complexity than simply migrating to mdbox alone. Hopefully, I'm not hijacking this thread. This seems pretty pertinent as well to the OP. 
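Since a director-based architecture is the recurring recommendation in this thread, here is a rough sketch of a minimal v2.0 director front end, reconstructed from the wiki's example (the addresses are placeholders; verify against http://wiki2.dovecot.org/Director before deploying):

# on the director/proxy hosts
director_servers = 192.168.10.1 192.168.10.2
director_mail_servers = 192.168.20.1-192.168.20.10
director_user_expire = 15 min

service imap-login {
  executable = imap-login director
}
service pop3-login {
  executable = pop3-login director
}
service director {
  unix_listener login/director {
    mode = 0666
  }
  fifo_listener login/proxy-notify {
    mode = 0666
  }
  unix_listener director-userdb {
  }
  inet_listener {
    port = 9090
  }
}
protocol lmtp {
  # route LMTP deliveries through the same user->backend mapping
  auth_socket_path = director-userdb
}

The effect is that IMAP, POP3 and LMTP all reach a given user's mailbox through the same backend, which is exactly what removes the index-cache collisions being discussed.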
From janfrode at tanso.net Wed Jan 18 22:14:37 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 18 Jan 2012 21:14:37 +0100 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <23FFD99C-7D70-40BE-A4F3-FD259FFC62E9@iki.fi> References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> <20120118185137.GA21945@dibs.tanso.net> <23FFD99C-7D70-40BE-A4F3-FD259FFC62E9@iki.fi> Message-ID: <20120118201437.GA23070@dibs.tanso.net> On Wed, Jan 18, 2012 at 09:03:18PM +0200, Timo Sirainen wrote: > On 18.1.2012, at 20.51, Jan-Frode Myklebust wrote: > > >> What's the problem with director-based architecture? > > > > It hasn't been working reliably for lmtp in v2.0. > > Yes, besides that :) Besides that it's great! > > unfortunately I haven't tested that patch, so I have no idea if it > > fixed the issues or not... > > I'm not sure if that patch is useful or not. The important patch to fix it is http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c So with that oneliner on our directors, you expect lmtp proxying through director to be better than lmtp to rr-dns towards backend servers? If so, I guess we should give it another try. -jf From tss at iki.fi Wed Jan 18 22:26:31 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 22:26:31 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <20120118201437.GA23070@dibs.tanso.net> References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> <20120118185137.GA21945@dibs.tanso.net> <23FFD99C-7D70-40BE-A4F3-FD259FFC62E9@iki.fi> <20120118201437.GA23070@dibs.tanso.net> Message-ID: <956410A8-290E-408A-B85A-5AD46F5CDB70@iki.fi> On 18.1.2012, at 22.14, Jan-Frode Myklebust wrote: >>> unfortunately I haven't tested that patch, so I have no idea if it >>> fixed the issues or not... >> >> I'm not sure if that patch is useful or not. The important patch to fix it is http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c > > So with that oneliner on our directors, you expect lmtp proxying through > director to be better than lmtp to rr-dns towards backend servers? If so, > I guess we should give it another try. It should fix the hangs that were common. I'm not sure if it fixes everything without the complexity reduction patch. From admin at opsys.de Wed Jan 18 22:41:02 2012 From: admin at opsys.de (Markus Fritz) Date: Wed, 18 Jan 2012 21:41:02 +0100 Subject: [Dovecot] Quota won't work Message-ID: I am trying to set up quotas. I installed the newest Dovecot version, patched it, and started it.
dovecot -n: # 1.2.15: /etc/dovecot/dovecot.conf # OS: Linux 2.6.32-5-amd64 x86_64 Debian 6.0.3 ext4 log_timestamp: %Y-%m-%d %H:%M:%S protocols: imap imaps pop3 pop3s ssl_listen: 143 ssl_cipher_list: ALL:!LOW:!SSLv2 disable_plaintext_auth: no login_dir: /var/run/dovecot/login login_executable(default): /usr/lib/dovecot/imap-login login_executable(imap): /usr/lib/dovecot/imap-login login_executable(pop3): /usr/lib/dovecot/pop3-login mail_privileged_group: mail mail_location: maildir:/var/vmail/%d/%n/Maildir mbox_write_locks: fcntl dotlock mail_executable(default): /usr/lib/dovecot/imap mail_executable(imap): /usr/lib/dovecot/imap mail_executable(pop3): /usr/lib/dovecot/pop3 mail_plugins(default): quota imap_quota mail_plugins(imap): quota imap_quota mail_plugins(pop3): quota mail_plugin_dir(default): /usr/lib/dovecot/modules/imap mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3 namespace: type: private inbox: yes list: yes subscriptions: yes lda: postmaster_address: postmaster at opsys.de mail_plugins: sieve quota log_path: auth default: mechanisms: plain login verbose: yes passdb: driver: sql args: /etc/dovecot/dovecot-sql.conf userdb: driver: static args: uid=5000 gid=5000 home=/var/vmail/%d/%n/Maildir allow_all_users=yes socket: type: listen client: path: /var/spool/postfix/private/auth mode: 432 user: postfix group: postfix master: path: /var/run/dovecot/auth-master mode: 384 user: vmail /etc/dovecot/dovecot-sql.conf: driver = mysql connect = host=127.0.0.1 dbname=mailserver user=mailuser password=****** default_pass_scheme = PLAIN-MD5 password_query = SELECT email as user, password FROM virtual_users WHERE email='%u'; user_query = SELECT CONCAT('/var/mail/', maildir) AS home, CONCAT('*:bytes=', quota) AS quota_rule \ FROM virtual_users WHERE email='%u' virtual_users has this: CREATE TABLE IF NOT EXISTS `virtual_users` ( `id` int(11) NOT NULL AUTO_INCREMENT, `domain_id` int(11) NOT NULL, `password` varchar(32) NOT NULL, `email` varchar(100) NOT NULL, `quota` int(11) NOT NULL DEFAULT '629145600', PRIMARY KEY (`id`), UNIQUE KEY `email` (`email`), KEY `domain_id` (`domain_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; Also postfix is configured with this (not the whole config): virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf virtual_mailbox_limit_inbox = no virtual_mailbox_limit_maps = mysql:/etc/postfix/mysql-quota.cf virtual_mailbox_limit_override = yes virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf virtual_maildir_extended = yes virtual_maildir_limit_message = "The user you are trying to reach is over quota." virtual_maildir_limit_message_maps = mail:/etc/postfix/mysql-quota.cf virtual_overquota_bounce = yes /etc/postfix/mysql-quota.cf: user = mailuser password = ****** hosts = 127.0.0.1 dbname = mailserver query = SELECT quota FROM virtual_users WHERE email='%s' I changed the quota of my mail account to 40, so 40 bytes should be the maximum. My account is now at a size of 600 KB. I still receive mails, and they are saved without errors. /var/log/mail.log says nothing about quota, just normal receive and store entries. What do I need to fix?
-- Markus Fritz Administration opsys.de From tss at iki.fi Wed Jan 18 23:05:40 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 23:05:40 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> Message-ID: <233EA3FE-D978-4A62-AEE7-4E908AE83935@iki.fi> On 18.1.2012, at 21.49, Mark Moseley wrote: >> What's the problem with director-based architecture? > > Nothing, per se. It's just that migrating to mdbox *and* to a director > architecture is quite a bit more added complexity than simply > migrating to mdbox alone. Yes, I agree it's safer to do one thing at a time. That's why I'd do a switch to director first. :) From tss at iki.fi Wed Jan 18 23:07:42 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 23:07:42 +0200 Subject: [Dovecot] Quota won't work In-Reply-To: References: Message-ID: <40CE1ECA-D884-4127-862E-A6733B685594@iki.fi> On 18.1.2012, at 22.41, Markus Fritz wrote: > passdb: > driver: sql > args: /etc/dovecot/dovecot-sql.conf > userdb: > driver: static > args: uid=5000 gid=5000 home=/var/vmail/%d/%n/Maildir allow_all_users=yes You use sql as passdb, static as userdb. > password_query = SELECT email as user, password FROM virtual_users WHERE email='%u'; passdb sql executes password_query. > user_query = SELECT CONCAT('/var/mail/', maildir) AS home, CONCAT('*:bytes=', quota) AS quota_rule \ > FROM virtual_users WHERE email='%u' userdb sql executes user_query. But you're not using userdb sql, you're using userdb static. This query never gets executed. Also you don't have a plugin { quota } setting.
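To make Timo's two observations concrete, here is a minimal sketch of the corrected pieces in v1.2 configuration syntax (illustrative values; the per-user quota_rule then comes from the user_query above):

# switch userdb from static to sql so that user_query actually runs:
userdb sql {
  args = /etc/dovecot/dovecot-sql.conf
}
# and give the quota plugin a backend to enforce:
plugin {
  quota = maildir
  # fallback rule, overridden by the quota_rule returned from SQL
  quota_rule = *:storage=600M
}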
From Juergen.Obermann at hrz.uni-giessen.de Wed Jan 18 23:40:17 2012 From: Juergen.Obermann at hrz.uni-giessen.de (=?UTF-8?Q?J=C3=BCrgen_Obermann?=) Date: Wed, 18 Jan 2012 22:40:17 +0100 Subject: [Dovecot] Panic: file mbox-sync.c: line 1348: assertion failed In-Reply-To: <20120110163207.182538xtgzoxjg8w@webmail.hrz.uni-giessen.de> References: <20120110163207.182538xtgzoxjg8w@webmail.hrz.uni-giessen.de> Message-ID: <1460d9f2fc09b7f8f0d607cb5a86e01b@imapproxy.hrz> Am 10.01.2012 16:32, schrieb Jürgen Obermann: > > I have the following problem with doveadm: > > # gdb --args /opt/local/bin/doveadm -v mailbox status -u > userxy/g029 'messages' "Software-alle/AK-Software-Tagung" > GNU gdb 5.3 > Copyright 2002 Free Software Foundation, Inc. > GDB is free software, covered by the GNU General Public License, and > you are > welcome to change it and/or distribute copies of it under certain > conditions. > Type "show copying" to see the conditions. > There is absolutely no warranty for GDB. Type "show warranty" for > details. > This GDB was configured as "sparc-sun-solaris2.8"... > (gdb) run > Starting program: /opt/local/bin/doveadm -v mailbox status -u g029 > messages Software-alle/AK-Software-Tagung > warning: Lowest section in /lib/libthread.so.1 is .dynamic at > 00000074 > warning: Lowest section in /lib/libdl.so.1 is .hash at 000000b4 > doveadm(g029): Panic: file mbox-sync.c: line 1348: assertion failed: > (file_size >= sync_ctx->expunged_space + trailer_size) > doveadm(g029): Error: Raw backtrace: 0xff1cbc30 -> 0xff319544 -> > 0xff319fa8 -> 0xff31add8 -> 0xff31b278 -> 0xff2a69b0 -> 0xff2a6bac -> > 0x16808 -> 0x1b8fc -> 0x16ba0 -> 0x177cc -> 0x17944 -> 0x17a50 -> > 0x204e8 -> 0x165c8 > > Program received signal SIGABRT, Aborted. Hello, the problem went away after I deleted the Dovecot index files for the mailbox. Greetings, Jürgen Obermann Hochschulrechenzentrum der Justus-Liebig-Universität Gießen Heinrich-Buff-Ring 44 Tel. 0641-9913054 From stan at hardwarefreak.com Thu Jan 19 06:39:04 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Wed, 18 Jan 2012 22:39:04 -0600 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <1326894871.11500.45.camel@innu> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> Message-ID: <4F179E68.5020408@hardwarefreak.com> On 1/18/2012 7:54 AM, Timo Sirainen wrote: > On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote: >> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames) >> >> * Postfix will feed new email to Dovecot via LMTP >> >> * Dovecot servers have been split based on their role >> >> - Dovecot LDA Servers (running LMTP protocol) >> >> - Dovecot POP/IMAP servers (running POP/IMAP protocols) > > You're going to run into NFS caching troubles with the above split > setup. I don't recommend it. You will see error messages about index > corruption with it, and with dbox it can cause metadata loss. > http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director Would it be possible to fix this NFS mdbox index corruption issue in this split scenario by using a dual namespace and disabling indexing on the INBOX? The goal being no index file collisions between LDA and imap processes. Maybe something like: namespace { separator = / prefix = "#mbox/" location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY inbox = yes hidden = yes list = no } namespace { separator = / prefix = location = mdbox:~/mdbox } Client access to new mail might be a little slower, but if it eliminates the index corruption issue and allows the split architecture, it may be a viable option. -- Stan From ebroch at whitehorsetc.com Thu Jan 19 08:48:29 2012 From: ebroch at whitehorsetc.com (Eric Broch) Date: Wed, 18 Jan 2012 23:48:29 -0700 Subject: [Dovecot] shared folder files not displaying in thunderbird Message-ID: <4F17BCBD.3020802@whitehorsetc.com> Can anyone help me figure out why email in a sub-folder (created using Thunderbird) of a dovecot namespace will not display in Thunderbird? ... Hello, I have dovecot installed with the configuration below. One of the subfolders created (using the email client) under the '/home/vpopmail/domains/mydomain.com/shared/projects' share no longer displays the files located in it (it used to). There are about 150 folders under the '/home/vpopmail/domains/mydomain.com/shared/projects' share, all of which display the files located in them; the one mentioned used to display its contents but no longer does. What would be the reason that one folder would no longer display existing files in the email client (Thunderbird) while the other folders do? And how do I fix this? I've already tried unsubscribing and resubscribing the folder. This did not work. Would it now be simply a matter of unsubscribing the folder, deleting the dovecot files, and resubscribing to the folder? Eric # 2.0.11: /etc/dovecot/dovecot.conf # OS: Linux 2.6.18-238.19.1.el5 i686 CentOS release 5.7 (Final) auth_cache_size = 32 M auth_mechanisms = plain login digest-md5 cram-md5 auth_username_format = %Lu disable_plaintext_auth = no first_valid_uid = 89 log_path = /var/log/dovecot.log login_greeting = Dovecot toaster ready. namespace { inbox = yes location = prefix = INBOX. separator = . type = private } namespace { location = maildir:/home/vpopmail/domains/mydomain.com/shared/projects prefix = projects.
separator = . type = public } passdb { args = cache_key=%u webmail=127.0.0.1 driver = vpopmail } plugin/quota = maildir protocols = imap ssl_cert = Hi, I want to send mails directly into a public folder. If I send an email via my local Postfix, the mail is handled as a normal private mail: Dovecot creates a mailbox in the private namespace and does not use the mailbox in the public one. I hope you can help me with my little problem. Here is some information about my configuration: [root at imap1 etc]# ls -la /var/dovecot/imap/public/ insgesamt 16 drwxr-x--- 3 vmail vmail 4096 19. Jan 10:12 . drwxr-x--- 5 vmail vmail 4096 18. Jan 08:41 .. -rw-r----- 1 vmail vmail 0 19. Jan 10:11 dovecot-acl-list -rw-r----- 1 vmail vmail 8 19. Jan 10:12 dovecot-uidvalidity -r--r--r-- 1 vmail vmail 0 19. Jan 10:12 dovecot-uidvalidity.4f17de84 drwx------ 5 vmail vmail 4096 19. Jan 10:12 .hrztest and here is my configuration: # 2.0.9: /etc/dovecot/dovecot.conf # OS: Linux 2.6.32-220.2.1.el6.i686 i686 Red Hat Enterprise Linux Server release 6.2 (Santiago) auth_username_format = %Ln disable_plaintext_auth = no login_greeting = Dovecot IMAP der Jade Hochschule. mail_access_groups = vmail mail_debug = yes mail_gid = vmail mail_plugins = quota acl mail_uid = vmail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date imapflags notify mbox_write_locks = fcntl namespace { inbox = yes location = maildir:/var/dovecot/imap/%1n/%n prefix = separator = / type = private } namespace { list = children location = maildir:/var/dovecot/imap/public/ prefix = public/ separator = / subscriptions = no type = public } passdb { args = /etc/dovecot/dovecot-ldap.conf driver = ldap } passdb { driver = pam } plugin { acl = vfile acl_shared_dict = file:/var/lib/dovecot/shared-mailboxes mail_log_fields = uid box msgid size quota = dict:user::file:/var/dovecot/imap/%1n/%n/dovecot-quota quota_rule = *:storage=50MB quota_rule2 = Trash:storage=+10% sieve = /var/dovecot/imap/%1n/%n/.dovecot.sieve sieve_dir = /var/dovecot/imap/%1n/%n/sieve sieve_extensions = +notify +imapflags sieve_quota_max_scripts = 2 } postmaster_address = postmaster at jade-hs.de protocols = imap pop3 lmtp sieve service lmtp { unix_listener /var/spool/postfix/private/dovecot-lmtp { group = postfix mode = 0660 user = postfix } } service managesieve-login { inet_listener sieve { port = 4190 } } ssl_cert = References: <1BCAD28D-8120-45C9-BAA2-B6597C34545A@apple.com> <09EF3E7A-15A2-45EE-91BD-6EEFD1FD8049@iki.fi> Message-ID: <1326981545.11500.86.camel@innu> On Thu, 2012-01-12 at 22:20 +0200, Timo Sirainen wrote: > On 12.1.2012, at 1.09, Mike Abbott wrote: > > In 2.0.17 you increased LOGIN_MAX_INBUF_SIZE from 1024 to 4096. > > Should you also have increased MASTER_AUTH_MAX_DATA_SIZE from (1024*2) to (4096*2)? > > /* This should be kept in sync with LOGIN_MAX_INBUF_SIZE. Multiply it by two > > to make sure there's space to transfer the command tag */ > > Well, yes.. Although I'd rather not do that. > > 1. Command tag length needs to be restricted to something reasonable, maybe 100 chars, so it won't have to be multiplied by 2 but just added the 100 (+1 for NUL). > > 2. Maybe I can change the LOGIN_MAX_INBUF_SIZE back to its original size and change the AUTHENTICATE command handling to read the SASL initial response to a separate buffer. > > I'll try doing those next week.
http://hg.dovecot.org/dovecot-2.1/rev/b86f7dd170c6 does this. From moseleymark at gmail.com Thu Jan 19 19:08:06 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Thu, 19 Jan 2012 09:08:06 -0800 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <4F179E68.5020408@hardwarefreak.com> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> Message-ID: On Wed, Jan 18, 2012 at 8:39 PM, Stan Hoeppner wrote: > On 1/18/2012 7:54 AM, Timo Sirainen wrote: >> On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote: > >>> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames) >>> >>> * Postfix will feed new email to Dovecot via LMTP >>> >>> * Dovecot servers have been split based on their role >>> >>> ? - Dovecot LDA Servers (running LMTP protocol) >>> >>> ? - Dovecot POP/IMAP servers (running POP/IMAP protocols) >> >> You're going to run into NFS caching troubles with the above split >> setup. I don't recommend it. You will see error messages about index >> corruption with it, and with dbox it can cause metadata loss. >> http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director > > Would it be possible to fix this NFS mdbox index corruption issue in > this split scenario by using a dual namespace and disabling indexing on > the INBOX? ?The goal being no index file collisions between LDA and imap > processes. ?Maybe something like: > > namespace { > ?separator = / > ?prefix = "#mbox/" > ?location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY > ?inbox = yes > ?hidden = yes > ?list = no > } > namespace { > ?separator = / > ?prefix = > ?location = mdbox:~/mdbox > } > > Client access to new mail might be a little slower, but if it eliminates > the index corruption issue and allows the split architecture, it may be > a viable option. > > -- > Stan It could be that I botched my test up somehow, but when I tested something similar yesterday (pointing the index at another location on the LDA), it didn't work. I was sending from the LDA server and confirmed that the messages made it to storage/m.# but without the real indexes being updated. When I checked the mailbox via IMAP, it never seemed to register that there was a message there, so I'm guessing that dovecot never looks at the storage files but just relies on the indexes to be correct. That sound right, Timo? From rob0 at gmx.co.uk Thu Jan 19 19:37:15 2012 From: rob0 at gmx.co.uk (/dev/rob0) Date: Thu, 19 Jan 2012 11:37:15 -0600 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F14BF4B.5060804@wildgooses.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F04FAA9.3020908@localhost.localdomain.org> <4F14BF4B.5060804@wildgooses.com> Message-ID: <20120119173715.GD14195@harrier.slackbuilds.org> On Tue, Jan 17, 2012 at 12:22:35AM +0000, Ed W wrote: > Note I personally believe there are valid reasons to store > plaintext passwords - this seems to cause huge criticism due to > the ensuing disaster which can happen if the database is pinched, > but it does allow for enhanced security in the password exchange, > so ultimately it depends on where your biggest risk lies... Exactly. In any security decision, consider the threat model first. There are too many kneejerk "secure" ideas in circulation. 
-- http://rob0.nodns4.us/ -- system administration and consulting Offlist GMX mail is seen only if "/dev/rob0" is in the Subject: From tss at iki.fi Thu Jan 19 21:18:00 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 19 Jan 2012 21:18:00 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <4F179E68.5020408@hardwarefreak.com> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> Message-ID: <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> On 19.1.2012, at 6.39, Stan Hoeppner wrote: >> You're going to run into NFS caching troubles with the above split >> setup. I don't recommend it. You will see error messages about index >> corruption with it, and with dbox it can cause metadata loss. >> http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director > > Would it be possible to fix this NFS mdbox index corruption issue in > this split scenario by using a dual namespace and disabling indexing on > the INBOX? The goal being no index file collisions between LDA and imap > processes. Maybe something like: > > namespace { > separator = / > prefix = "#mbox/" > location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY > inbox = yes > hidden = yes > list = no > } > namespace { > separator = / > prefix = > location = mdbox:~/mdbox > } > > Client access to new mail might be a little slower, but if it eliminates > the index corruption issue and allows the split architecture, it may be > a viable option. That assumes that mails are only being delivered to INBOX (i.e. no Sieve or +mailbox addressing). I suppose you could do that if you can live with that limitation. Slightly better for performance would be to not actually keep INBOX mails in mbox format but use snarf plugin to move them to mdbox. And of course the above still requires that for imap/pop3 access the user is redirected to the same server every time. I don't really see it helping much. From tss at iki.fi Thu Jan 19 21:21:20 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 19 Jan 2012 21:21:20 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> Message-ID: <2DF8A9C6-EE59-4557-A1AE-4E4D2BC91C93@iki.fi> On 19.1.2012, at 19.08, Mark Moseley wrote: >> namespace { >> separator = / >> prefix = "#mbox/" >> location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY >> inbox = yes >> hidden = yes >> list = no >> } >> >> Client access to new mail might be a little slower, but if it eliminates >> the index corruption issue and allows the split architecture, it may be >> a viable option. >> >> -- >> Stan > > It could be that I botched my test up somehow, but when I tested > something similar yesterday (pointing the index at another location on > the LDA), it didn't work. Note that Stan used mbox format for INBOX, not mdbox. > I was sending from the LDA server and > confirmed that the messages made it to storage/m.# but without the > real indexes being updated. When I checked the mailbox via IMAP, it > never seemed to register that there was a message there, so I'm > guessing that dovecot never looks at the storage files but just relies > on the indexes to be correct. That sound right, Timo? Correct. dbox absolutely relies on index files always being up to date. 
In some error situations it can figure out that it should do an index rebuild and then it finds any missing mails, but in normal situations it doesn't even try, because that would unnecessarily waste disk IO. (And there's of course doveadm force-resync to force it.) From tss at iki.fi Thu Jan 19 21:25:38 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 19 Jan 2012 21:25:38 +0200 Subject: [Dovecot] shared folder files not displaying in thunderbird In-Reply-To: <4F16D52F.2040907@whitehorsetc.com> References: <4F16D52F.2040907@whitehorsetc.com> Message-ID: <69E3CE17-A92B-48A4-8A56-F16EE6450898@iki.fi> On 18.1.2012, at 16.20, Eric Broch wrote: > I have dovecot installed with the configuration below. > One of the subfolders created (using the email client) under the > '/home/vpopmail/domains/mydomain.com/shared/projects' share no longer > (it used to) displays the files located in it. There are about 150 > folders under the '/home/vpopmail/domains/mydomain.com/shared/projects' > share all of which display the files located in them, the one mentioned > used to display the contents but no longer does. > > What would be the reason that one folder would no longer display > existing files in the email client (Thunderbird) and the other folders > would? And, how do I fix this? So the folder itself exists, but it just appears empty? Have you tried with another IMAP client? Have you checked if the files are actually still there in the maildir? You can check if this is a server problem or a client problem by running: doveadm fetch -u user at domain uid mailbox project.missing.sub.folder all If the output is empty, then Dovecot doesn't see any mails in there (check if there are any files in the maildir). If it outputs something, then the client's local cache is broken and you need to tell the client to do a resync. > Would it now be simply a matter of unsubscribing the folder, deleting > the dovecot files, and resubscribing to the folder? Subscriptions won't matter. Deleting Dovecot's files may emulate the client's cache flush because it changes IMAP UIDVALIDITY. From tss at iki.fi Thu Jan 19 21:31:57 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 19 Jan 2012 21:31:57 +0200 Subject: [Dovecot] Problems sending email direct into publich folders In-Reply-To: References: Message-ID: <0D641C8A-B7E5-464F-9BFC-3A256ED4C615@iki.fi> On 19.1.2012, at 14.02, Bohlken, Henning wrote: > i want to send mails direct into a public folder. If i send an email via my local postfix the mail will be handled as a normal private mail. Dovecot does create a mailbox in the private Namespace and do use not the mailbox in public one. Depends on how you want to do this.. For example all mails intended to be put to public namespace could be sent to a "publicuser" named user, which has write permissions to the public namespace. Then you'll simply create a sieve script for the publicuser which redirects the mails to the wanted folder (e.g. fileinto "public/hrztest"). From ebroch at whitehorsetc.com Thu Jan 19 23:03:58 2012 From: ebroch at whitehorsetc.com (Eric Broch) Date: Thu, 19 Jan 2012 14:03:58 -0700 Subject: [Dovecot] shared folder files not displaying in thunderbird In-Reply-To: <69E3CE17-A92B-48A4-8A56-F16EE6450898@iki.fi> References: <4F16D52F.2040907@whitehorsetc.com> <69E3CE17-A92B-48A4-8A56-F16EE6450898@iki.fi> Message-ID: <4F18853E.5020003@whitehorsetc.com> Timo, > So the folder itself exists, but it just appears empty? Yes. > Have you tried with another IMAP client? 
Yes, both Outlook and Thunderbird > Have you checked if the files are actually still there in the maildir? I've done a list (ls -la) of the directory where the files reside (path.to.share.sub.dir/cur). They exist. > You can check if this is a server problem or a client problem by running: doveadm fetch -u user at domain uid mailbox project.missing.sub.folder all I did this per your instructions and there is no output. So, email exists in the share, and it does not show up in Thunderbird, Outlook, or using doveadm. Eric From tss at iki.fi Thu Jan 19 23:29:34 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 19 Jan 2012 23:29:34 +0200 Subject: [Dovecot] shared folder files not displaying in thunderbird In-Reply-To: <4F18853E.5020003@whitehorsetc.com> References: <4F16D52F.2040907@whitehorsetc.com> <69E3CE17-A92B-48A4-8A56-F16EE6450898@iki.fi> <4F18853E.5020003@whitehorsetc.com> Message-ID: <489C9E80-1E22-4C18-BC08-2F869592CFD6@iki.fi> On 19.1.2012, at 23.03, Eric Broch wrote: >> Have you checked if the files are actually still there in the maildir? > I've done a list (ls -la) of the directory where the files reside > (path.to.share.sub.dir/cur). They exist. >> You can check if this is a server problem or a client problem by running: doveadm fetch -u user at domain uid mailbox project.missing.sub.folder all > I did this per your instructions and there is no output. Try "touch path.to/cur" and the doveadm fetch again. Does it help? If not, there's some kind of a mismatch between what you think is happening in Dovecot and what is happening in filesystem. I'd like to know the exact full path and the mailbox name then. (Or you could run doveadm through strace and see if it's accessing the intended directory.) From stan at hardwarefreak.com Fri Jan 20 01:51:06 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Thu, 19 Jan 2012 17:51:06 -0600 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> Message-ID: <4F18AC6A.4050508@hardwarefreak.com> On 1/19/2012 1:18 PM, Timo Sirainen wrote: > On 19.1.2012, at 6.39, Stan Hoeppner wrote: > >>> You're going to run into NFS caching troubles with the above split >>> setup. I don't recommend it. You will see error messages about index >>> corruption with it, and with dbox it can cause metadata loss. >>> http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director >> >> Would it be possible to fix this NFS mdbox index corruption issue in >> this split scenario by using a dual namespace and disabling indexing on >> the INBOX? The goal being no index file collisions between LDA and imap >> processes. Maybe something like: >> >> namespace { >> separator = / >> prefix = "#mbox/" >> location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY >> inbox = yes >> hidden = yes >> list = no >> } >> namespace { >> separator = / >> prefix = >> location = mdbox:~/mdbox >> } >> >> Client access to new mail might be a little slower, but if it eliminates >> the index corruption issue and allows the split architecture, it may be >> a viable option. > > That assumes that mails are only being delivered to INBOX (i.e. no Sieve or +mailbox addressing). I suppose you could do that if you can live with that limitation. 
Slightly better for performance would be to not actually keep INBOX mails in mbox format but use the snarf plugin to move them to mdbox. > > And of course the above still requires that for imap/pop3 access the user is redirected to the same server every time. I don't really see it helping much. I spent a decent amount of time last night researching the NFS cache issue. It seems there is no way to completely disable NFS client caching (in lieu of rewriting the code oneself--a daunting task), which would seem to be the real solution to the mdbox index corruption problem. So I went looking for alternatives and came up with the idea above. Obviously it's far from an optimal solution and introduces some limitations, but I thought it was worth tossing out for discussion. Timo, it seems that when you designed mdbox you didn't have NFS based clusters in mind. Do you consider mdbox simply not suitable for such an NFS cluster deployment? If one has no choice but an NFS cluster architecture, what Dovecot mailbox format do you recommend? Stick with maildir? In this case the OP has Netapp storage. Netapp units support both NFS exports as well as iSCSI LUNs. If the OP could utilize iSCSI instead of NFS, switching to GFS2 or OCFS, do you see these cluster filesystems as preferable for mdbox? -- Stan From tss at iki.fi Fri Jan 20 02:13:26 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 02:13:26 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <4F18AC6A.4050508@hardwarefreak.com> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com> Message-ID: On 20.1.2012, at 1.51, Stan Hoeppner wrote: > I spent a decent amount of time last night researching the NFS cache > issue. It seems there is no way to completely disable NFS client > caching (in lieu of rewriting the code oneself--a daunting task), which > would seem to be the real solution to the mdbox index corruption problem. > > So I went looking for alternatives and came up with the idea above. > Obviously it's far from an optimal solution and introduces some > limitations, but I thought it was worth tossing out for discussion. I spent months looking into NFS related issues. I read through Linux and FreeBSD kernel source codes to figure out if there's something I could do to avoid the problems I see. I sent some patches to try to improve things, which of course didn't get accepted (some alternative ways might have been, but it would have required much more work from my part). The mail_nfs_* settings are the result of what I found out. They don't fully work, so I gave up. > Timo, it seems that when you designed mdbox you didn't have NFS based > clusters in mind. Do you consider mdbox simply not suitable for such an > NFS cluster deployment? If one has no choice but an NFS cluster > architecture, what Dovecot mailbox format do you recommend? Stick with > maildir? In the typical random-access NFS setup I don't consider any of Dovecot's formats suitable. Not maildir, not dbox. Perhaps in future I can redesign everything in a way that just happens to work well with all kinds of NFS setups, but I don't really hold a lot of hope for that. It seems that either you'll get bad performance (I'm not really interested in making Dovecot do that) or you'll use such a setup where you get good performance by avoiding the NFS problems. There are several huge Dovecot+NFS setups.
They use director. It works well enough (and with the recent fixes, I'd hope perfectly). > In this case the OP has Netapp storage. Netapp units support both NFS > exports as well as iSCSI LUNs. If the OP could utilize iSCSI instead of > NFS, switching to GFS2 or OCFS, do you see these cluster filesystem as > preferable for mdbox? I don't have personal experience with cluster filesystems in recent years (other than glusterfs, which had some problems, but most(/all?) were fixed already or are available from their commercial support..). Based on what I've heard, I'm guessing they work better than random-access-NFS, but even if there are no actual corruption problems, it sounds like their performance isn't very good. From noel.butler at ausics.net Fri Jan 20 03:18:16 2012 From: noel.butler at ausics.net (Noel Butler) Date: Fri, 20 Jan 2012 11:18:16 +1000 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com> Message-ID: <1327022296.9133.3.camel@tardis> On Fri, 2012-01-20 at 02:13 +0200, Timo Sirainen wrote: > There are several huge Dovecot+NFS setups. They use director. It works well enough (and with the recent fixes, I'd hope perfectly). Not to mention other huge NFS setups that don't use director, and also have no problems. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: This is a digitally signed message part URL: From stan at hardwarefreak.com Fri Jan 20 04:27:59 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Thu, 19 Jan 2012 20:27:59 -0600 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com> Message-ID: <4F18D12F.2050809@hardwarefreak.com> On 1/19/2012 6:13 PM, Timo Sirainen wrote: > On 20.1.2012, at 1.51, Stan Hoeppner wrote: > >> I spent a decent amount of time last night researching the NFS cache >> issue. It seems there is no way to completely disable NFS client >> caching (in lie of rewriting the code oneself--a daunting tak), which >> would seem to be the real solution to the mdbox index corruption problem. >> >> So I went looking for alternatives and came up with the idea above. >> Obviously it's far from an optimal solution and introduces some >> limitations, but I thought it was worth tossing out for discussion. > > I spent months looking into NFS related issues. I read through Linux and FreeBSD kernel source codes to figure out if there's something I could do to avoid the problems I see. I sent some patches to try to improve things, which of course didn't get accepted (some alternative ways might have been, but it would have required much more work from my part). The mail_nfs_* settings are the result of what I found out. They don't fully work, so I gave up. Yeah, I recall some of your posts from that time, and your frustration. If an NFS config option existed to simply turn off the NFS client caching, would that resolve most/all of the remaining issues? Or is the problem more complex than just the file caching? I ask as it would seem creating such a Boolean NFS config option should be simple to implement. 
If the devs could be convinced of the need for it. >> Timo, it seems that when you designed mdbox you didn't have NFS based >> clusters in mind. Do you consider mdbox simply not suitable for such an >> NFS cluster deployment? If one has no choice but an NFS cluster >> architecture, what Dovecot mailbox format do you recommend? Stick with >> maildir? > > In the typical random-access NFS setup I don't consider any of Dovecot's formats suitable. Not maildir, not dbox. Perhaps in future I can redesign everything in a way that just happens to work well with all kinds of NFS setups, but I don't really hold a lot of hope for that. It seems that either you'll get bad performance (I'm not really interested in making Dovecot do that) or you'll use such a setup where you get good performance by avoiding the NFS problems. > > There are several huge Dovecot+NFS setups. They use director. It works well enough (and with the recent fixes, I'd hope perfectly). Are any of these huge setups using mdbox? Or does it make a difference? I.e. Indexes are indexes whether they be maildir or mdbox. Would Director alone allow the OP to avoid the cache corruption issues discussed in this thread? Or would there still be problems due to the split LDA setup? >> In this case the OP has Netapp storage. Netapp units support both NFS >> exports as well as iSCSI LUNs. If the OP could utilize iSCSI instead of >> NFS, switching to GFS2 or OCFS, do you see these cluster filesystem as >> preferable for mdbox? > > I don't have personal experience with cluster filesystems in recent years (other than glusterfs, which had some problems, but most(/all?) were fixed already or are available from their commercial support..). Based on what I've heard, I'm guessing they work better than random-access-NFS, but even if there are no actual corruption problems, it sounds like their performance isn't very good. So would an ideal long term solution to indexes in a cluster (NFS or clusterFS) environment be something like Dovecot's own index metadata broker daemon/lock manager that controls access to the files/indexes? Either a distributed token based architecture, or maybe something 'simple' such as a master node which all others send index updates to with the master performing the actual writes to the files, similar to a database architecture? The former likely being more difficult to implement, the latter having potential scalability and SPOF issues. Or is the percentage of Dovecot cluster deployments so small that it's difficult to justify the development investment for such a thing? Thanks Timo. -- Stan From robert at schetterer.org Fri Jan 20 09:43:01 2012 From: robert at schetterer.org (Robert Schetterer) Date: Fri, 20 Jan 2012 08:43:01 +0100 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com> Message-ID: <4F191B05.9020409@schetterer.org> Am 20.01.2012 01:13, schrieb Timo Sirainen: > On 20.1.2012, at 1.51, Stan Hoeppner wrote: > >> I spent a decent amount of time last night researching the NFS cache >> issue. It seems there is no way to completely disable NFS client >> caching (in lie of rewriting the code oneself--a daunting tak), which >> would seem to be the real solution to the mdbox index corruption problem. >> >> So I went looking for alternatives and came up with the idea above. 
>> Obviously it's far from an optimal solution and introduces some >> limitations, but I thought it was worth tossing out for discussion. > > I spent months looking into NFS related issues. I read through Linux and FreeBSD kernel source codes to figure out if there's something I could do to avoid the problems I see. I sent some patches to try to improve things, which of course didn't get accepted (some alternative ways might have been, but it would have required much more work from my part). The mail_nfs_* settings are the result of what I found out. They don't fully work, so I gave up. > >> Timo, it seems that when you designed mdbox you didn't have NFS based >> clusters in mind. Do you consider mdbox simply not suitable for such an >> NFS cluster deployment? If one has no choice but an NFS cluster >> architecture, what Dovecot mailbox format do you recommend? Stick with >> maildir? > > In the typical random-access NFS setup I don't consider any of Dovecot's formats suitable. Not maildir, not dbox. Perhaps in future I can redesign everything in a way that just happens to work well with all kinds of NFS setups, but I don't really hold a lot of hope for that. It seems that either you'll get bad performance (I'm not really interested in making Dovecot do that) or you'll use such a setup where you get good performance by avoiding the NFS problems. > > There are several huge Dovecot+NFS setups. They use director. It works well enough (and with the recent fixes, I'd hope perfectly). > >> In this case the OP has Netapp storage. Netapp units support both NFS >> exports as well as iSCSI LUNs. If the OP could utilize iSCSI instead of >> NFS, switching to GFS2 or OCFS, do you see these cluster filesystem as >> preferable for mdbox? > > I don't have personal experience with cluster filesystems in recent years (other than glusterfs, which had some problems, but most(/all?) were fixed already or are available from their commercial support..). Based on what I've heard, I'm guessing they work better than random-access-NFS, but even if there are no actual corruption problems, it sounds like their performance isn't very good. For information: I have 3500 users behind keepalived load balancers with DRBD and OCFS2 on two Lucid servers. They are hit heavily by POP3, with Maildir on Dovecot 2. In the beginning I had some performance problems, but they were mostly related to the RAID controllers' I/O, so IMAP was very slow. Fixing these RAID problems gave good IMAP performance (besides some Dovecot and kernel tune-ups). Still, I would rethink this whole setup before scaling up to more users; I guess mixing load balancers and directors is no problem. Maildir seems slow by design in terms of I/O, so mdbox might be better, and I would also investigate DRBD further and compare GFS, OCFS and other cluster filesystems, e.g. switching to iSCSI. I think it should be possible to design partitioning with LDAP or SQL, e.g. to split heavy and big mailboxes into separate storage partitions. Am I right here, Timo? I would also like to test some cross-site setups with e.g. GlusterFS, Lustre etc. to get more knowledge as the basis of a multi-redundant mail system. -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From ewald.lists at fun.de Fri Jan 20 14:35:39 2012 From: ewald.lists at fun.de (Ewald Dieterich) Date: Fri, 20 Jan 2012 13:35:39 +0100 Subject: [Dovecot] Notify plugin: segmentation fault Message-ID: <4F195F9B.3030202@fun.de> I'm trying to develop a plugin that uses the hooks provided by the notify plugin.
The notify plugin segfaults if you don't set the mailbox_rename hook. I attached a patch to notify-plugin.c from Dovecot 2.0.16 that should fix this. Ewald -------------- next part -------------- A non-text attachment was scrubbed... Name: notify-plugin.c.patch Type: text/x-diff Size: 530 bytes Desc: not available URL: From harm at vevida.nl Fri Jan 20 00:30:12 2012 From: harm at vevida.nl (Harm Weites) Date: Thu, 19 Jan 2012 23:30:12 +0100 Subject: [Dovecot] LMTP ignoring tcpwrappers Message-ID: <1327012212.2003.32.camel@manbearpig.lan.kantoor.vevida.net> Hello, we want to use dovecot LMTP for efficient mail delivery from our MX servers (running postfix 2.8) to our storage servers (dovecot 2.0.17). However, the one problem we see is the lack of access control when using LMTP. It appears that every client in our network who has access to the storage machines can drop a message in a Maildir of any user on that storage server. To prevent this behaviour it would be nice to use libwrap, just as it can be used for the POP3/IMAP protocols. This, however, seems to be impossible using the configuration as mentioned on the dovecot wiki: login_access_sockets = tcpwrap service tcpwrap { unix_listener login/tcpwrap { group = $default_login_user mode = 0600 user = $default_login_user } } This seems to imply it only works for a login, and LMTP does not use that. The above works perfectly when trying to block access to IMAP or POP3 in /etc/hosts.deny, though a setting for LMTP is simply ignored. Is there a configuration setting needed for this to work for LMTP, or is it simply not possible (yet), and does libwrap support for LMTP require a patch? Any help is appreciated. Regards, Harm From simon.brereton at buongiorno.com Fri Jan 20 18:06:45 2012 From: simon.brereton at buongiorno.com (Simon Brereton) Date: Fri, 20 Jan 2012 11:06:45 -0500 Subject: [Dovecot] mail_max_userip_connections exceeded. Message-ID: Hi I'm using Dovecot version 1:1.2.15-7 installed on Debian Squeeze via apt-get. I have this error in the logs. /var/log/mail.log.1:2490:Jan 19 12:02:55 mail dovecot: imap-login: Maximum number of connections from user+IP exceeded (mail_max_userip_connections): user=, method=PLAIN, rip=127.0.0.1, secured I never changed this from the default 10. When I googled this error there was a thread on this list from May 2011 that indicated one would need one connection per user per subscribed folder. However, I know that user doesn't have 10 folders, let alone 10 subscribed folders! I can increase it, but that's not going to scale well. And there are people on this list with 1000x as many users as I have - so how do they deal with that? 127.0.0.1 is obviously webmail (IMP5). So, how/why am I seeing this, and should I be concerned? Simon From jesus.navarro at bvox.net Fri Jan 20 18:24:41 2012 From: jesus.navarro at bvox.net (=?utf-8?q?Jes=C3=BAs_M=2E?= Navarro) Date: Fri, 20 Jan 2012 17:24:41 +0100 Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command Message-ID: <201201201724.41631.jesus.navarro@bvox.net> Hi: This is my first message to this list, so pleased to meet you all. Using dovecot 2.0.17 from packages at xi.rename-it.nl on a Debian "Squeeze" i686. Mail storage is a local ext3 partition (I attached the output of dovecot -n to this message). I'm having problems on a maildir due to dovecot returning a UID 0 to a UID THREAD REFS command: in <== TAG5 UID THREAD REFS us-ascii SINCE 18-Jul-2011 out <== * THREAD (0)(51 52)(53)(54 55 56)(57)(58)(59 60)(61) TAG5 OK Thread completed.
The issuer is an atmail webmail that after the previous output will try an UID FETCH 0 that will fail with a "TAG6 BAD Error in IMAP command UID FETCH: Invalid uidset" message. I think that, as per a previous answer from Timo Sirainen*1, this should be considered a dovecot's bug, am I right? Anyway, what should I try to find why is this exactly happening? TIA *1 http://www.dovecot.org/list/dovecot/2011-November/061992.html -------------- next part -------------- # 2.0.17 (687949948a83): /etc/dovecot/dovecot.conf # OS: Linux 2.6.29-xs5.5.0.15 i686 Debian 6.0.3 ext3 auth_cache_negative_ttl = 10 mins auth_cache_size = 10 M auth_debug = yes auth_debug_passwords = yes auth_mechanisms = plain login digest-md5 cram-md5 auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@: auth_verbose = yes disable_plaintext_auth = no mail_gid = vmail mail_location = maildir:/var/vmail/%d/%n mail_plugins = " notify xmpp_pubsub fts fts_squat zlib" mail_privileged_group = mail mail_uid = vmail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave namespace { inbox = yes location = prefix = separator = / } passdb { args = /etc/dovecot/dovecot-sql.conf driver = sql } plugin { enotify_xmpp_jid = dovecot at openfire/%l enotify_xmpp_password = [EDITED] enotify_xmpp_server = [EDITED] enotify_xmpp_use_tls = no fts = squat fts_squat = partial=4 full=10 mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename mail_log_fields = uid box msgid size vsize flags mail_log_group_events = no sieve = ~/.dovecot.sieve sieve_after = /var/lib/dovecot.sieve/after.d/ sieve_before = /var/lib/dovecot.sieve/before.d/ sieve_dir = ~/sieve sieve_global_path = /var/lib/dovecot.sieve/default.sieve xmpp_pubsub_events = delete undelete expunge copy mailbox_delete mailbox_rename xmpp_pubsub_fields = uid box msgid size vsize flags } protocols = " imap lmtp sieve pop3" service auth { unix_listener auth-userdb { group = vmail mode = 0600 user = vmail } } service imap-login { service_count = 0 } service managesieve-login { inet_listener sieve { port = 4190 } inet_listener sieve_deprecated { port = 2000 } } ssl_cert = References: <20120113224607.GS4844@bender.csupomona.edu> Message-ID: <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> On 14.1.2012, at 1.19, Mark Moseley wrote: >>> Also another idea to avoid them in the first place: >>> >>> service auth-worker { >>> idle_kill = 20 >>> } >> >> Ah, set the auth-worker timeout to less than the mysql timeout to >> prevent a stale mysql connection from ever being used. I'll try that, >> thanks. > > I gave that a try. Sometimes it seems to kill off the auth-worker but > not till after a minute or so (with idle_kill = 20). Other times, the > worker stays around for more like 5 minutes (I gave up watching), > despite being idle -- and I'm the only person connecting to it, so > it's definitely idle. Does auth-worker perhaps only wake up every so > often to check its idle status? This is fixed in v2.1 hg. The default idle_kill of 60 seconds seems to have gotten rid of the "MySQL server has gone away" errors completely. So I guess the problem was that during some peak times a ton of auth worker processes were created, but afterwards they weren't used until the next peak happened, and then they failed. 
http://hg.dovecot.org/dovecot-2.1/rev/3963862a4086 http://hg.dovecot.org/dovecot-2.1/rev/58556a90259f From tss at iki.fi Fri Jan 20 19:17:24 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 19:17:24 +0200 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <20120114001912.GZ4844@bender.csupomona.edu> References: <4F108834.60709@schetterer.org> <20120114001912.GZ4844@bender.csupomona.edu> Message-ID: <2B3DAEEA-9281-4E5B-BB90-4FCE9C61C9E4@iki.fi> On 14.1.2012, at 2.19, Paul B. Henson wrote: > On Fri, Jan 13, 2012 at 11:38:28AM -0800, Robert Schetterer wrote: > >> by the way , if you use sql for auth have you tried auth caching ? >> >> http://wiki.dovecot.org/Authentication/Caching > > That page says you can send a USR2 signal to the auth process for cache > stats? That doesn't seem to work. OTOH, that page is for version 1, not > 2; is there some other way to generate cache stats in version 2? Works for me. Are you maybe sending it to the wrong auth process (auth worker instead of master)? From tss at iki.fi Fri Jan 20 21:14:07 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:14:07 +0200 Subject: [Dovecot] Notify plugin: segmentation fault In-Reply-To: <4F195F9B.3030202@fun.de> References: <4F195F9B.3030202@fun.de> Message-ID: <4F19BCFF.904@iki.fi> On 01/20/2012 02:35 PM, Ewald Dieterich wrote: > I'm trying to develop a plugin that uses the hooks provided by the > notify plugin. The notify plugin segfaults if you don't set the > mailbox_rename hook. I attached a patch to notify-plugin.c from > Dovecot 2.0.16 that should fix this. Fixed, thanks. From tss at iki.fi Fri Jan 20 21:16:01 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:16:01 +0200 Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command In-Reply-To: <201201201724.41631.jesus.navarro@bvox.net> References: <201201201724.41631.jesus.navarro@bvox.net> Message-ID: <4F19BD71.9000603@iki.fi> On 01/20/2012 06:24 PM, Jesús M. Navarro wrote: > I'm having problems on a maildir due to dovecot returning an UID 0 to an UID > THREAD REFS command: > > I think that, as per a previous answer from Timo Sirainen*1, this should be > considered a Dovecot bug, am I right? Anyway, what should I try in order to find > out why exactly this is happening? Yes, it's a bug. > *1 http://www.dovecot.org/list/dovecot/2011-November/061992.html Same question as in that mail: Could you instead send me such a mailbox where you can reproduce this problem? Probably sending dovecot.index, dovecot.index.log and dovecot.index.thread files would be enough. None of those contain any sensitive information. From tss at iki.fi Fri Jan 20 21:19:57 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:19:57 +0200 Subject: [Dovecot] mail_max_userip_connections exceeded. In-Reply-To: References: Message-ID: <4F19BE5D.20603@iki.fi> On 01/20/2012 06:06 PM, Simon Brereton wrote: > I have this error in the logs. > /var/log/mail.log.1:2490:Jan 19 12:02:55 mail dovecot: imap-login: > Maximum number of connections from user+IP exceeded > (mail_max_userip_connections): user=, method=PLAIN, > rip=127.0.0.1, secured > > I never changed this from the default 10. When I googled this error > there was a thread on this list from May 2011 that indicated one would > need one connection per user per subscribed folder. However, I know > that user doesn't have 10 folders, let alone 10 subscribed folders! I > can increase it, but it's not going to scale well.
And there are > people on this list with 1000x more users than I have - so how do they > deal with that? > > 127.0.0.1 is obviously webmail (IMP5). > > So, how/why am I seeing this, and should I be concerned? Well, it really does look like IMP is using more than 10 connections at the same time. Or perhaps some of the existing connections are just hanging for some reason after IMP already discarded them, such as maybe a very long running SEARCH command was started and IMP then gave up. You could look at the process list (with verbose_proctitle=yes) and check if the user has other processes hanging at the time when this error is logged. From tss at iki.fi Fri Jan 20 21:34:07 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:34:07 +0200 Subject: [Dovecot] LMTP ignoring tcpwrappers In-Reply-To: <1327012212.2003.32.camel@manbearpig.lan.kantoor.vevida.net> References: <1327012212.2003.32.camel@manbearpig.lan.kantoor.vevida.net> Message-ID: On 20.1.2012, at 0.30, Harm Weites wrote: > we want to use dovecot LMTP for efficient mail delivery from our MX > servers (running postfix 2.8) to our storage servers (dovecot 2.0.17). > However, the one problem we see is the lack of access control when using > LMTP. It appears that every client in our network who has access to the > storage machines can drop a message in a Maildir of any user on that > storage server. Is it a real problem? Can't they just as easily drop messages to other users' maildirs simply by sending the mail via SMTP? > To prevent this behaviour it would be nice to use > libwrap, just as it can be used for POP3/IMAP protocols. > This, however, seems to be impossible using the configuration as > mentioned on the dovecot wiki: > > login_access_sockets = tcpwrap > > This seems to imply it only works for a login, and LMTP does not use > that. The above works perfectly when trying to block access to IMAP or > POP3 in /etc/hosts.deny, though a setting for LMTP is simply ignored. Right. I'm not sure if I'd even want to add such a feature to LMTP. It doesn't really feel like it belongs there. > Is there a configuration setting needed for this to work for LMTP, or is > it simply not possible (yet) and does libwrap support for LMTP require > a patch? Not possible in Dovecot currently. You could use firewall rules. From tss at iki.fi Fri Jan 20 21:44:19 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:44:19 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <4F18D12F.2050809@hardwarefreak.com> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com> <4F18D12F.2050809@hardwarefreak.com> Message-ID: On 20.1.2012, at 4.27, Stan Hoeppner wrote: >> I spent months looking into NFS related issues. I read through Linux and FreeBSD kernel source codes to figure out if there's something I could do to avoid the problems I see. I sent some patches to try to improve things, which of course didn't get accepted (some alternative ways might have been, but it would have required much more work from my part). The mail_nfs_* settings are the result of what I found out. They don't fully work, so I gave up. > > Yeah, I recall some of your posts from that time, and your frustration. > If an NFS config option existed to simply turn off the NFS client > caching, would that resolve most/all of the remaining issues?
Or is the > problem more complex than just the file caching? I ask as it would seem > creating such a Boolean NFS config option should be simple to implement. > If the devs could be convinced of the need for it. It would work, but the performance would suck. >> There are several huge Dovecot+NFS setups. They use director. It works well enough (and with the recent fixes, I'd hope perfectly). > > Are any of these huge setups using mdbox? Or does it make a difference? I think they're all Maildirs currently, but it shouldn't make a difference. The index files are the ones most easily corrupted, so if they work then everything else should work just as well. In those director setups there have been no index corruption errors. > I.e. Indexes are indexes whether they be maildir or mdbox. Would > Director alone allow the OP to avoid the cache corruption issues > discussed in this thread? Or would there still be problems due to the > split LDA setup? By using LMTP proxying with director there wouldn't be any problems. Or using director for IMAP/POP3 and not using dovecot-lda for mail deliveries would work too. >>> In this case the OP has Netapp storage. Netapp units support both NFS >>> exports as well as iSCSI LUNs. If the OP could utilize iSCSI instead of >>> NFS, switching to GFS2 or OCFS, do you see these cluster filesystem as >>> preferable for mdbox? >> >> I don't have personal experience with cluster filesystems in recent years (other than glusterfs, which had some problems, but most(/all?) were fixed already or are available from their commercial support..). Based on what I've heard, I'm guessing they work better than random-access-NFS, but even if there are no actual corruption problems, it sounds like their performance isn't very good. > > So would an ideal long term solution to indexes in a cluster (NFS or > clusterFS) environment be something like Dovecot's own index metadata > broker daemon/lock manager that controls access to the files/indexes? > Either a distributed token based architecture, or maybe something > 'simple' such as a master node which all others send index updates to > with the master performing the actual writes to the files, similar to a > database architecture? The former likely being more difficult to > implement, the latter having potential scalability and SPOF issues. > > Or is the percentage of Dovecot cluster deployments so small that it's > difficult to justify the development investment for such a thing? I'm not sure if such daemons would be of much help. For best performance the user's mail access should be redirected to the same server in any case, and doing that solves all the other problems as well. I've a few other clustering plans besides a regular NFS based setup, but all of them rely on user normally being redirected to the same server (exception: split brain operation when mails are replicated to multiple data centers). 
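To make the director arrangement sketched above concrete, here is a rough config outline along the lines of the wiki's director example; all host addresses below are hypothetical placeholders:

# the ring of director servers, and the backend mail servers
# across which users are distributed
director_servers = 10.1.0.1 10.1.0.2
director_mail_servers = 10.2.0.1-10.2.0.10

service director {
  unix_listener login/director {
    mode = 0666
  }
  fifo_listener login/proxy-notify {
    mode = 0666
  }
  inet_listener {
    port = 9090
  }
}

# make the login processes ask the director where to proxy
service imap-login {
  executable = imap-login director
}
service pop3-login {
  executable = pop3-login director
}

# route LMTP deliveries through the director as well
protocol lmtp {
  auth_socket_path = director-userdb
}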
From tss at iki.fi Fri Jan 20 21:48:00 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:48:00 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <4F191B05.9020409@schetterer.org> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com> <4F191B05.9020409@schetterer.org> Message-ID: <4623523A-742E-4C32-82A0-0918F8B2DFE4@iki.fi> On 20.1.2012, at 9.43, Robert Schetterer wrote: > i.e i think it should be poosible to design partitioning with ldap or sql > to i.e split up heavy and big mailboxes in seperate storage partitions etc > am i right here Timo ? You can use per-user home or mail_location that points to different storages. If you want only some folders in separate storages, you could use symlinks, but deleting such a folder probably wouldn't delete the mails (or at least not all files). From tss at iki.fi Fri Jan 20 21:58:01 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:58:01 +0200 Subject: [Dovecot] Clients show .subscriptions folder In-Reply-To: References: Message-ID: <816AB6CB-989A-4D87-8FC0-80E8BE880539@iki.fi> On 10.1.2012, at 18.34, Mark Sapiro wrote: > Since upgrading from dovecot-2.1.rc1 to dovecot-2.1.rc3, some clients > are showing a .subscriptions file in the user's mbox path as a folder. Fixed: http://hg.dovecot.org/dovecot-2.1/rev/958ef86e7f5b From tss at iki.fi Fri Jan 20 23:06:57 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 23:06:57 +0200 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> References: <20120113224607.GS4844@bender.csupomona.edu> <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> Message-ID: <9E57D55C-5F19-4291-A2E7-BC06678B2F79@iki.fi> On 20.1.2012, at 19.16, Timo Sirainen wrote: > This is fixed in v2.1 hg. The default idle_kill of 60 seconds seems to have gotten rid of the "MySQL server has gone away" errors completely. So I guess the problem was that during some peak times a ton of auth worker processes were created, but afterwards they weren't used until the next peak happened, and then they failed. Hmh. Still doesn't work 100%: auth-worker(28788): Error: mysql: Query failed, retrying: MySQL server has gone away (idled for 181 secs) auth-worker(7413): Error: mysql: Query failed, retrying: MySQL server has gone away (idled for 298 secs) I'm not really sure why it's not killing itself after 60 seconds of idling. Probably related to how mysql code tracks idle time and how idle_kill tracks it.. Anyway, those errors are much more rare now. From henson at acm.org Sat Jan 21 02:00:51 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 20 Jan 2012 16:00:51 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <9E57D55C-5F19-4291-A2E7-BC06678B2F79@iki.fi> References: <20120113224607.GS4844@bender.csupomona.edu> <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> <9E57D55C-5F19-4291-A2E7-BC06678B2F79@iki.fi> Message-ID: <4F1A0033.8060202@acm.org> On 1/20/2012 1:06 PM, Timo Sirainen wrote: > Hmh. Still doesn't work 100%: > > auth-worker(28788): Error: mysql: Query failed, retrying: MySQL > server has gone away (idled for 181 secs) auth-worker(7413): Error: > mysql: Query failed, retrying: MySQL server has gone away (idled for > 298 secs) > > I'm not really sure why it's not killing itself after 60 seconds of > idling. 
Probably related to how mysql code tracks idle time and how > idle_kill tracks it.. Anyway, those errors are much more rare now. The mysql server starts counting idle time after the last network communication with the client. So presumably if the auth worker gets marked as not idle by anything not involving interaction with the mysql server, the two could get out of sync. Before you posted a potential fix to the idle timeout, I was looking at other possible ways to resolve the issue. Currently, an authentication request is tried exactly twice -- one initial try, and one retry. Looking at driver-sqlpool.c: if (result->failed_try_retry && !request->retried) { Currently, retried is a boolean. What if retried were an integer instead, and a new configuration variable allowed you to specify how many times an authentication attempt should be retried? The default could be 2, which would result in exactly the same behavior. But then you could set it to 3 or 4 to prevent a request from hitting a timed-out connection twice and failing completely. Ideally, a better fix would be for the client not to consider a "MySQL server has gone away" return as a failure, but instead immediately reconnect and try again without marking it as a retry. However, from reviewing the code, that would be a much more difficult and invasive change. Changing the existing retried variable to an integer count rather than a boolean is pretty simple. -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From jtam.home at gmail.com Sat Jan 21 02:48:54 2012 From: jtam.home at gmail.com (Joseph Tam) Date: Fri, 20 Jan 2012 16:48:54 -0800 (PST) Subject: [Dovecot] mail_max_userip_connections exceeded. In-Reply-To: References: Message-ID: Simon Brereton writes: > /var/log/mail.log.1:2490:Jan 19 12:02:55 mail dovecot: imap-login: > Maximum number of connections from user+IP exceeded > (mail_max_userip_connections): user=, method=PLAIN, > rip=127.0.0.1, secured > > I never changed this from the default 10. When I googled this error > there was a thread on this list from May 2011 that indicated one would > need one connection per user per subscribed folder. However, I know > that user doesn't have 10 folders, let alone 10 subscribed folders! I > can increase it, but it's not going to scale well. And there are > people on this list with 1000x more users than I have - so how do they > deal with that? > > 127.0.0.1 is obviously webmail (IMP5). IMAP proxy or lack of proxy? IMAP proxy could be a problem if the user had opened more than 10 (unique) mailboxes. The proxy would keep this connection open until a timeout, and after some time, could accumulate more connections than your limit. The lack of proxy could solve your problem if for some reason your webmail software is not closing the IMAP connection properly (I assume IMP does a connect/authenticate/IMAP command/logout for every webmail operation). Every webmail operation (even to the same mailbox) would open up a new connection. The proxy software will recognize the reconnection and funnel it through its cached connection. You can lsof the user's IMAP processes (or troll through /proc/{imap-process} or what you have) to figure out which mailboxes it has opened. On my system, file descriptors 9 and 11 give you the names of the index files that indicate which mailboxes are being accessed.
Joseph Tam From mark at msapiro.net Sat Jan 21 03:02:37 2012 From: mark at msapiro.net (Mark Sapiro) Date: Fri, 20 Jan 2012 17:02:37 -0800 Subject: [Dovecot] Clients show .subscriptions folder In-Reply-To: <816AB6CB-989A-4D87-8FC0-80E8BE880539@iki.fi> Message-ID: Timo Sirainen wrote: >On 10.1.2012, at 18.34, Mark Sapiro wrote: > >> Since upgrading from dovecot-2.1.rc1 to dovecot-2.1.rc3, some clients >> are showing a .subscriptions file in the user's mbox path as a folder. > >Fixed: http://hg.dovecot.org/dovecot-2.1/rev/958ef86e7f5b Thanks Timo. I've installed the above and it seems fine. -- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan From dovecot at knutejohnson.com Sat Jan 21 03:04:46 2012 From: dovecot at knutejohnson.com (Knute Johnson) Date: Fri, 20 Jan 2012 17:04:46 -0800 Subject: [Dovecot] mail_max_userip_connections exceeded. In-Reply-To: References: Message-ID: <4F1A0F2E.9020907@knutejohnson.com> On 1/20/2012 4:48 PM, Joseph Tam wrote: > Simon Brereton writes: > >> /var/log/mail.log.1:2490:Jan 19 12:02:55 mail dovecot: imap-login: >> Maximum number of connections from user+IP exceeded >> (mail_max_userip_connections): user=, method=PLAIN, >> rip=127.0.0.1, secured >> >> I never changed this from the default 10. When I googled this error >> there was a thread on this list from May 2011 that indicated one would >> need one connection per user per subscribed folder. However, I know >> that user doesn't have 10 folders, let alone 10 subscribed folders! I >> can increase it, but it's not going to scale well. And there are >> people on this list with 1000x more users than I have - so how do they >> deal with that? >> >> 127.0.0.1 is obviously webmail (IMP5). > > IMAP proxy or lack of proxy? > > IMAP proxy could be a problem if the user had opened more than 10 (unique) > mailboxes. The proxy would keep this connection open until a timeout, and > after some time, could accumulate more connections than your limit. > > The lack of proxy could solve your problem if for some reason your webmail > software is not closing the IMAP connection properly (I assume IMP does a > connect/authenticate/IMAP command/logout for every webmail operation). > Every webmail operation (even to the same mailbox) would open up a new connection. > The proxy software will recognize the reconnection and funnel it through > its cached connection. > > You can lsof the user's IMAP processes (or troll through > /proc/{imap-process} or what you have) to figure out which mailboxes it > has opened. On my system, file descriptors 9 and 11 give you the names > of the index files that indicate which mailboxes are being accessed. > > Joseph Tam I'm not sure that I saw the beginning of this thread but I got the same error. I traced it to the fact that my desktop and my phone email programs were both trying to access my imap from the same local network. I changed it to 20 and I haven't seen any more problems. I don't know if that would be a problem on a really heavily used server or not. -- Knute Johnson From henson at acm.org Sat Jan 21 03:34:41 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 20 Jan 2012 17:34:41 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <2B3DAEEA-9281-4E5B-BB90-4FCE9C61C9E4@iki.fi> References: <4F108834.60709@schetterer.org> <20120114001912.GZ4844@bender.csupomona.edu> <2B3DAEEA-9281-4E5B-BB90-4FCE9C61C9E4@iki.fi> Message-ID: <4F1A1631.2000704@acm.org> On 1/20/2012 9:17 AM, Timo Sirainen wrote: > Works for me.
Are you maybe sending it to the wrong auth process (auth worker instead of master)? I had tried sending it to both; but the underlying problem turned out to be that the updated config hadn't actually been deployed yet 8-/ oops. Once I fixed that, sending the signal did generate the log output. Evidently nothing is printed out in the case where the authentication caching isn't enabled; maybe you should make it print out something like "Hey idiot, caching isn't turned on" ;). Thanks... -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From user+dovecot at localhost.localdomain.org Sat Jan 21 03:46:47 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Sat, 21 Jan 2012 02:46:47 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): doveadm mailbox list withholds child mailboxes In-Reply-To: <4F0F8815.8070609@localhost.localdomain.org> References: <4F0F8815.8070609@localhost.localdomain.org> Message-ID: <4F1A1907.4070906@localhost.localdomain.org> On 01/13/2012 02:25 AM Pascal Volk wrote: > doveadm mailbox list -u user at example.com doesn't show child mailboxes. Looks like http://hg.dovecot.org/dovecot-2.1/rev/54e74090fb42 fixed the problem. Thanks Regards, Pascal -- The trapper recommends today: defaced.1202102 at localdomain.org From henson at acm.org Sat Jan 21 04:36:56 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 20 Jan 2012 18:36:56 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> References: <20120113224607.GS4844@bender.csupomona.edu> <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> Message-ID: <20120121023656.GO4207@bender.csupomona.edu> On Fri, Jan 20, 2012 at 09:16:57AM -0800, Timo Sirainen wrote: > This is fixed in v2.1 hg. The default idle_kill of 60 seconds seems to > have gotten rid of the "MySQL server has gone away" errors completely. > So I guess the problem was that during some peak times a ton of auth > worker processes were created, but afterwards they weren't used until > the next peak happened, and then they failed. > > http://hg.dovecot.org/dovecot-2.1/rev/3963862a4086 > http://hg.dovecot.org/dovecot-2.1/rev/58556a90259f Hmm, I tried to apply this to 2.0.17, and that didn't really work out. Before I spend too much time trying to hand-port the changes, do you know offhand if they simply won't apply to 2.0.17 due to other changes made since then? It looks like 2.1 might be out soon, I guess maybe I should just wait for that. Thanks... -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From admin at opsys.de Sat Jan 21 20:39:00 2012 From: admin at opsys.de (Markus Fritz) Date: Sat, 21 Jan 2012 19:39:00 +0100 Subject: [Dovecot] Sieve temporary script folder Message-ID: Hello, I have the issue that sieve wants to write its tmp files to /etc/dovecot/. But I want sieve to write to a folder it has write rights on. I created a script to put spam in the 'Spam' folder, and put it in /etc/dovecot/.dovecot.sieve. When receiving a mail, sieve wants to create a tmp file like /etc/dovecot/.dovecot.sieve.12033 How can I change the tmp folder sieve uses?
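The .dovecot.sieve.NNNN file is presumably the temporary file used while (re)compiling the script, and it appears to be created next to the script itself rather than in a separate tmp directory; so one approach is to keep the active script somewhere the mail user can write instead of under /etc/dovecot. A sketch, with placeholder paths:

plugin {
  # per-user script in the user's (writable) home directory
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  # global fallback kept in a directory the mail user can write to
  sieve_global_path = /var/lib/dovecot/sieve/default.sieve
}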
From mikedvct at makuch.org Sun Jan 22 15:55:02 2012 From: mikedvct at makuch.org (Michael Makuch) Date: Sun, 22 Jan 2012 07:55:02 -0600 Subject: [Dovecot] where is subscribed list stored? Message-ID: <4F1C1536.1000407@makuch.org> I'm using $ /usr/sbin/dovecot --version 2.0.15 on $ cat /etc/fedora-release Fedora release 14 (Laughlin) and version 8 of Thunderbird. I use dovecot locally for internal-only access to my email archives, of which I have many gigs. Over time I end up subscribing to a couple dozen different IMAP email folders. Problem is that periodically my list of subscribed folders gets zapped to none, and I have to go and re-subscribe to a dozen or two folders again. Anyone seen this happen? It looks like the list of subscribed folders is here ~/Mail/.subscriptions and I can see in my daily backup that it reflects what appears in TBird. What might be zapping it? I use multiple email clients simultaneously on different hosts. (IOW I leave them open) Is this a problem? Does dovecot manage that in some way? Or is that my problem? I don't think this is the problem since this only occurs like a few times per year. If it were the problem I'd expect it to occur much more frequently. Thanks for any clues Mike From me at junc.org Sun Jan 22 16:10:09 2012 From: me at junc.org (Benny Pedersen) Date: Sun, 22 Jan 2012 15:10:09 +0100 Subject: [Dovecot] =?utf-8?q?where_is_subscribed_list_stored=3F?= In-Reply-To: <4F1C1536.1000407@makuch.org> References: <4F1C1536.1000407@makuch.org> Message-ID: On Sun, 22 Jan 2012 07:55:02 -0600, Michael Makuch wrote: > $ cat /etc/fedora-release > Fedora release 14 (Laughlin) > > and version 8 of Thunderbird. can you use thunderbird 9? does the account work with e.g. Roundcube webmail? my own question is, is it a dovecot problem? do you modify files outside of imap protocol? if so you asked for it :-) From jk at jkart.de Sun Jan 22 16:22:24 2012 From: jk at jkart.de (Jim Knuth) Date: Sun, 22 Jan 2012 15:22:24 +0100 Subject: [Dovecot] where is subscribed list stored? In-Reply-To: References: <4F1C1536.1000407@makuch.org> Message-ID: <4F1C1BA0.5060305@jkart.de> am 22.01.12 15:10 schrieb Benny Pedersen : > can you use thunderbird 9? > > does the account work with e.g. Roundcube webmail? I've TB9 AND Roundcube. No problems with Dovecot 2.0.17 here > > my own question is, is it a dovecot problem? > > do you modify files outside of imap protocol? > > if so you asked for it :-) -- Mit freundlichen Grüßen, with kind regards, Jim Knuth --------- Wahrhaftigkeit und Politik wohnen selten unter einem Dach. (Marie Antoinette) From stan at hardwarefreak.com Sun Jan 22 22:58:03 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Sun, 22 Jan 2012 14:58:03 -0600 Subject: [Dovecot] where is subscribed list stored? In-Reply-To: <4F1C1536.1000407@makuch.org> References: <4F1C1536.1000407@makuch.org> Message-ID: <4F1C785B.8020304@hardwarefreak.com> On 1/22/2012 7:55 AM, Michael Makuch wrote: > Anyone seen this happen? It looks like the list of subscribed folders is > here ~/Mail/.subscriptions and I can see in my daily backup that it > reflects what appears in TBird. What might be zapping it? I use multiple > email clients simultaneously on different hosts. (IOW I leave them open) > Is this a problem? Does dovecot manage that in some way? Or is that my > problem? I don't think this is the problem since this only occurs like a > few times per year. If it were the problem I'd expect it to occur much > more frequently.
What do your Dovecot logs and TB Activity Manager tell you, if anything? How about logging on the other MUAs? You are a human being, and are thus limited to physical interaction with a single host at any point in time. How are you "using" multiple MUAs on multiple hosts simultaneously? Can you describe this workflow? I'm guessing you're performing some kind of automated tasks with each MUA, and that is likely the root of the problem. Please describe these automated tasks. -- Stan From jesus.navarro at bvox.net Mon Jan 23 14:55:13 2012 From: jesus.navarro at bvox.net (=?utf-8?q?Jes=C3=BAs_M=2E?= Navarro) Date: Mon, 23 Jan 2012 13:55:13 +0100 Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command In-Reply-To: <4F19BD71.9000603@iki.fi> References: <201201201724.41631.jesus.navarro@bvox.net> <4F19BD71.9000603@iki.fi> Message-ID: <201201231355.15051.jesus.navarro@bvox.net> Hi again, Timo: On Viernes, 20 de Enero de 2012 20:16:01 Timo Sirainen escribió: > On 01/20/2012 06:24 PM, Jesús M. Navarro wrote: > > I'm having problems on a maildir due to dovecot returning an UID 0 to an > > UID THREAD REFS command: [...] > Could you instead send me such a mailbox where you can reproduce this > problem? Probably sending dovecot.index, dovecot.index.log and > dovecot.index.thread files would be enough. None of those contain any > sensitive information. Thank you very much. I'm sending to your personal address a whole maildir that reproduces the bug (it's very short) to avoid having it published in the mail archives. From l.chelchowski at slupsk.eurocar.pl Mon Jan 23 15:58:22 2012 From: l.chelchowski at slupsk.eurocar.pl (l.chelchowski) Date: Mon, 23 Jan 2012 14:58:22 +0100 Subject: [Dovecot] Quota-warning and setresgid In-Reply-To: References: Message-ID: Anyone? W dniu 2012-01-10 10:34, l.chelchowski napisał(a): > Hi! > > Please help me with this.
> The problem exists when quota-warning is executing: > > LOG: > Jan 10 10:15:06 lmtp(85973): Debug: none: root=, index=, control=, > inbox=, alt= > Jan 10 10:15:06 lmtp(85973): Info: Connect from local > Jan 10 10:15:06 lmtp(85973): Debug: Loading modules from directory: > /usr/local/lib/dovecot > Jan 10 10:15:06 lmtp(85973): Debug: Module loaded: > /usr/local/lib/dovecot/lib10_quota_plugin.so > Jan 10 10:15:06 lmtp(85973): Debug: Module loaded: > /usr/local/lib/dovecot/lib90_sieve_plugin.so > Jan 10 10:15:06 lmtp(85973): Debug: auth input: tester at domain.eu > home=/home/vmail/domain.eu/tester/ > mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > uid=101 gid=12 quota_rule=*:storage=2097 acl_groups= > Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: > mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: > plugin/quota_rule=*:storage=2097 > Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: > plugin/acl_groups= > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Effective > uid=101, gid=12, home=/home/vmail/domain.eu/tester/ > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota root: > name=user backend=dict args=:proxy::quotadict > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: > root=user mailbox=* bytes=2147328 messages=0 > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: > root=user mailbox=Trash bytes=+429465 (20%) messages=0 > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: > root=user mailbox=SPAM bytes=+429465 (20%) messages=0 > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: > bytes=1717862 (80%) messages=0 reverse=no command=quota-warning 80 > tester at domain.eu > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: > bytes=1932595 (90%) messages=0 reverse=no command=quota-warning 90 > tester at domain.eu > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: > bytes=2039961 (95%) messages=0 reverse=no command=quota-warning 95 > tester at domain.eu > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: dict quota: > user=tester at domain.eu, uri=proxy::quotadict, noenforcing=0 > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : > type=private, prefix=, sep=/, inbox=yes, hidden=no, list=yes, > subscriptions=yes > location=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: maildir++: > root=/home/vmail/domain.eu/tester, > index=/var/mail/vmail/domain.eu/tester at domain.eu/index/public, control=, > inbox=/home/vmail/domain.eu/tester, alt= > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : > type=public, prefix=Public/, sep=/, inbox=no, hidden=no, list=children, > subscriptions=yes > location=maildir:/home/vmail/public/:CONTROL=/var/mail/vmail/domain.eu/tester/control/public:INDEX=/var/mail/vmail/domain.eu/tester/index/public:LAYOUT=fs > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: fs: > root=/home/vmail/public, > index=/var/mail/vmail/domain.eu/tester/index/public, > control=/var/mail/vmail/domain.eu/tester/control/public, inbox=, alt= > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : > type=shared, prefix=Shared/%u/, sep=/, inbox=no, hidden=no, > list=children, 
subscriptions=no > location=maildir:%h/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/shared/%u > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: shared: > root=/var/run/dovecot, index=, control=, inbox=, alt= > ... > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: quota: Executing > warning: quota-warning 95 tester at domain.eu > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Info: > bLUfAJoBDE/VTwEA9hAjDg: sieve: msgid=<4F0C0180.3040704 at domain.eu>: > stored mail into mailbox 'INBOX' > Jan 10 10:15:06 lmtp(85973): Info: Disconnect from local: Client quit > (in reset) > Jan 10 10:15:06 lda: Debug: Loading modules from directory: > /usr/local/lib/dovecot > Jan 10 10:15:06 lda: Debug: Module loaded: > /usr/local/lib/dovecot/lib01_acl_plugin.so > Jan 10 10:15:06 lda: Debug: Module loaded: > /usr/local/lib/dovecot/lib10_quota_plugin.so > Jan 10 10:15:06 lda: Debug: Module loaded: > /usr/local/lib/dovecot/lib90_sieve_plugin.so > Jan 10 10:15:06 lda: Debug: auth input: tester at domain.eu > home=/home/vmail/domain.eu/tester/ > mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > uid=101 gid=12 quota_rule=*:storage=2097 acl_groups= > Jan 10 10:15:06 lda: Debug: Added userdb setting: > mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > Jan 10 10:15:06 lda: Debug: Added userdb setting: > plugin/quota_rule=*:storage=2097 > Jan 10 10:15:06 lda: Debug: Added userdb setting: plugin/acl_groups= > Jan 10 10:15:06 lda(tester at domain.eu): Fatal: > setresgid(12(mail),12(mail),101(vmail)) failed with euid=101(vmail): > Operation not permitted > Jan 10 10:15:06 master: Error: service(quota-warning): child 85974 > returned error 75 > > dovecot -n > # 2.0.16: /usr/local/etc/dovecot/dovecot.conf > # OS: FreeBSD 8.2-RELEASE-p3 amd64 > auth_master_user_separator = * > auth_mechanisms = plain login cram-md5 > auth_username_format = %Lu > dict { > quotadict = mysql:/usr/local/etc/dovecot/dovecot-dict-sql.conf > } > disable_plaintext_auth = no > first_valid_gid = 12 > first_valid_uid = 101 > log_path = /var/log/dovecot.log > mail_debug = yes > mail_gid = vmail > mail_plugins = " quota acl" > mail_privileged_group = vmail > mail_uid = vmail > managesieve_notify_capability = mailto > managesieve_sieve_capability = fileinto reject envelope > encoded-character vacation subaddress comparator-i;ascii-numeric > relational regex imap4flags copy include variables body enotify > environment mailbox date > namespace { > inbox = yes > location = > prefix = > separator = / > type = private > } > namespace { > list = children > location = > maildir:/home/vmail/public/:CONTROL=/var/mail/vmail/%d/%n/control/public:INDEX=/var/mail/vmail/%d/%n/index/public:LAYOUT=fs > prefix = Public/ > separator = / > subscriptions = yes > type = public > } > namespace { > list = children > location = maildir:%%h/:INDEX=/var/mail/vmail/%d/%u/index/shared/%%u > prefix = Shared/%%u/ > separator = / > subscriptions = no > type = shared > } > passdb { > args = /usr/local/etc/dovecot/dovecot-sql.conf > driver = sql > } > passdb { > args = /usr/local/etc/dovecot/passwd.masterusers > driver = passwd-file > master = yes > pass = yes > } > plugin { > acl = vfile:/usr/local/etc/dovecot/acls > acl_shared_dict = > file:/usr/local/etc/dovecot/shared/shared-mailboxes.db > autocreate = Trash > autocreate2 = Junk > autocreate3 = Sent > autocreate4 = Drafts > autocreate5 = Archives > autosubscribe = Trash > 
autosubscribe2 = Junk > autosubscribe3 = Sent > autosubscribe4 = Drafts > autosubscribe5 = Public/Poczta > autosubscribe6 = Archives > fts = squat > fts_squat = partial=4 full=10 > quota = dict:user::proxy::quotadict > quota_rule2 = Trash:storage=+20%% > quota_rule3 = SPAM:storage=+20%% > quota_warning = storage=80%% quota-warning 80 %u > quota_warning2 = storage=90%% quota-warning 90 %u > quota_warning3 = storage=95%% quota-warning 95 %u > sieve = ~/.dovecot.sieve > sieve_before = /usr/local/etc/dovecot/sieve/default.sieve > sieve_dir = ~/sieve > sieve_global_dir = /usr/local/etc/dovecot/sieve > sieve_global_path = /usr/local/etc/dovecot/sieve/default.sieve > } > protocols = imap pop3 sieve lmtp > service auth { > unix_listener /var/spool/postfix/private/auth { > group = mail > mode = 0660 > user = postfix > } > unix_listener auth-userdb { > group = mail > mode = 0660 > user = vmail > } > } > service dict { > unix_listener dict { > mode = 0600 > user = vmail > } > } > service imap { > executable = imap postlogin > } > service lmtp { > unix_listener /var/spool/postfix/private/dovecot-lmtp { > group = postfix > mode = 0660 > user = postfix > } > } > service managesieve { > drop_priv_before_exec = yes > } > service pop3 { > drop_priv_before_exec = yes > } > service postlogin { > executable = script-login rawlog > } > service quota-warning { > executable = script /usr/local/bin/quota-warning.sh > unix_listener quota-warning { > user = vmail > } > user = vmail > } > ssl = no > userdb { > args = /usr/local/etc/dovecot/dovecot-sql.conf > driver = sql > } > verbose_proctitle = yes > protocol imap { > imap_client_workarounds = delay-newmail tb-extra-mailbox-sep > mail_plugins = " acl imap_acl autocreate fts fts_squat quota > imap_quota" > } > protocol lmtp { > mail_plugins = quota sieve > } > protocol pop3 { > pop3_client_workarounds = outlook-no-nuls oe-ns-eoh > pop3_uidl_format = %08Xu%08Xv > } > protocol lda { > deliver_log_format = msgid=%m: %$ > mail_plugins = sieve acl quota > postmaster_address = postmaster at domain.eu > sendmail_path = /usr/sbin/sendmail > } -- Pozdrawiam Łukasz From a.othman at cairosource.com Mon Jan 23 16:30:32 2012 From: a.othman at cairosource.com (Amira Othman) Date: Mon, 23 Jan 2012 16:30:32 +0200 Subject: [Dovecot] change smtp port Message-ID: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> Hi all I am using postfix 2.8 with dovecot-1.2.17-0_116.el5 on a CentOS 5.7 server. When I changed the smtp port from 25 to 587 in the postfix configuration, my mail server stopped receiving emails. I think it sounds strange and I don't understand why this happens. Can anyone help me? Regards From giles at coochey.net Mon Jan 23 16:33:27 2012 From: giles at coochey.net (Giles Coochey) Date: Mon, 23 Jan 2012 14:33:27 +0000 Subject: [Dovecot] change smtp port In-Reply-To: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> Message-ID: On 2012-01-23 14:30, Amira Othman wrote: > Hi all > > I am using postfix 2.8 with dovecot-1.2.17-0_116.el5 on a CentOS 5.7 > server. > When I changed the smtp port from 25 to 587 in the postfix configuration, my > mail > server stopped receiving emails. I think it sounds strange and I don't > understand why this happens. Can anyone help me? > > > > Regards If this SMTP server is your MX record, then you need to use port 25. Only use the 587 port for authenticated submissions from your own users for outgoing email. -- Message sent via my webmail account.
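To illustrate the split Giles describes: port 25 stays open for MX traffic, and the submission service is enabled alongside it. In Postfix 2.8 the stock master.cf already carries the relevant lines commented out; uncommented, they look roughly like this (the exact restrictions vary per site):

submission inet n       -       n       -       -       smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject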
From Ralf.Hildebrandt at charite.de Mon Jan 23 16:33:43 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Mon, 23 Jan 2012 15:33:43 +0100 Subject: [Dovecot] change smtp port In-Reply-To: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> Message-ID: <20120123143343.GI29761@charite.de> * Amira Othman : > Hi all > > I am using postfix 2.8 with dovecot-1.2.17-0_116.el5 on a CentOS 5.7 server. > When I changed the smtp port from 25 to 587 in the postfix configuration, my mail > server stopped receiving emails. That's normal. > I think it sounds strange and I don't understand why this happens. Can > anyone help me? Mail from other systems comes in via port 25. Once you change the port, nobody can send mail to your server. Easy, no? -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From a.othman at cairosource.com Mon Jan 23 16:38:06 2012 From: a.othman at cairosource.com (Amira Othman) Date: Mon, 23 Jan 2012 16:38:06 +0200 Subject: [Dovecot] change smtp port In-Reply-To: <20120123143343.GI29761@charite.de> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> Message-ID: <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> And there is no way to receive incoming emails not on port 25? > Hi all > > I am using postfix 2.8 with dovecot-1.2.17-0_116.el5 on a CentOS 5.7 server. > When I changed the smtp port from 25 to 587 in the postfix configuration, my mail > server stopped receiving emails. That's normal. > I think it sounds strange and I don't understand why this happens. Can > anyone help me? Mail from other systems comes in via port 25. Once you change the port, nobody can send mail to your server. Easy, no? -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From giles at coochey.net Mon Jan 23 16:41:52 2012 From: giles at coochey.net (Giles Coochey) Date: Mon, 23 Jan 2012 14:41:52 +0000 Subject: [Dovecot] change smtp port In-Reply-To: <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> Message-ID: On 2012-01-23 14:38, Amira Othman wrote: > And there is no way to receive incoming emails not on port 25? > > No. http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol From CMarcus at Media-Brokers.com Mon Jan 23 16:50:09 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Mon, 23 Jan 2012 09:50:09 -0500 Subject: [Dovecot] change smtp port In-Reply-To: References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> Message-ID: <4F1D73A1.2010504@Media-Brokers.com> On 2012-01-23 9:41 AM, Giles Coochey wrote: > On 2012-01-23 14:38, Amira Othman wrote: >> And there is no way to receive incoming emails not on port 25? > No. > http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol Well, not precisely correct...
You *could* use a router that does port translation (translates incoming port 25 connections to port 587), but that would be extremely ugly and kludgy and I certainly don't recommend it. Amira - what you need to do is re-enable port 25, and then enable the submission service (port 587) at the same time (just uncomment the relevant lines in master.cf), and require your users to use the submission port for relaying their mail. -- Best regards, Charles From giles at coochey.net Mon Jan 23 17:01:57 2012 From: giles at coochey.net (Giles Coochey) Date: Mon, 23 Jan 2012 15:01:57 +0000 Subject: [Dovecot] change smtp port In-Reply-To: <4F1D73A1.2010504@Media-Brokers.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> <4F1D73A1.2010504@Media-Brokers.com> Message-ID: On 2012-01-23 14:50, Charles Marcus wrote: > On 2012-01-23 9:41 AM, Giles Coochey wrote: >> On 2012-01-23 14:38, Amira Othman wrote: >>> And there is no way to receive incoming emails not on port 25? > >> No. >> http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol > > Well, not precisely correct... > Now true, you can do anything you like internally, but if you want to listen and speak with the rest of the Internet, you should be RFC compliant. RFC821 Connection Establishment The SMTP transmission channel is a TCP connection established between the sender process port U and the receiver process port L. This single full duplex connection is used as the transmission channel. This protocol is assigned the service port 25 (31 octal), that is L=25. RFC5321 4.5.4.2. Receiving Strategy The SMTP server SHOULD attempt to keep a pending listen on the SMTP port (specified by IANA as port 25) at all times. This requires the support of multiple incoming TCP connections for SMTP. Some limit MAY be imposed, but servers that cannot handle more than one SMTP transaction at a time are not in conformance with the intent of this specification. As discussed above, when the SMTP server receives mail from a particular host address, it could activate its own SMTP queuing mechanisms to retry any mail pending for that host address. From rasca at miamammausalinux.org Mon Jan 23 17:04:17 2012 From: rasca at miamammausalinux.org (RaSca) Date: Mon, 23 Jan 2012 16:04:17 +0100 Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2) SOLVED In-Reply-To: <1326898601.11500.56.camel@innu> References: <4F13FF00.1050108@miamammausalinux.org> <1326898601.11500.56.camel@innu> Message-ID: <4F1D76F1.9070106@miamammausalinux.org> Il giorno Mer 18 Gen 2012 15:56:41 CET, Timo Sirainen ha scritto: [...] > You're using SQL only for passdb lookup. [...] > user_query isn't used, because you aren't using userdb sql. Hi Timo, thank you, I confirm everything you wrote. In order to help someone with the same problem: when using virtual profiles in mysql, both passdb sql (needed to verify authentication) and userdb sql (needed to look up the user information) must be declared. For every value that has no user-specific override it is possible to declare a global value in the plugin area (and there must also be a "quota = maildir:User quota" declaration).
In the end, with this configuration the quota plugin works (the sql file remains the same I first posted): protocols = imap pop3 disable_plaintext_auth = no log_timestamp = "%Y-%m-%d %H:%M:%S " mail_location = maildir:/mail/mailboxes/%d/%u mail_privileged_group = mail #mail_debug = yes #auth_debug = yes mail_nfs_storage = yes mmap_disable=yes fsync_disable=no mail_nfs_index = yes protocol imap { mail_plugins = quota imap_quota } protocol pop3 { pop3_uidl_format = %08Xu%08Xv mail_plugins = quota } protocol managesieve { } protocol lda { auth_socket_path = /var/run/dovecot/auth-master postmaster_address = postmaster@ mail_plugins = sieve quota quota_full_tempfail = no } auth default { mechanisms = plain userdb sql { args = /etc/dovecot/dovecot-sql.conf } passdb sql { args = /etc/dovecot/dovecot-sql.conf } user = root socket listen { master { path = /var/run/dovecot/auth-master mode = 0600 user = vmail } client { path = /var/spool/postfix/private/auth mode = 0660 user = postfix group = postfix } } } plugin { quota = maildir:User quota quota2 = fs:Disk quota quota_rule = *:storage=1G quota_warning = storage=95%% /mail/scripts/quota-warning.sh 95 quota_warning2 = storage=80%% /mail/scripts/quota-warning.sh 80 sieve_global_path = /mail/sieve/globalsieverc } -- RaSca Mia Mamma Usa Linux: Niente ? impossibile da capire, se lo spieghi bene! rasca at miamammausalinux.org http://www.miamammausalinux.org From noeldude at gmail.com Mon Jan 23 18:14:11 2012 From: noeldude at gmail.com (Noel) Date: Mon, 23 Jan 2012 10:14:11 -0600 Subject: [Dovecot] change smtp port In-Reply-To: <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> Message-ID: <4F1D8753.9040900@gmail.com> On 1/23/2012 8:38 AM, Amira Othman wrote: > And there is no way to receive incoming emails not on port 25 ? > You can't randomly change the port you receive mail on because external MTAs have no way to find what port you're using. They will *always* use port 25 and nothing else. If your problem is that your Internet Service Provider is blocking port 25, you can contact them. Some ISPs will unblock port 25 on request, or might even have an online form you can fill out. If you can't get help from the ISP, you need a remailer service -- some outside proxy that accepts the mail for you and forwards connections to some different port on your computer. I don't know of any free services that do this; dyndns and others offer this for a fee, sometimes combined with spam/virus filtering. -- Noel Jones From moseleymark at gmail.com Mon Jan 23 21:13:56 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 23 Jan 2012 11:13:56 -0800 Subject: [Dovecot] Director questions Message-ID: In playing with dovecot director, a couple of things came up, one related to the other: 1) Is there an effective maximum of directors that shouldn't be exceeded? That is, even if technically possible, that I shouldn't go over? Since we're 100% NFS, we've scaled servers horizontally quite a bit. At this point, we've got servers operating as MTAs, servers doing IMAP/POP directly, and servers separately doing IMAP/POP as webmail backends. Works just dandy for our existing setup. But to director-ize all of them, I'm looking at a director ring of maybe 75-85 servers, which is a bit unnerving, since I don't know if the ring will be able to keep up. Is there a scale where it'll bog down? 
2) If it is too big, is there any way that I might be missing to use remote directors? It looks as if directors have to live locally on the same box as the proxy. For my MTAs, where they're not customer-facing, I'm much less worried about the latency it'd introduce. Likewise with my webmail servers, the extra latency would probably be trivial compared to the rest of the request--but then again, might not. But for direct IMAP, the latency would likely be more noticeable. So ideally I'd be able to make my IMAP servers (well, the frontside of the proxy, that is) be the director pool, while leaving my MTAs to talk to the director remotely, and possibly my webmail servers remote too. Is that a remote possibility? From tss at iki.fi Mon Jan 23 21:37:02 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 23 Jan 2012 21:37:02 +0200 Subject: [Dovecot] Director questions In-Reply-To: References: Message-ID: On 23.1.2012, at 21.13, Mark Moseley wrote: > In playing with dovecot director, a couple of things came up, one > related to the other: > > 1) Is there an effective maximum of directors that shouldn't be > exceeded? That is, even if technically possible, that I shouldn't go > over? There's no definite number, but each director adds some extra traffic to the network and sometimes extra latency to lookups. So you should have only as many as you need. > Since we're 100% NFS, we've scaled servers horizontally quite a > bit. At this point, we've got servers operating as MTAs, servers doing > IMAP/POP directly, and servers separately doing IMAP/POP as webmail > backends. Works just dandy for our existing setup. But to director-ize > all of them, I'm looking at a director ring of maybe 75-85 servers, > which is a bit unnerving, since I don't know if the ring will be able > to keep up. Is there a scale where it'll bog down? That's definitely too many directors. So far the largest installation I know of has 4 directors. Another one will maybe have 6-10 to handle 2Gbps traffic. > 2) If it is too big, is there any way that I might be missing to use > remote directors? It looks as if directors have to live locally on the > same box as the proxy. For my MTAs, where they're not customer-facing, > I'm much less worried about the latency it'd introduce. Likewise with > my webmail servers, the extra latency would probably be trivial > compared to the rest of the request--but then again, might not. But > for direct IMAP, the latency would likely be more noticeable. So ideally I'd > be able to make my IMAP servers (well, the frontside of the proxy, > that is) be the director pool, while leaving my MTAs to talk to the > director remotely, and possibly my webmail servers remote too. Is that > a remote possibility? I guess that could be a possibility, but .. Why do you need so many proxies at all? Couldn't all of your traffic go through just a few dedicated proxy/director servers? From harm at vevida.nl Mon Jan 23 22:52:34 2012 From: harm at vevida.nl (Harm Weites) Date: Mon, 23 Jan 2012 21:52:34 +0100 Subject: [Dovecot] LMTP ignoring tcpwrappers In-Reply-To: References: <1327012212.2003.32.camel@manbearpig.lan.kantoor.vevida.net> Message-ID: <1327351954.1940.15.camel@manbearpig> Timo Sirainen schreef op vr 20-01-2012 om 21:34 [+0200]: > On 20.1.2012, at 0.30, Harm Weites wrote: > > > we want to use dovecot LMTP for efficient mail delivery from our MX > > servers (running postfix 2.8) to our storage servers (dovecot 2.0.17). > > However, the one problem we see is the lack of access control when using > > LMTP.
It appears that every client in our network who has access to the > > storage machines can drop a message in a Maildir of any user on that > > storage server. > > Is it a real problem? Can't they just as easily drop messages to other users' maildirs simply by sending the mail via SMTP? > This is true, though in that case messages are not passing our content scanners, which is something we do not want. Hence the thought of configuring tcpwrappers, as can be done with the other two protocols, to only allow access to LMTP from our MX servers. > > To prevent this behaviour it would be nice to use > > libwrap, just as it can be used for POP3/IMAP protocols. > > This, however, seems to be impossible using the configuration as > > mentioned on the dovecot wiki: > > > > login_access_sockets = tcpwrap > > > > This seems to imply it only works for a login, and LMTP does not use > > that. The above works perfectly when trying to block access to IMAP or > > POP3 in /etc/hosts.deny, though a setting for LMTP is simply ignored. > > Right. I'm not sure if I'd even want to add such a feature to LMTP. It doesn't really feel like it belongs there. > Would you rather implement something completely different to cater for access control, or just leave things as they are now? > > Is there a configuration setting needed for this to work for LMTP, or is > > it simply not possible (yet) and does libwrap support for LMTP require > > a patch? > > Not possible in Dovecot currently. You could use firewall rules. Yes indeed, using some firewall rules and perhaps an extra vlan sounds ok, though I would like to use something a little less low-level. From moseleymark at gmail.com Mon Jan 23 23:44:26 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 23 Jan 2012 13:44:26 -0800 Subject: [Dovecot] Director questions In-Reply-To: References: Message-ID: On Mon, Jan 23, 2012 at 11:37 AM, Timo Sirainen wrote: > On 23.1.2012, at 21.13, Mark Moseley wrote: > >> In playing with dovecot director, a couple of things came up, one >> related to the other: >> >> 1) Is there an effective maximum of directors that shouldn't be >> exceeded? That is, even if technically possible, that I shouldn't go >> over? > > There's no definite number, but each director adds some extra traffic to the network and sometimes extra latency to lookups. So you should have only as many as you need. Ok. >> Since we're 100% NFS, we've scaled servers horizontally quite a >> bit. At this point, we've got servers operating as MTAs, servers doing >> IMAP/POP directly, and servers separately doing IMAP/POP as webmail >> backends. Works just dandy for our existing setup. But to director-ize >> all of them, I'm looking at a director ring of maybe 75-85 servers, >> which is a bit unnerving, since I don't know if the ring will be able >> to keep up. Is there a scale where it'll bog down? > > That's definitely too many directors. So far the largest installation I know of has 4 directors. Another one will maybe have 6-10 to handle 2Gbps traffic. Ok >> 2) If it is too big, is there any way that I might be missing to use >> remote directors? It looks as if directors have to live locally on the >> same box as the proxy. For my MTAs, where they're not customer-facing, >> I'm much less worried about the latency it'd introduce. Likewise with >> my webmail servers, the extra latency would probably be trivial >> compared to the rest of the request--but then again, might not. But >> for direct IMAP, the latency would likely be more noticeable.
So ideally I'd >> be able to make my IMAP servers (well, the frontside of the proxy, >> that is) be the director pool, while leaving my MTAs to talk to the >> director remotely, and possibly my webmail servers remote too. Is that >> a remote possibility? > > I guess that could be a possibility, but .. Why do you need so many proxies at all? Couldn't all of your traffic go through just a few dedicated proxy/director servers? I'm probably conceptualizing it wrongly. In our system, since it's NFS, we have everything pooled. For a given mailbox, any number of MTA (Exim) boxes could actually do the delivery, any number of IMAP servers can do IMAP for that mailbox, and any number of webmail servers could do IMAP too for that mailbox. So our horizontal scaling, server-wise, is just adding more servers to pools. This is on the order of a few million mailboxes, per datacenter. It's less messy than it probably sounds :) I was assuming that at any spot where a server touched the actual mailbox, it would need to instead proxy to a set of backend servers. Is that accurate or way off? If it is accurate, it sounds like we'd need to shuffle things up a bit. From janfrode at tanso.net Mon Jan 23 23:48:00 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 23 Jan 2012 22:48:00 +0100 Subject: [Dovecot] make imap search less verbose Message-ID: <20120123214800.GA3112@dibs.tanso.net> We have an imap-client (SOGo) that doesn't handle this status output while searching: * OK Searched 76% of the mailbox, ETA 0:50 Is there any way to disable this output on the dovecot-side? -jf From tss at iki.fi Mon Jan 23 23:56:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 23 Jan 2012 23:56:49 +0200 Subject: [Dovecot] make imap search less verbose In-Reply-To: <20120123214800.GA3112@dibs.tanso.net> References: <20120123214800.GA3112@dibs.tanso.net> Message-ID: On 23.1.2012, at 23.48, Jan-Frode Myklebust wrote: > We have an imap-client (SOGo) that doesn't handle this status output while > searching: > > * OK Searched 76% of the mailbox, ETA 0:50 > > Is there any way to disable this output on the dovecot-side? No way to disable it without modifying code. I think SOGo should fix it anyway.. From janfrode at tanso.net Tue Jan 24 00:19:05 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 23 Jan 2012 23:19:05 +0100 Subject: [Dovecot] make imap search less verbose In-Reply-To: References: <20120123214800.GA3112@dibs.tanso.net> Message-ID: <20120123221905.GA3717@dibs.tanso.net> On Mon, Jan 23, 2012 at 11:56:49PM +0200, Timo Sirainen wrote: > > No way to disable it without modifying code. I think SOGo should fix it anyway.. > Ok, thanks. SOGo will get fixed. I was just looking for a quick workaround while we wait for updated sogo. -jf From tss at iki.fi Tue Jan 24 01:19:47 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 24 Jan 2012 01:19:47 +0200 Subject: [Dovecot] make imap search less verbose In-Reply-To: <20120123221905.GA3717@dibs.tanso.net> References: <20120123214800.GA3112@dibs.tanso.net> <20120123221905.GA3717@dibs.tanso.net> Message-ID: <6F0CE9DA-1344-4299-AC6C-616B22F54609@iki.fi> On 24.1.2012, at 0.19, Jan-Frode Myklebust wrote: > On Mon, Jan 23, 2012 at 11:56:49PM +0200, Timo Sirainen wrote: >> >> No way to disable it without modifying code. I think SOGo should fix it anyway.. >> > > Ok, thanks. SOGo will get fixed. I was just looking for a quick > workaround while we wait for updated sogo. 
With Dovecot you can do: diff -r 759e879c4c42 src/lib-storage/index/index-search.c --- a/src/lib-storage/index/index-search.c Fri Jan 20 18:59:16 2012 +0200 +++ b/src/lib-storage/index/index-search.c Tue Jan 24 01:19:18 2012 +0200 @@ -1200,9 +1200,9 @@ text = t_strdup_printf("Searched %d%% of the mailbox, " "ETA %d:%02d", (int)percentage, secs/60, secs%60); - box->storage->callbacks. + /*box->storage->callbacks. notify_ok(box, text, - box->storage->callback_context); + box->storage->callback_context);*/ } T_END; } ctx->last_notify = ioloop_timeval; From tss at iki.fi Tue Jan 24 03:58:23 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 24 Jan 2012 03:58:23 +0200 Subject: [Dovecot] dbox + SIS + zlib fixed Message-ID: I think a few people have complained about this combination being somewhat broken, resulting in bogus "cached message size wrong" errors sometimes. This fixes it: http://hg.dovecot.org/dovecot-2.0/rev/9b2931607063 From lists at necoro.eu Tue Jan 24 11:22:48 2012 From: lists at necoro.eu (=?ISO-8859-15?Q?Ren=E9_Neumann?=) Date: Tue, 24 Jan 2012 10:22:48 +0100 Subject: [Dovecot] Capabilities of imapc Message-ID: <4F1E7868.2060102@necoro.eu> Hi *, I can't find any decent information about the capabilities of imapc in the planned future dovecot releases. As I think about using imapc, I'll just give the two use-cases I see for me. Will this be possible with imapc? 1) One (or more) folders in a mailbox which are proxied? 2) Proxy a whole mailbox _and use the folders in it as shared folders_. That means account X on Server 1 (the dovecot box) is proxied via imapc to Server 2 (some other server). The folders of this account on Server 1 are then shared with account Y. When account Y uses these folders they are always up-to-date (so no action of account X is required). The second use-case is just some (ugly) workaround in case the first one is not possible. Thanks, René From tss at iki.fi Tue Jan 24 11:31:27 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 24 Jan 2012 11:31:27 +0200 Subject: [Dovecot] Capabilities of imapc In-Reply-To: <4F1E7868.2060102@necoro.eu> References: <4F1E7868.2060102@necoro.eu> Message-ID: <89981E22-65F4-415A-995C-E460093BE21B@iki.fi> On 24.1.2012, at 11.22, René Neumann wrote: > I can't find any decent information about the capabilities of imapc in > the planned future dovecot releases. Mainly it's about adding support for more IMAP commands (e.g. SEARCH), so that it doesn't necessarily have to be used as a rather dummy storage. (Although it always has to be possible to be used as a dummy storage, like it is now.) > As I think about using imapc, I'll just give the two use-cases I see for > me. Will this be possible with imapc? > > 1) One (or more) folders in a mailbox which are proxied? Currently because imapc_* settings are global, you can't have more than one imapc destination. This will be fixed at some point. Otherwise this works the same way as other storage backends: You create namespace(s) for the folders you want to proxy. > 2) Proxy a whole mailbox _and use the folders in it as shared folders_. > That means account X on Server 1 (the dovecot box) is proxied via imapc > to Server 2 (some other server). The folders of this account on Server 1 > are then shared with account Y. When account Y uses these folders they > are always up-to-date (so no action of account X is required). This should be possible, yes.
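
A minimal sketch of what such a proxying namespace could look like, assuming the v2.1-style global imapc_* settings and a single fixed remote account (host, credentials, prefix and the index path below are illustrative, not confirmed syntax):

# remote server the imapc backend logs in to (illustrative values)
imapc_host = imap.remote.example.com
imapc_port = 143
imapc_user = %u
imapc_password = secret

# expose the remote folders under a separate namespace
namespace {
  prefix = Remote.
  separator = .
  location = imapc:~/imapc-cache
  list = yes
}

The path after "imapc:" only holds the local index/cache files Dovecot keeps for the proxied folders; the messages themselves stay on the remote server.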
From lists at necoro.eu Tue Jan 24 12:15:48 2012 From: lists at necoro.eu (=?ISO-8859-1?Q?Ren=E9_Neumann?=) Date: Tue, 24 Jan 2012 11:15:48 +0100 Subject: [Dovecot] Capabilities of imapc In-Reply-To: <89981E22-65F4-415A-995C-E460093BE21B@iki.fi> References: <4F1E7868.2060102@necoro.eu> <89981E22-65F4-415A-995C-E460093BE21B@iki.fi> Message-ID: <4F1E84D4.20102@necoro.eu> On 24.01.2012 10:31, Timo Sirainen wrote: >> As I think about using imapc, I'll just give the two use-cases I see for >> me. Will this be possible with imapc? >> >> 1) One (or more) folders in a mailbox which are proxied? > > Currently because imapc_* settings are global, you can't have more than one imapc destination. This will be fixed at some point. Otherwise this works the same way as other storage backends: You create namespace(s) for the folders you want to proxy. Ah - this sounds good. I'll try as soon as dovecot-2.1 is released (because 2.0.17 does not include imapc, right?) Thanks, René From CMarcus at Media-Brokers.com Tue Jan 24 13:23:14 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 24 Jan 2012 06:23:14 -0500 Subject: [Dovecot] change smtp port In-Reply-To: <4F1D8753.9040900@gmail.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> Message-ID: <4F1E94A2.6050409@Media-Brokers.com> On 2012-01-23 11:14 AM, Noel wrote: > If your problem is that your Internet Service Provider is blocking > port 25, you can contact them. Some ISPs will unblock port 25 on > request, or might even have an online form you can fill out. The OP specifically said that *he* had changed the port from 25 to 587... obviously he doesn't understand how smtp works... -- Best regards, Charles From joshua at hybrid.pl Tue Jan 24 13:51:28 2012 From: joshua at hybrid.pl (Jacek Osiecki) Date: Tue, 24 Jan 2012 12:51:28 +0100 (CET) Subject: [Dovecot] change smtp port In-Reply-To: <4F1E94A2.6050409@Media-Brokers.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> Message-ID: On Tue, 24 Jan 2012, Charles Marcus wrote: > On 2012-01-23 11:14 AM, Noel wrote: >> If your problem is that your Internet Service Provider is blocking >> port 25, you can contact them. Some ISPs will unblock port 25 on >> request, or might even have an online form you can fill out. > The OP specifically said that *he* had changed the port from 25 to 587... > obviously he doesn't understand how smtp works... Most probably he wanted to enable his users to send emails via his mail server using port 587, because some may have blocked access to port 25. Proper solution is to open additionally port 587 and require users to authenticate in order to send mails through the server. If it is too complicated in postfix, admin can simply map port 587 to 25 - most probably that would work well. Best regards, -- Jacek Osiecki joshua at ceti.pl GG:3828944 I don't want something I need. I want something I want.
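
A minimal sketch of the conventional way to do what Jacek suggests, i.e. enabling Postfix's submission service with authentication required rather than remapping ports (the smtpd options are standard Postfix, but the restriction list is illustrative and site-specific):

# master.cf: submission service on port 587, auth mandatory
submission inet n       -       n       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject

This keeps port 25 for server-to-server delivery while authenticated users submit on 587, instead of routing all 587 traffic into the port 25 service.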
From CMarcus at Media-Brokers.com Tue Jan 24 14:18:46 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 24 Jan 2012 07:18:46 -0500 Subject: [Dovecot] change smtp port In-Reply-To: References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> Message-ID: <4F1EA1A6.2080007@Media-Brokers.com> On 2012-01-24 6:51 AM, Jacek Osiecki wrote: > On Tue, 24 Jan 2012, Charles Marcus wrote: >> On 2012-01-23 11:14 AM, Noel wrote: >>> If your problem is that your Internet Service Provider is blocking >>> port 25, you can contact them. Some ISPs will unblock port 25 on >>> request, or might even have an online form you can fill out. >> The OP specifically said that *he* had changed the port from 25 to >> 587... obviously he doesn't understand how smtp works... > Most probably he wanted to enable his users to send emails via his mail > server using port 587, because some may have blocked access to port 25. Which obviously means he has not even a basic understanding of how smtp works. > Proper solution is to open additionally port 587 and require users to > authenticate in order to send mails through the server. If it is too > complicated in postfix, Which is precisely why I (and a few others) gave him those instructions... > admin can simply map port 587 to 25 - most probably that would work well. Of course it will work... but it is most definitely *not* recommended, and not only that, will totally defeat achieving the goal of using the submission port (because *all* port 587 traffic would be routed to port 25)... I only mentioned that this could be done in answer to someone who said it couldn't... -- Best regards, Charles From a.othman at cairosource.com Tue Jan 24 14:51:59 2012 From: a.othman at cairosource.com (Amira Othman) Date: Tue, 24 Jan 2012 14:51:59 +0200 Subject: [Dovecot] change smtp port In-Reply-To: <4F1EA1A6.2080007@Media-Brokers.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> <4F1EA1A6.2080007@Media-Brokers.com> Message-ID: <001801ccda96$f843e350$e8cba9f0$@othman@cairosource.com> Thanks for reply The problem that ISP for some reason port 25 is not stable and refuse connection for several times so I tried to change port to 587 instead of 25 to keep sending emails. And I though that I can stop using port 25 as it's not always working from ISP -----Original Message----- From: dovecot-bounces at dovecot.org [mailto:dovecot-bounces at dovecot.org] On Behalf Of Charles Marcus Sent: Tuesday, January 24, 2012 2:19 PM To: dovecot at dovecot.org Subject: Re: [Dovecot] change smtp port On 2012-01-24 6:51 AM, Jacek Osiecki wrote: > On Tue, 24 Jan 2012, Charles Marcus wrote: >> On 2012-01-23 11:14 AM, Noel wrote: >>> If your problem is that your Internet Service Provider is blocking >>> port 25, you can contact them. Some ISPs will unblock port 25 on >>> request, or might even have an online form you can fill out. >> The OP specifically said that *he* had changed the port from 25 to >> 587... obviously he doesn't understand how smtp works... > Most probably he wanted to enable his users to send emails via his mail > server using port 587, because some may have blocked access to port 25. 
Which obviously means he has not even a basic understanding of how smtp works. > Proper solution is to open additionally port 587 and require users to > authenticate in order to send mails through the server. If it is too > complicated in postfix, Which is precisely why I (and a few others) gave him those instructions... > admin can simply map port 587 to 25 - most probably that would work well. Of course it will work... but it is most definitely *not* recommended, and not only that, will totally defeat achieving the goal of using the submission port (because *all* port 587 traffic would be routed to port 25)... I only mentioned that this could be done in answer to someone who said it couldn't... -- Best regards, Charles From noeldude at gmail.com Tue Jan 24 15:39:43 2012 From: noeldude at gmail.com (Noel) Date: Tue, 24 Jan 2012 07:39:43 -0600 Subject: [Dovecot] change smtp port In-Reply-To: <4F1E94A2.6050409@Media-Brokers.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> Message-ID: <4F1EB49F.4090300@gmail.com> On 1/24/2012 5:23 AM, Charles Marcus wrote: > On 2012-01-23 11:14 AM, Noel wrote: >> If your problem is that your Internet Service Provider is blocking >> port 25, you can contact them. Some ISPs will unblock port 25 on >> request, or might even have an online form you can fill out. > > The OP specifically said that *he* had changed the port from 25 to > 587... ... because port 25 didn't work. > obviously he doesn't understand how smtp works... > and we can assume he's here to learn, not to get flamed. Anyway, this is OT for dovecot. Over and out. -- Noel Jones From devurandom at gmx.net Tue Jan 24 16:43:22 2012 From: devurandom at gmx.net (Dennis Schridde) Date: Tue, 24 Jan 2012 15:43:22 +0100 Subject: [Dovecot] Trying to get metadata plugin working In-Reply-To: <201201161651.46232.thomas@koch.ro> References: <201201161651.46232.thomas@koch.ro> Message-ID: <2007528.Wh0gVP3DHS@samson> Hi Thomas and List! Am Montag, 16. Januar 2012, 16:51:45 schrieb Thomas Koch: > dict: Error: file dict commit: file_dotlock_open(~/Maildir/shared-metadata) > failed: No such file or directory The dovecot-metadata is still a work in progress, despite my earlier message reading differently. I assumed because Akonadi began to work (my telnet tests were already successful since a while), that the dovecot plugin would also work, but noticed later that everything was a coincidence. Anyway, my config is: plugin { metadata_dict = proxy::metadata } dict { metadata = file:/var/lib/dovecot/shared-metadata } This appears to work for me - I think the key is the proxy::. --Dennis -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. 
URL: From CMarcus at Media-Brokers.com Tue Jan 24 16:58:29 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 24 Jan 2012 09:58:29 -0500 Subject: [Dovecot] change smtp port In-Reply-To: <001801ccda96$f843e350$e8cba9f0$@othman@cairosource.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> <4F1EA1A6.2080007@Media-Brokers.com> <001801ccda96$f843e350$e8cba9f0$@othman@cairosource.com> Message-ID: <4F1EC715.8020700@Media-Brokers.com> On 2012-01-24 7:51 AM, Amira Othman wrote: > Thanks for reply > > The problem that ISP for some reason port 25 is not stable and refuse > connection for several times so I tried to change port to 587 instead > of 25 to keep sending emails. And I though that I can stop using port > 25 as it's not always working from ISP As I said, you obviously do not understand how smtp works. This is made obvious by your questions, and failure to understand that port 25 is *the* port for receiving email on the public internet. Period. If your main problem with port 25 is *sending* (relaying outbound) mails, then you will need to take this up with your ISP. If they are unable or unwilling to address the problem, one option would be to setup your system to relay through some other smtp relay service on the internet using port 587 as you apparently read somwehere, but you don't do this by changing the main smtpd daemon to port 587, because as you discovered, you won't be able to receive *any* emails like this. That said, I fail to see any relevance to dovecot in this thread... -- Best regards, Charles From CMarcus at Media-Brokers.com Tue Jan 24 17:07:04 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 24 Jan 2012 10:07:04 -0500 Subject: [Dovecot] change smtp port In-Reply-To: <4F1EB49F.4090300@gmail.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> <4F1EB49F.4090300@gmail.com> Message-ID: <4F1EC918.2060003@Media-Brokers.com> On 2012-01-24 8:39 AM, Noel wrote: > On 1/24/2012 5:23 AM, Charles Marcus wrote: >> The OP specifically said that *he* had changed the port from 25 to >> 587... > ... because port 25 didn't work. For *sending*... And his complaint was that changing the port for the main smtpd process caused him to not be able to *receive* email... >> obviously he doesn't understand how smtp works... > and we can assume he's here to learn, not to get flamed. What!? Please point out how simply pointing out the obvious - that someone doesn't understand something - is the same as *flaming* them... Please... > Anyway, this is OT for dovecot. Over and out. Agreed on that one... nip/tuck From divizio at exentrica.it Tue Jan 24 17:58:34 2012 From: divizio at exentrica.it (Luca Di Vizio) Date: Tue, 24 Jan 2012 16:58:34 +0100 Subject: [Dovecot] [PATCH] autoconf small fix Message-ID: Hi Timo, the attached patch seems to solve a warning from autoconf: libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.in and libtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree. Best regards, Luca -------------- next part -------------- A non-text attachment was scrubbed... 
Name: autoconf.patch Type: text/x-patch Size: 279 bytes Desc: not available URL: From support at palatineweb.com Tue Jan 24 18:35:10 2012 From: support at palatineweb.com (Palatine Web Support) Date: Tue, 24 Jan 2012 16:35:10 +0000 Subject: [Dovecot] =?utf-8?q?Imap_Quota_Exceeded_-_But_Still_Receiving_Ema?= =?utf-8?q?ils=3F?= Message-ID: Hello I am trying to setup dovecot maildir quota, but even though it seems to be working fine, I am still receiving emails into my inbox even though I have exceeded my quota. Here is my dovecot config: plugin { quota = maildir:User Quota quota_rule2 = Trash:storage=+100M } And my SQL config file for Dovecot (dovecot-sql.conf): user_query = SELECT '/var/vmail/%d/%n' as home, 'maildir:/var/vmail/%d/%n' as mail, 150 AS uid, 8 AS gid, CONCAT('*:storage=', quota) AS quota_rule FROM mailbox WHERE username = '%u' AND active = '1' CONCAT('*:storage=', quota) AS quota_rule quota_rule = *:storage=3M So it picks up my set quota of 3MB but dovecot is not rejecting emails if I am over my quota. Can anyone help? Thanks. Carl From lists at wildgooses.com Wed Jan 25 00:06:55 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 24 Jan 2012 22:06:55 +0000 Subject: [Dovecot] Password auth scheme question with mysql Message-ID: <4F1F2B7F.3070005@wildgooses.com> Hi, I have a current auth database using mysql with a "password" column in plain text. The config has "default_pass_scheme = PLAIN" specified In preparation for a more adaptable system I changed a password entry from "asdf" to "{PLAIN}asdf", but now auth fails. Works fine if I change it back to just "asdf". (I don't believe it's a caching problem) What might I be missing? I was under the impression that the password column can include a {scheme} prefix to indicate the password scheme (presumably this also means a password cannot start with a "{"?). Is this still true when using mysql and default_pass_scheme ? Thanks for any hints? Ed W From lists at wildgooses.com Wed Jan 25 00:51:31 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 24 Jan 2012 22:51:31 +0000 Subject: [Dovecot] Password auth scheme question with mysql In-Reply-To: <4F1F2B7F.3070005@wildgooses.com> References: <4F1F2B7F.3070005@wildgooses.com> Message-ID: <4F1F35F3.9070303@wildgooses.com> On 24/01/2012 22:06, Ed W wrote: > Hi, I have a current auth database using mysql with a "password" > column in plain text. The config has "default_pass_scheme = PLAIN" > specified > > In preparation for a more adaptable system I changed a password entry > from "asdf" to "{PLAIN}asdf", but now auth fails. Works fine if I > change it back to just "asdf". (I don't believe it's a caching problem) > > What might I be missing? I was under the impression that the password > column can include a {scheme} prefix to indicate the password scheme > (presumably this also means a password cannot start with a "{"?). Is > this still true when using mysql and default_pass_scheme ? 
Hmm, so I try: # doveadm pw -p asdf -s sha256 {SHA256}8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts= I enter this hash into my database column, then enabling debug logging I see this in the logs: Jan 24 22:40:44 mail1 dovecot: auth: Debug: cache(demo at mailasail.com,1.2.24.129): SHA256({PLAIN}asdf) != '8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts=' Jan 24 22:40:44 mail1 dovecot: auth-worker: Debug: sql(demo at blah.com,1.2.24.129): query: SELECT NULLIF(mail_host, '1.2.24.129') as proxy, NULLIF(mail_host, '1.2.24.129') as host, email as user, password, password as pass, home userdb_home, concat(home, '/', maildir) as userdb_mail, 200 as userdb_uid, 200 as userdb_gid FROM users WHERE email = if('blah.com'<>'','demo at blah.com','demo at blah.com@mailasail.com') and flag_active=1 Jan 24 22:40:44 mail1 dovecot: auth-worker: sql(demo at blah.com,1.2.24.129): Password mismatch (given password: {PLAIN}asdf) Jan 24 22:40:44 mail1 dovecot: auth-worker: Error: md5_verify(demo at mailasail.com): Not a valid MD5-CRYPT or PLAIN-MD5 password Jan 24 22:40:44 mail1 dovecot: auth-worker: Error: ssha256_verify(demo at mailasail.com): SSHA256 password too short Jan 24 22:40:44 mail1 dovecot: auth-worker: Error: ssha512_verify(demo at mailasail.com): SSHA512 password too short Jan 24 22:40:44 mail1 dovecot: auth-worker: Warning: Invalid OTP data in passdb Jan 24 22:40:44 mail1 dovecot: auth-worker: Warning: Invalid OTP data in passdb Jan 24 22:40:44 mail1 dovecot: auth-worker: Debug: sql(demo at blah.com,1.2.24.129): SHA256({PLAIN}asdf) != '8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts=' Forgot to say. this is with dovecot 2.0.17 Thanks for any pointers Ed W From lists at wildgooses.com Wed Jan 25 01:09:53 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 24 Jan 2012 23:09:53 +0000 Subject: [Dovecot] Password auth scheme question with mysql In-Reply-To: <4F1F35F3.9070303@wildgooses.com> References: <4F1F2B7F.3070005@wildgooses.com> <4F1F35F3.9070303@wildgooses.com> Message-ID: <4F1F3A41.8020206@wildgooses.com> On 24/01/2012 22:51, Ed W wrote: > Hmm, so I try: > > # doveadm pw -p asdf -s sha256 > {SHA256}8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts= > > I enter this hash into my database column, then enabling debug logging > I see this in the logs: > .. > Jan 24 22:40:44 mail1 dovecot: auth-worker: Debug: > sql(demo at blah.com,1.2.24.129): SHA256({PLAIN}asdf) != > '8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts=' Gah. Ok, so I discovered the "doveadm auth" command: # doveadm auth -x service=pop3 demo asdf passdb: demo auth succeeded extra fields: user=demo at blah.com proxy host=1.2.24.129 pass={SHA256}8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts= So why do I get an auth failed and the log files I showed in my last email when I use "telnet localhost 110" and then the commands: user demo pass asdf Help please...? Ed W From lists at wildgooses.com Wed Jan 25 02:03:35 2012 From: lists at wildgooses.com (Ed W) Date: Wed, 25 Jan 2012 00:03:35 +0000 Subject: [Dovecot] Password auth scheme question with mysql In-Reply-To: <4F1F2B7F.3070005@wildgooses.com> References: <4F1F2B7F.3070005@wildgooses.com> Message-ID: <4F1F46D7.7050600@wildgooses.com> On 24/01/2012 22:06, Ed W wrote: > Hi, I have a current auth database using mysql with a "password" > column in plain text. The config has "default_pass_scheme = PLAIN" > specified > > In preparation for a more adaptable system I changed a password entry > from "asdf" to "{PLAIN}asdf", but now auth fails. Works fine if I > change it back to just "asdf". 
(I don't believe it's a caching problem) > > What might I be missing? I was under the impression that the password > column can include a {scheme} prefix to indicate the password scheme > (presumably this also means a password cannot start with a "{"?). Is > this still true when using mysql and default_pass_scheme ? Bahh. Partly figured this out now - sorry for the noise - looks like a config error on my side: I have traced this to my proxy setup, which appears not to work as expected. Basically all works fine when I test to the main server IP, but fails when I test "localhost", since it triggers me to be proxied to the main IP address (same machine, just using the external IP). The error seems to be that I set the "pass" variable in my password_query to set the master password for the upstream proxied to server. I can't actually remember now why this was required, but it was necessary to allow the proxy to work correctly in the past. I guess this assumption needs revisiting now since it can't be used if the plain password isn't in the database... For interest, here is my auth setup: password_query = SELECT NULLIF(mail_host, '%l') as proxy, NULLIF(mail_host, '%l') as host, \ email as user, password, \ password as pass, \ home userdb_home, concat(home, '/', maildir) as userdb_mail, \ 1234 as userdb_uid, 1234 as userdb_gid \ FROM users \ WHERE email = if('%d'<>'','%u','%u at mailasail.com') and flag_active=1 "mail_host" in this case holds the IP of the machine holding the users mailbox (hence it's easy to push mailboxes to a specific machine and the users get proxied to it) Sorry for the noise Ed W From jd.beaubien at gmail.com Wed Jan 25 05:22:10 2012 From: jd.beaubien at gmail.com (Jean-Daniel Beaubien) Date: Tue, 24 Jan 2012 22:22:10 -0500 Subject: [Dovecot] Persistence of UIDs Message-ID: Hi everyone, I have a question concerning UIDs. How persistant are they? I am thinking about building some form of webmail specialized for some specific business purpose and I am thinking of building a sort of cache in a DB by storing the email addr, date, subject and UID for quick lookups and search of correspondance. I am doing this because I am having issue with multiple people searching thru email folders that have 100k+ emails (which is another problem in itself, searches don't seem to scale well when folder goes above 60k emails). So to come back to my question, can I store the UIDs and reuse those UIDs later on to obtain the body of the email??? Or can the UIDs change on the server and they will not be valid anymore?. My setup is: - dovecot 1.x (will migrate to 2.x soon) - maildir - everything stored on an intel 320 SSD (index and maildir folder) Thanks, -JD From slusarz at curecanti.org Wed Jan 25 07:27:02 2012 From: slusarz at curecanti.org (Michael M Slusarz) Date: Tue, 24 Jan 2012 22:27:02 -0700 Subject: [Dovecot] Persistence of UIDs In-Reply-To: References: Message-ID: <20120124222702.Horde.UpAiY4F5lbhPH5KmSLoWegA@bigworm.curecanti.org> Quoting Jean-Daniel Beaubien : > I have a question concerning UIDs. How persistant are they? [snip] > So to come back to my question, can I store the UIDs and reuse those UIDs > later on to obtain the body of the email??? Or can the UIDs change on the > server and they will not be valid anymore?. You really need to read RFC 3501 (http://tools.ietf.org/html/rfc3501), specifically section 2.3.1.1. Short answer: UIDs will almost always be persistent, but you always need to check UIDVALIDITY in the tiny chance that they may be invalidated. 
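
To make the UIDVALIDITY check concrete: a cache like the one described above needs to be keyed on (mailbox, UIDVALIDITY, UID), where UIDVALIDITY comes from the untagged OK response at SELECT time. A sketch of the relevant exchange (the tag and values are only an illustration):

a1 SELECT INBOX
* OK [UIDVALIDITY 1327500000] UIDs valid
* OK [UIDNEXT 98005] Predicted next UID
a1 OK [READ-WRITE] Select completed.

If a later SELECT reports a different UIDVALIDITY, every cached UID for that mailbox (and the addr/date/subject rows derived from it) must be discarded and re-fetched.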
michael From dmiller at amfes.com Wed Jan 25 07:38:47 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Tue, 24 Jan 2012 21:38:47 -0800 Subject: [Dovecot] Imap Quota Exceeded - But Still Receiving Emails? In-Reply-To: References: Message-ID: On 1/24/2012 8:35 AM, Palatine Web Support wrote: > > Here is my dovecot config: > > plugin { > quota = maildir:User Quota > quota_rule2 = Trash:storage=+100M > } [..] > > So it picks up my set quota of 3MB but dovecot is not rejecting emails > if I am over my quota. > > Can anyone help? > Is the quota plugin being loaded? What is the output of: doveconf | grep -B 2 plug -- Daniel From dovecot at bravenec.eu Wed Jan 25 09:05:47 2012 From: dovecot at bravenec.eu (Petr Bravenec) Date: Wed, 25 Jan 2012 08:05:47 +0100 Subject: [Dovecot] Dovecot antispam plugint got an empty message Message-ID: <201201250805.47430.dovecot@bravenec.eu> Few weeks ago I upgraded dovecot from 1.2 to 2.0.16 and antispam plugin to 2.0_pre20101222. Since the upgrade I'm not able to move messages to my Junk folder. In the maillog I have found this message: dspam[25060]: empty message (no data received) Message is copied from my INBOX to Junk folder, but dspam got an empty message and sent an error return code. So the moving operation is not successfull and the original message in INBOX was not deleted. The dspam was not trained (got an empty message). Looking to source code of dspam and antispam plugin I suspect the dovecot not to sending any content to plugin. Can you help me, please? Petr Bravenec -------------- next part -------------- # 2.0.16: /etc/dovecot/dovecot.conf # OS: Linux 3.1.6-gentoo x86_64 Gentoo Base System release 2.0.3 ext4 auth_mechanisms = plain login base_dir = /var/run/dovecot/ dict { acl = pgsql:/etc/dovecot/dovecot-acl.conf } disable_plaintext_auth = no first_valid_gid = 98 first_valid_uid = 98 last_valid_gid = 98 last_valid_uid = 98 listen = *, [::] mail_location = maildir:/home/dovecot/%u/maildir managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave namespace { inbox = yes location = prefix = separator = . type = private } namespace { inbox = no list = children location = maildir:/home/dovecot/%%n/maildir:INDEX=/home/dovecot/%n/shared/%%n prefix = Ostatni.%%n. separator = . subscriptions = no type = shared } namespace { inbox = no list = children location = maildir:/home/dovecot/Sdilene/maildir:INDEX=/home/dovecot/%n/public prefix = Sdilene. separator = . 
subscriptions = no type = public } passdb { args = session=yes driver = pam } plugin { acl = vfile acl_shared_dict = proxy::acl antispam_backend = dspam antispam_dspam_args = --user;%u;--source=error antispam_dspam_binary = /usr/bin/dspam antispam_dspam_notspam = --class=innocent antispam_dspam_result_header = X-DSPAM-Result antispam_dspam_spam = --class=spam antispam_mail_tmpdir = /tmp antispam_signature = X-DSPAM-Signature antispam_signature_missing = move antispam_spam = Junk antispam_trash = Trash antispam_unsure = sieve = /home/dovecot/%u/sieve.default sieve_before = /etc/dovecot/sieve/dspam.sieve sieve_dir = /home/dovecot/%u/sieve } protocols = imap sieve service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix } unix_listener auth-master { group = vmails mode = 0660 user = dspam } unix_listener auth-userdb { group = vmails mode = 0660 user = dspam } user = root } service dict { unix_listener dict { group = vmails mode = 0660 user = dspam } } ssl_cert = Hi, I am using dovecot 2.0.16, and assigend globally procmailrc (/etc/procmailrc) which delivers mails to user's home directory in maildir formate. Also I assined quota to User through setquota (edquota) command, If the quota excedded then this case user's mail store to /var/spool/mail/user. After incresing quota how to delivered these mails to user's home dir in maildir formate automatically. Thanks & Regards, Arun Kumar Gupta -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. From tss at iki.fi Wed Jan 25 14:45:31 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 25 Jan 2012 14:45:31 +0200 Subject: [Dovecot] Persistence of UIDs In-Reply-To: References: Message-ID: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> On 25.1.2012, at 5.22, Jean-Daniel Beaubien wrote: > I have a question concerning UIDs. How persistant are they? With Dovecot persistent enough. But as Michael said, check UIDVALIDITY. > I am thinking about building some form of webmail specialized for some > specific business purpose and I am thinking of building a sort of cache in > a DB by storing the email addr, date, subject and UID for quick lookups and > search of correspondance. Dovecot should already have such cache. If there are problems with that, I think it would be better to fix it on Dovecot's side rather than adding a second cache. > I am doing this because I am having issue with multiple people searching > thru email folders that have 100k+ emails (which is another problem in > itself, searches don't seem to scale well when folder goes above 60k > emails). Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.) From jd.beaubien at gmail.com Wed Jan 25 15:34:59 2012 From: jd.beaubien at gmail.com (Jean-Daniel Beaubien) Date: Wed, 25 Jan 2012 08:34:59 -0500 Subject: [Dovecot] Persistence of UIDs In-Reply-To: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> Message-ID: On Wed, Jan 25, 2012 at 7:45 AM, Timo Sirainen wrote: > On 25.1.2012, at 5.22, Jean-Daniel Beaubien wrote: > > > I have a question concerning UIDs. How persistant are they? > > With Dovecot persistent enough. But as Michael said, check UIDVALIDITY. > > > I am thinking about building some form of webmail specialized for some > > specific business purpose and I am thinking of building a sort of cache > in > > a DB by storing the email addr, date, subject and UID for quick lookups > and > > search of correspondance. 
> > Dovecot should already have such cache. If there are problems with that, I > think it would be better to fix it on Dovecot's side rather than adding a > second cache. > Very true. Has there been many search/index improvements since 1.0.9? I read thru the release notes but nothing jumped out at me. > > > I am doing this because I am having issue with multiple people searching > > thru email folders that have 100k+ emails (which is another problem in > > itself, searches don't seem to scale well when folder goes above 60k > > emails). > > Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.) > I was under the impression that lucene was for full-text search. I'm just doing simple from/to field searches. I will get a few numbers together about folder_size --> search time and I will post them tonight. -jd From CMarcus at Media-Brokers.com Wed Jan 25 15:40:18 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Wed, 25 Jan 2012 08:40:18 -0500 Subject: [Dovecot] Persistence of UIDs In-Reply-To: References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> Message-ID: <4F200642.4020008@Media-Brokers.com> On 2012-01-25 8:34 AM, Jean-Daniel Beaubien wrote: > On Wed, Jan 25, 2012 at 7:45 AM, Timo Sirainen wrote: >> Dovecot should already have such cache. If there are problems with that, I >> think it would be better to fix it on Dovecot's side rather than adding a >> second cache. > Very true. Has there been many search/index improvements since 1.0.9? I > read thru the release notes but nothing jumped out at me. Seriously?? 1.0.9 is *very* old, and even no longer really supported. Upgrade. Really. It isn't that hard. There is zero reason to stay on an unsupported version. >>> I am doing this because I am having issue with multiple people searching >>> thru email folders that have 100k+ emails (which is another problem in >>> itself, searches don't seem to scale well when folder goes above 60k >>> emails). >> Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.) > I was under the impression that lucene was for full-text search. I'm just > doing simple from/to field searches. > > I will get a few numbers together about folder_size --> search time and I > will post them tonight. Don't waste your time testing such an old and unsupported version, I'm sure Timo has no interest in any such numbers - *unless* you are planning on doing said tests on *both* the 1.0.9 version *and* the latest 2.0.x or 2.1 build and provide a *comparison* - *that* may be interesting... -- Best regards, Charles From tss at iki.fi Wed Jan 25 15:47:28 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 25 Jan 2012 15:47:28 +0200 Subject: [Dovecot] Persistence of UIDs In-Reply-To: References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> Message-ID: <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi> On 25.1.2012, at 15.34, Jean-Daniel Beaubien wrote: >>> I am thinking about building some form of webmail specialized for some >>> specific business purpose and I am thinking of building a sort of cache >> in >>> a DB by storing the email addr, date, subject and UID for quick lookups >> and >>> search of correspondance. >> >> Dovecot should already have such cache. If there are problems with that, I >> think it would be better to fix it on Dovecot's side rather than adding a >> second cache. >> > > Very true. Has there been many search/index improvements since 1.0.9? I > read thru the release notes but nothing jumped out at me. Disk I/O usage is the same probably, CPU usage is less in newer versions. 
>>> I am doing this because I am having issue with multiple people searching >>> thru email folders that have 100k+ emails (which is another problem in >>> itself, searches don't seem to scale well when folder goes above 60k >>> emails). >> >> Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.) >> > > I was under the impression that lucene was for full-text search. I'm just > doing simple from/to field searches. In v2.1 from/to fields are also searched via FTS. From Juergen.Obermann at hrz.uni-giessen.de Wed Jan 25 16:43:11 2012 From: Juergen.Obermann at hrz.uni-giessen.de (=?UTF-8?Q?J=C3=BCrgen_Obermann?=) Date: Wed, 25 Jan 2012 15:43:11 +0100 Subject: [Dovecot] problem compiling imaptest under solaris Message-ID: <89f61bff49f4c5343be06dd45459b14a@imapproxy.hrz> Hallo, today I tried to compile imaptest under solaris 10 with studio 11 compiler and got the following error: gmake[2]: Entering directory `/net/fileserv/export/sunsrc/src/imaptest-20111119/src' source='client.c' object='client.o' libtool=no \ DEPDIR=.deps depmode=none /bin/bash ../depcomp \ cc -DHAVE_CONFIG_H -I. -I. -I.. -I/opt/local/include/dovecot -I/usr/local/include -fast -xarch=v8plusa -I/usr/sfw/include -c client.c "/opt/local/include/dovecot/imap-util.h", line 6: warning: useless declaration "client-state.h", line 6: warning: useless declaration "client.c", line 655: operand cannot have void type: op "==" "client.c", line 655: operands have incompatible types: const void "==" int cc: acomp failed for client.c what can I do? Thanks for any help, J?rgen -- J?rgen Obermann Hochschulrechenzentrum der Justus-Liebig-Universit?t Gie?en Heinrich-Buff-Ring 44 Tel. 0641-9913054 From tom at whyscream.net Wed Jan 25 18:19:18 2012 From: tom at whyscream.net (Tom Hendrikx) Date: Wed, 25 Jan 2012 17:19:18 +0100 Subject: [Dovecot] Dovecot antispam plugint got an empty message In-Reply-To: <201201250805.47430.dovecot@bravenec.eu> References: <201201250805.47430.dovecot@bravenec.eu> Message-ID: <4F202B86.9000102@whyscream.net> On 25-01-12 08:05, Petr Bravenec wrote: > Few weeks ago I upgraded dovecot from 1.2 to 2.0.16 and antispam plugin to > 2.0_pre20101222. Since the upgrade I'm not able to move messages to my Junk > folder. In the maillog I have found this message: > > dspam[25060]: empty message (no data received) > Gentoo has included the antispam plugin from Johannes historically, but added the fork by Eugene to support upgrades to dovecot 2.0. It is not really made clear by the gentoo ebuild is that the forked plugin needs a slightly different config. 
I use the config below with dovecot 2.0.17 and a git checkout for dovecot-antispam: ===8<======== plugin { antispam_signature = X-DSPAM-Signature antispam_signature_missing = move antispam_spam_pattern_ignorecase = Junk;Junk.* antispam_trash_pattern_ignorecase = Trash;Deleted Items;Deleted Messages # Backend specific antispam_backend = dspam antispam_dspam_binary = /usr/bin/dspamc antispam_dspam_args = --user;%u;--deliver=;--source=error;--signature=%%s antispam_dspam_spam = --class=spam antispam_dspam_notspam = --class=innocent #antispam_dspam_result_header = X-DSPAM-Result } -- Regards, Tom From CMarcus at Media-Brokers.com Wed Jan 25 18:42:39 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Wed, 25 Jan 2012 11:42:39 -0500 Subject: [Dovecot] move mails from spool to users home dir(maildir formate) automatically In-Reply-To: References: Message-ID: <4F2030FF.1080304@Media-Brokers.com> On 2012-01-25 3:19 AM, Arun Gupta wrote: > I am using dovecot 2.0.16, and assigend globally procmailrc > (/etc/procmailrc) which delivers mails to user's home directory in > maildir formate. Also I assined quota to User through setquota (edquota) > command, If the quota excedded then this case user's mail store to > /var/spool/mail/user. After incresing quota how to delivered these mails > to user's home dir in maildir formate automatically. Best practice is to reject mail for users over quota (as long as you do this during the smtp transaction... Otherwise, whats the point? (they can still fill up your server)... -- Best regards, Charles From dmiller at amfes.com Wed Jan 25 18:55:09 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Wed, 25 Jan 2012 08:55:09 -0800 Subject: [Dovecot] Imap Quota Exceeded - But Still Receiving Emails? In-Reply-To: <58f41e2e84d4befd5b09a1cb913e57b4@palatineweb.com> References: <4F1F9567.1030804@amfes.com> <58f41e2e84d4befd5b09a1cb913e57b4@palatineweb.com> Message-ID: On 1/25/2012 1:39 AM, Palatine Web Support wrote: > On 2012-01-25 05:38, Daniel L. Miller wrote: >> On 1/24/2012 8:35 AM, Palatine Web Support wrote: >>> >>> Here is my dovecot config: >>> >>> plugin { >>> quota = maildir:User Quota >>> quota_rule2 = Trash:storage=+100M >>> } >> [..] >>> >>> So it picks up my set quota of 3MB but dovecot is not rejecting >>> emails if I am over my quota. >>> >>> Can anyone help? >>> >> Is the quota plugin being loaded? What is the output of: >> >> doveconf | grep -B 2 plug > > Hi Daniel > > I tried the command and it returned the command was not found. I have > installed: > > apt-get install dovecot-common > apt-get install dovecot-dev > apt-get install dovecot-imapd > > > Which package does the binary doveconf come from? You need to make sure to reply to the list - not just to me. If you don't have doveconf...what version of Dovecot are you using? -- Daniel From dmiller at amfes.com Wed Jan 25 19:01:30 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Wed, 25 Jan 2012 09:01:30 -0800 Subject: [Dovecot] Imap Quota Exceeded - But Still Receiving Emails? In-Reply-To: <747f97172fd71affd2ee5b5ebcc5d16c@palatineweb.com> References: <4F1F9567.1030804@amfes.com> <747f97172fd71affd2ee5b5ebcc5d16c@palatineweb.com> Message-ID: On 1/25/2012 2:01 AM, Palatine Web Support wrote: > On 2012-01-25 05:38, Daniel L. Miller wrote: >> On 1/24/2012 8:35 AM, Palatine Web Support wrote: >>> >>> Here is my dovecot config: >>> >>> plugin { >>> quota = maildir:User Quota >>> quota_rule2 = Trash:storage=+100M >>> } >> [..] 
>>> >>> So it picks up my set quota of 3MB but dovecot is not rejecting >>> emails if I am over my quota. >>> >>> Can anyone help? >>> >> Is the quota plugin being loaded? What is the output of: >> >> doveconf | grep -B 2 plug > > The modules are being loaded. From the log file with debugging enabled: > > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Loading modules from > directory: /usr/lib/dovecot/modules/imap > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Module loaded: > /usr/lib/dovecot/modules/imap/lib10_quota_plugin.so > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Module loaded: > /usr/lib/dovecot/modules/imap/lib11_imap_quota_plugin.so > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Effective uid=150, > gid=8, home=/var/vmail/xxx.com/support > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota root: name=User > Quota backend=dirsize args= > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota rule: root=User > Quota mailbox=* bytes=3145728 messages=0 > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota rule: root=User > Quota mailbox=Trash bytes=104857600 messages=0 > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): maildir: > data=/var/vmail/xxx.com/support > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): maildir++: > root=/var/vmail/xxx.com/support, index=, control=, > inbox=/var/vmail/xxx.com/support > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Namespace : Using > permissions from /var/vmail/xxx.com/support: mode=0700 gid=-1 > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Disconnected: Logged > out bytes=82/573 > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Disconnected: Logged > out bytes=269/8243 > I don't know if it makes any difference, but in your config file, try changing: plugin { quota = maildir:User Quota to plugin { quota = maildir:User quota (lowercase the "quota") -- Daniel From tss at iki.fi Thu Jan 26 01:03:58 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 01:03:58 +0200 Subject: [Dovecot] v2.1.rc5 released Message-ID: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc5.tar.gz http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc5.tar.gz.sig I'm still lagging behind reading emails. v2.1.0 will be released after I've finished that. RC5 is already stable and used in production, but I want to make sure that I haven't missed anything important that was reported previously. Most of the recent fixed bugs existed also in v2.0 series. Changes since rc3: * Temporary authentication failures sent to IMAP/POP3 clients now includes the server's hostname and timestamp. This makes it easier to find the error message from logs. + auth: Implemented support for Postfix's "TCP map" sockets for user existence lookups. + auth: Idling auth worker processes are now stopped. This reduces error messages about MySQL disconnections. - director: With >2 directors ring syncing might have stalled during director connect/disconnect, causing logins to fail. - LMTP client/proxy: Fixed potential hanging when sending (big) mails - Compressed mails with external attachments (dbox + SIS + zlib) failed sometimes with bogus "cached message size wrong" errors. (I skipped rc4 release, because I accidentally tagged it too early in hg.) 
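
Regarding the Postfix "TCP map" item in the release notes above: Postfix's tcp_table(5) map type lets a table lookup be answered by any daemon speaking its simple get/reply protocol, which is what the release notes say the auth process now understands for user existence checks. A rough sketch of the wiring; the Dovecot listener stanza is illustrative (the exact listener name and syntax should be checked against the documentation) and the port is arbitrary:

# dovecot.conf: expose the auth service on a TCP port for Postfix (illustrative)
service auth {
  inet_listener tcpmap {
    port = 12345
  }
}

# Postfix main.cf: reject unknown recipients via that lookup
relay_recipient_maps = tcp:127.0.0.1:12345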
From tss at iki.fi Thu Jan 26 01:15:31 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 01:15:31 +0200 Subject: [Dovecot] FOSDEM Message-ID: <91D95FB6-D651-4A82-BC16-241F4DDAEF78@iki.fi> I'll be in FOSDEM giving a small lightning talk about Dovecot: http://fosdem.org/2012/schedule/event/dovecot I'll also be around in FOSDEM the whole time, so if you're there and want to talk to me about anything, send me an email at some point. Poll to dovecot-news list people: Do you want to see these kind of news about my upcoming talks sent to the list? Probably happens a few times/year. A simple "yes" or "no" reply to this mail privately to me is enough. From petr at bravenec.eu Wed Jan 25 23:17:38 2012 From: petr at bravenec.eu (Petr Bravenec) Date: Wed, 25 Jan 2012 22:17:38 +0100 Subject: [Dovecot] Dovecot antispam plugint got an empty message In-Reply-To: <4F202B86.9000102@whyscream.net> References: <201201250805.47430.dovecot@bravenec.eu> <4F202B86.9000102@whyscream.net> Message-ID: <7860878.6BHtT8IiNC@hrabos> Thank you, I have reconfigured my dovecot on gentoo and it looks now that it worked properly. Regards, Petr Bravenec Dne Wednesday 25 of January 2012 17:19:18 Tom Hendrikx napsal(a): > On 25-01-12 08:05, Petr Bravenec wrote: > > Few weeks ago I upgraded dovecot from 1.2 to 2.0.16 and antispam plugin > > to 2.0_pre20101222. Since the upgrade I'm not able to move messages to > > my Junk folder. In the maillog I have found this message: > > > > dspam[25060]: empty message (no data received) > > Gentoo has included the antispam plugin from Johannes historically, but > added the fork by Eugene to support upgrades to dovecot 2.0. It is not > really made clear by the gentoo ebuild is that the forked plugin needs a > slightly different config. > > I use the config below with dovecot 2.0.17 and a git checkout for > dovecot-antispam: > > ===8<======== > plugin { > antispam_signature = X-DSPAM-Signature > antispam_signature_missing = move > antispam_spam_pattern_ignorecase = Junk;Junk.* > antispam_trash_pattern_ignorecase = Trash;Deleted Items;Deleted > Messages > > # Backend specific > antispam_backend = dspam > antispam_dspam_binary = /usr/bin/dspamc > antispam_dspam_args = > --user;%u;--deliver=;--source=error;--signature=%%s > antispam_dspam_spam = --class=spam > antispam_dspam_notspam = --class=innocent > #antispam_dspam_result_header = X-DSPAM-Result > } > > > -- > Regards, > Tom From user+dovecot at localhost.localdomain.org Thu Jan 26 01:24:50 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Thu, 26 Jan 2012 00:24:50 +0100 Subject: [Dovecot] FOSDEM In-Reply-To: <91D95FB6-D651-4A82-BC16-241F4DDAEF78@iki.fi> References: <91D95FB6-D651-4A82-BC16-241F4DDAEF78@iki.fi> Message-ID: <4F208F42.4020007@localhost.localdomain.org> On 01/26/2012 12:15 AM Timo Sirainen wrote: > I'll be in FOSDEM giving a small lightning talk about Dovecot: http://fosdem.org/2012/schedule/event/dovecot > > I'll also be around in FOSDEM the whole time, so if you're there and want to talk to me about anything, send me an email at some point. I'll be there too. > Poll to dovecot-news list people: Do you want to see these kind of news about my upcoming talks sent to the list? Probably happens a few times/year. A simple "yes" or "no" reply to this mail privately to me is enough. yes Regards, Pascal -- The trapper recommends today: f007ba11.1202600 at localdomain.org From dmiller at amfes.com Thu Jan 26 01:37:16 2012 From: dmiller at amfes.com (Daniel L. 
Miller) Date: Wed, 25 Jan 2012 15:37:16 -0800 Subject: [Dovecot] Crash on mail folder delete Message-ID: Attempting to delete a folder from within the trash folder using Thunderbird. I see the following in the log: Jan 25 15:36:22 bubba dovecot: imap(dmiller at amfes.com): Panic: file mailbox-list-fs.c: line 156 (fs_list_get_path): assertion failed: (mailbox_list_is_valid_pattern(_list, name)) Jan 25 15:36:22 bubba dovecot: imap(dmiller at amfes.com): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x3efba) [0x7f5fe9f86fba] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3f006) [0x7f5fe9f87006] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x17f5a) [0x7f5fe9f5ff5a] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(+0x47287) [0x7f5fea214287] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6c71) [0x7f5fe8b9cc71] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6d47) [0x7f5fe8b9cd47] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(acl_mailbox_allocated+0x9e) [0x7f5fe8ba061e] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(hook_mailbox_allocated+0x62) [0x7f5fea2085b2] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(mailbox_alloc+0xb2) [0x7f5fea2073d2] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](cmd_delete+0x72) [0x409922] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](command_exec+0x3d) [0x4109ad] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40f97e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40fa5d] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_handle_input+0x135) [0x40fc85] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_input+0x5f) [0x4105af] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7f5fe9f93406] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f5fe9f9448f] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f5fe9f933a8] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f5fe9f803b3] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](main+0x301) [0x418a61] -> /lib/libc.so.6(__libc_start_main+0xfe) [0x7f5fe9be3d8e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x4083f9] Jan 25 15:36:23 bubba dovecot: imap(dmiller at amfes.com): Panic: file mailbox-list-fs.c: line 156 (fs_list_get_path): assertion failed: (mailbox_list_is_valid_pattern(_list, name)) Jan 25 15:36:23 bubba dovecot: imap(dmiller at amfes.com): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x3efba) [0x7f33673dafba] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3f006) [0x7f33673db006] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x17f5a) [0x7f33673b3f5a] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(+0x47287) [0x7f3367668287] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6c71) [0x7f3365ff0c71] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6d47) [0x7f3365ff0d47] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(acl_mailbox_allocated+0x9e) [0x7f3365ff461e] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(hook_mailbox_allocated+0x62) [0x7f336765c5b2] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(mailbox_alloc+0xb2) [0x7f336765b3d2] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](cmd_delete+0x72) [0x409922] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](command_exec+0x3d) [0x4109ad] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40f97e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40fa5d] -> dovecot/imap [dmiller at amfes.com 
192.168.0.91 delete](client_handle_input+0x135) [0x40fc85] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_input+0x5f) [0x4105af] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7f33673e7406] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f33673e848f] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f33673e73a8] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f33673d43b3] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](main+0x301) [0x418a61] -> /lib/libc.so.6(__libc_start_main+0xfe) [0x7f3367037d8e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x4083f9] Jan 25 15:36:23 bubba dovecot: imap(dmiller at amfes.com): Fatal: master: service(imap): child 6074 killed with signal 6 (core dumps disabled) Jan 25 15:36:23 bubba dovecot: imap(dmiller at amfes.com): Fatal: master: service(imap): child 6589 killed with signal 6 (core dumps disabled) -- Daniel From doctor at doctor.nl2k.ab.ca Thu Jan 26 01:39:30 2012 From: doctor at doctor.nl2k.ab.ca (The Doctor) Date: Wed, 25 Jan 2012 16:39:30 -0700 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> Message-ID: <20120125233930.GA17183@doctor.nl2k.ab.ca> On Thu, Jan 26, 2012 at 01:03:58AM +0200, Timo Sirainen wrote: > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc5.tar.gz > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc5.tar.gz.sig > > I'm still lagging behind reading emails. v2.1.0 will be released after I've finished that. RC5 is already stable and used in production, but I want to make sure that I haven't missed anything important that was reported previously. Most of the recent fixed bugs existed also in v2.0 series. > > Changes since rc3: > > * Temporary authentication failures sent to IMAP/POP3 clients > now includes the server's hostname and timestamp. This makes it > easier to find the error message from logs. > > + auth: Implemented support for Postfix's "TCP map" sockets for > user existence lookups. > + auth: Idling auth worker processes are now stopped. This reduces > error messages about MySQL disconnections. > - director: With >2 directors ring syncing might have stalled during > director connect/disconnect, causing logins to fail. > - LMTP client/proxy: Fixed potential hanging when sending (big) mails > - Compressed mails with external attachments (dbox + SIS + zlib) failed > sometimes with bogus "cached message size wrong" errors. > > (I skipped rc4 release, because I accidentally tagged it too early in hg.) All right, can you get configure to detect --as-needed flag for ld? This is show stopping for me. -- Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca God, Queen and country! Never Satan President Republic! Beware AntiChrist rising! https://www.fullyfollow.me/rootnl2k Birthdate : 29 Jan 1969 Croydon, Surrey, UK From tss at iki.fi Thu Jan 26 01:42:11 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 01:42:11 +0200 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: <20120125233930.GA17183@doctor.nl2k.ab.ca> References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> Message-ID: On 26.1.2012, at 1.39, The Doctor wrote: > All right, can you get configure to detect --as-needed flag for ld? > > This is show stopping for me. It should only be used with GNU ld. What ld and OS do you use? 
configure --without-gnu-ld probably works also? From tss at iki.fi Thu Jan 26 01:42:46 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 01:42:46 +0200 Subject: [Dovecot] Crash on mail folder delete In-Reply-To: References: Message-ID: <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> On 26.1.2012, at 1.37, Daniel L. Miller wrote: > Attempting to delete a folder from within the trash folder using Thunderbird. I see the following in the log: Dovecot version? From dmiller at amfes.com Thu Jan 26 01:43:26 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Wed, 25 Jan 2012 15:43:26 -0800 Subject: [Dovecot] Crash on mail folder delete In-Reply-To: <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> References: <4F20922C.60206@amfes.com> <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> Message-ID: On 1/25/2012 3:42 PM, Timo Sirainen wrote: > On 26.1.2012, at 1.37, Daniel L. Miller wrote: > >> Attempting to delete a folder from within the trash folder using Thunderbird. I see the following in the log: > Dovecot version? > 2.1.rc3. I'm compiling rc5 now... -- Daniel From doctor at doctor.nl2k.ab.ca Thu Jan 26 02:01:26 2012 From: doctor at doctor.nl2k.ab.ca (The Doctor) Date: Wed, 25 Jan 2012 17:01:26 -0700 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> Message-ID: <20120126000126.GA19765@doctor.nl2k.ab.ca> On Thu, Jan 26, 2012 at 01:42:11AM +0200, Timo Sirainen wrote: > On 26.1.2012, at 1.39, The Doctor wrote: > > > All right, can you get configure to detect --as-needed flag for ld? > > > > This is show stopping for me. > > It should only be used with GNU ld. What ld and OS do you use? configure --without-gnu-ld probably works also? My /usr/bin/ld GNU ld version 2.13.1 Copyright 2002 Free Software Foundation, Inc. This program is free software; you may redistribute it under the terms of the GNU General Public License. This program has absolutely no warranty. on BSD/OS 4.3.1 -- Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca God, Queen and country! Never Satan President Republic! Beware AntiChrist rising! https://www.fullyfollow.me/rootnl2k Birthdate : 29 Jan 1969 Croydon, Surrey, UK From dmiller at amfes.com Thu Jan 26 02:04:08 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Wed, 25 Jan 2012 16:04:08 -0800 Subject: [Dovecot] Crash on mail folder delete In-Reply-To: <4F20939E.4010903@amfes.com> References: <4F20922C.60206@amfes.com> <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> <4F20939E.4010903@amfes.com> Message-ID: On 1/25/2012 3:43 PM, Daniel L. Miller wrote: > On 1/25/2012 3:42 PM, Timo Sirainen wrote: >> On 26.1.2012, at 1.37, Daniel L. Miller wrote: >> >>> Attempting to delete a folder from within the trash folder using >>> Thunderbird. I see the following in the log: >> Dovecot version? >> > 2.1.rc3. I'm compiling rc5 now... > Error still there on rc5. 
Jan 25 16:03:47 bubba dovecot: imap(dmiller at amfes.com): Panic: file mailbox-list-fs.c: line 156 (fs_list_get_path): assertion failed: (mailbox_list_is_valid_pattern(_list, name)) Jan 25 16:03:47 bubba dovecot: imap(dmiller at amfes.com): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x3f1ba) [0x7f7c3f0331ba] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3f206) [0x7f7c3f033206] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x1804a) [0x7f7c3f00c04a] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(+0x47317) [0x7f7c3f2c0317] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6c71) [0x7f7c3dc48c71] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6d47) [0x7f7c3dc48d47] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(acl_mailbox_allocated+0x9e) [0x7f7c3dc4c61e] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(hook_mailbox_allocated+0x62) [0x7f7c3f2b4662] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(mailbox_alloc+0xb2) [0x7f7c3f2b3482] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](cmd_delete+0x72) [0x409972] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](command_exec+0x3d) [0x4109fd] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40f9ce] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40faad] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_handle_input+0x135) [0x40fcd5] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_input+0x5f) [0x4105ff] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7f7c3f03f5d6] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f7c3f04065f] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f7c3f03f578] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f7c3f02c593] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](main+0x2a5) [0x418a55] -> /lib/libc.so.6(__libc_start_main+0xfe) [0x7f7c3ec8fd8e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x408449] Jan 25 16:03:48 bubba dovecot: imap(dmiller at amfes.com): Panic: file mailbox-list-fs.c: line 156 (fs_list_get_path): assertion failed: (mailbox_list_is_valid_pattern(_list, name)) Jan 25 16:03:48 bubba dovecot: imap(dmiller at amfes.com): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x3f1ba) [0x7f9e52e211ba] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3f206) [0x7f9e52e21206] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x1804a) [0x7f9e52dfa04a] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(+0x47317) [0x7f9e530ae317] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6c71) [0x7f9e51a36c71] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6d47) [0x7f9e51a36d47] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(acl_mailbox_allocated+0x9e) [0x7f9e51a3a61e] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(hook_mailbox_allocated+0x62) [0x7f9e530a2662] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(mailbox_alloc+0xb2) [0x7f9e530a1482] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](cmd_delete+0x72) [0x409972] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](command_exec+0x3d) [0x4109fd] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40f9ce] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40faad] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_handle_input+0x135) [0x40fcd5] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_input+0x5f) [0x4105ff] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) 
[0x7f9e52e2d5d6] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f9e52e2e65f] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f9e52e2d578] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f9e52e1a593] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](main+0x2a5) [0x418a55] -> /lib/libc.so.6(__libc_start_main+0xfe) [0x7f9e52a7dd8e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x408449]
Jan 25 16:03:48 bubba dovecot: imap(dmiller at amfes.com): Fatal: master: service(imap): child 3300 killed with signal 6 (core dumps disabled)
Jan 25 16:03:48 bubba dovecot: imap(dmiller at amfes.com): Fatal: master: service(imap): child 3267 killed with signal 6 (core dumps disabled)
-- 
Daniel

From jd.beaubien at gmail.com Thu Jan 26 03:40:16 2012
From: jd.beaubien at gmail.com (Jean-Daniel Beaubien)
Date: Wed, 25 Jan 2012 20:40:16 -0500
Subject: [Dovecot] Persistence of UIDs
In-Reply-To: <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi>
References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi>
Message-ID: 

On Wed, Jan 25, 2012 at 8:47 AM, Timo Sirainen wrote:
> On 25.1.2012, at 15.34, Jean-Daniel Beaubien wrote:
>
> >>> I am thinking about building some form of webmail specialized for some
> >>> specific business purpose and I am thinking of building a sort of cache
> >> in
> >>> a DB by storing the email addr, date, subject and UID for quick lookups
> >> and
> >>> search of correspondance.
> >>
> >> Dovecot should already have such cache. If there are problems with
> that, I
> >> think it would be better to fix it on Dovecot's side rather than adding
> a
> >> second cache.
> >>
> >
> > Very true. Has there been many search/index improvements since 1.0.9? I
> > read thru the release notes but nothing jumped out at me.
>
> Disk I/O usage is the same probably, CPU usage is less in newer versions.
>
> >>> I am doing this because I am having issue with multiple people
> searching
> >>> thru email folders that have 100k+ emails (which is another problem in
> >>> itself, searches don't seem to scale well when folder goes above 60k
> >>> emails).
> >>
> >> Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.)
> >>
> >
> > I was under the impression that lucene was for full-text search. I'm just
> > doing simple from/to field searches.
>
> In v2.1 from/to fields are also searched via FTS.
>

Ok, I managed to compile 2.1 rc5 on an old Ubuntu 8.04 without any issue. However, the config file is giving me a bit of a hard time, I'll figure this part out tomorrow. I'd just like to confirm that there is no risk to the actual mail data if ever something is badly configured when I start dovecot 2.1. I am managing this old server in my spare time for a friend, so I don't want to lose 2 million+ emails and have to deal with those consequences :)

From gedalya at gedalya.net Thu Jan 26 06:31:20 2012
From: gedalya at gedalya.net (Gedalya)
Date: Wed, 25 Jan 2012 23:31:20 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
Message-ID: <4F20D718.9010805@gedalya.net>

Hello all,

I'm facing the need to migrate from a proprietary IMAP server to Dovecot. The migration must be as smooth and transparent as possible.

The mailbox format I would want to use is Maildir++.

The storage format used by the current server is unknown, and I don't look forward to trying to reverse-engineer it. This leaves me with the option of reading the mailboxes using IMAP.
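One way to capture the UID state such a migration must preserve is to ask the source server over IMAP before copying anything; a minimal sketch, with placeholder host and credentials:

openssl s_client -quiet -crlf -connect old-imap.example.com:993 <<'EOF'
a1 LOGIN user@example.com secret
a2 STATUS INBOX (UIDVALIDITY UIDNEXT MESSAGES)
a3 LOGOUT
EOF

The UIDVALIDITY and UIDNEXT reported here are the values the Maildir++ side would have to reproduce for clients to keep their caches intact.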
There are tools like offlineimap or mbsync, and they do store the UID and UIDVALIDITY info. The last piece of the puzzle is a process to properly create the dovecot-uidlist and dovecot-uidvalidity files. So far I wasn't able to find anything on this. Are there any tips? Are there any tools available to do this job, or part of it?

In either case I need this done, and I'll have to create whatever I can't find available. If there isn't anything out there that I have yet to become aware of, then I'm looking at creating something like an offlineimap post-processing routine.

Any help would be much appreciated.

Gedalya

From arung at cdac.in Thu Jan 26 07:13:07 2012
From: arung at cdac.in (Arun Gupta)
Date: Thu, 26 Jan 2012 10:43:07 +0530 (IST)
Subject: [Dovecot] dovecot Digest, Vol 105, Issue 57
In-Reply-To: 
References: 
Message-ID: 

Dear Sir,

Thanks for your reply, and I take your point about 'reject mail for users over quota', but I would rather not reject the mails if it is possible to deliver them from the spool to the user's home directory automatically instead. Kindly provide a solution; I will be highly obliged to all of you.

-- 
Thanks & Regards,
Arun Kumar Gupta

> formate) automatically
> Message-ID: 
> Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII
>
> Hi,
>
> I am using dovecot 2.0.16, and assigend globally procmailrc
> (/etc/procmailrc) which delivers mails to user's home directory in maildir
> formate. Also I assined quota to User through setquota (edquota) command,
> If the quota excedded then this case user's mail store to
> /var/spool/mail/user. After incresing quota how to delivered these mails
> to user's home dir in maildir formate automatically.
>
> Thanks & Regards,
>
> Arun Kumar Gupta
> Best practice is to reject mail for users over quota (as long as you do this during the smtp transaction... Otherwise, whats the point? (they can still fill up your server)...

-- 
Best regards,
Charles

-- 
This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean.

From mark.zealey at webfusion.com Thu Jan 26 12:14:57 2012
From: mark.zealey at webfusion.com (Mark Zealey)
Date: Thu, 26 Jan 2012 10:14:57 +0000
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
Message-ID: <4F2127A1.2010302@webfusion.com>

Hi there,

I'm using dovecot 2.0.16 with a mysql user database. From time to time when we have a big influx of messages (perhaps more than 30 concurrent rcpt to:<> sessions at the same time so no auth-workers free?) or when we have a transient issue connecting to the database server, we see the message:

Jan 25 16:38:23 mailbox dovecot: auth-worker: sql(foo at bar.com,1.2.3.4): Unknown user

and the lmtp process returns:

550 5.1.1 User doesn't exist: foo at bar.com

This would be correct for a permanent error where the user doesn't exist in our database, however it seems to be doing this on transient errors too. Is this an issue with the code or perhaps some setting I have missed?
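One blunt way to check which of those two cases is being reported - the user genuinely missing from the database, or the database being unreachable - is to stop MySQL briefly on a test box and watch a userdb lookup; a sketch, with a placeholder address and a distro-specific init script:

/etc/init.d/mysql stop
doveadm user someuser@example.com    # a clean "doesn't exist" and a failed lookup should be reported differently
echo "lookup exit code: $?"
/etc/init.d/mysql start

If the lookup still claims the user doesn't exist while the database is down, deliveries will be rejected permanently instead of being deferred.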
Thanks,

Mark

From CMarcus at Media-Brokers.com Thu Jan 26 14:03:56 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 26 Jan 2012 07:03:56 -0500
Subject: [Dovecot] Persistence of UIDs
In-Reply-To: 
References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi>
Message-ID: <4F21412C.9060105@Media-Brokers.com>

On 2012-01-25 8:40 PM, Jean-Daniel Beaubien wrote:
> I'd just like to confirm that there is no risk to the actual mail data is
> ever something is badly configured when I start dovecot 2.1. I am managing
> this old server on my spare time for a friend, so I don't want to loose
> 2million+ emails and have to deal with those consequences:)

There are *always* risks associated with things like this... maybe the chance is low, but no guarantees... As always, it is *your* responsibility to *back up* *first*...

-- 
Best regards,
Charles

From CMarcus at Media-Brokers.com Thu Jan 26 14:06:44 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 26 Jan 2012 07:06:44 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F20D718.9010805@gedalya.net>
References: <4F20D718.9010805@gedalya.net>
Message-ID: <4F2141D4.806@Media-Brokers.com>

On 2012-01-25 11:31 PM, Gedalya wrote:
> This leaves me with the option of reading the mailboxes using IMAP.
> There are tools like offlineimap or mbsync,

Not familiar with those, but I think imapsync will do what you want?

http://imapsync.lamiral.info/

I do see that it references those two though...

-- 
Best regards,
Charles

From support at palatineweb.com Thu Jan 26 14:09:09 2012
From: support at palatineweb.com (Palatine Web Support)
Date: Thu, 26 Jan 2012 12:09:09 +0000
Subject: [Dovecot] Imap Quota Exceeded - But Still Receiving Emails?
In-Reply-To: 
References: <4F1F9567.1030804@amfes.com> <747f97172fd71affd2ee5b5ebcc5d16c@palatineweb.com>
Message-ID: <32bb69a587decb1d09d618792dc1ed8d@palatineweb.com>

On 2012-01-25 17:01, Daniel L. Miller wrote:
> On 1/25/2012 2:01 AM, Palatine Web Support wrote:
>> On 2012-01-25 05:38, Daniel L. Miller wrote:
>>> On 1/24/2012 8:35 AM, Palatine Web Support wrote:
>>>>
>>>> Here is my dovecot config:
>>>>
>>>> plugin {
>>>> quota = maildir:User Quota
>>>> quota_rule2 = Trash:storage=+100M
>>>> }
>>> [..]
>>>>
>>>> So it picks up my set quota of 3MB but dovecot is not rejecting
>>>> emails if I am over my quota.
>>>>
>>>> Can anyone help?
>>>>
>>> Is the quota plugin being loaded? What is the output of:
>>>
>>> doveconf | grep -B 2 plug
>>
>> The modules are being loaded.
From the log file with debugging >> enabled: >> >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Loading modules >> from directory: /usr/lib/dovecot/modules/imap >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Module loaded: >> /usr/lib/dovecot/modules/imap/lib10_quota_plugin.so >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Module loaded: >> /usr/lib/dovecot/modules/imap/lib11_imap_quota_plugin.so >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Effective uid=150, >> gid=8, home=/var/vmail/xxx.com/support >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota root: >> name=User Quota backend=dirsize args= >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota rule: >> root=User Quota mailbox=* bytes=3145728 messages=0 >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota rule: >> root=User Quota mailbox=Trash bytes=104857600 messages=0 >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): maildir: >> data=/var/vmail/xxx.com/support >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): maildir++: >> root=/var/vmail/xxx.com/support, index=, control=, >> inbox=/var/vmail/xxx.com/support >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Namespace : Using >> permissions from /var/vmail/xxx.com/support: mode=0700 gid=-1 >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Disconnected: >> Logged out bytes=82/573 >> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Disconnected: >> Logged out bytes=269/8243 >> > > I don't know if it makes any difference, but in your config file, try > changing: > plugin { > quota = maildir:User Quota > > to > > plugin { > quota = maildir:User quota > > (lowercase the "quota") The quota is working fine now. The problem was I had my transport agent set to virtual when it should have been set to dovecot. Thanks. From tss at iki.fi Thu Jan 26 14:21:32 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 14:21:32 +0200 Subject: [Dovecot] Persistence of UIDs In-Reply-To: <4F21412C.9060105@Media-Brokers.com> References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi> <4F21412C.9060105@Media-Brokers.com> Message-ID: <055C0680-BAF5-4617-918D-E12C09266006@iki.fi> On 26.1.2012, at 14.03, Charles Marcus wrote: > On 2012-01-25 8:40 PM, Jean-Daniel Beaubien wrote: >> I'd just like to confirm that there is no risk to the actual mail data is >> ever something is badly configured when I start dovecot 2.1. I am managing >> this old server on my spare time for a friend, so I don't want to loose >> 2million+ emails and have to deal with those consequences:) > > There are *always* risks associated with things like this... maybe the chance is low, but no guarantees... Risks of some trouble, yes .. but you have to be highly creative if you want to accidentally lose any mails. I can't think of any way to do that without explicitly deleting files from filesystem or via IMAP/POP3 client. From tss at iki.fi Thu Jan 26 14:27:15 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 14:27:15 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F20D718.9010805@gedalya.net> References: <4F20D718.9010805@gedalya.net> Message-ID: On 26.1.2012, at 6.31, Gedalya wrote: > I'm facing the need to migrate from a proprietary IMAP server to Dovecot. The migration must be as smooth and transparent as possible. > > The mailbox format I would want to use is Maildir++. 
> The storage format used by the current server is unknown, and I don't look forward to trying to reverse-engineer it. This leaves me with the option of reading the mailboxes using IMAP. There are tools like offlineimap or mbsync, and they do store the UID and UIDVALIDITY info. The last piece of the puzzle is a process to properly create the dovecot-uidlist and dovecot-uidvalidity files. So far I wasn't able to find anything on this. Are there any tips? Are there any tools available to do this job, or part of it?

Get Dovecot v2.1 and configure it to work. Then for migration add to dovecot.conf:

imapc_host = imap.example.com
imapc_port = 993
imapc_ssl = imaps
imapc_ssl_ca_dir = /etc/ssl/certs
mail_prefetch_count = 50

And do the migration one user at a time:

doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc:

From tss at iki.fi Thu Jan 26 14:31:43 2012
From: tss at iki.fi (Timo Sirainen)
Date: Thu, 26 Jan 2012 14:31:43 +0200
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
In-Reply-To: <4F2127A1.2010302@webfusion.com>
References: <4F2127A1.2010302@webfusion.com>
Message-ID: <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi>

On 26.1.2012, at 12.14, Mark Zealey wrote:
> I'm using dovecot 2.0.16 with a mysql user database. From time to time when we have a big influx of messages (perhaps more than 30 concurrent rcpt to:<> sessions at the same time so no auth-workers free?) or when we have a transient issue connecting to the database server, we see the message:
>
> Jan 25 16:38:23 mailbox dovecot: auth-worker: sql(foo at bar.com,1.2.3.4): Unknown user

This happens only when the SQL query doesn't return any rows, but does return success.

> and the lmtp process returns:
>
> 550 5.1.1 User doesn't exist: foo at bar.com
>
> This would be correct for a permanent error where the user doesn't exist in our database, however it seems to be doing this on transient errors too. Is this an issue with the code or perhaps some setting I have missed?

The problem is that temporary errors are returning "unknown user". Can you reproduce this somehow? Like if you stop MySQL it always returns that "Unknown user"?

From ar-dovecotlist at acrconsulting.co.uk Thu Jan 26 14:38:17 2012
From: ar-dovecotlist at acrconsulting.co.uk (Andrew Richards)
Date: 26 Jan 2012 12:38:17 +0000
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F20D718.9010805@gedalya.net>
References: <4F20D718.9010805@gedalya.net>
Message-ID: <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk>

On Thursday 26 January 2012 04:31:20 Gedalya wrote:
> I'm facing the need to migrate from a proprietary IMAP server to
> Dovecot. The migration must be as smooth and transparent as possible.

Ignoring the migration of individual mailboxes addressed in other replies, I trust you've met Perdition - very useful for this sort of situation,

http://horms.net/projects/perdition/

to provide an IMAP "Server" (actually proxy) that knows where the real mailboxes are located, and directs connections accordingly. That way you can move users over one-by-one as you've migrated them - helpful to test a few mailboxes first without affecting the bulk of users' mailboxes at all.

cheers,

Andrew.

From gedalya at gedalya.net Thu Jan 26 15:11:32 2012
From: gedalya at gedalya.net (Gedalya)
Date: Thu, 26 Jan 2012 08:11:32 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F2141D4.806@Media-Brokers.com>
References: <4F20D718.9010805@gedalya.net> <4F2141D4.806@Media-Brokers.com>
Message-ID: <4F215104.4000409@gedalya.net>

On 01/26/2012 07:06 AM, Charles Marcus wrote:
> On 2012-01-25 11:31 PM, Gedalya wrote:
>> This leaves me with the option of reading the mailboxes using IMAP.
>> There are tools like offlineimap or mbsync,
>
> Not familiar with those, but I think imapsync will do what you want?
>
> http://imapsync.lamiral.info/
>
> I do see that it references those two though...

As I understand, there is no way an IMAP-to-IMAP process can preserve UIDs, since new UIDs are assigned for every message by the target server. Also, imapsync found 0 messages in all mailboxes on my evil to-be-eliminated server, something I didn't bother troubleshooting much. Timo's idea sounds interesting, time to look into 2.1!

From gedalya at gedalya.net Thu Jan 26 15:18:32 2012
From: gedalya at gedalya.net (Gedalya)
Date: Thu, 26 Jan 2012 08:18:32 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk>
References: <4F20D718.9010805@gedalya.net> <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk>
Message-ID: <4F2152A8.2040302@gedalya.net>

On 01/26/2012 07:38 AM, Andrew Richards wrote:
> On Thursday 26 January 2012 04:31:20 Gedalya wrote:
>> I'm facing the need to migrate from a proprietary IMAP server to
>> Dovecot. The migration must be as smooth and transparent as possible.
> Ignoring the migration of individual mailboxes addressed in other replies, I
> trust you've met Perdition - very useful for this sort of situation,
> http://horms.net/projects/perdition/
>
> to provide an IMAP "Server" (actually proxy) that knows where the real
> mailboxes are located, and directs connections accordingly. That way you can
> move users over one-by-one as you've migrated them - helpful to test a few
> mailboxes first without affecting the bulk of users' mailboxes at all.
>
> cheers,
>
> Andrew.

Sounds very cool. I already have dovecot set up as a proxy, working, and it should allow me to forcefully disconnect users and lock them out while they are being migrated and then once they are done they'll be served locally rather than proxied. My main problem is that most connections are simply coming directly to the old server, using the deprecated hostname. I need all clients to use the right hostnames, or clog up this new server with redirectors and proxies for all the junk done on the old server.. bummer. What I might want to look into is actually setting up a proxy like this but on the evil (Windows) server - to get *him* to pass on those requests he shouldn't be handling.

From CMarcus at Media-Brokers.com Thu Jan 26 16:11:27 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 26 Jan 2012 09:11:27 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F215104.4000409@gedalya.net>
References: <4F20D718.9010805@gedalya.net> <4F2141D4.806@Media-Brokers.com> <4F215104.4000409@gedalya.net>
Message-ID: <4F215F0F.9010106@Media-Brokers.com>

On 2012-01-26 8:11 AM, Gedalya wrote:
> As I understand, there is no way an IMAP-to-IMAP process can preserve
> UIDs, since new UIDs are assigned for every message by the target server.
> Also, imapsync found 0 messages in all mailboxes on my evil
> to-be-eliminated server, something I didn't bother troubleshooting much.
> Timo's idea sounds interesting, time to look into 2.1!

Yep, it definitely sounds like the way to go...
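For a whole user base, the one-user-at-a-time command Timo gave above is easy to script; a sketch, assuming a hypothetical users.txt of user:password pairs and the imapc_* settings already present in dovecot.conf:

while IFS=: read -r user pass; do
  doveadm -o imapc_user="$user" -o imapc_password="$pass" backup -R -u "$user" imapc:
done < users.txt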
-- Best regards, Charles From Mark.Zealey at webfusion.com Thu Jan 26 16:37:49 2012 From: Mark.Zealey at webfusion.com (Mark Zealey) Date: Thu, 26 Jan 2012 14:37:49 +0000 Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection In-Reply-To: <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi> References: <4F2127A1.2010302@webfusion.com>, <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi> Message-ID: I've tried reproducing by having long running auth queries in the sql and KILLing them on the server, restarting the mysql service, and setting max auth workers to 1 and running 2 sessions at the same time (with long-running auth queries), but to no effect. There must be something else going on here; I saw it in particular when exim on our frontend servers had queued a large number of messages and suddenly released them all at once hence the auth-worker hypothesis although the log messages do not support this. I'll try to see if I can trigger this manually although we have been doing some massively parallel testing previously and not seen this. Mark ________________________________________ From: Timo Sirainen [tss at iki.fi] Sent: 26 January 2012 12:31 To: Mark Zealey Cc: dovecot at dovecot.org Subject: Re: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection On 26.1.2012, at 12.14, Mark Zealey wrote: > I'm using dovecot 2.0.16 with a mysql user database. From time to time when we have a big influx of messages (perhaps more than 30 concurrent rcpt to:<> sessions at the same time so no auth-workers free?) or when we have a transient issue connecting to the database server, we see the message: > > Jan 25 16:38:23 mailbox dovecot: auth-worker: sql(foo at bar.com,1.2.3.4): Unknown user This happens only when the SQL query doesn't return any rows, but does return success. > and the lmtp process returns: > > 550 5.1.1 User doesn't exist: foo at bar.com > > This would be correct for a permanent error where the user doesn't exist in our database, however it seems to be doing this on transient errors too. Is this an issue with the code or perhaps some setting I have missed? The problem is that temporary errors are returning "unknown user". Can you reproduce this somehow? Like if you stop MySQL it always returns that "Unknown user"? From lists at wildgooses.com Thu Jan 26 18:02:28 2012 From: lists at wildgooses.com (Ed W) Date: Thu, 26 Jan 2012 16:02:28 +0000 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F2152A8.2040302@gedalya.net> References: <4F20D718.9010805@gedalya.net> <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk> <4F2152A8.2040302@gedalya.net> Message-ID: <4F217914.1050501@wildgooses.com> Hi > Sounds very cool. I already have dovecot set up as a proxy, working, > and it should allow me to forcefully disconnect users and lock them > out while they are being migrated and then once they are done they'll > be served locally rather than proxied. My main problem is that most > connections are simply coming directly to the old server, using the > deprecated hostname. I need all clients to use the right hostnames, or > clog up this new server with redirectors and proxies for all the junk > done on the old server.. bummer. Why not put the old server IP to redirect to the new machine, then give the old machine some new temp IP in order to proxy back to it? That way you can do the proxying on the dovecot machine, which as you already established is working ok? 
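On Linux, that IP shuffle can also be approximated without renumbering anything, by DNATing the mail ports of the old address to the new proxy; a rough sketch with placeholder addresses (routing the return traffic, e.g. via SNAT, still has to be arranged):

iptables -t nat -A PREROUTING -d 192.0.2.10 -p tcp -m multiport --dports 110,143,993,995 -j DNAT --to-destination 192.0.2.20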
Good luck

Ed W

From lists at wildgooses.com Thu Jan 26 18:06:24 2012
From: lists at wildgooses.com (Ed W)
Date: Thu, 26 Jan 2012 16:06:24 +0000
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
In-Reply-To: 
References: <4F2127A1.2010302@webfusion.com>, <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi>
Message-ID: <4F217A00.8090504@wildgooses.com>

On 26/01/2012 14:37, Mark Zealey wrote:
> I've tried reproducing by having long running auth queries in the sql and KILLing them on the server, restarting the mysql service, and setting max auth workers to 1 and running 2 sessions at the same time (with long-running auth queries), but to no effect. There must be something else going on here; I saw it in particular when exim on our frontend servers had queued a large number of messages and suddenly released them all at once hence the auth-worker hypothesis although the log messages do not support this. I'll try to see if I can trigger this manually although we have been doing some massively parallel testing previously and not seen this.

Could it be a *timeout* rather than lack of worker processes? Theory would be that disk starvation causes other processes to take a long time to respond, hence the worker is *alive*, but doesn't return a response quickly enough, which in turn causes the "unknown user" message?

You could try a different disk I/O scheduler, or ionice to control the effect of these big bursts of disk activity on other processes? (Most MTA programs such as postfix and qmail do a lot of fsyncs - this will cause a lot of I/O activity and could easily starve other processes on the same box?)

Good luck

Ed W

From gedalya at gedalya.net Thu Jan 26 18:30:53 2012
From: gedalya at gedalya.net (Gedalya)
Date: Thu, 26 Jan 2012 11:30:53 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F217914.1050501@wildgooses.com>
References: <4F20D718.9010805@gedalya.net> <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk> <4F2152A8.2040302@gedalya.net> <4F217914.1050501@wildgooses.com>
Message-ID: <4F217FBD.6070908@gedalya.net>

On 01/26/2012 11:02 AM, Ed W wrote:
> Hi
>
>> Sounds very cool. I already have dovecot set up as a proxy, working,
>> and it should allow me to forcefully disconnect users and lock them
>> out while they are being migrated and then once they are done they'll
>> be served locally rather than proxied. My main problem is that most
>> connections are simply coming directly to the old server, using the
>> deprecated hostname. I need all clients to use the right hostnames,
>> or clog up this new server with redirectors and proxies for all the
>> junk done on the old server.. bummer.
>
> Why not put the old server IP to redirect to the new machine, then
> give the old machine some new temp IP in order to proxy back to it?
> That way you can do the proxying on the dovecot machine, which as you
> already established is working ok?
>
> Good luck
>
> Ed W

Yeap, that's what I'm going to do, except that I would have to proxy more than just IMAP and POP - it's a one-does-it-all kind of machine: accepting mail delivered from the outside, relaying outgoing mail, doing webmail, and doing all these things very poorly... I have the choice of forcing all users to change to the new, dedicated servers doing these things, or reimplementing / proxying all of this on my new dovecot server which I so desperately want to keep neat and tidy...
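As to the scheduler/ionice idea in Ed's message above: on Linux it comes down to something like the following; the process name and disk are placeholders:

ionice -c2 -n7 -p "$(pidof -s exim4)"    # demote the bursty MTA to the lowest best-effort I/O priority
cat /sys/block/sda/queue/scheduler       # show the available/active elevators
echo deadline > /sys/block/sda/queue/scheduler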
From tss at iki.fi Thu Jan 26 18:50:06 2012
From: tss at iki.fi (Timo Sirainen)
Date: Thu, 26 Jan 2012 18:50:06 +0200
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
In-Reply-To: <4F217A00.8090504@wildgooses.com>
References: <4F2127A1.2010302@webfusion.com>, <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi> <4F217A00.8090504@wildgooses.com>
Message-ID: <85410DCB-B5A8-44F3-A942-031C5E4C932C@iki.fi>

On 26.1.2012, at 18.06, Ed W wrote:
> Could it be a *timeout* rather than lack of worker processes?

The message in log was "Unknown user". The only reason this happens is if MySQL library's query functions returned success without any rows. No timeouts, crashes, or anything else can give that error message. So I'd say the problem is either in the MySQL library or the MySQL server.

Try if the attached patch gives any crashes. If it does, it means that the mysql library returned mysql_errno()=0 (success) even though it should have returned a failure. Or you could even change it to only:

i_assert(result->result != NULL);

if you're not using MySQL for anything other than auth. The other possibility is if in driver_mysql_result_next_row() the mysql_fetch_row() returns NULL, but also there I'm checking mysql_errno().

-------------- next part --------------
A non-text attachment was scrubbed...
Name: diff
Type: application/octet-stream
Size: 435 bytes
Desc: not available
URL: 
-------------- next part --------------

From lists at wildgooses.com Thu Jan 26 22:08:18 2012
From: lists at wildgooses.com (Ed W)
Date: Thu, 26 Jan 2012 20:08:18 +0000
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F217FBD.6070908@gedalya.net>
References: <4F20D718.9010805@gedalya.net> <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk> <4F2152A8.2040302@gedalya.net> <4F217914.1050501@wildgooses.com> <4F217FBD.6070908@gedalya.net>
Message-ID: <4F21B2B2.6030505@wildgooses.com>

Hi

> Yeap, that's what I'm going to do, except that I would have to proxy
> more than just IMAP and POP - it's a one-does-it-all kind of machine:
> accepting mail delivered from the outside, relaying outgoing mail,
> doing webmail, and doing all these things very poorly... I have the
> choice of forcing all users to change to the new, dedicated servers
> doing these things, or reimplementing / proxying all of this on my new
> dovecot server which I so desperately want to keep neat and tidy...

In that case I would suggest perhaps that the IP is taken over by a dedicated firewall box (running the OS of your choice). The firewall could then be used to port-forward the services to the individual machines responsible for each service. This would give you the benefit that you could easily move other services off/around. We are clearly off topic to dovecot...

Plenty of good firewall options. If you want small, compact and low power, then you can pick up a bunch of Intel-compatible boards around the low couple hundred £s mark fairly easily. Run your favourite distro and firewall on them. If you hadn't seen them before, I quite like Lanner for appliances, eg:

http://www.lannerinc.com/x86_Network_Appliances/x86_Desktop_Appliances

For example if you added a small appliance running Linux which runs that IP, then you could add intrusion detection, bounce the web traffic to the Windows box (or even just certain URLs, other URLs could go to some hypothetical Linux box, etc), port forwarding the mail to the new dovecot box, etc, etc. Incremental price would be surprisingly low, but lots of extra flexibility?
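The distinction Timo draws above - an empty result set versus a failed query - can also be checked from the shell; a sketch with hypothetical database, table and column names:

mysql -N -e "SELECT password FROM users WHERE userid='foo'" mail; echo "rc=$?"

An exit code of 0 with no rows printed means the lookup worked and the user really is unknown; a non-zero exit code means the query itself failed and ought to surface as a temporary failure rather than "Unknown user".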
Just a thought

Good luck

Ed W

From stan at hardwarefreak.com Thu Jan 26 22:51:02 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Thu, 26 Jan 2012 14:51:02 -0600
Subject: [Dovecot] v2.1.rc5 released
In-Reply-To: <20120126000126.GA19765@doctor.nl2k.ab.ca>
References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> <20120126000126.GA19765@doctor.nl2k.ab.ca>
Message-ID: <4F21BCB6.6030908@hardwarefreak.com>

On 1/25/2012 6:01 PM, The Doctor wrote:
> BSD/OS 4.3.1

A defunct/dead operating system, last released in 2003, support withdrawn in 2004. BSDI went belly up. Wind River acquired and then killed BSD/OS. You're using a dead, 9-year-old OS that hasn't seen official updates for 8 years. Do you think it's fair to ask application developers to support the oddities of your one-of-a-kind, ancient, patchwork of a platform?

We've had this discussion before. And I don't believe you ever provided a sane rationale for continuing to use an OS that's been officially dead for 8 years. What is the reason you are unable or unwilling to migrate to a newer and supported no-cost BSD variant, or Linux distro? You're trying to run bleeding-edge Dovecot, compiling it from source, on an 8-year-old platform...

-- 
Stan

From Mark.Zealey at webfusion.com Thu Jan 26 23:35:24 2012
From: Mark.Zealey at webfusion.com (Mark Zealey)
Date: Thu, 26 Jan 2012 21:35:24 +0000
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
In-Reply-To: <420B5E34BFEE9646B7198438F9978AE223E4CB48@mail01.internal.webfusion.com>
References: <4F2127A1.2010302@webfusion.com>, <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi>, <420B5E34BFEE9646B7198438F9978AE223E4CB48@mail01.internal.webfusion.com>
Message-ID: 

Hi Timo, thanks for the patch; I have now analyzed network dumps and discovered that the cause is actually our frontend mail servers, not dovecot - we were delivering to the wrong LMTP port, which we then use in the mysql query, hence getting empty records. Sorry about this!

Mark

________________________________________
From: Mark Zealey
Sent: 26 January 2012 14:37
To: Timo Sirainen
Cc: dovecot at dovecot.org
Subject: RE: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection

I've tried reproducing by having long running auth queries in the sql and KILLing them on the server, restarting the mysql service, and setting max auth workers to 1 and running 2 sessions at the same time (with long-running auth queries), but to no effect. There must be something else going on here; I saw it in particular when exim on our frontend servers had queued a large number of messages and suddenly released them all at once hence the auth-worker hypothesis although the log messages do not support this. I'll try to see if I can trigger this manually although we have been doing some massively parallel testing previously and not seen this.

Mark

________________________________________
From: Timo Sirainen [tss at iki.fi]
Sent: 26 January 2012 12:31
To: Mark Zealey
Cc: dovecot at dovecot.org
Subject: Re: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection

On 26.1.2012, at 12.14, Mark Zealey wrote:
> I'm using dovecot 2.0.16 with a mysql user database. From time to time when we have a big influx of messages (perhaps more than 30 concurrent rcpt to:<> sessions at the same time so no auth-workers free?)
or when we have a transient issue connecting to the database server, we see the message: > > Jan 25 16:38:23 mailbox dovecot: auth-worker: sql(foo at bar.com,1.2.3.4): Unknown user This happens only when the SQL query doesn't return any rows, but does return success. > and the lmtp process returns: > > 550 5.1.1 User doesn't exist: foo at bar.com > > This would be correct for a permanent error where the user doesn't exist in our database, however it seems to be doing this on transient errors too. Is this an issue with the code or perhaps some setting I have missed? The problem is that temporary errors are returning "unknown user". Can you reproduce this somehow? Like if you stop MySQL it always returns that "Unknown user"? From gedalya at gedalya.net Fri Jan 27 01:42:05 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 18:42:05 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: References: <4F20D718.9010805@gedalya.net> Message-ID: <4F21E4CD.3070001@gedalya.net> On 01/26/2012 07:27 AM, Timo Sirainen wrote: > On 26.1.2012, at 6.31, Gedalya wrote: > >> I'm facing the need to migrate from a proprietary IMAP server to Dovecot. The migration must be as smooth and transparent as possible. >> >> The mailbox format I would want to use is Maildir++. >> >> The storage format used by the current server is unknown, and I don't look forward to trying to reverse-engineer it. This leaves me with the option of reading the mailboxes using IMAP. There are tools like offlineimap or mbsync, and they do store the UID and UIDVALIDITY info. The last piece of the puzzle is a process to properly create the dovecot-uidlist and dovecot-uidvalidity files. So far I wasn't able to find anything on this. Are there any tips? Are there any tools available to do this job, or part of it? > Get Dovecot v2.1 and configure it to work. Then for migration add to dovecot.conf: > > imapc_host = imap.example.com > imapc_port = 993 > imapc_ssl = imaps > imapc_ssl_ca_dir = /etc/ssl/certs > mail_prefetch_count = 50 > > And do the migration one user at a time: > > doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc: > Still working on it on my side, but for now: # doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: Segmentation fault syslog: Jan 26 18:34:29 imap01 kernel: [ 9055.766548] doveadm[8015]: segfault at 4 ip b7765752 sp bff90600 error 4 in libdovecot-storage.so.0.0.0[b769a000+ff000] Jan 26 18:34:53 imap01 kernel: [ 9078.883024] doveadm[8046]: segfault at 4 ip b7828752 sp bf964450 error 4 in libdovecot-storage.so.0.0.0[b775d000+ff000] (I tried twice) Also, I happen to have no idea what I'm doing, but still, segfault.. This is a debian testing "wheezy" machine I put up to do the initial playing around, i386, using Dovecot prebuilt binary packages from http://xi.rename-it.nl/debian/pool/testing-auto/dovecot-2.1/ From tss at iki.fi Fri Jan 27 01:46:16 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 27 Jan 2012 01:46:16 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? 
In-Reply-To: <4F21E4CD.3070001@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> Message-ID: On 27.1.2012, at 1.42, Gedalya wrote: >> doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc: >> > Still working on it on my side, but for now: > > # doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: > Segmentation fault gdb backtrace would be helpful. You should be able to get that by running (as root): gdb --args doveadm ... bt full (assuming you haven't changed base_dir, otherwise it might fail) From gedalya at gedalya.net Fri Jan 27 02:00:44 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 19:00:44 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> Message-ID: <4F21E92C.4090509@gedalya.net> On 01/26/2012 06:46 PM, Timo Sirainen wrote: > On 27.1.2012, at 1.42, Gedalya wrote: > >>> doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc: >>> >> Still working on it on my side, but for now: >> >> # doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >> Segmentation fault > gdb backtrace would be helpful. You should be able to get that by running (as root): > > gdb --args doveadm ... > bt full > > (assuming you haven't changed base_dir, otherwise it might fail) > Does this help? GNU gdb (GDB) 7.3-debian Copyright (C) 2011 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "i486-linux-gnu". For bug reporting instructions, please see: ... Reading symbols from /usr/bin/doveadm...Reading symbols from /usr/lib/debug/usr/bin/doveadm...done. done. (gdb) run Starting program: /usr/bin/doveadm -o imapc_user=jedi at example.com -o imapc_password=**** backup -u jedi at example.com -R imapc: [Thread debugging using libthread_db enabled] Program received signal SIGSEGV, Segmentation fault. mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 213 mailbox-log.c: No such file or directory. in mailbox-log.c (gdb) bt full #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 No locals. #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 iter = 0x80cbd90 #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, worker=0x80c3138) at dsync-worker-local.c:316 log = iter = 0x8 rec = #3 dsync_worker_get_mailbox_log (worker=0x80c3138) at dsync-worker-local.c:386 ns = 0x80a5f90 ret = #4 0x0807032f in dsync_worker_get_mailbox_log (worker=0x80c3138) at dsync-worker-local.c:372 No locals. #5 local_worker_mailbox_iter_init (_worker=0x80c3138) at dsync-worker-local.c:410 worker = 0x80c3138 iter = 0x80b6920 patterns = {0x8076124 "*", 0x0} #6 0x08065a2f in dsync_brain_mailbox_list_init (brain=0x80b68e8, worker=0x80c3138) at dsync-brain.c:141 list = 0x80c5940 pool = 0x80c5930 #7 0x0806680f in dsync_brain_sync (brain=0x80b68e8) at dsync-brain.c:827 No locals. #8 dsync_brain_sync (brain=0x80b68e8) at dsync-brain.c:813 No locals. 
#9 0x08067038 in dsync_brain_sync_all (brain=0x80b68e8) at dsync-brain.c:895 old_state = DSYNC_STATE_GET_MAILBOXES __FUNCTION__ = "dsync_brain_sync_all" #10 0x08064cfd in cmd_dsync_run (_ctx=0x8098ec0, user=0x80a9e98) at doveadm-dsync.c:237 ctx = 0x8098ec0 worker1 = 0x80c3138 worker2 = 0x80aedb8 workertmp = brain = 0x80b68e8 #11 0x0805371e in doveadm_mail_next_user (error_r=0xbffffa1c, ctx=0x8098ec0, input=) at doveadm-mail.c:221 ret = #12 doveadm_mail_next_user (ctx=0x8098ec0, input=, error_r=0xbffffa1c) at doveadm-mail.c:187 error = ret = #13 0x08053b2e in doveadm_mail_single_user (ctx=0x8098ec0, input=0xbffffa6c) at doveadm-mail.c:242 ---Type to continue, or q to quit--- error = 0x0 ret = __FUNCTION__ = "doveadm_mail_single_user" #14 0x08053f58 in doveadm_mail_cmd (cmd=0x8096f60, argc=, argv=0x80901e4) at doveadm-mail.c:425 input = {module = 0x0, service = 0x8076b3a "doveadm", username = 0x8090242 "jedi at example.com", local_ip = {family = 0, u = { ip6 = {__in6_u = {__u6_addr8 = '\000' , __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, ip4 = {s_addr = 0}}}, remote_ip = {family = 0, u = {ip6 = {__in6_u = {__u6_addr8 = '\000' , __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, ip4 = {s_addr = 0}}}, local_port = 0, remote_port = 0, userdb_fields = 0x0, flags_override_add = 0, flags_override_remove = 0, no_userdb_lookup = 0} ctx = 0x8098ec0 getopt_args = wildcard_user = 0x0 c = #15 0x080543d9 in doveadm_mail_try_run (cmd_name=0x8090238 "backup", argc=5, argv=0x80901d4) at doveadm-mail.c:482 cmd__foreach_end = 0x8096f9c cmd = 0x8096f60 cmd_name_len = 6 __FUNCTION__ = "doveadm_mail_try_run" #16 0x08053347 in main (argc=5, argv=0x80901d4) at doveadm.c:352 cmd_name = i = quick_init = false c = From tss at iki.fi Fri Jan 27 02:06:22 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 27 Jan 2012 02:06:22 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21E92C.4090509@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> Message-ID: On 27.1.2012, at 2.00, Gedalya wrote: >>> # doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >>> Segmentation fault >> gdb backtrace would be helpful. You should be able to get that by running (as root): >> > 213 mailbox-log.c: No such file or directory. > in mailbox-log.c > (gdb) bt full > #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 > No locals. > #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 > iter = 0x80cbd90 > #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, worker=0x80c3138) at dsync-worker-local.c:316 Ah, right, dsync really wants index files. Of course it shouldn't crash, I'll fix that, but you should be able to work around it: rm -rf /tmp/imapc doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc:/tmp/imapc From gedalya at gedalya.net Fri Jan 27 02:17:42 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 19:17:42 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? 
In-Reply-To: References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> Message-ID: <4F21ED26.6020908@gedalya.net> On 01/26/2012 07:06 PM, Timo Sirainen wrote: > On 27.1.2012, at 2.00, Gedalya wrote: > >>>> # doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >>>> Segmentation fault >>> gdb backtrace would be helpful. You should be able to get that by running (as root): >>> >> 213 mailbox-log.c: No such file or directory. >> in mailbox-log.c >> (gdb) bt full >> #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 >> No locals. >> #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 >> iter = 0x80cbd90 >> #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, worker=0x80c3138) at dsync-worker-local.c:316 > Ah, right, dsync really wants index files. Of course it shouldn't crash, I'll fix that, but you should be able to work around it: > > rm -rf /tmp/imapc > doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc:/tmp/imapc > # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** backup -u jedi at example.com -R imapc:/tmp/imapc dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS cannot access mailbox Drafts dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying to run backup in wrong direction. Source is empty and destination is not. To be clear, I am trying to pull all the mailboxes from the old server on to this dovecot server, which has no mailboxes populated yet. It looks like this command would be pushing the messages from here to the imapc_host rather than pulling? From gedalya at gedalya.net Fri Jan 27 02:33:46 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 19:33:46 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21ED26.6020908@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> Message-ID: <4F21F0EA.5090700@gedalya.net> On 01/26/2012 07:17 PM, Gedalya wrote: > On 01/26/2012 07:06 PM, Timo Sirainen wrote: >> On 27.1.2012, at 2.00, Gedalya wrote: >> >>>>> # doveadm -o imapc_user=gedalya at thisdomain.com -o >>>>> imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >>>>> Segmentation fault >>>> gdb backtrace would be helpful. You should be able to get that by >>>> running (as root): >>>> >>> 213 mailbox-log.c: No such file or directory. >>> in mailbox-log.c >>> (gdb) bt full >>> #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 >>> No locals. >>> #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 >>> iter = 0x80cbd90 >>> #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, >>> worker=0x80c3138) at dsync-worker-local.c:316 >> Ah, right, dsync really wants index files. Of course it shouldn't >> crash, I'll fix that, but you should be able to work around it: >> >> rm -rf /tmp/imapc >> doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R >> imapc:/tmp/imapc >> > # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** > backup -u jedi at example.com -R imapc:/tmp/imapc > dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS > cannot access mailbox Drafts > dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying > to run backup in wrong direction. Source is empty and destination is not. 
> > To be clear, I am trying to pull all the mailboxes from the old server > on to this dovecot server, which has no mailboxes populated yet. It > looks like this command would be pushing the messages from here to the > imapc_host rather than pulling? > This got me somewhere... # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=2 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=3 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=4 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=5 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=6 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=7 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=8 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=9 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=10 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=11 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=12 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=13 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=14 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=15 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=16 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=17 failed: Message GUID not available in this server (guid) Should I / how can I disable this message GUID thing? From gedalya at gedalya.net Fri Jan 27 02:44:01 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 19:44:01 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21ED26.6020908@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> Message-ID: <4F21F351.3090907@gedalya.net> On 01/26/2012 07:17 PM, Gedalya wrote: > On 01/26/2012 07:06 PM, Timo Sirainen wrote: >> On 27.1.2012, at 2.00, Gedalya wrote: >> >>>>> # doveadm -o imapc_user=gedalya at thisdomain.com -o >>>>> imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >>>>> Segmentation fault >>>> gdb backtrace would be helpful. You should be able to get that by >>>> running (as root): >>>> >>> 213 mailbox-log.c: No such file or directory. >>> in mailbox-log.c >>> (gdb) bt full >>> #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 >>> No locals. 
>>> #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 >>> iter = 0x80cbd90 >>> #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, >>> worker=0x80c3138) at dsync-worker-local.c:316 >> Ah, right, dsync really wants index files. Of course it shouldn't >> crash, I'll fix that, but you should be able to work around it: >> >> rm -rf /tmp/imapc >> doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R >> imapc:/tmp/imapc >> > # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** > backup -u jedi at example.com -R imapc:/tmp/imapc > dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS > cannot access mailbox Drafts > dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying > to run backup in wrong direction. Source is empty and destination is not. > > To be clear, I am trying to pull all the mailboxes from the old server > on to this dovecot server, which has no mailboxes populated yet. It > looks like this command would be pushing the messages from here to the > imapc_host rather than pulling? > Sorry, my bad. That was a malfunction on the old IMAP server - that mailbox is inaccessible. Tried with another account: doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** backup -u jedi1 at example.com -R imapc:/tmp/imapc dsync(jedi1 at example.com): Error: msg guid lookup failed: Message GUID not available in this server dsync(jedi1 at example.com): Error: msg guid lookup failed: Message GUID not available in this server dsync(jedi1 at example.com): Panic: file dsync-brain.c: line 901 (dsync_brain_sync_all): assertion failed: (brain->state != old_state) dsync(jedi1 at example.com): Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x3e98a) [0xb756a98a] -> /usr/lib/dovecot/libdovecot.so.0(default_fatal_handler+0x41) [0xb756aa91] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0xb753f66b] -> doveadm() [0x8067095] -> doveadm() [0x8064cfd] -> doveadm() [0x805371e] -> doveadm(doveadm_mail_single_user+0x5e) [0x8053b2e] -> doveadm() [0x8053f58] -> doveadm(doveadm_mail_try_run+0x139) [0x80543d9] -> doveadm(main+0x3a7) [0x8053347] -> /lib/i386-linux-gnu/i686/cmov/libc.so.6(__libc_start_main+0xe6) [0xb73e8e46] -> doveadm() [0x8053519] Aborted So there :D From tss at iki.fi Fri Jan 27 02:45:45 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 27 Jan 2012 02:45:45 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21F0EA.5090700@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> <4F21F0EA.5090700@gedalya.net> Message-ID: <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi> On 27.1.2012, at 2.33, Gedalya wrote: >> # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** backup -u jedi at example.com -R imapc:/tmp/imapc >> dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS cannot access mailbox Drafts Apparently your server doesn't like sending STATUS command to Drafts mailbox and returns a failure. This isn't very nice from it. >> dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying to run backup in wrong direction. Source is empty and destination is not. The -R parameter reversed the direction. It possibly fails because of the STATUS error. Or maybe some other problem, I'd need to look into it. You could try giving "-m INBOX" parameter to see if it works for one mailbox. > This got me somewhere... 
> > # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all > doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid) Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61 But doveadm import doesn't preserve UIDs. From gedalya at gedalya.net Fri Jan 27 02:57:39 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 19:57:39 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> <4F21F0EA.5090700@gedalya.net> <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi> Message-ID: <4F21F683.3080200@gedalya.net> On 01/26/2012 07:45 PM, Timo Sirainen wrote: > On 27.1.2012, at 2.33, Gedalya wrote: > >>> # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** backup -u jedi at example.com -R imapc:/tmp/imapc >>> dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS cannot access mailbox Drafts > Apparently your server doesn't like sending STATUS command to Drafts mailbox and returns a failure. This isn't very nice from it. > This particular is broken - I'm pretty sure it doesn't do this for other accounts. >>> dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying to run backup in wrong direction. Source is empty and destination is not. > The -R parameter reversed the direction. It possibly fails because of the STATUS error. Or maybe some other problem, I'd need to look into it. You could try giving "-m INBOX" parameter to see if it works for one mailbox. Must be that broken account. >> This got me somewhere... >> >> # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all >> doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid) > Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61 > > But doveadm import doesn't preserve UIDs. OK - I got a different error from running doveadm backup on a non-broken account - see my other email :) From tss at iki.fi Fri Jan 27 03:00:15 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 27 Jan 2012 03:00:15 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21F683.3080200@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> <4F21F0EA.5090700@gedalya.net> <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi> <4F21F683.3080200@gedalya.net> Message-ID: On 27.1.2012, at 2.57, Gedalya wrote: >>> # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all >>> doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid) >> Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61 >> >> But doveadm import doesn't preserve UIDs. > OK - I got a different error from running doveadm backup on a non-broken account - see my other email :) The GUID error is the same. The crash is probably the result of it. Try if upgrading fixes it. From gedalya at gedalya.net Fri Jan 27 03:03:33 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 20:03:33 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? 
In-Reply-To: References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> <4F21F0EA.5090700@gedalya.net> <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi> <4F21F683.3080200@gedalya.net> Message-ID: <4F21F7E5.1020606@gedalya.net> On 01/26/2012 08:00 PM, Timo Sirainen wrote: > On 27.1.2012, at 2.57, Gedalya wrote: > >>>> # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all >>>> doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid) >>> Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61 >>> >>> But doveadm import doesn't preserve UIDs. >> OK - I got a different error from running doveadm backup on a non-broken account - see my other email :) > The GUID error is the same. The crash is probably the result of it. Try if upgrading fixes it. > OK. Thank you very very much for everything so far. I'm going to wait for the changes to pop up in the prebuilt binary repository - I assume it's a matter of hours? For now I need to go eat something :-) and get back to this later, I'll post the results at that time. From gedalya at gedalya.net Fri Jan 27 06:16:42 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 23:16:42 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> <4F21F0EA.5090700@gedalya.net> <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi> <4F21F683.3080200@gedalya.net> Message-ID: <4F22252A.4070204@gedalya.net> On 01/26/2012 08:00 PM, Timo Sirainen wrote: > On 27.1.2012, at 2.57, Gedalya wrote: > >>>> # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all >>>> doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid) >>> Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61 >>> >>> But doveadm import doesn't preserve UIDs. >> OK - I got a different error from running doveadm backup on a non-broken account - see my other email :) > The GUID error is the same. The crash is probably the result of it. Try if upgrading fixes it. > Yeap. Worked impeccably (doveadm backup)!! Pretty fast, too. Very impressed! I'll have to do some very thorough testing with various clients etc, will post interesting findings if any come up. From alexis.lelion at gmail.com Fri Jan 27 12:59:02 2012 From: alexis.lelion at gmail.com (Alexis Lelion) Date: Fri, 27 Jan 2012 11:59:02 +0100 Subject: [Dovecot] LMTP : Can't handle mixed proxy/non-proxy destinations Message-ID: Hello, In my current setup, I uses two mailservers to handle the users connections, and my emails are stored on a distant server using NFS (maildir architecture) Dovecot is both my IMAP server and the delivery agent (LMTP via postfix) To avoid indexing issues related to NFS, proxying is enabled both on IMAP and LMTP. 
But when a mail is sent to users that are shared between the servers, I get the error mentioned in the subject in the logs:

Jan 25 09:05:12 mail01 postfix/lmtp[23934]: A92709300DB: to=<user_on_mail02 at domain.com>, relay=mail01.domain.com[private/dovecot-lmtp], delay=0.07, delays=0.01/0/0/0.06, dsn=4.3.0, status=deferred (host mail01.domain.com[private/dovecot-lmtp] said: 451 4.3.0 <user_on_mail02 at domain.com> Can't handle mixed proxy/non-proxy destinations (in reply to RCPT TO command))

From what I saw, the mail is then put in the queue, and waits until the next time Postfix will browse the queue. The mail will then be correctly delivered on "mail02". However, the "queue_run_delay" postfix parameter is set to 900, which means that the mail will be delivered with a lag of 15 minutes.

I was wondering if there was another way of handling this, for example by triggering an immediate queue lookup from postfix or forwarding a copy of the mail to the other server. Note that the postfix "queue_run_delay" was increased to 15min on purpose, so I cannot change that.

I'm using dovecot 2.0.15 on Debian Squeeze, kernel 2.6.32-5-amd64.

Thanks,
Alexis

From clube33-mail at yahoo.com Fri Jan 27 14:32:17 2012
From: clube33-mail at yahoo.com (Gustavo)
Date: Fri, 27 Jan 2012 04:32:17 -0800 (PST)
Subject: [Dovecot] Problem with Postfix + Dovecot + MySQL + Squirrelmail
Message-ID: <1327667537.79787.YahooMailNeo@web65309.mail.ac2.yahoo.com>

Dear friends,

I'm trying to configure webmail on my server using Postfix + Dovecot + MySQL + Squirrelmail. My system is Debian 6 and the dovecot version is:

#dovecot --version
1.2.15

But when I try to access an account on squirrel I receive this message:

"ERROR: Error connecting to IMAP server: localhost. 111 : Connection refused"

Looking for the problem I found this:

#service dovecot start
Starting IMAP/POP3 mail server: dovecot
Last died with error (see error log for more information): Auth process died too early - shutting down
If you have trouble with authentication failures, enable auth_debug setting. See http://wiki.dovecot.org/WhyDoesItNotWork
This message goes away after the first successful login.

And the status of dovecot is:

#service dovecot status
dovecot is not running ... failed!

The other services seem to be OK:

#service postfix status
postfix is running.

# service mysql status
/usr/bin/mysqladmin  Ver 8.42 Distrib 5.1.49, for debian-linux-gnu on x86_64
Copyright 2000-2008 MySQL AB, 2008 Sun Microsystems, Inc.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL license

Server version    5.1.49-3
Protocol version  10
Connection        Localhost via UNIX socket
UNIX socket       /var/run/mysqld/mysqld.sock
Uptime:           32 days 14 hours 23 min 39 sec

Threads: 1  Questions: 6743  Slow queries: 0  Opens: 385  Flush tables: 1  Open tables: 47  Queries per second avg: 0.2

Looking at dovecot.conf I found some inconsistencies:

On dovecot.conf:

protocol lda {
  sendmail_path = /usr/lib/sendmail
  auth_socket_path = /var/run/dovecot/auth-master
}
socket listen {
  master {
    path = /var/run/dovecot/auth-master
    mode = 0600
    user = vmail
    group = mail
  }
  client {
    path = /var/run/dovecot/auth-client
    mode = 0660
    user = vmail
    group = mail
  }
}

But I can't find these files in the system!

/var/run/dovecot# ls
total 20K
drwxr-xr-x 3 root root    4.0K Jan 27 11:35 .
drwxr-xr-x 8 root root    4.0K Jan 27 09:33 ..
srw------- 1 root root       0 Jan 27 11:35 auth-worker.26163
srwxrwxrwx 1 root root       0 Jan 27 11:35 dict-server
lrwxrwxrwx 1 root root      25 Jan 27 11:35 dovecot.conf -> /etc/dovecot/dovecot.conf
drwxr-x--- 2 root dovecot 4.0K Jan 27 11:35 login
-rw------- 1 root root      43 Jan 27 11:35 master-fatal.lastlog
-rw------- 1 root root       6 Jan 27 11:35 master.pid

/var/run/dovecot# ls login/
total 12K
drwxr-x--- 2 root dovecot 4.0K Jan 27 11:35 .
drwxr-xr-x 3 root root    4.0K Jan 27 11:35 ..
srw-rw---- 1 root dovecot    0 Jan 27 11:35 default
-rw-r--r-- 2 root root     230 Jan 23 19:12 ssl-parameters.dat

I think maybe that is the problem. Does anyone know how I can fix that? Or what is the real problem?

Thanks for any help!

--
Gustavo
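A note for anyone landing here with the same "Auth process died too early" symptom: the init script output isn't the real error, so the first step is to read Dovecot's own error log, which is the same point Timo makes in his reply later in this digest (http://wiki.dovecot.org/Logging). On a stock Debian install with dovecot 1.x logging to syslog, something like the following usually surfaces the fatal line; the log paths are typical defaults, not taken from this thread:

  # the fatal auth error normally ends up in syslog's mail facility
  tail -n 50 /var/log/mail.err
  grep -i dovecot /var/log/mail.log | tail -n 20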
From odhiambo at gmail.com Fri Jan 27 17:28:50 2012
From: odhiambo at gmail.com (Odhiambo Washington)
Date: Fri, 27 Jan 2012 18:28:50 +0300
Subject: [Dovecot] v2.1.rc5 released
In-Reply-To: <4F21BCB6.6030908@hardwarefreak.com>
References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> <20120126000126.GA19765@doctor.nl2k.ab.ca> <4F21BCB6.6030908@hardwarefreak.com>
Message-ID: 

On Thu, Jan 26, 2012 at 23:51, Stan Hoeppner wrote:

> On 1/25/2012 6:01 PM, The Doctor wrote:
> > BSD/OS 4.3.1
>
> A defunct/dead operating system, last released in 2003, support
> withdrawn in 2004. BSDI went belly up. Wind River acquired and then
> killed BSD/OS. You're using a dead, 9 year old OS, that hasn't seen
> official updates for 8 years.
>
> Do you think it's fair to ask application developers to support the
> oddities of your one-of-a-kind, ancient, patchwork of a platform?
>
> We've had this discussion before. And I don't believe you ever provided
> a sane rationale for continuing to use an OS that's been officially dead
> for 8 years. What is the reason you are unable or unwilling to migrate
> to a newer and supported no cost BSD variant, or Linux distro?
>
> You're trying to run bleeding edge Dovecot, compiling it from source, on
> an 8 year old platform...

Maybe "The Doctor" has no idea how to migrate. I see no other sane reason to continue running that OS.

--
Best regards,
Odhiambo WASHINGTON,
Nairobi,KE
+254733744121/+254722743223
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
I can't hear you -- I'm using the scrambler.
Please consider the environment before printing this email.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 652 bytes
Desc: not available
URL: 

From mcazzador at gmail.com Fri Jan 27 18:48:31 2012
From: mcazzador at gmail.com (Matteo Cazzador)
Date: Fri, 27 Jan 2012 17:48:31 +0100
Subject: [Dovecot] dovecot imap cluster
Message-ID: 

Hello, I'm using postfix as the SMTP server, and I need to choose an IMAP server with a special feature.

I have a customer with 3 different geographic locations.

Every location has a mail server for the same domain (example.com).

If user1 at example.com receives mail from outside, this mail goes to every location's server.

I've a problem now: is it possible to synchronize the state (mail flags) of user1's IMAP folder mails on every location's server?

For example, if user1 reads a mail on server 1, is it possible to change the flag of the same mail file on server 2 and server 3?

Is it possible to use dsync for it?

I need something like an IMAP cluster.

Or an action in post-processing when an IMAP mail is read.

I can't use a distributed file system.

Thanks a lot

--
Respect the environment: if it's not necessary, don't print this email.
******************************************
Ing. Matteo Cazzador
Email: mcazzador at gmail.com
******************************************

From info at simonecaruso.com Fri Jan 27 20:11:59 2012
From: info at simonecaruso.com (Simone Caruso)
Date: Fri, 27 Jan 2012 19:11:59 +0100
Subject: [Dovecot] dovecot imap cluster
In-Reply-To: 
References: 
Message-ID: <4F22E8EF.7070609@simonecaruso.com>

On 27/01/2012 17:48, Matteo Cazzador wrote:
> Hello, I'm using postfix as the SMTP server, and I need to choose an
> IMAP server with a special feature.
>
> I have a customer with 3 different geographic locations.
>
> Every location has a mail server for the same domain (example.com).
>
> If user1 at example.com receives mail from outside, this mail goes to
> every location's server.
>
> I've a problem now: is it possible to synchronize the state (mail flags)
> of user1's IMAP folder mails on every location's server?
>
> For example, if user1 reads a mail on server 1, is it possible to change
> the flag of the same mail file on server 2 and server 3?
>
> Is it possible to use dsync for it?
>
> I need something like an IMAP cluster.
>
> Or an action in post-processing when an IMAP mail is read.
>
> I can't use a distributed file system.
>
> Thanks a lot
>
Synchronize your storage with DRBD (or an async replica like rsync) and use dovecot director for connection persistence.

--
Simone Caruso

From me at junc.org Fri Jan 27 22:53:02 2012
From: me at junc.org (Benny Pedersen)
Date: Fri, 27 Jan 2012 21:53:02 +0100
Subject: [Dovecot] v2.1.rc5 released
In-Reply-To: <4F21BCB6.6030908@hardwarefreak.com>
References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> <20120126000126.GA19765@doctor.nl2k.ab.ca> <4F21BCB6.6030908@hardwarefreak.com>
Message-ID: 

On Thu, 26 Jan 2012 14:51:02 -0600, Stan Hoeppner wrote:

> You're trying to run bleeding edge Dovecot, compiling it from source, on
> an 8 year old platform...

I remember freebsd 4.9 installed from 2 1440kb floppy disks; why is upgrading without reinstalling so hard to keep up with?

gentoo/funtoo keeps emerge world going forever, and portage exists on freebsd

From tss at iki.fi Fri Jan 27 22:57:05 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 27 Jan 2012 22:57:05 +0200
Subject: [Dovecot] dovecot imap cluster
In-Reply-To: <4F22E8EF.7070609@simonecaruso.com>
References: <4F22E8EF.7070609@simonecaruso.com>
Message-ID: <3797A713-4DA9-4AEC-A155-006E3574BB6C@iki.fi>

On 27.1.2012, at 20.11, Simone Caruso wrote:

>> I have a customer with 3 different geographic locations.
>>
>> Every location has a mail server for the same domain (example.com).
>>
>> If user1 at example.com receives mail from outside, this mail goes to
>> every location's server.
>>
>> I've a problem now: is it possible to synchronize the state (mail flags)
>> of user1's IMAP folder mails on every location's server?
>
> Synchronize your storage with DRBD (or an async replica like rsync) and use dovecot
> director for connection persistence.

There are a couple of problems with DRBD and most (all?) other filesystem based solutions when doing multi-master replication across wide geographic locations:

1. Multi-master requires synchronous replication -> latency may be very high -> performance probably is bad enough that the system is unusable.

2.
Network outages are still common -> you can't handle split brain situations in filesystem level without either a) loss of availability (everyone's email down) or b) data loss/corruption (what do you do when multiple sites have modified the same file?) With dsync-based replication it's possible to avoid both of these problems, because application-level replication can intelligently handle situations where asynchronous replication results in data conflicts. (This kind of conflict resolution is also what I hope to do with some nosql database in future when Dovecot supports them.) I've been working on dsync-based easy-to-use replication recently, and it's almost in a condition where I'm going to start using it myself (maybe this weekend). From doctor at doctor.nl2k.ab.ca Fri Jan 27 23:03:11 2012 From: doctor at doctor.nl2k.ab.ca (The Doctor) Date: Fri, 27 Jan 2012 14:03:11 -0700 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> <20120126000126.GA19765@doctor.nl2k.ab.ca> <4F21BCB6.6030908@hardwarefreak.com> Message-ID: <20120127210310.GA2218@doctor.nl2k.ab.ca> On Fri, Jan 27, 2012 at 09:53:02PM +0100, Benny Pedersen wrote: > On Thu, 26 Jan 2012 14:51:02 -0600, Stan Hoeppner wrote: > >> You're trying to run bleeding edge Dovecot, compiling it from source, on >> an 8 year old platform... > > i remember freebsd 4.9 installed from 2 1440kb floppy disks, why is > upgradeing so hard to keep without reinstalling ? > > gentoo/funtoo it keeps emerge world forever, and portage exists on freebsd I got 2.1rc to work on this old work horse, just that the --as-needed flag needs to be edited out of 21 files. IT might be easier just in configuration to look up which version of ld you have as if it does not need the --as-needed flag. -- Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca God, Queen and country! Never Satan President Republic! Beware AntiChrist rising! https://www.fullyfollow.me/rootnl2k Birthdate : 29 Jan 1969 Croydon, Surrey, UK From me at junc.org Fri Jan 27 23:31:55 2012 From: me at junc.org (Benny Pedersen) Date: Fri, 27 Jan 2012 22:31:55 +0100 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: <20120127210310.GA2218@doctor.nl2k.ab.ca> References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> <20120126000126.GA19765@doctor.nl2k.ab.ca> <4F21BCB6.6030908@hardwarefreak.com> <20120127210310.GA2218@doctor.nl2k.ab.ca> Message-ID: <70d5df2910c161d844f6dbb7aa8fef8c@junc.org> On Fri, 27 Jan 2012 14:03:11 -0700, The Doctor wrote: > IT might be easier just in configuration to look up > which version of ld you have as if it does not need the --as-needed > flag. replyed sent privately, keep up the good work on freebsd :=) From me at junc.org Sat Jan 28 00:05:57 2012 From: me at junc.org (Benny Pedersen) Date: Fri, 27 Jan 2012 23:05:57 +0100 Subject: [Dovecot] =?utf-8?q?IMAP_to_Maildir_Migration_preserving_UIDs=3F?= In-Reply-To: <4F20D718.9010805@gedalya.net> References: <4F20D718.9010805@gedalya.net> Message-ID: <2989c8bf4cccf90002e99389385c97d8@junc.org> On Wed, 25 Jan 2012 23:31:20 -0500, Gedalya wrote: > I'm facing the need to migrate from a proprietary IMAP server to > Dovecot. The migration must be as smooth and transparent as possible. 
setup dovecot and make it listen on 127.0.0.2 only, modify your current to only listen on 127.0.0.1 this so you now can have 2 imap servers running at the same time next step is here http://www.howtoforge.com/how-to-migrate-mailboxes-between-imap-servers-with-imapsync when all accounts is transfered, stop the old server, make dovecot listen on any ip, done, it worked for me when i changed from courier-imap to dovecot From gedalya at gedalya.net Sat Jan 28 00:35:40 2012 From: gedalya at gedalya.net (Gedalya) Date: Fri, 27 Jan 2012 17:35:40 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> <4F21F0EA.5090700@gedalya.net> <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi> <4F21F683.3080200@gedalya.net> Message-ID: <4F2326BC.608@gedalya.net> On 01/26/2012 08:00 PM, Timo Sirainen wrote: > On 27.1.2012, at 2.57, Gedalya wrote: > >>>> # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all >>>> doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid) >>> Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61 >>> >>> But doveadm import doesn't preserve UIDs. >> OK - I got a different error from running doveadm backup on a non-broken account - see my other email :) > The GUID error is the same. The crash is probably the result of it. Try if upgrading fixes it. > This is what I ended up doing. I have the production machine acting as a dovecot imap server, and as a proxy for accounts not yet migrated. Running dovecot 2.0.15, with directly attached 6 TB of storage. Per Timo's instructions, I set up a quick VM running debian wheezy and the latest dovecot 2.1, and copied the config from the production server with tiny modifications, and connected it to the same mysql user database. I gave this machine the same hostname as the production machine, just so that the maildir filenames end up looking neat. I don't know if this has anything more than psychological value :-) I mounted the storage from the production machine (sshfs surprisingly didn't seem slower than NFS) and set up dovecot 2.1 to find the mailboxes under there, then things like doveadm -o imapc_user=jedi1 at example.com -o imapc_password=****** backup -u jedi1 at example.com -R imapc:/tmp/imapc started doing the job. No output, no problems. So far the only glitch I noticed is that I have dovecot autocreate a Spam folder and when starting a Windows Live Mail which was reading a proxied account, after it was migrated and served by dovecot, it doesn't find the Spam folder until I click "Download all folders". We have thousands of mailboxes being read from every conceivable client, so there will be more tiny issues like this. Can't wait to test a blackberry. Other than that, things work as intended - UID and UIDVALIDITY seem to be preserved, the clients don't seem to notice the migration or react to it in any way. What's left is to wrap around this a proper process to lock the mailbox, essentially put the right things in the database in the beginning and in the end of the process. Looks beautiful. From kyle.lafkoff at cpanel.net Sat Jan 28 00:57:15 2012 From: kyle.lafkoff at cpanel.net (Kyle Lafkoff) Date: Fri, 27 Jan 2012 16:57:15 -0600 Subject: [Dovecot] Test suite? Message-ID: <5319F037-A973-45EE-9129-93489C026619@cpanel.net> Hi I am building a RPM for dovecot. 
Is there a test suite available I could use during the build to verify proper functionality? Thanks!

Kyle

From tss at iki.fi Sat Jan 28 01:15:53 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 01:15:53 +0200
Subject: [Dovecot] Test suite?
In-Reply-To: <5319F037-A973-45EE-9129-93489C026619@cpanel.net>
References: <5319F037-A973-45EE-9129-93489C026619@cpanel.net>
Message-ID: <37E8F766-8456-49D2-8360-DB70288E7A8A@iki.fi>

On 28.1.2012, at 0.57, Kyle Lafkoff wrote:

> I am building a RPM for dovecot. Is there a test suite available I could use during the build to verify proper functionality? Thanks!

It would be nice to have a proper finished test suite testing all kinds of functionality. Unfortunately I haven't had time to write such a thing, and no one's tried to help create one.

There is "make check" that you can run, which goes through some unit tests, but it's not very useful in catching bugs.

There is also the imaptest tool (http://imapwiki.org/ImapTest), which is very useful in catching bugs. I've been planning on creating a comprehensive test suite by creating Dovecot-specific scripts for imaptest and running them against many different Dovecot configurations (mbox/maildir/sdbox/mdbox formats each against different kinds of namespaces, as well as many other tests). That plan has existed several years now, but unfortunately only in my head. Perhaps soon I can hire someone else to do that via my company. :)

From stan at hardwarefreak.com Sat Jan 28 02:23:35 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Fri, 27 Jan 2012 18:23:35 -0600
Subject: [Dovecot] dovecot imap cluster
In-Reply-To: <3797A713-4DA9-4AEC-A155-006E3574BB6C@iki.fi>
References: <4F22E8EF.7070609@simonecaruso.com> <3797A713-4DA9-4AEC-A155-006E3574BB6C@iki.fi>
Message-ID: <4F234007.9030907@hardwarefreak.com>

On 1/27/2012 2:57 PM, Timo Sirainen wrote:
> On 27.1.2012, at 20.11, Simone Caruso wrote:
>
>>> I have a customer with 3 different geographic locations.
>>>
>>> Every location has a mail server for the same domain (example.com).
>>>
>>> If user1 at example.com receives mail from outside, this mail goes to
>>> every location's server.
>>>
>>> I've a problem now: is it possible to synchronize the state (mail flags)
>>> of user1's IMAP folder mails on every location's server?
>>
>> Synchronize your storage with DRBD (or an async replica like rsync) and use dovecot
>> director for connection persistence.
>
>
> There are a couple of problems with DRBD and most (all?) other filesystem based solutions when doing multi-master replication across wide geographic locations:
>
> 1. Multi-master requires synchronous replication -> latency may be very high -> performance probably is bad enough that the system is unusable.
>
> 2. Network outages are still common -> you can't handle split brain situations in filesystem level without either a) loss of availability (everyone's email down) or b) data loss/corruption (what do you do when multiple sites have modified the same file?)
>
> With dsync-based replication it's possible to avoid both of these problems, because application-level replication can intelligently handle situations where asynchronous replication results in data conflicts. (This kind of conflict resolution is also what I hope to do with some nosql database in future when Dovecot supports them.) I've been working on dsync-based easy-to-use replication recently, and it's almost in a condition where I'm going to start using it myself (maybe this weekend).
Can you provide a basic diagram/high level description of how this dsync replication would be configured to work over a 2 node wide area network? Are we looking at something like period scripts, something more automatic, a replication daemon? -- Stan From tss at iki.fi Sat Jan 28 02:32:15 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 02:32:15 +0200 Subject: [Dovecot] dovecot imap cluster In-Reply-To: <4F234007.9030907@hardwarefreak.com> References: <4F22E8EF.7070609@simonecaruso.com> <3797A713-4DA9-4AEC-A155-006E3574BB6C@iki.fi> <4F234007.9030907@hardwarefreak.com> Message-ID: On 28.1.2012, at 2.23, Stan Hoeppner wrote: >> With dsync-based replication it's possible to avoid both of these problems, because application-level replication can intelligently handle situations where asynchronous replication results in data conflicts. (This kind of conflict resolution is also what I hope to do with some nosql database in future when Dovecot supports them.) I've been working on dsync-based easy-to-use replication recently, and it's almost in a condition where I'm going to start using it myself (maybe this weekend). > > Can you provide a basic diagram/high level description of how this dsync > replication would be configured to work over a 2 node wide area network? I'll write a description at some point.. It's anyway meant to be more scalable than just 2 nodes, so the idea is to have userdb lookup return the 2 (or more) replicas. > Are we looking at something like period scripts, something more > automatic, a replication daemon? It's a replication daemon that basically calls "doveadm sync" when needed (via doveadm server connection). Initially it's not as optimal from performance point of view as it could, but should get better. :) From dmiller at amfes.com Sat Jan 28 09:15:33 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Fri, 27 Jan 2012 23:15:33 -0800 Subject: [Dovecot] Crash on mail folder delete In-Reply-To: <4F209878.5040505@amfes.com> References: <4F20922C.60206@amfes.com> <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> <4F20939E.4010903@amfes.com> <4F209878.5040505@amfes.com> Message-ID: On 1/25/2012 4:04 PM, Daniel L. Miller wrote: > On 1/25/2012 3:43 PM, Daniel L. Miller wrote: >> On 1/25/2012 3:42 PM, Timo Sirainen wrote: >>> On 26.1.2012, at 1.37, Daniel L. Miller wrote: >>> >>>> Attempting to delete a folder from within the trash folder using >>>> Thunderbird. I see the following in the log: >>> Dovecot version? >>> >> 2.1.rc3. I'm compiling rc5 now... >> > Error still there on rc5. > Can I do anything to help find this? Folders are still shown in Trash - unable to delete. -- Daniel From user+dovecot at localhost.localdomain.org Sat Jan 28 17:34:16 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Sat, 28 Jan 2012 16:34:16 +0100 Subject: [Dovecot] v2.1.rc5 (85a9b5236b6c) Error: lmtp client: DNS lookup of $FQDN failed: connect(dns-client) failed: No such file or directory Message-ID: <4F241578.2090904@localhost.localdomain.org> When the Sieve plugin tries to send a vacation message or redirect a message to another address it fails. 
dovecot: lmtp(6412, user at example.com): Error: lmtp client: DNS lookup of orange.example.com failed: connect(dns-client) failed: No such file or directory
dovecot: lmtp(6412, user at example.com): Error: dAIOClYSJE8MGQAAhQ0vrQ: sieve: msgid=<4F241255.2060900 at example.com>: failed to redirect message to (refer to server log for more information)

But the dns-client sockets are created when Dovecot starts up:

# find /usr/local/var/run/dovecot -name dns-client -exec ls -l {} +
srw-rw-rw- 1 root staff 0 Jan 28 16:15 /usr/local/var/run/dovecot/dns-client
srw-rw-rw- 1 root root  0 Jan 28 16:15 /usr/local/var/run/dovecot/login/dns-client

Hum, is it Dovecot or Pigeonhole (dovecot-2.1-pigeonhole 1600:b2a456e15ed5)?

Regards,
Pascal
--
The trapper recommends today: c01dcofe.1202816 at localdomain.org

From adrian.minta at gmail.com Sat Jan 28 17:48:53 2012
From: adrian.minta at gmail.com (Adrian Minta)
Date: Sat, 28 Jan 2012 17:48:53 +0200
Subject: [Dovecot] XFS Developer Takes Shots At Btrfs, EXT4
Message-ID: <4F2418E5.2020107@gmail.com>

Nice article about XFS improvements:
http://tinyurl.com/7pvr9ju

From jd.beaubien at gmail.com Sat Jan 28 17:59:39 2012
From: jd.beaubien at gmail.com (Jean-Daniel Beaubien)
Date: Sat, 28 Jan 2012 10:59:39 -0500
Subject: [Dovecot] maildir vs mdbox
Message-ID: 

Hi,

I am planning on running a test between maildir and mdbox to see which is a better fit for my use case. And I'm just looking for general advice/recommendations. I will post any results I obtain here.

Important question: I have multiple users hitting the same email account at the same time. Can this be a problem with mdbox? (either via thunderbird or with custom webmail apps). I remember having huge issues with mbox a decade ago because of this. Maildir fixed this... will mdbox reintroduce this problem? This is a very important point for me.

Here is my use case:
- Ubuntu server (any specific recommendations on FS to use?)
- Standard PC hardware (core i5 or i7, few gigs of ram, hdds at first, probably ssd afterwards, nothing very fancy)
- Serving only a handful of email accounts but some of the accounts have over 3 million emails in them (with individual mail folders having 100k+ emails)
- Will use latest dovecot (2.1 when it comes out)
- fts-lucene or fts-solr?

-jd

From tss at iki.fi Sat Jan 28 18:05:48 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 18:05:48 +0200
Subject: [Dovecot] maildir vs mdbox
In-Reply-To: 
References: 
Message-ID: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi>

On 28.1.2012, at 17.59, Jean-Daniel Beaubien wrote:

> I am planning on running a test between maildir and mdbox to see which is
> a better fit for my use case. And I'm just looking for general
> advice/recommendations. I will post any results I obtain here.

Maildir is good for reliability, since it's just about impossible to corrupt, and even in case of filesystem corruption it's easier to recover than other formats. mdbox is good if you want the best performance.

> Important question: I have multiple users hitting the same email account at
> the same time. Can this be a problem with mdbox?

No problem.

> - Serving only a handful of email accounts but some of the accounts have
> over 3 million emails in them (with individual mail folders having 100k+
> emails)

Maildir gets slow with that many mails in one folder.

> - fts-lucene or fts-solr?

fts-lucene uses the latest CLucene version, which is a little old. With fts-solr you can use the latest Solr/Lucene. So as long as you don't mind setting up a Solr instance it should be better. The good thing about fts-lucene is that you can simply enable it and it works without any external servers.
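Since fts-solr comes up here, a minimal sketch of what enabling it looks like in the config may be useful; the Solr URL is a placeholder, and the exact setting names should be double-checked against the wiki for the Dovecot version in use:

  mail_plugins = fts fts_solr

  plugin {
    fts = solr
    # points at an already-running Solr instance (example URL)
    fts_solr = url=http://solr.example.com:8983/solr/
  }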
From jd.beaubien at gmail.com Sat Jan 28 18:13:59 2012
From: jd.beaubien at gmail.com (Jean-Daniel Beaubien)
Date: Sat, 28 Jan 2012 11:13:59 -0500
Subject: [Dovecot] maildir vs mdbox
In-Reply-To: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi>
References: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi>
Message-ID: 

Wow, incredible response time :)

I have 1 more question which I forgot to put in the initial post. Considering my use case (small number of accounts but a lot of emails per account, and I should add that they are mostly small emails, most under 5k, a lot under 30k), what mdbox setting would you recommend I start testing with (mdbox_rotate_size and mdbox_rotate_interval)?

-JD

On Sat, Jan 28, 2012 at 11:05 AM, Timo Sirainen wrote:

> On 28.1.2012, at 17.59, Jean-Daniel Beaubien wrote:
>
> > I am planning on running a test between maildir and mdbox to see which is
> > a better fit for my use case. And I'm just looking for general
> > advice/recommendations. I will post any results I obtain here.
>
> Maildir is good for reliability, since it's just about impossible to
> corrupt, and even in case of filesystem corruption it's easier to recover
> than other formats. mdbox is good if you want the best performance.
>
> > Important question: I have multiple users hitting the same email account at
> > the same time. Can this be a problem with mdbox?
>
> No problem.
>
> > - Serving only a handful of email accounts but some of the accounts have
> > over 3 million emails in them (with individual mail folders having 100k+
> > emails)
>
> Maildir gets slow with that many mails in one folder.
>
> > - fts-lucene or fts-solr?
>
> fts-lucene uses the latest CLucene version, which is a little old. With
> fts-solr you can use the latest Solr/Lucene. So as long as you don't mind
> setting up a Solr instance it should be better. The good thing about
> fts-lucene is that you can simply enable it and it works without any
> external servers.

From tss at iki.fi Sat Jan 28 18:37:19 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 18:37:19 +0200
Subject: [Dovecot] maildir vs mdbox
In-Reply-To: 
References: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi>
Message-ID: <548DDD91-D0F1-49F7-8E8D-3EA03DF72397@iki.fi>

On 28.1.2012, at 18.13, Jean-Daniel Beaubien wrote:

> Considering my use case (small number of accounts but a lot of emails per
> account, and I should add that they are mostly small emails, most under 5k,
> a lot under 30k), what mdbox setting would you recommend I start testing
> with (mdbox_rotate_size and mdbox_rotate_interval)?

mdbox_rotate_interval is useful only if you want smaller incremental backups (so files that are backed up no longer change unless messages are deleted). Its default is 0 (I just fixed example-config, which showed it as 1day).

I don't really know about mdbox_rotate_size. It would be nice if someone were to test different values over a longer period and report how it affects disk IO.
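For anyone who wants to run the experiment Timo suggests, the two knobs sit in the configuration like this; the values below are arbitrary test values for illustration, not recommendations from the thread:

  mail_location = mdbox:~/mdbox

  # rotate m.* storage files once they grow past ~20 MB
  mdbox_rotate_size = 20M

  # also rotate daily, so files already in a backup stop changing
  mdbox_rotate_interval = 1d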
From jd.beaubien at gmail.com Sat Jan 28 19:02:46 2012
From: jd.beaubien at gmail.com (Jean-Daniel Beaubien)
Date: Sat, 28 Jan 2012 12:02:46 -0500
Subject: [Dovecot] maildir vs mdbox
In-Reply-To: <548DDD91-D0F1-49F7-8E8D-3EA03DF72397@iki.fi>
References: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi> <548DDD91-D0F1-49F7-8E8D-3EA03DF72397@iki.fi>
Message-ID: 

On Sat, Jan 28, 2012 at 11:37 AM, Timo Sirainen wrote:

> On 28.1.2012, at 18.13, Jean-Daniel Beaubien wrote:
>
> > Considering my use case (small number of accounts but a lot of emails per
> > account, and I should add that they are mostly small emails, most under 5k,
> > a lot under 30k), what mdbox setting would you recommend I start testing
> > with (mdbox_rotate_size and mdbox_rotate_interval)?
>
> mdbox_rotate_interval is useful only if you want smaller incremental
> backups (so files that are backed up no longer change unless messages are
> deleted). Its default is 0 (I just fixed example-config, which showed it as
> 1day).
>

To be honest, the smaller incremental backup part is interesting. That, along with auto-gzip of the mdbox files, is very interesting for me.

> I don't really know about mdbox_rotate_size. It would be nice if someone
> were to test different values over a longer period and report how it affects
> disk IO.
>

I was thinking of doing a test with 20MB and 80MB, looking at the results and going from there.

Btw, when I migrate my emails from Maildir to mdbox, dsync should take into account the rotate_size parameter. If I want to change the rotate_size parameter, I simply edit the config file, change the parameter (erase the mdbox folder?) and re-run dsync. Is that correct?

From tss at iki.fi Sat Jan 28 19:27:53 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 19:27:53 +0200
Subject: [Dovecot] Crash on mail folder delete
In-Reply-To: 
References: <4F20922C.60206@amfes.com> <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> <4F20939E.4010903@amfes.com> <4F209878.5040505@amfes.com>
Message-ID: 

On 28.1.2012, at 9.15, Daniel L. Miller wrote:

> Can I do anything to help find this? Folders are still shown in Trash - unable to delete.

gdb backtrace would be helpful: http://dovecot.org/bugreport.html and doveconf -n and the folder name.

From tss at iki.fi Sat Jan 28 19:29:18 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 19:29:18 +0200
Subject: [Dovecot] v2.1.rc5 (85a9b5236b6c) Error: lmtp client: DNS lookup of $FQDN failed: connect(dns-client) failed: No such file or directory
In-Reply-To: <4F241578.2090904@localhost.localdomain.org>
References: <4F241578.2090904@localhost.localdomain.org>
Message-ID: <64779967-206F-4F44-8F01-32810EE0795A@iki.fi>

On 28.1.2012, at 17.34, Pascal Volk wrote:

> When the Sieve plugin tries to send a vacation message or redirect
> a message to another address it fails.
>
> dovecot: lmtp(6412, user at example.com): Error: lmtp client: DNS lookup of orange.example.com failed: connect(dns-client) failed: No such file or directory

Fixed: http://hg.dovecot.org/dovecot-2.1/rev/bc2eea348f55 http://hg.dovecot.org/dovecot-2.1/rev/32318f1588d4

The same problem exists in v2.0 also, but I didn't bother to fix it there. A workaround is to use IP instead of host in submission_host.
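A sketch of the v2.0 workaround mentioned above, i.e. pointing submission_host at a literal IP so no DNS lookup is needed at delivery time; the address and port are placeholders:

  # a literal IP here avoids the dns-client socket lookup entirely
  submission_host = 127.0.0.1:587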
From tss at iki.fi Sat Jan 28 19:30:05 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:30:05 +0200 Subject: [Dovecot] v2.1.rc5 (85a9b5236b6c) Error: lmtp client: DNS lookup of $FQDN failed: connect(dns-client) failed: No such file or directory In-Reply-To: <64779967-206F-4F44-8F01-32810EE0795A@iki.fi> References: <4F241578.2090904@localhost.localdomain.org> <64779967-206F-4F44-8F01-32810EE0795A@iki.fi> Message-ID: <36FD5315-1EB7-4F70-AB4E-9E6C1D535747@iki.fi> On 28.1.2012, at 19.29, Timo Sirainen wrote: > On 28.1.2012, at 17.34, Pascal Volk wrote: > >> When the Sieve plugin tries to send a vacation message or redirect >> a message to another address it fails. >> >> dovecot: lmtp(6412, user at example.com): Error: lmtp client: DNS lookup of orange.example.com failed: connect(dns-client) failed: No such file or directory > > Fixed: http://hg.dovecot.org/dovecot-2.1/rev/bc2eea348f55 http://hg.dovecot.org/dovecot-2.1/rev/32318f1588d4 > > The same problem exists in v2.0 also, but I didn't bother to fix it there. A workaround is to use IP instead of host in submission_host. Oh, clarification: With LMTP it just happens to work with v2.0, but with dovecot-lda it doesn't work. From tss at iki.fi Sat Jan 28 19:32:48 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:32:48 +0200 Subject: [Dovecot] LMTP : Can't handle mixed proxy/non-proxy destinations In-Reply-To: References: Message-ID: <33BD52FA-1FE0-46D5-A1E8-9A54C406BE64@iki.fi> On 27.1.2012, at 12.59, Alexis Lelion wrote: > Jan 25 09:05:12 mail01 postfix/lmtp[23934]: A92709300DB: to=< > user_on_mail02 at domain.com>, relay=mail01.domain.com[private/dovecot-lmtp], > delay=0.07, delays=0.01/0/0/0.06, dsn=4.3.0, status=deferred (host > mail01.domain.com[private/dovecot-lmtp] said: 451 4.3.0 < > user_on_mail02 at domain.com> Can't handle mixed proxy/non-proxy destinations > (in reply to RCPT TO command)) > > I was wondering if there was another way of handling this, for example > by triggering an immediate queue lookup from postfix or forwarding a > copy of the mail to the other server. Note that the postfix > "queue_run_delay" was increased to 15min on purpose, so I cannot change > that. It would be possible to change the code to support mixed destinations, but it's probably not a simple change and I have other things to do.. Maybe you could work around it so that LMTP always proxies the mails, to localhost as well, but to a different port which doesn't do proxying at all. From tss at iki.fi Sat Jan 28 19:45:13 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:45:13 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21E92C.4090509@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> Message-ID: <3F3C09E9-1E8F-4243-BC39-BAEA38AF5300@iki.fi> On 27.1.2012, at 2.00, Gedalya wrote: > Starting program: /usr/bin/doveadm -o imapc_user=jedi at example.com -o imapc_password=**** backup -u jedi at example.com -R imapc: > > Program received signal SIGSEGV, Segmentation fault. > mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 > 213 mailbox-log.c: No such file or directory. 
> in mailbox-log.c

This crash is now fixed, so there's no need to give /tmp/imapc path anymore: http://hg.dovecot.org/dovecot-2.1/rev/7b94d1c8a6e7

From tss at iki.fi Sat Jan 28 19:51:08 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 19:51:08 +0200
Subject: [Dovecot] Problem with Postfix + Dovecot + MySQL + Squirrelmail
In-Reply-To: <1327667537.79787.YahooMailNeo@web65309.mail.ac2.yahoo.com>
References: <1327667537.79787.YahooMailNeo@web65309.mail.ac2.yahoo.com>
Message-ID: <8C57281B-2C18-4C19-9F80-57BDF77D83B4@iki.fi>

On 27.1.2012, at 14.32, Gustavo wrote:

> #service dovecot start
> Starting IMAP/POP3 mail server: dovecot
> Last died with error (see error log for more information): Auth process died too early - shutting down

No need to keep guessing the problem. "See error log for more information" like it says. http://wiki.dovecot.org/Logging

From tss at iki.fi Sat Jan 28 19:55:09 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 19:55:09 +0200
Subject: [Dovecot] problem compiling imaptest under solaris
In-Reply-To: <89f61bff49f4c5343be06dd45459b14a@imapproxy.hrz>
References: <89f61bff49f4c5343be06dd45459b14a@imapproxy.hrz>
Message-ID: <3A621688-A7AE-4C08-96EA-D9668ECA02D1@iki.fi>

On 25.1.2012, at 16.43, Jürgen Obermann wrote:

> today I tried to compile imaptest under solaris 10 with studio 11 compiler and got the following error:
>
> gmake[2]: Entering directory `/net/fileserv/export/sunsrc/src/imaptest-20111119/src'
> source='client.c' object='client.o' libtool=no \
> DEPDIR=.deps depmode=none /bin/bash ../depcomp \
> cc -DHAVE_CONFIG_H -I. -I. -I.. -I/opt/local/include/dovecot -I/usr/local/include -fast -xarch=v8plusa -I/usr/sfw/include -c client.c
> "/opt/local/include/dovecot/imap-util.h", line 6: warning: useless declaration
> "client-state.h", line 6: warning: useless declaration
> "client.c", line 655: operand cannot have void type: op "=="
> "client.c", line 655: operands have incompatible types:
> const void "==" int
> cc: acomp failed for client.c

http://hg.dovecot.org/imaptest/rev/7e490e59f1ee should fix it?

From tss at iki.fi Sat Jan 28 19:57:29 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 19:57:29 +0200
Subject: [Dovecot] Password auth scheme question with mysql
In-Reply-To: <4F1F46D7.7050600@wildgooses.com>
References: <4F1F2B7F.3070005@wildgooses.com> <4F1F46D7.7050600@wildgooses.com>
Message-ID: <143E640C-EE04-4B5B-B5A5-991AF3C2D567@iki.fi>

On 25.1.2012, at 2.03, Ed W wrote:

> The error seems to be that I set the "pass" variable in my password_query to set the master password for the upstream proxied to server. I can't actually remember now why this was required, but it was necessary to allow the proxy to work correctly in the past. I guess this assumption needs revisiting now since it can't be used if the plain password isn't in the database...

I'm not sure if I understand correctly, but if you need the user's plaintext password it's in %w variable (assuming plaintext authentication). So a common configuration is to use: '%w' AS pass
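To make the %w trick concrete, here is a hedged sketch of a proxying password_query that forwards the user's own credentials to the backend instead of a master password; the table and column names are invented for the example, and only the '%w' AS pass part is the point:

  password_query = SELECT NULL AS password, 'Y' AS nopassword, \
    host, 'Y' AS proxy, '%w' AS pass \
    FROM users WHERE userid = '%u'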
From tss at iki.fi Sat Jan 28 19:58:55 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 19:58:55 +0200
Subject: [Dovecot] [PATCH] autoconf small fix
In-Reply-To: 
References: 
Message-ID: <1BE2A6DE-DC86-4BC4-BFBC-E58A57361368@iki.fi>

On 24.1.2012, at 17.58, Luca Di Vizio wrote:

> the attached patch seems to solve a warning from autoconf:
>
> libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.in and
> libtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree.

I have considered it before, but I remember at one point there was some reason why I didn't want to do it. I just can't remember the reason anymore, maybe there isn't any.. But I don't really understand why libtoolize keeps complaining about that, since it works just fine as it is.

From tss at iki.fi Sat Jan 28 20:06:01 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 20:06:01 +0200
Subject: [Dovecot] Quota-warning and setresgid
In-Reply-To: 
References: 
Message-ID: <480D0593-2405-42B5-8EA9-9A66CD8F3B97@iki.fi>

On 10.1.2012, at 11.34, l.chelchowski wrote:

> Jan 10 10:15:06 lda: Debug: auth input: tester at domain.eu home=/home/vmail/domain.eu/tester/ mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public uid=101 gid=12 quota_rule=*:storage=2097 acl_groups=

Note that userdb lookup returns gid=12(mail)

> Jan 10 10:15:06 lda(tester at domain.eu): Fatal: setresgid(12(mail),12(mail),101(vmail)) failed with euid=101(vmail): Operation not permitted

But you're running it with gid=101(vmail).

> mail_gid = vmail
> mail_privileged_group = vmail
> mail_uid = vmail

Here you're also using gid=101(vmail). (The mail_privileged_group=vmail is a useless setting BTW)

> userdb {
> args = /usr/local/etc/dovecot/dovecot-sql.conf
> driver = sql
> }

My guess for the best fix: Change the user_query not to return uid or gid fields at all.

From tss at iki.fi Sat Jan 28 20:23:12 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 20:23:12 +0200
Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command
In-Reply-To: <201201231355.15051.jesus.navarro@bvox.net>
References: <201201201724.41631.jesus.navarro@bvox.net> <4F19BD71.9000603@iki.fi> <201201231355.15051.jesus.navarro@bvox.net>
Message-ID: <30046BB5-6E1C-41E5-9B04-787F568DE604@iki.fi>

On 23.1.2012, at 14.55, Jesús M. Navarro wrote:

>>> I'm having problems on a maildir due to dovecot returning an UID 0 to an
>>> UID THREAD REFS command:
>
> I'm sending to your personal address a whole maildir that reproduces the bug
> (it's very short) to avoid having it published in the mail archives.

Thanks, I finally looked at this. The problem happens only when the THREADing isn't done for all messages. I thought this would have been a much more complex bug. Fixed: http://hg.dovecot.org/dovecot-2.0/rev/57498cad6ab9

From tss at iki.fi Sat Jan 28 20:29:36 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 20:29:36 +0200
Subject: [Dovecot] where is subscribed list stored?
In-Reply-To: <4F1C1536.1000407@makuch.org>
References: <4F1C1536.1000407@makuch.org>
Message-ID: <544CBFAD-1A55-422A-9292-2D876E65AE53@iki.fi>

On 22.1.2012, at 15.55, Michael Makuch wrote:

> I use dovecot locally for internal only access to my email archives, of which I have many gigs of email archives. Over time I end up subscribing to a couple dozen different IMAP email folders.
Problem is that periodically my list of subscribed folders get zapped to none, and I have to go and re-subscribe to a dozen or two folders again. > > Anyone seen this happen? It looks like the list of subscribed folders is here ~/Mail/.subscriptions and I can see in my daily backup that it reflects what appears in TBird. What might be zapping it? I use multiple email clients simultaneously on different hosts. (IOW I leave them open) Is this a problem? Does dovecot manage that in some way? Or is that my problem? I don't think this is the problem since this only occurs like a few times per year. If it were the problem I'd expect it to occur much more frequently. No idea, but you could prevent it by making sure that it can't change the subscriptions: mail_location = mbox:~/Mail:CONTROL=~/mail-subscriptions mkdir ~/mail-subscriptions mv ~/Mail/.subscriptions ~/mail-subscriptions chmod 0500 ~/mail-subscriptions I thought Dovecot would also log an error if client tried to change subscriptions, but looks like it doesn't. It only returns failure to client: a unsubscribe INBOX a NO [NOPERM] No permission to modify subscriptions From tss at iki.fi Sat Jan 28 22:07:02 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 22:07:02 +0200 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi> On 13.1.2012, at 20.29, Mark Moseley wrote: > If there are multiple hosts, it seems like the most robust thing to do > would be to exhaust the existing connections and if none of those > succeed, then start a new connection to one of them. It will probably > result in much more convoluted logic but it'd probably match better > what people expect from a retry. Done: http://hg.dovecot.org/dovecot-2.0/rev/4e7676b890f1 From tss at iki.fi Sat Jan 28 22:24:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 22:24:49 +0200 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F158473.1000901@orlitzky.com> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> Message-ID: <2D5E0681-DF1F-4798-83BF-54648B2DAFB4@iki.fi> On 17.1.2012, at 16.23, Michael Orlitzky wrote: > First of all, feature request: > > doveconf -d > show the default value of all settings Done: http://hg.dovecot.org/dovecot-2.1/rev/41cb0217b7c3 From tss at iki.fi Sat Jan 28 22:42:21 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 22:42:21 +0200 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb):dsync umlaut problems In-Reply-To: <4F0FA0A7.10909@localhost.localdomain.org> References: <4F0FA0A7.10909@localhost.localdomain.org> Message-ID: <7D563028-0149-4A06-A7DF-9A3F7B84F805@iki.fi> On 13.1.2012, at 5.10, Pascal Volk wrote: > All umlauts in mailbox names are lost after converting mbox/Maildir > mailboxes to mdbox. Looks like it was a generic problem in v2.1 dsync. 
Fixed: http://hg.dovecot.org/dovecot-2.1/rev/ef6f3b7f6038

From tss at iki.fi Sat Jan 28 22:44:45 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 22:44:45 +0200
Subject: [Dovecot] moving mail out of alt storage
In-Reply-To: 
References: <87sjnya3z5.fsf@algae.riseup.net> <1316077133.12936.18.camel@hurina> <87obylafsw.fsf_-_@algae.riseup.net>
Message-ID: 

On 12.1.2012, at 20.32, Mark Moseley wrote:

>>> On Wed, 2011-09-14 at 23:17 -0400, Micah Anderson wrote:
>>>> I moved some mail into the alt storage:
>>>>
>>>> doveadm altmove -u johnd at example.com seen savedbefore 1w
>>>>
>>>> and now I want to move it back to the regular INBOX, but I can't see how
>>>> I can do that with either 'altmove' or 'mailbox move'.
>>>
>>> Is this sdbox or mdbox? With sdbox you could simply "mv" the files. Or
>>> apply patch: http://hg.dovecot.org/dovecot-2.0/rev/1910c76a6cc9
>>
>> This is mdbox, which is why I am not sure how to operate because I am
>> used to individual files as is with maildir.
>>
>> micah
>>
>
> I'm curious about this too. Is moving the m.# file out of the ALT
> path's storage/ directory into the non-ALT storage/ directory
> sufficient? Or will that cause odd issues?

You can manually move m.* files to alt storage and back. Just make sure that the same file isn't being simultaneously modified by Dovecot or you'll corrupt it.

From tss at iki.fi Sat Jan 28 23:04:24 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 23:04:24 +0200
Subject: [Dovecot] dovecot 2.0.15 - purge errors
In-Reply-To: <87hb00run6.fsf@alfa.kjonca>
References: <87hb00run6.fsf@alfa.kjonca>
Message-ID: <88D79565-2FEC-4B69-88F3-FC6F6AAB435A@iki.fi>

On 13.1.2012, at 8.20, Kamil Jońca wrote:

> Dovecot 2.0.15, debian package, have I lost some mails? How can I check
> what is in *.broken file?

You can look at the .broken file with a text editor for example :)

> --8<---------------cut here---------------start------------->8---
> $doveadm -v purge
> doveadm(kjonca): Error: Corrupted dbox file /home/kjonca/Mail/0/storage/m.6469 (around offset=291530): purging found mismatched offsets (291500 vs 299692, 60/215)

299692 - 291500 = 8192 = output stream's buffering size. I guess what happened is that sometimes earlier Dovecot crashed while it was saving a message, but it had managed to write 8192 bytes. Now purging notices the extra 8192 bytes and wonders what to do about them, so it starts index rebuild, which probably adds it as a new message to the mailbox.

In future this check probably should be done before appending the next message to mdbox, so it's noticed earlier and it probably should delete the message instead of adding a partially saved message to the mailbox.

> doveadm(kjonca): Error: Corrupted dbox file /home/kjonca/Mail/0/storage/m.6469 (around offset=599914): metadata header has bad magic value

This is about the same error as above.

So, in short: Nothing to worry about. Although you could look into why the earlier saving crashed in the first place.
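If you would rather trigger the cleanup yourself than wait for the next purge to stumble over it, rebuilding the storage view and then purging again is one way to force the check; a sketch, with the username taken from the log lines above:

  # rebuild dovecot's view of the mdbox storage, then purge again
  doveadm force-resync -u kjonca INBOX
  doveadm -v purge -u kjonca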
From robert at schetterer.org Sat Jan 28 23:06:01 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Sat, 28 Jan 2012 22:06:01 +0100
Subject: [Dovecot] MySQL server has gone away
In-Reply-To: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi>
References: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi>
Message-ID: <4F246339.708@schetterer.org>

On 28.01.2012 21:07, Timo Sirainen wrote:
> On 13.1.2012, at 20.29, Mark Moseley wrote:
>
>> If there are multiple hosts, it seems like the most robust thing to do
>> would be to exhaust the existing connections and if none of those
>> succeed, then start a new connection to one of them. It will probably
>> result in much more convoluted logic but it'd probably match better
>> what people expect from a retry.
>
> Done: http://hg.dovecot.org/dovecot-2.0/rev/4e7676b890f1
>

Hi Timo,

doc/example-config/dovecot-sql.conf.ext from hg has something like

# Database connection string. This is driver-specific setting.
# HA / round-robin load-balancing is supported by giving multiple host
# settings, like: host=sql1.host.org host=sql2.host.org

but I don't find it in
http://wiki2.dovecot.org/AuthDatabase/SQL

--
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria

From tss at iki.fi Sat Jan 28 23:47:56 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 23:47:56 +0200
Subject: [Dovecot] dovecot 2.0.15 - purge errors
In-Reply-To: <88D79565-2FEC-4B69-88F3-FC6F6AAB435A@iki.fi>
References: <87hb00run6.fsf@alfa.kjonca> <88D79565-2FEC-4B69-88F3-FC6F6AAB435A@iki.fi>
Message-ID: <3F7BA98D-9295-4823-80E5-A647FBD71D68@iki.fi>

On 28.1.2012, at 23.04, Timo Sirainen wrote:

> 299692 - 291500 = 8192 = output stream's buffering size. I guess what happened is that sometimes earlier Dovecot crashed while it was saving a message, but it had managed to write 8192 bytes. Now purging notices the extra 8192 bytes and wonders what to do about them, so it starts index rebuild, which probably adds it as a new message to the mailbox.
>
> In future this check probably should be done before appending the next message to mdbox, so it's noticed earlier

Done: http://hg.dovecot.org/dovecot-2.1/rev/bde005e302e0

> and it probably should delete the message instead of adding a partially saved message to the mailbox.

Not done. Safer to not delete any data.

From tss at iki.fi Sat Jan 28 23:54:17 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 23:54:17 +0200
Subject: [Dovecot] howto disable indexing on dovecot-lda ?
In-Reply-To: <4F0DC747.4070505@gmail.com>
References: <4F06D5D9.20001@gmail.com> <4F06DFF5.40707@hardwarefreak.com> <4F06F0E7.904@gmail.com> <4F0DC747.4070505@gmail.com>
Message-ID: 

On 11.1.2012, at 19.30, Adrian Minta wrote:

> Hello,
>
> I tested with "mail_location = whatever-you-have-now:INDEX=MEMORY" and it seems to help, but in the meantime I found another option, completely undocumented, that seems to do exactly what I wanted:
> protocol lda {
> mailbox_list_index_disable = yes
>
> }
>
> Does anyone know exactly what "mailbox_list_index_disable" does and if it is still available in the 2.0 and 2.1 branches?

mailbox_list_index_disable does absolutely nothing in v2.0, and it defaults to "no" in v2.1 also. It's about a different kind of index.
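So for the original goal of avoiding index writes at delivery time, the INDEX=MEMORY variant from earlier in the thread remains the supported route; schematically, and assuming your version allows overriding mail_location in a per-protocol block:

  protocol lda {
    # "whatever-you-have-now" stands for your existing location string
    mail_location = whatever-you-have-now:INDEX=MEMORY
  }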
From tss at iki.fi Sun Jan 29 00:00:27 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sun, 29 Jan 2012 00:00:27 +0200
Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH
In-Reply-To: <20120111155746.BD7BDDA030B2B@bmail06.one.com>
References: <20120111155746.BD7BDDA030B2B@bmail06.one.com>
Message-ID: 

On 11.1.2012, at 17.57, Anders wrote:

> Sorry, apparently I was a bit too fast there. ADDTO and REMOVEFROM should not
> be sent by a client, but I think that a client can send CONTEXT as a hint to
> the server, see
>
> http://tools.ietf.org/html/rfc5267#section-4.2

Yes, that was a bug. Thanks, fixed: http://hg.dovecot.org/dovecot-2.0/rev/fd16e200f0f7

From tss at iki.fi Sun Jan 29 00:04:16 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sun, 29 Jan 2012 00:04:16 +0200
Subject: [Dovecot] sieve under lmtp using wrong homedir ?
In-Reply-To: 
References: 
Message-ID: 

On 11.1.2012, at 17.35, Frank Post wrote:

> All is working well except lmtp. Sieve scripts are correctly saved under
> /var/vmail/test.com/test/sieve, but under lmtp sieve will use
> /var/vmail//testuser/
> Uid testuser has mail=test at test.com configured in ldap.
>
> As I could see in the debug logs, there is a difference between the auth
> "master out" lines, but why?
..
> Jan 11 14:39:53 auth: Debug: master in: USER 1 testuser
> service=lmtp lip=10.234.201.9 rip=10.234.201.4

This means that Dovecot LMTP got:

RCPT TO:<testuser>

instead of:

RCPT TO:<test at test.com>

You probably should fix your userdb lookup so that it would return "unknown user" instead of accepting it. But the real problem is anyway in your MTA setup.

From tss at iki.fi Sun Jan 29 00:17:44 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sun, 29 Jan 2012 00:17:44 +0200
Subject: [Dovecot] MySQL server has gone away
In-Reply-To: <4F246339.708@schetterer.org>
References: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi> <4F246339.708@schetterer.org>
Message-ID: <9B43B5C1-8375-43E9-8CA3-722F601846A2@iki.fi>

On 28.1.2012, at 23.06, Robert Schetterer wrote:

> doc/example-config/dovecot-sql.conf.ext
> from hg
> has something like
>
> # Database connection string. This is driver-specific setting.
> # HA / round-robin load-balancing is supported by giving multiple host
> # settings, like: host=sql1.host.org host=sql2.host.org
>
> but I don't find it in
> http://wiki2.dovecot.org/AuthDatabase/SQL

I added something about it there.

From tss at iki.fi Sun Jan 29 00:20:06 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sun, 29 Jan 2012 00:20:06 +0200
Subject: [Dovecot] maildir vs mdbox
In-Reply-To: 
References: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi> <548DDD91-D0F1-49F7-8E8D-3EA03DF72397@iki.fi>
Message-ID: 

On 28.1.2012, at 19.02, Jean-Daniel Beaubien wrote:

> Btw, when I migrate my emails from Maildir to mdbox, dsync should take into
> account the rotate_size parameter. If I want to change the rotate_size
> parameter, I simply edit the config file, change the parameter (erase the
> mdbox folder?) and re-run dsync. Is that correct?

Yes. You can also give a -o mdbox_rotate_size=X parameter to dsync to override the config. The new mdbox_rotate_size is used immediately, so if you increase it Dovecot may start appending new mails to old files. The existing files aren't immediately shrunk, but during purge, when writing new files, the files can become smaller.
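As an illustration of the -o override Timo mentions, a one-user migration could look like the sketch below. The user name, size and target location are placeholders, not anything prescribed by the thread:

# convert one user from the configured Maildir to mdbox, rotating
# storage files at ~50 MB regardless of what dovecot.conf says
dsync -o mdbox_rotate_size=50M -u jdoe at example.com mirror mdbox:~/mdbox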
From tss at iki.fi Sun Jan 29 00:26:01 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sun, 29 Jan 2012 00:26:01 +0200
Subject: [Dovecot] compressed mboxes very slow
In-Reply-To: <8739blw6gl.fsf@alfa.kjonca>
References: <87iptnoans.fsf@alfa.kjonca> <8739blw6gl.fsf@alfa.kjonca>
Message-ID: <0C550F94-3CAE-4B0E-9E95-B6E1A708DBA0@iki.fi>

I wonder if this patch helps here: http://hg.dovecot.org/dovecot-2.0/rev/9b2931607063

At least I can't see any slowness now with either v2.1 or the latest v2.0. But I don't know if I would have slowness with older versions either..

From robert at schetterer.org Sun Jan 29 00:27:22 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Sat, 28 Jan 2012 23:27:22 +0100
Subject: [Dovecot] MySQL server has gone away
In-Reply-To: <9B43B5C1-8375-43E9-8CA3-722F601846A2@iki.fi>
References: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi> <4F246339.708@schetterer.org> <9B43B5C1-8375-43E9-8CA3-722F601846A2@iki.fi>
Message-ID: <4F24764A.1080207@schetterer.org>

On 28.01.2012 23:17, Timo Sirainen wrote:
> On 28.1.2012, at 23.06, Robert Schetterer wrote:
>
>> doc/example-config/dovecot-sql.conf.ext
>> from hg
>> has something like
>>
>> # Database connection string. This is driver-specific setting.
>> # HA / round-robin load-balancing is supported by giving multiple host
>> # settings, like: host=sql1.host.org host=sql2.host.org
>>
>> but I don't find it in
>> http://wiki2.dovecot.org/AuthDatabase/SQL
>
> I added something about it there.

cool, thanks!

-- 
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria

From tss at iki.fi Sun Jan 29 00:36:25 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sun, 29 Jan 2012 00:36:25 +0200
Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well?
In-Reply-To: <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi>
References: <20120109074057.GC22506@charite.de> <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi>
Message-ID: 

On 9.1.2012, at 16.57, Timo Sirainen wrote:

>> After that, the SAVEDON date for all mails was reset to today:
>
> Yeah. The "save date" is stored only in the index, and index rebuild drops all those fields. I guess this could/should be fixed in index rebuild.

Fixed: http://hg.dovecot.org/dovecot-2.0/rev/c30ea8aec902

From tss at iki.fi Sun Jan 29 00:38:54 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sun, 29 Jan 2012 00:38:54 +0200
Subject: [Dovecot] Attribute Cache flush errors on FreeBSD 8.2
In-Reply-To: <4F079021.4090001@kernick.org>
References: <4F079021.4090001@kernick.org>
Message-ID: 

On 7.1.2012, at 2.21, Phil Kernick wrote:

> I'm running dovecot 2.0.16 on FreeBSD 8.2 with the mail spool and indexes on an NFS server.
>
> Lines like the following keep appearing in syslog for access to each mailbox:
>
> Error: nfs_flush_attr_cache_fd_locked: fchown(/home/philk/Mail/Deleted) failed: Bad file descriptor

I've given up on trying to make the mail_nfs_* settings work. If you have only one Dovecot server, you don't need these settings at all. If you have more than one Dovecot server, use director (and then you also don't need these settings).
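Put as configuration, Timo's advice amounts to leaving the mail_nfs_* settings at their "no" defaults and making sure each user is always served by one backend (a single server, or director). A sketch for the single-server case; the last two settings simply mirror what other NFS configs in this archive use and are not a recommendation from Timo's mail:

# single Dovecot server on NFS: no cross-server cache flushing needed
mail_nfs_storage = no
mail_nfs_index = no
# commonly kept for NFS setups regardless
mmap_disable = yes
mail_fsync = always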
From tss at iki.fi Sun Jan 29 00:40:53 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sun, 29 Jan 2012 00:40:53 +0200
Subject: [Dovecot] 2.1.rc1 (056934abd2ef): virtual plugin mailbox search pattern
In-Reply-To: <4EF4BB6C.3050902@gmx.de>
References: <4EF4BB6C.3050902@gmx.de>
Message-ID: <1F065FD5-11B7-44C0-A4CB-96B346801986@iki.fi>

On 23.12.2011, at 19.33, e-frog wrote:

> For testing purposes I created the following folders with each containing one unread message
>
> INBOX, INBOX/level1 and INBOX/level1/level2
..
> Result: virtual/unread shows only 1 unseen message. Further tests showed it's the one from INBOX. The mails from the deeper levels are not found.

What mailbox format are you using? Maybe I fixed this with http://hg.dovecot.org/dovecot-2.1/rev/54e74090fb42

From ronald at rmacd.com Sun Jan 29 01:16:19 2012
From: ronald at rmacd.com (Ronald MacDonald)
Date: Sat, 28 Jan 2012 18:16:19 -0500
Subject: [Dovecot] Migration to multi-dbox and SiS
Message-ID: 

Dear list,

A huge thank-you first of all for all the work that's gone into Dovecot itself. I'm rebuilding a mail server next week and so am taking the rare opportunity to re-consider all the options I've had running over the past couple of years.

Around the time of the last re-build (2010), there had been some discussion on single instance storage, which was quite new on Dovecot around then. I chickened out of setting it up though. Now with it having been in the wild for a couple of years, I wonder, how have people found SiS to behave? Additionally, though there was talk of the prospect of it being merged with 2.x, am I right in thinking it's not yet in the main project? Couldn't find any 2.x changelogs that could confirm this.

With best wishes,
Ronald.

From tss at iki.fi Sun Jan 29 01:53:59 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sun, 29 Jan 2012 01:53:59 +0200
Subject: [Dovecot] Migration to multi-dbox and SiS
In-Reply-To: 
References: 
Message-ID: <7E4C5ED4-BE84-4638-8D2E-51D25FF88EB5@iki.fi>

On 29.1.2012, at 1.16, Ronald MacDonald wrote:

> Around the time of the last re-build (2010), there had been some discussion on single instance storage, which was quite new on Dovecot around then. I chickened out of setting it up though. Now with it having been in the wild for a couple of years, I wonder, how have people found SiS to behave? Additionally, though there was talk of the prospect of it being merged with 2.x, am I right in thinking it's not yet in the main project? Couldn't find any 2.x changelogs that could confirm this.

It's in v2.0 and used by at least a few installations. Apparently it works quite well. As long as you have a pretty typical setup it should work fine. It gets more complex if you want to spread the data across multiple mount points. Backups may also be more difficult, since filesystem snapshots are pretty much the only 100% safe way to do them.

BTW. SIS, not SiS ("Instance", not "in")

From stan at hardwarefreak.com Sun Jan 29 02:25:50 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Sat, 28 Jan 2012 18:25:50 -0600
Subject: [Dovecot] XFS Developer Takes Shots At Btrfs, EXT4
In-Reply-To: <4F2418E5.2020107@gmail.com>
References: <4F2418E5.2020107@gmail.com>
Message-ID: <4F24920E.6080500@hardwarefreak.com>

On 1/28/2012 9:48 AM, Adrian Minta wrote:
> Nice article about XFS improvements:
> http://tinyurl.com/7pvr9ju

The "article" is strictly a badly written summary of the video. But, the video was great.
Until watching this I'd never seen Dave in a photo or video, though I correspond with him regularly on the XFS list. Nice to finally put a face and voice to a name.

One of many reasons the summary is badly written is the use of present tense when referring to XFS deficiencies, specifically the part about EXT4 being 20-50x faster with some metadata operations. The author writes as if this was the current state of affairs right up to Dave's recent presentation. The author misread or misinterpreted the slides or Dave's speech, and apparently has no personal knowledge of Linux filesystem development. This 20-50x EXT4 advantage disappeared in 2009, almost 3 years ago. I've mentioned many of these "new" improvements on this list over the past 2-3 years. They're not "new".

We have an "author" writing about something he knows nothing about, and making lots of mistakes in his summary. This seems to be a trend with Phoronix. They are decidedly desktop-only oriented. Thus when they attempt to write about the big stuff they fail badly.

And the title? A juvenile attempt to draw readers. Pretty pathetic. The "article" was all about Dave's presentation. Dave's 50-minute presentation took 2 "shots" of 10-15 seconds each at EXT4 and BTRFS. A better title would have been simply something like "XFS dev details improvements at Linux.Conf.Au 2012."

-- 
Stan

From moseleymark at gmail.com Sun Jan 29 06:04:44 2012
From: moseleymark at gmail.com (Mark Moseley)
Date: Sat, 28 Jan 2012 20:04:44 -0800
Subject: [Dovecot] MySQL server has gone away
In-Reply-To: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi>
References: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi>
Message-ID: 

On Sat, Jan 28, 2012 at 12:07 PM, Timo Sirainen wrote:
> On 13.1.2012, at 20.29, Mark Moseley wrote:
>
>> If there are multiple hosts, it seems like the most robust thing to do
>> would be to exhaust the existing connections and if none of those
>> succeed, then start a new connection to one of them. It will probably
>> result in much more convoluted logic but it'd probably match better
>> what people expect from a retry.
>
> Done: http://hg.dovecot.org/dovecot-2.0/rev/4e7676b890f1

Excellent, thanks!

From e-frog at gmx.de Sun Jan 29 10:33:01 2012
From: e-frog at gmx.de (e-frog)
Date: Sun, 29 Jan 2012 09:33:01 +0100
Subject: [Dovecot] 2.1.rc1 (056934abd2ef): virtual plugin mailbox search pattern
In-Reply-To: <1F065FD5-11B7-44C0-A4CB-96B346801986@iki.fi>
References: <4EF4BB6C.3050902@gmx.de> <1F065FD5-11B7-44C0-A4CB-96B346801986@iki.fi>
Message-ID: <4F25043D.7000501@gmx.de>

On 28.01.2012 23:40, Timo Sirainen wrote:
> On 23.12.2011, at 19.33, e-frog wrote:
>
>> For testing purposes I created the following folders with each containing one unread message
>>
>> INBOX, INBOX/level1 and INBOX/level1/level2
> ..
>> Result: virtual/unread shows only 1 unseen message. Further tests showed it's the one from INBOX. The mails from the deeper levels are not found.
>
> What mailbox format are you using?

mdbox

> Maybe I fixed this with http://hg.dovecot.org/dovecot-2.1/rev/54e74090fb42

Just tested and yes, it works with the above mentioned patch. Thanks a lot Timo!
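For anyone wanting to reproduce e-frog's virtual/unread folder, the virtual plugin is driven by a dovecot-virtual file inside the virtual mailbox directory. The sketch below follows the wiki's example; the namespace prefix, location and Maildir paths are assumptions, not details taken from this thread:

# dovecot.conf: load the plugin and define the virtual namespace
mail_plugins = $mail_plugins virtual
namespace {
  prefix = virtual/
  separator = /
  location = virtual:~/Maildir/virtual
}

# ~/Maildir/virtual/unread/dovecot-virtual: match all folders, show unseen
*
  unseen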
From adrian.minta at gmail.com Sun Jan 29 12:08:44 2012
From: adrian.minta at gmail.com (Adrian Minta)
Date: Sun, 29 Jan 2012 12:08:44 +0200
Subject: [Dovecot] XFS Developer Takes Shots At Btrfs, EXT4
In-Reply-To: <4F24920E.6080500@hardwarefreak.com>
References: <4F2418E5.2020107@gmail.com> <4F24920E.6080500@hardwarefreak.com>
Message-ID: <4F251AAC.4050803@gmail.com>

On 01/29/12 02:25, Stan Hoeppner wrote:
> The "article" is strictly a badly written summary of the video. But,
> the video was great. Until watching this I'd never seen Dave in a photo
> or video, though I correspond with him regularly on the XFS list. Nice
> to finally put a face and voice to a name.

Yes, the video is very nice.

From CMarcus at Media-Brokers.com Sun Jan 29 20:00:49 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Sun, 29 Jan 2012 13:00:49 -0500
Subject: [Dovecot] imap-login process_limit reached
In-Reply-To: <2D5E0681-DF1F-4798-83BF-54648B2DAFB4@iki.fi>
References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <2D5E0681-DF1F-4798-83BF-54648B2DAFB4@iki.fi>
Message-ID: <4F258951.20006@Media-Brokers.com>

On 2012-01-28 3:24 PM, Timo Sirainen wrote:
> On 17.1.2012, at 16.23, Michael Orlitzky wrote:
>
>> First of all, feature request:
>>
>> doveconf -d
>> show the default value of all settings
>
> Done: http://hg.dovecot.org/dovecot-2.1/rev/41cb0217b7c3

Awesome, thanks Timo! This makes it much easier to make sure that you aren't specifying anything which would be the same as the default, which minimizes doveconf -n 'noise'...

-- 
Best regards, Charles

From user+dovecot at localhost.localdomain.org Mon Jan 30 01:36:24 2012
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Date: Mon, 30 Jan 2012 00:36:24 +0100
Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): Panic: file ostream.c: line 173 (o_stream_sendv): assertion failed: (stream->stream_errno != 0)
In-Reply-To: <4F10AA71.6030901@localhost.localdomain.org>
References: <4F10AA71.6030901@localhost.localdomain.org>
Message-ID: <4F25D7F8.7060609@localhost.localdomain.org>

Looks like http://hg.dovecot.org/dovecot-2.1/rev/3c0bd1fd035b has solved the problem.

-- 
The trapper recommends today: fabaceae.1203000 at localdomain.org

From bryder at wetafx.co.nz Mon Jan 30 02:05:58 2012
From: bryder at wetafx.co.nz (Bill Ryder)
Date: Mon, 30 Jan 2012 13:05:58 +1300
Subject: [Dovecot] A namespace error on 2.1rc5
Message-ID: <4F25DEE6.7020402@wetafx.co.nz>

Hello all,

I'm not sure if this is a bug. It's probably just an upgrade note.

In summary: I had no namespace section in my 2.0.17 config. When trying out 2.1rc5, no user could log in because of a namespace error. 2.1rc5 adds a default namespace clause, which broke my logins (it was noted in the changelog).

I seemed to fix it by just putting this in the config file:

namespace inbox {
  inbox = yes
}

Long story: I've been recently testing dovecot against cyrus to decide where we should go for our next mail server(s). I loaded up the mail server with mail delivered via postfix, all on dovecot 2.0.15 (I've since moved to 2.0.17).

I have three dovecot directors, two backends on the same NFS mail store.

With dovecot 2.0.xx the tester works fine (it's just a script which logs in and emulates thunderbird when a user is idle - without using IDLE, so the client asks for mail every few minutes).

When I moved to 2.1rc5 I got namespace errors and users could not log in.
The server said: dovecot-error.log-20120128:Jan 27 13:37:59 imap(ethab01): Error: user ethab01: Initialization failed: namespace configuration error: inbox=yes namespace missing The client says: * BYE Internal error occurred. Refer to server log for more information. The session looks like 0.000000 192.168.121.37 -> 192.168.121.2 TCP 33213 > imap [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=1056649457 TSER=0 WS=5 0.000036 192.168.121.2 -> 192.168.121.37 TCP imap > 33213 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 TSV=3264407631 TSER=1056649457 WS=7 0.000187 192.168.121.37 -> 192.168.121.2 TCP 33213 > imap [ACK] Seq=1 Ack=1 Win=5856 Len=0 TSV=1056649458 TSER=3264407631 0.006338 192.168.121.2 -> 192.168.121.37 IMAP Response: * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN] Dovecot ready. 0.006889 192.168.121.37 -> 192.168.121.2 TCP 33213 > imap [ACK] Seq=1 Ack=124 Win=5856 Len=0 TSV=1056649465 TSER=3264407637 0.006973 192.168.121.37 -> 192.168.121.2 IMAP Request: I ID ("x-originating-ip" "192.168.114.249" "x-originating-port" "49403" "x-connected-ip" "192.168.121.37" "x-connected-port" "143") 0.006980 192.168.121.2 -> 192.168.121.37 TCP imap > 33213 [ACK] Seq=124 Ack=178 Win=6912 Len=0 TSV=3264407638 TSER=1056649465 0.007086 192.168.121.2 -> 192.168.121.37 IMAP Response: * ID NIL 0.018471 192.168.121.2 -> 192.168.121.37 IMAP Response: * BYE Internal error occurred. Refer to server log for more information. (interestingly the tshark output strips out the user name and password which is convenient but which may mean there's not enough information?) I rolled back to 2.0.17 and it was fine again. It's the same config files for both, same maildirs etc etc. All I did was change the dovecot version from 2.0.17 to 2.1rc5 However I see from the changelog that 2.1rc5 added a default namespace inbox: diff doveconf-n.2.0.17 doveconf-n.2.1-rc5 1c1 < # 2.0.17 (684381041dc4+): /etc/dovecot/dovecot.conf --- > # 2.1.rc5: /etc/dovecot/dovecot.conf 20a21,39 > namespace inbox { > location = > mailbox Drafts { > special_use = \Drafts > } > mailbox Junk { > special_use = \Junk > } > mailbox Sent { > special_use = \Sent > } > mailbox "Sent Messages" { > special_use = \Sent > } > mailbox Trash { > special_use = \Trash > } > prefix = > } We had this section commented out in 2.0.x so there was no namespace inbox anywhere. 
============== doveconf -n for 2.0.17 (for the backends)

# 2.0.17 (684381041dc4+): /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-131.6.1.el6.x86_64 x86_64 Scientific Linux release 6.1 (Carbon) nfs
auth_mechanisms = plain login
auth_username_format = %n
auth_verbose = yes
debug_log_path = /var/log/dovecot/dovecot-debug.log
disable_plaintext_auth = no
first_valid_uid = 200
info_log_path = /var/log/dovecot/dovecot-info.log
log_path = /var/log/dovecot/dovecot-error.log
mail_debug = yes
mail_fsync = always
mail_gid = vmail
mail_location = maildir:/vol/dt_mailstore1/spool/%n:INDEX=/var/indexes/%n
mail_nfs_storage = yes
mail_plugins = " fts fts_solr mail_log notify quota"
mail_uid = vmail
maildir_very_dirty_syncs = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
passdb {
  driver = pam
}
plugin {
  autocreate = Trash
  autocreate2 = Drafts
  autocreate3 = Sent
  autocreate4 = Templates
  autosubscribe = Trash
  autosubscribe2 = Drafts
  autosubscribe3 = Sent
  autosubscribe4 = Templates
  fts = solr
  fts_solr = break-imap-search debug url=http://dovecot-solr1.wetafx.co.nz:8080/solr/
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
}
protocols = imap pop3 lmtp sieve
service auth {
  unix_listener auth-userdb {
    group = vmail
    user = vmail
  }
}
service lmtp {
  inet_listener lmtp {
    address = 192.168.121.2 127.0.0.1
    port = 24
  }
  process_min_avail = 20
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0660
    user = postfix
  }
}
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
  inet_listener sieve_deprecated {
    port = 2000
  }
}
ssl_cert = 

hi folks, hi timo, hi master of "Fu",

I just migrated my emails from one type of Maildir to Mailbox. I did it as I was having problems with reading speed in my webmail, in order to optimize it. Will my current config work for me?

sincerely

-- 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xC2626742
gpg --keyserver pgp.mit.edu --recv-key C2626742
http://urlshort.eu
fakessh @
http://gplus.to/sshfake
http://gplus.to/sshswilting
http://gplus.to/john.swilting
https://lists.fakessh.eu/mailman/
This list is moderated by me, but all applications will be accepted provided they receive a note of presentation

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 189 bytes
Desc: This is a digitally signed message part
URL: 

From gedalya at gedalya.net Mon Jan 30 05:29:51 2012
From: gedalya at gedalya.net (Gedalya)
Date: Sun, 29 Jan 2012 22:29:51 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: 
References: <4F20D718.9010805@gedalya.net>
Message-ID: <4F260EAF.4090408@gedalya.net>

On 01/26/2012 07:27 AM, Timo Sirainen wrote:
> On 26.1.2012, at 6.31, Gedalya wrote:
>
>> I'm facing the need to migrate from a proprietary IMAP server to Dovecot. The migration must be as smooth and transparent as possible.
>>
>> The mailbox format I would want to use is Maildir++.
>>
>> The storage format used by the current server is unknown, and I don't look forward to trying to reverse-engineer it. This leaves me with the option of reading the mailboxes using IMAP. There are tools like offlineimap or mbsync, and they do store the UID and UIDVALIDITY info.
The last piece of the puzzle is a process to properly create the dovecot-uidlist and dovecot-uidvalidity files. So far I wasn't able to find anything on this. Are there any tips? Are there any tools available to do this job, or part of it? > Get Dovecot v2.1 and configure it to work. Then for migration add to dovecot.conf: > > imapc_host = imap.example.com > imapc_port = 993 > imapc_ssl = imaps > imapc_ssl_ca_dir = /etc/ssl/certs > mail_prefetch_count = 50 > > And do the migration one user at a time: > > doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc: > Now, to the issue of POP3. The old system uses the message filename for UIDL, but we need to migrate via IMAP in order to preserve IMAP info and UIDs (which have nothing to do with the POP3 UIDL in this case). So I've just finished writing a script to insert X-UIDL headers, and pop3_reuse_xuidl is doing the job. Question: Since the system currently serves in excess of 10 pop3 connections per second, would there be any performance gain from using pop3_save_uidl? Would it be faster or slower to fetch the UIDL list from the uidlist rather than look up the X-UIDL in the index? Just wondering. Also, what order does dovecot return the UIDLs in? From tss at iki.fi Mon Jan 30 08:31:39 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 30 Jan 2012 08:31:39 +0200 Subject: [Dovecot] Mountpoints Message-ID: <563DC292-C26B-42FD-9E0D-119A5ECC451B@iki.fi> I've been thinking about mountpoints recently. There have been a few problems related to them: - If dbox mails and indexes are in different filesystems, and index fs isn't mounted and mailbox is accessed -> Dovecot rebuilds indexes from scratch, which changes UIDVALIDITY, which causes client to redownload mails. All mails will also show up as unread. Once index fs gets mounted again, the UIDVALIDITY changes again and client again redownloads mails. What should happen instead is that Dovecot simply refuses to rebuild indexes when the index fs isn't mounted. This isn't as critical for mbox/maildir, but probably a good idea to do there as well. - If dbox's alternative storage isn't mounted and a mail from there is tried to be accessed -> Dovecot rebuilds indexes and sees that all mails in alt path are gone, so Dovecot also deletes them from indexes as well. Once alt fs is mounted again, the mails in there won't come back without manual index rebuild and then they have also lost flags and have updated UIDs causing clients to redownload them. So again what should happen is that Dovecot won't rebuild indexes while alt fs isn't mounted. - For dsync-based replication I need to keep a state of each mountpoint (online, offline, failover) to determine how to access user's mails. So in the first two cases the main problem is: How does Dovecot know where a mountpoint begins? If the mountpoint is actually mounted there is no problem, because there are functions to find it (e.g. from /etc/mtab). So how to find a mountpoint that should exist, but doesn't? In some OSes Dovecot could maybe read and parse /etc/fstab, but that doesn't exist in all OSes, and do all installations even have all of the filesystems listed there anyway? (They could be in some startup script.) So, I was thinking about adding doveadm commands to explicitly tell Dovecot about the mountpoints that it needs to care about. When no mountpoints are defined Dovecot would behave as it does now. 
doveadm mount add|remove <path> - add/remove a mountpoint
doveadm mount state [<path> [<state>]] - get/set the state of a mountpoint (used by replication); if the path isn't given, list the states of all mountpoints

The list of mountpoints is kept in /var/lib/dovecot/mounts. But because the dovecot directory is only accessible to root (and probably too much trouble to change that), there's another list in /var/run/dovecot/mounts. This one also contains the states of the mounts. When Dovecot starts up and can't find the mounts from rundir, it creates it from vardir's mounts.

When mail processes notice that a directory is missing, they usually autocreate it. With mountpoints enabled, Dovecot first finds the root mountpoint for the directory. The mount root is stat()ed and its parent is stat()ed. If their device numbers are equal, the filesystem is currently unmounted, and Dovecot fails instead of creating a new directory. Similar logic is used to avoid doing a dbox rebuild if its alt dir is currently in an unmounted filesystem.

The main problem I see with all this is how to make sysadmins remember to use these commands when they add/remove mountpoints?.. Perhaps the additions could be automatic at startup. Whenever Dovecot sees a new mountpoint, it's added. If an old mountpoint doesn't exist at startup, a warning is logged about it. Of course many of the mountpoints aren't intended for mail storage. They could be hidden from the "mount state" list by setting their state to "ignore". Dovecot could also skip some of the commonly known mountpoints, such as where the type is proc/tmpfs/sysfs.

Thoughts?

From tss at iki.fi Mon Jan 30 08:34:08 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 30 Jan 2012 08:34:08 +0200
Subject: [Dovecot] Mountpoints
In-Reply-To: <563DC292-C26B-42FD-9E0D-119A5ECC451B@iki.fi>
References: <563DC292-C26B-42FD-9E0D-119A5ECC451B@iki.fi>
Message-ID: <8CA38742-F346-4925-82D7-B282E6B284FF@iki.fi>

On 30.1.2012, at 8.31, Timo Sirainen wrote:

> The main problem I see with all this is how to make sysadmins remember to use these commands when they add/remove mountpoints?.. Perhaps the additions could be automatic at startup. Whenever Dovecot sees a new mountpoint, it's added. If an old mountpoint doesn't exist at startup, a warning is logged about it. Of course many of the mountpoints aren't intended for mail storage. They could be hidden from the "mount state" list by setting their state to "ignore". Dovecot could also skip some of the commonly known mountpoints, such as where the type is proc/tmpfs/sysfs.

I wonder how automounts would work with this.. Probably rather randomly..

From Juergen.Obermann at hrz.uni-giessen.de Mon Jan 30 09:57:22 2012
From: Juergen.Obermann at hrz.uni-giessen.de (=?UTF-8?Q?J=C3=BCrgen_Obermann?=)
Date: Mon, 30 Jan 2012 08:57:22 +0100
Subject: [Dovecot] problem compiling imaptest under solaris
In-Reply-To: <3A621688-A7AE-4C08-96EA-D9668ECA02D1@iki.fi>
References: <89f61bff49f4c5343be06dd45459b14a@imapproxy.hrz> <3A621688-A7AE-4C08-96EA-D9668ECA02D1@iki.fi>
Message-ID: <1dfabd64651b84755e914d4510ba1310@imapproxy.hrz>

On 28.01.2012 18:55, Timo Sirainen wrote:
> On 25.1.2012, at 16.43, Jürgen Obermann wrote:
>
>> today I tried to compile imaptest under solaris 10 with studio 11
>> compiler and got the following error:
>>
>> gmake[2]: Entering directory
>> `/net/fileserv/export/sunsrc/src/imaptest-20111119/src'
>> source='client.c' object='client.o' libtool=no \
>> DEPDIR=.deps depmode=none /bin/bash ../depcomp \
>> cc -DHAVE_CONFIG_H -I. -I. -I..
>> -I/opt/local/include/dovecot
>> -I/usr/local/include -fast -xarch=v8plusa -I/usr/sfw/include -c
>> client.c
>> "/opt/local/include/dovecot/imap-util.h", line 6: warning: useless
>> declaration
>> "client-state.h", line 6: warning: useless declaration
>> "client.c", line 655: operand cannot have void type: op "=="
>> "client.c", line 655: operands have incompatible types:
>> const void "==" int
>> cc: acomp failed for client.c
>
> http://hg.dovecot.org/imaptest/rev/7e490e59f1ee should fix it?

Yes it does. Thank you,
Jürgen Obermann

From f.bonnet at esiee.fr Mon Jan 30 10:37:59 2012
From: f.bonnet at esiee.fr (Frank Bonnet)
Date: Mon, 30 Jan 2012 09:37:59 +0100
Subject: [Dovecot] converting from mbox to maildir ?
Message-ID: <4F2656E7.8060501@esiee.fr>

Hello

We are planning to convert our mailhub ( freebsd 7.4 ) from mbox format to maildir format. I've read the documentation and performed some tests on another machine; it is a bit long ...

I would like some feedback from guys who did this operation, and need some advice on what to convert first:

- first convert INBOX, then convert IMAP folders?
- first convert IMAP folders, then convert INBOX?

The machine uses real users thru openldap ( pam_ldap + nss_ldap ).

Another problem is disk space. The users' email data takes about 2 terabytes and I cannot duplicate it, as I only have 3 TB on the raid array of the server. My idea is to use one of our NFS netapp filers during the conversion to throw the result of the conversion onto an NFS mounted directory.

Has anyone done this before? If yes, I would be greatly interested in their experience.

Thank you

From alexis.lelion at gmail.com Mon Jan 30 11:24:02 2012
From: alexis.lelion at gmail.com (Alexis Lelion)
Date: Mon, 30 Jan 2012 10:24:02 +0100
Subject: [Dovecot] LMTP : Can't handle mixed proxy/non-proxy destinations
In-Reply-To: <33BD52FA-1FE0-46D5-A1E8-9A54C406BE64@iki.fi>
References: <33BD52FA-1FE0-46D5-A1E8-9A54C406BE64@iki.fi>
Message-ID: 

On 1/28/12, Timo Sirainen wrote:
> On 27.1.2012, at 12.59, Alexis Lelion wrote:
>
>> Jan 25 09:05:12 mail01 postfix/lmtp[23934]: A92709300DB: to=<
>> user_on_mail02 at domain.com>, relay=mail01.domain.com[private/dovecot-lmtp],
>> delay=0.07, delays=0.01/0/0/0.06, dsn=4.3.0, status=deferred (host
>> mail01.domain.com[private/dovecot-lmtp] said: 451 4.3.0 <
>> user_on_mail02 at domain.com> Can't handle mixed proxy/non-proxy destinations
>> (in reply to RCPT TO command))
>>
>> I was wondering if there was another way of handling this, for example
>> by triggering an immediate queue lookup from postfix or forwarding a
>> copy of the mail to the other server. Note that the postfix
>> "queue_run_delay" was increased to 15min on purpose, so I cannot change
>> that.
>
> It would be possible to change the code to support mixed destinations, but
> it's probably not a simple change and I have other things to do..

Yes I understand, this is quite a specific request, and not that impacting actually. But it would be cool if you could keep this request somewhere in your queue :-)

> Maybe you could work around it so that LMTP always proxies the mails, to
> localhost as well, but to a different port which doesn't do proxying at all.

Actually this was my first try, but I had proxying loops because, unlike for IMAP, the LMTP server doesn't seem to support the 'proxy_maybe' option yet, does it?
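Going back to Frank's conversion question above, one on-board way to do it is dsync, run per user while that user's mail is quiesced. This is only a sketch: the user name and NFS target path are placeholders, and the mbox source is simply whatever mail_location currently points at:

# with mail_location still set to the old mbox layout, write one user's
# mail out as a Maildir on the NFS-mounted scratch space
dsync -u someuser mirror maildir:/mnt/netapp/convert/someuser/Maildir
# when all users are converted, point mail_location at the new maildirs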
From jesus.navarro at bvox.net Mon Jan 30 12:48:43 2012
From: jesus.navarro at bvox.net (=?iso-8859-1?q?Jes=FAs_M=2E?= Navarro)
Date: Mon, 30 Jan 2012 11:48:43 +0100
Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command
In-Reply-To: <30046BB5-6E1C-41E5-9B04-787F568DE604@iki.fi>
References: <201201201724.41631.jesus.navarro@bvox.net> <201201231355.15051.jesus.navarro@bvox.net> <30046BB5-6E1C-41E5-9B04-787F568DE604@iki.fi>
Message-ID: <201201301148.43979.jesus.navarro@bvox.net>

Hi Timo:

On Saturday, 28 January 2012 19:23:12, Timo Sirainen wrote:
> On 23.1.2012, at 14.55, Jesús M. Navarro wrote:
>>>> I'm having problems on a maildir due to dovecot returning an UID 0 to an
>>>> UID THREAD REFS command:
>
>> I'm sending to your personal address a whole maildir that reproduces the
>> bug (it's very short) to avoid having it published in the mail archives.
>
> Thanks, I finally looked at this. The problem happens only when the
> THREADing isn't done for all messages. I thought this would have been a
> much more complex bug. Fixed:
> http://hg.dovecot.org/dovecot-2.0/rev/57498cad6ab9

Thank you very much. Do you have an expected date for new packages covering this issue to be published at xi.rename-it.nl?

From mark.zealey at webfusion.com Mon Jan 30 15:32:33 2012
From: mark.zealey at webfusion.com (Mark Zealey)
Date: Mon, 30 Jan 2012 15:32:33 +0200
Subject: [Dovecot] Director to keep redirecting users to the same server even after all sessions closed?
Message-ID: <4F269BF1.8010607@webfusion.com>

Hi there,

Just wondering how easy it would be to make the director continue to send a user to the same server (assuming it's still in the pool) for say 90 seconds after they have last been active (ie lmtp or pop/imap)? Basically we are working in quite a heavily cached environment, so it takes perhaps 60-90 seconds for our imap servers to properly flush to our network storage, meaning if the user got put on a different server in that time we would see some issues. Presently we have fixed proxying, but I'd really like to use the director if possible to allow us to more easily add & remove servers.

Mark

From tss at iki.fi Mon Jan 30 15:58:37 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 30 Jan 2012 15:58:37 +0200
Subject: [Dovecot] Director to keep redirecting users to the same server even after all sessions closed?
In-Reply-To: <4F269BF1.8010607@webfusion.com>
References: <4F269BF1.8010607@webfusion.com>
Message-ID: 

On 30.1.2012, at 15.32, Mark Zealey wrote:

> Just wondering how easy it would be to make the director continue to send a user to the same server (assuming it's still in the pool) for say 90 seconds after they have last been active (ie lmtp or pop/imap)? Basically we are working in quite a heavily cached environment, so it takes perhaps 60-90 seconds for our imap servers to properly flush to our network storage, meaning if the user got put on a different server in that time we would see some issues. Presently we have fixed proxying, but I'd really like to use the director if possible to allow us to more easily add & remove servers.

Already done, and enabled by default:

# How long to redirect users to a specific server after it no longer has
# any connections.
#director_user_expire = 15 min

I added this mainly to make sure that all attribute caches have timed out.
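For context, director_user_expire sits alongside the other director settings; a minimal sketch of where it lives follows. The IP addresses are placeholders, and the file name just follows the usual example-config layout:

# conf.d/10-director.conf (sketch)
director_servers = 10.0.0.1 10.0.0.2        # members of the director ring
director_mail_servers = 10.1.0.1 10.1.0.2   # the real mail backends
# keep sending a user to the same backend for this long after their
# last connection closes (default shown; Mark's 90s would also work)
director_user_expire = 15 min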
From Mark.Zealey at webfusion.com Mon Jan 30 16:07:16 2012
From: Mark.Zealey at webfusion.com (Mark Zealey)
Date: Mon, 30 Jan 2012 14:07:16 +0000
Subject: [Dovecot] Director to keep redirecting users to the same server even after all sessions closed?
In-Reply-To: 
References: <4F269BF1.8010607@webfusion.com>, 
Message-ID: 

Brilliant; I had read the director page in the wiki but didn't see it there, & a search of the wiki text doesn't show up the option - perhaps you could add it, or is there another place to see a list of director options?

Mark

________________________________________
From: Timo Sirainen [tss at iki.fi]
Sent: 30 January 2012 13:58
To: Mark Zealey
Cc: dovecot at dovecot.org
Subject: Re: [Dovecot] Director to keep redirecting users to the same server even after all sessions closed?

On 30.1.2012, at 15.32, Mark Zealey wrote:

> Just wondering how easy it would be to make the director continue to send a user to the same server (assuming it's still in the pool) for say 90 seconds after they have last been active (ie lmtp or pop/imap)? Basically we are working in quite a heavily cached environment, so it takes perhaps 60-90 seconds for our imap servers to properly flush to our network storage, meaning if the user got put on a different server in that time we would see some issues. Presently we have fixed proxying, but I'd really like to use the director if possible to allow us to more easily add & remove servers.

Already done, and enabled by default:

# How long to redirect users to a specific server after it no longer has
# any connections.
#director_user_expire = 15 min

I added this mainly to make sure that all attribute caches have timed out.

From f.bonnet at esiee.fr Mon Jan 30 19:29:04 2012
From: f.bonnet at esiee.fr (Frank Bonnet)
Date: Mon, 30 Jan 2012 18:29:04 +0100
Subject: [Dovecot] INBOX and IMAP forlders on differents machines ?
Message-ID: <4F26D360.303@esiee.fr>

Hello

In MBOX format, would it be possible with dovecot 2 to have two machines, one containing the INBOX and the other containing IMAP folders?

Of course this needs a frontend, but would it be possible?

thanks

From tss at iki.fi Mon Jan 30 22:03:50 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 30 Jan 2012 22:03:50 +0200
Subject: [Dovecot] INBOX and IMAP forlders on differents machines ?
In-Reply-To: <4F26D360.303@esiee.fr>
References: <4F26D360.303@esiee.fr>
Message-ID: <7D7B1E45-9ED4-4E34-BF1C-EE14671F15AD@iki.fi>

On 30.1.2012, at 19.29, Frank Bonnet wrote:

> In MBOX format, would it be possible with dovecot 2 to have two machines,
> one containing the INBOX and the other containing IMAP folders?
>
> Of course this needs a frontend, but would it be possible?

With v2.1 I guess you could in theory do this with the imapc backend.

From jtam.home at gmail.com Tue Jan 31 02:03:45 2012
From: jtam.home at gmail.com (Joseph Tam)
Date: Mon, 30 Jan 2012 16:03:45 -0800 (PST)
Subject: [Dovecot] Mountpoints
In-Reply-To: 
References: 
Message-ID: 

On Mon, 30 Jan 2012, dovecot-request at dovecot.org wrote:

> So, I was thinking about adding doveadm commands to explicitly tell
> Dovecot about the mountpoints that it needs to care about. When no
> mountpoints are defined Dovecot would behave as it does now.

Maybe I don't understand the subtlety of your question, but are you trying to disambiguate between a mounted filesystem and a failed mount that presents the underlying filesystem (which looks like an uninitialized index directory)?
Couldn't you write some cookie file "/mount/.../dovecot-data-root/.dovemount", whose existence will tell you whether the FS is mounted, without trying to find the mount root?

Oh, but then again, if you have per-user mounts, that's going to get messy.

Joseph Tam

From deepa.malleeswaran at gmail.com Mon Jan 30 19:12:00 2012
From: deepa.malleeswaran at gmail.com (Deepa Malleeswaran)
Date: Mon, 30 Jan 2012 12:12:00 -0500
Subject: [Dovecot] Help required
Message-ID: 

Hi

I use dovecot on CentOS. It was installed and configured by some other person who doesn't work here anymore. I am trying to renew ssl. But the command works fine and restarts the dovecot. But the license shows the same old expiry. Can you please help me with the same.

When I type in dovecot --version, I get command not found. Please guide me!

Regards,

-- 
Deepa Malleeswaran

From tss at iki.fi Tue Jan 31 02:42:33 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 31 Jan 2012 02:42:33 +0200
Subject: [Dovecot] Mountpoints
In-Reply-To: 
References: 
Message-ID: 

On 31.1.2012, at 2.03, Joseph Tam wrote:

> On Mon, 30 Jan 2012, dovecot-request at dovecot.org wrote:
>
>> So, I was thinking about adding doveadm commands to explicitly tell
>> Dovecot about the mountpoints that it needs to care about. When no
>> mountpoints are defined Dovecot would behave as it does now.
>
> Maybe I don't understand the subtlety of your question, but are you
> trying to disambiguate between a mounted filesystem and a failed mount
> that presents the underlying filesystem (which looks like an uninitialized
> index directory)?

Yes. A mounted filesystem where a directory doesn't exist vs. an accidentally unmounted filesystem.

> Couldn't you write some cookie file "/mount/.../dovecot-data-root/.dovemount",
> whose existence will tell you whether the FS is mounted without trying to
> find the mount root.

This would require that existing installations create such a file or start failing after upgrade. Or that it's made optional and most people wouldn't use this functionality at all.. And I'm sure many people with a single filesystem wouldn't be all that happy creating /.dovemount or /home/.dovemount or such files.

From moseleymark at gmail.com Tue Jan 31 03:24:12 2012
From: moseleymark at gmail.com (Mark Moseley)
Date: Mon, 30 Jan 2012 17:24:12 -0800
Subject: [Dovecot] moving mail out of alt storage
In-Reply-To: 
References: <87sjnya3z5.fsf@algae.riseup.net> <1316077133.12936.18.camel@hurina> <87obylafsw.fsf_-_@algae.riseup.net>
Message-ID: 

On Sat, Jan 28, 2012 at 12:44 PM, Timo Sirainen wrote:
> On 12.1.2012, at 20.32, Mark Moseley wrote:
>
>>>> On Wed, 2011-09-14 at 23:17 -0400, Micah Anderson wrote:
>>>>> I moved some mail into the alt storage:
>>>>>
>>>>> doveadm altmove -u johnd at example.com seen savedbefore 1w
>>>>>
>>>>> and now I want to move it back to the regular INBOX, but I can't see how
>>>>> I can do that with either 'altmove' or 'mailbox move'.
>>>>
>>>> Is this sdbox or mdbox? With sdbox you could simply "mv" the files. Or
>>>> apply patch: http://hg.dovecot.org/dovecot-2.0/rev/1910c76a6cc9
>>>
>>> This is mdbox, which is why I am not sure how to operate because I am
>>> used to individual files as is with maildir.
>>>
>>> micah
>>
>> I'm curious about this too. Is moving the m.# file out of the ALT
>> path's storage/ directory into the non-ALT storage/ directory
>> sufficient? Or will that cause odd issues?
>
> You can manually move m.* files to alt storage and back.
Just make sure that the same file isn't being simultaneously modified by Dovecot or you'll corrupt it. > Cool, good to know. Thanks! From stan at hardwarefreak.com Tue Jan 31 12:30:46 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Tue, 31 Jan 2012 04:30:46 -0600 Subject: [Dovecot] Help required In-Reply-To: References: Message-ID: <4F27C2D6.5070508@hardwarefreak.com> On 1/30/2012 11:12 AM, Deepa Malleeswaran wrote: > I use dovecot on CentOS. It was installed and configured by some other > person who doesn't work here anymore. I am trying to renew ssl. But the > command works fine and restarts the dovecot. But the license shows the same > old expiry. Can you please help me with the same. Please be much more specific. We need details. Log entries of errors would be very useful as well. > When I type in dovecot --version, I get command not found. Please guide me! That's strange. Are you sure you're on the right machine? What version of CentOS? -- Stan From nmilas at noa.gr Tue Jan 31 14:07:29 2012 From: nmilas at noa.gr (Nikolaos Milas) Date: Tue, 31 Jan 2012 14:07:29 +0200 Subject: [Dovecot] Renaming user account / mailbox Message-ID: <4F27D981.7060304@noa.gr> Hello, I am running dovecot-2.0.13-1_128.el5 x86_64 RPM on CentOS 5.7. All accounts are virtual, hosted on LDAP Server. We are using Maildir mailboxes. The question: What is the process to rename an existing account/mailbox? I would like to rename userx with email: userx at example.com to ux at example.com with a mailbox of ux (currently: userx) Of course the idea is that new mail will continue to be delivered to the same mailbox, although it has been renamed. How can I achieve it? Would it be enough (after changing the associated data in the associated LDAP entry) to simply rename the virtual user directory name, e.g. from /home/vmail/userx to /home/vmail/ux ? Thanks in advance, Nick From ath at b-one.net Tue Jan 31 14:36:13 2012 From: ath at b-one.net (Anders) Date: Tue, 31 Jan 2012 13:36:13 +0100 Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH Message-ID: <20120131123613.49B53A7952BCD@bmail02.one.com> Hi, My colleague just pointed me to the recent fix of this issue, thanks! From la at iki.fi Tue Jan 31 17:48:39 2012 From: la at iki.fi (Lauri Alanko) Date: Tue, 31 Jan 2012 17:48:39 +0200 Subject: [Dovecot] force-resync fails to recover all messages in mdbox Message-ID: <20120131174839.13512v46jc7ur23b.lealanko@webmail.helsinki.fi> To my understanding, when using mdbox, doveadm force-resync should be able to recover all the messages from the storage files alone, though of course losing all metadata except the initial delivery folder. However, this does not seem to be the case. For me, force-resync creates only partial indices that lose messages. The message contents are of course still in the storage files, but dovecot just doesn't seem to be aware of some of them after recreating the indices. Here is an example. 
I created a test mdbox by syncing a mailing list folder from a mbox location:

$ dsync -m haskell-cafe backup mdbox:~/dbox

Then I switched the location to the new mdbox:

$ /usr/sbin/dovecot -n
# 2.0.15: /etc/dovecot/dovecot.conf
# OS: Linux 3.2.0-0.bpo.1-amd64 x86_64 Debian wheezy/sid
mail_fsync = never
mail_location = mdbox:~/dbox
mail_plugins = zlib
passdb {
  driver = pam
}
plugin {
  sieve = ~/etc/sieve/dovecot.sieve
  sieve_dir = ~/etc/sieve
  zlib_save = bz2
  zlib_save_level = 9
}
protocols = " imap"
ssl_cert = 

From: tss at iki.fi (Timo Sirainen)
Subject: Re: [Dovecot] force-resync fails to recover all messages in mdbox
References: <20120131174839.13512v46jc7ur23b.lealanko@webmail.helsinki.fi>
Message-ID: <38EB3A30-DFD5-484B-852B-327BDA5E936E@iki.fi>

On 31.1.2012, at 17.48, Lauri Alanko wrote:

> $ doveadm search all | wc
> 93236 186472 3625098
..
> Then I removed all the indices and rebuilt them:
>
> $ doveadm search all | wc
> 43864 87728 1699590
>
> Somehow dovecot lost over half of the messages!

There may be a bug, and I just yesterday noticed something weird in the rebuilding code. I'll have to look into that. But anyway, "search all" isn't the proper way to test this. Try instead with:

doveadm fetch guid all | sort | uniq | wc

When you removed indexes Dovecot no longer knew about copies of messages.

From la at iki.fi Tue Jan 31 18:34:45 2012
From: la at iki.fi (Lauri Alanko)
Date: Tue, 31 Jan 2012 18:34:45 +0200
Subject: [Dovecot] force-resync fails to recover all messages in mdbox
In-Reply-To: <38EB3A30-DFD5-484B-852B-327BDA5E936E@iki.fi>
References: <20120131174839.13512v46jc7ur23b.lealanko@webmail.helsinki.fi> <38EB3A30-DFD5-484B-852B-327BDA5E936E@iki.fi>
Message-ID: <20120131183445.545717eennh24eg5.lealanko@webmail.helsinki.fi>

Quoting "Timo Sirainen" :

> Try instead with:
>
> doveadm fetch guid all | sort | uniq | wc
>
> When you removed indexes Dovecot no longer knew about copies of messages.

Well, well, well. This is interesting. Back with the indices created by dsync:

$ doveadm fetch guid all | grep guid: | sort | uniq -c | sort -n | tail
17 guid: 1b28b22d4b2ee2885b5b81221c41201d
17 guid: 730c692395661dd62f82088804b85652
17 guid: 865e1537fddba6698e010d0b9dbddd02
17 guid: d271b6ba8af0e7fa39c16ea8ed13abcf
17 guid: d2cd391e837cf51cc85991bde814dc54
17 guid: ebce8373da6ffb134b58aca7906d61f1
18 guid: 1222b6c222ecb53fdbbec407400cba36
18 guid: 65695586efc69adc2d7294216ea88e55
19 guid: 4288f61ebbdcd44870c670439a97693b
20 guid: 080ec72aa49e2a01c8e249fe127605f6

This would explain why rebuilding the indices reduced the number of messages.
However, those guid assignments seem really weird, because:

$ doveadm fetch hdr guid 080ec72aa49e2a01c8e249fe127605f6 | grep -i '^Message-ID: '
Message-ID: <4B1ACA53.7040503 at rkit.pp.ru>
Message-ID: <29bf512f0912051251u74d246afxafdfb9e5ea24342c at mail.gmail.com>
Message-ID: <5e0214850912051300r3ebd0e44n61a4d6e020c94f4c at mail.gmail.com>
Message-ID: <4B1ACD40.3040507 at btinternet.com>
Message-Id: <200912052220.00317.daniel.is.fischer at web.de>
Message-Id: <200912052225.28597.daniel.is.fischer at web.de>
Message-ID: <20091205212848.GA23711 at seas.upenn.edu>
Message-Id: <200912051336.13792.hgolden at socal.rr.com>
Message-Id: <200912052243.03144.daniel.is.fischer at web.de>
Message-Id: <0B59A706-8C41-47B9-A858-5ACE297581E1 at cs.uu.nl>
Message-ID: <20091205215707.GA6161 at protagoras.phil.berkeley.edu>
Message-ID: <471726.55822.qm at web113106.mail.gq1.yahoo.com>
Message-ID: <4B1AD7FB.8050704 at btinternet.com>
Message-ID: <5fdc56d70912051400h663a25a9w4f9b2e065a5b395e at mail.gmail.com>
Message-Id: <1B613EE3-B4F8-4F6E-8A36-74BACF0C86FC at yandex.ru>
Message-ID: <4B1ADA0E.5070207 at btinternet.com>
Message-Id: <36C40624-B050-4A8C-8CAF-F15D84467180 at phys.washington.edu>
Message-ID: 
Message-id: 
Message-ID: <29bf512f0912051423safd7842ka39c8b8b6dee1ac0 at mail.gmail.com>

So all these completely unrelated messages have somehow received the same guid? And that guid is stored even in the storage files themselves so they cannot be cleaned up even with force-resync? Something is _seriously_ wrong.

The complexity and opaqueness of the mdbox format is worrisome. It would ease my mind quite a bit if there were a simple tool that would just dump out the plain message contents that are stored inside the storage files, without involving any of dovecot's index machinery. Then I would at least know that whatever happens, as long as the storage files stay intact, I can always migrate my mails into some other format.

Lauri
From janfrode at tanso.net Sun Jan 1 21:59:07 2012
From: janfrode at tanso.net (Jan-Frode Myklebust)
Date: Sun, 1 Jan 2012 20:59:07 +0100
Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name
Message-ID: <20120101195907.GA21500@dibs.tanso.net>

I'm in the process of running our first dsync backup of all users (from maildir to mdbox on remote server), and one problem I'm hitting is that dsync will work fine on the first run for some users, and then reliably fail whenever I try a new run:

$ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net
$ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net
dsync-remote(janfrode at example.net): Error: Can't delete mailbox directory INBOX/a: Mailbox has children, delete them first

The problem here seems to be that this user has a maildir named ".a.b". On the backup side I see this as "a/b/".

So dsync doesn't quite seem to agree with itself for how to handle folders with dot in the name.
-jf

From rnabioullin at gmail.com Mon Jan 2 03:32:38 2012
From: rnabioullin at gmail.com (Ruslan Nabioullin)
Date: Sun, 01 Jan 2012 20:32:38 -0500
Subject: [Dovecot] Multiple Maildirs per Virtual User
Message-ID: <4F010936.7080107@gmail.com>

How would it be possible to configure dovecot (2.0.16) in such a way that it would serve several maildirs (e.g., INBOX, INBOX.Drafts, INBOX.Sent, forum_email, [Gmail].Trash, etc.) per virtual user? I am only able to specify a single maildir, but I want all maildirs in /home/my-username/mail/account1/ to be served.

e.g., /etc/dovecot/passwd:
my-username_account1:{PLAIN}password:my-username:my-group::::userdb_mail=maildir:/home/my-username/mail/account1/INBOX

Thanks in advance,
Ruslan

-- Ruslan Nabioullin rnabioullin at gmail.com

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 900 bytes Desc: OpenPGP digital signature URL:

From Juergen.Obermann at hrz.uni-giessen.de Mon Jan 2 16:33:07 2012
From: Juergen.Obermann at hrz.uni-giessen.de (=?UTF-8?Q?J=C3=BCrgen_Obermann?=)
Date: Mon, 02 Jan 2012 15:33:07 +0100
Subject: [Dovecot] error bad file number with compressed mbox files
Message-ID: <77e69f67dbffe67a6205ed1de7d2d0df@imapproxy.hrz>

Hello, can dsync convert from compressed mbox to compressed mdbox format? When I use compressed mbox files, either with gzip or with bzip2, I can read the mails as usual, but I find the following errors in dovecot's log file:

imap(userxy): Error: nfs_flush_fcntl: fcntl(/home/hrz/userxy/Mail/mymbox.gz, F_RDLCK) failed: Bad file number
imap(userxy): Error: nfs_flush_fcntl: fcntl(/home/hrz/userxy/Mail/mymbox.bz2, F_RDLCK) failed: Bad file number

These errors also appear when I use dsync to convert the compressed mbox to mdbox format on a second dovecot server:

/opt/local/bin/dsync -v -u userxy backup mdbox:/sanpool/mail/home/hrz/userxy/mdbox
dsync(userxy): Error: nfs_flush_fcntl: fcntl(/home/hrz/userxy/Mail/mymbox.gz, F_RDLCK) failed: Bad file number

But now dovecot does not find the mails in the folder mymbox.gz on the second dovecot server in mdbox format! The relevant part of the dovecot configuration is:

# 2.0.16: /opt/local/etc/dovecot/dovecot.conf
# OS: SunOS 5.10 sun4v
mail_fsync = always
mail_location = mbox:~/Mail:INBOX=/var/mail/%u
mail_nfs_index = yes
mail_nfs_storage = yes
mail_plugins = mail_log notify zlib
mmap_disable = yes

Thank you,
-- Jürgen Obermann Hochschulrechenzentrum der Justus-Liebig-Universität Gießen Heinrich-Buff-Ring 44 Tel.
0641-9913054 From CMarcus at Media-Brokers.com Mon Jan 2 16:51:00 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Mon, 02 Jan 2012 09:51:00 -0500 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <20120101195907.GA21500@dibs.tanso.net> References: <20120101195907.GA21500@dibs.tanso.net> Message-ID: <4F01C454.8030701@Media-Brokers.com> On 2012-01-01 2:59 PM, Jan-Frode Myklebust wrote: > I'm in the processes of running our first dsync backup of all users > (from maildir to mdbox on remote server), and one problem I'm hitting > that dsync will work fine on first run for some users, and then > reliably fail whenever I try a new run: > > $ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net > $ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net > dsync-remote(janfrode at example.net): Error: Can't delete mailbox directory INBOX/a: Mailbox has children, delete them first > > The problem here seems to be that this user has a maildir named > ".a.b". On the backup side I see this as "a/b/". > > So dsync doesn't quite seem to agree with itself for how to handle > folders with dot in the name. dovecot -n output? What are you using for the namespace hierarchy separator? http://wiki2.dovecot.org/Namespaces -- Best regards, Charles From janfrode at tanso.net Mon Jan 2 17:11:00 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 2 Jan 2012 16:11:00 +0100 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <4F01C454.8030701@Media-Brokers.com> References: <20120101195907.GA21500@dibs.tanso.net> <4F01C454.8030701@Media-Brokers.com> Message-ID: <20120102151059.GA10419@dibs.tanso.net> On Mon, Jan 02, 2012 at 09:51:00AM -0500, Charles Marcus wrote: > > dovecot -n output? What are you using for the namespace hierarchy separator? I have the folder format default separator (maildir "."), but still dovecot creates directories named ".a.b". On receiving dsync server: ===================================================================== $ dovecot -n # 2.0.14: /etc/dovecot/dovecot.conf mail_location = mdbox:~/mdbox mail_plugins = zlib mdbox_rotate_size = 5 M passdb { driver = static } plugin { zlib_save = gz zlib_save_level = 9 } protocols = service auth-worker { user = $default_internal_user } service auth { unix_listener auth-userdb { mode = 0600 user = mailbackup } } ssl = no userdb { args = home=/srv/mailbackup/%256Hu/%d/%n driver = static } On POP/IMAP-server: ===================================================================== $ doveconf -n # 2.0.14: /etc/dovecot/dovecot.conf auth_cache_size = 100 M auth_verbose = yes auth_verbose_passwords = sha1 disable_plaintext_auth = no login_trusted_networks = 192.168.0.0/16 mail_gid = 3000 mail_location = maildir:~/:INDEX=/indexes/%1u/%1.1u/%u mail_plugins = quota zlib mail_uid = 3000 maildir_stat_dirs = yes maildir_very_dirty_syncs = yes managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date mmap_disable = yes namespace { inbox = yes location = prefix = INBOX. 
type = private
}
passdb {
args = /etc/dovecot/dovecot-ldap.conf.ext
driver = ldap
}
plugin {
quota = maildir:UserQuota
sieve = /sieve/%1u/%1.1u/%u/.dovecot.sieve
sieve_dir = /sieve/%1u/%1.1u/%u
sieve_max_script_size = 1M
zlib_save = gz
zlib_save_level = 6
}
postmaster_address = postmaster at example.net
protocols = imap pop3 lmtp sieve
service auth-worker {
user = $default_internal_user
}
service auth {
client_limit = 4521
unix_listener auth-userdb {
group =
mode = 0600
user = atmail
}
}
service imap-login {
inet_listener imap {
address = *
port = 143
}
process_min_avail = 4
service_count = 0
vsz_limit = 1 G
}
service imap-postlogin {
executable = script-login /usr/local/sbin/imap-postlogin.sh
}
service imap {
executable = imap imap-postlogin
process_limit = 2048
}
service lmtp {
client_limit = 1
inet_listener lmtp {
address = *
port = 24
}
process_limit = 25
}
service managesieve-login {
inet_listener sieve {
address = *
port = 4190
}
service_count = 1
}
service pop3-login {
inet_listener pop3 {
address = *
port = 110
}
process_min_avail = 4
service_count = 0
vsz_limit = 1 G
}
service pop3-postlogin {
executable = script-login /usr/local/sbin/pop3-postlogin.sh
}
service pop3 {
executable = pop3 pop3-postlogin
process_limit = 2048
}
ssl = no
userdb {
args = /etc/dovecot/dovecot-ldap.conf.ext
driver = ldap
}
protocol lmtp {
mail_plugins = quota zlib sieve
}
protocol imap {
imap_client_workarounds = delay-newmail
mail_plugins = quota zlib imap_quota
}
protocol pop3 {
mail_plugins = quota zlib
pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
pop3_uidl_format = UID%u-%v
}
protocol sieve {
managesieve_logout_format = bytes=%i/%o
}

-jf

From preacher_net at gmx.net Mon Jan 2 18:17:10 2012
From: preacher_net at gmx.net (Preacher)
Date: Mon, 02 Jan 2012 17:17:10 +0100
Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration
Message-ID: <4F01D886.6070905@gmx.net>

I have a mail server running Debian 6.0 with Courier IMAP to store project related mail. Currently the maildir of the archive (one user) contains about 37GB of data. Our staff is accessing the archive via Outlook 2007, where they drag their Exchange inbox or sent files to it. The problem with Courier is that it sometimes mixes up headers with message bodies, so I wanted to migrate to Dovecot.

I tried this on my proxy running Debian 7.0 with some test data and this worked fine (OK, spent some hours to get the config files done - Dovecot without authentication). Dovecot version here is 2.0.15. Tried it with our productive system today, but got Dovecot 1.2.15 installed on Debian 6.0. Config files and parameters I took from my test system were not compatible and I didn't get it to work. So I force-installed the Debian 7.0 packages with 2.0.15 and finally got the server running, I also restarted the whole machine to empty caches.

But the problem I got was that in the huge folder hierarchy the downloaded headers in the individual folders disappeared, some folders showed a few very old messages, some none. Also some subfolders disappeared. I checked this with Outlook and Thunderbird. The difference was that Thunderbird shows more messages (but not all) than Outlook in some folders, but also none in some others. Outlook brought up a message in some cases that the connection timed out, although I set the timeout to 60s. Frustrated, I uninstalled Dovecot, went back to Courier, and folder contents are displayed correctly again.

Anyone a clue what's wrong here?
Finally some config information:

proxy-server:~# dovecot -n
# 2.0.15: /etc/dovecot/dovecot.conf
# OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid
auth_debug_passwords = yes
auth_mechanisms = plain login
disable_plaintext_auth = no
namespace {
inbox = yes
location =
prefix = INBOX.
separator = .
type = private
}
passdb {
driver = pam
}
plugin {
sieve = ~/.dovecot.sieve
sieve_dir = ~/sieve
}
protocols = imap
ssl = no
ssl_cert = References: <4F01D886.6070905@gmx.net> Message-ID: <4F020328.7090303@hardwarefreak.com>

On 1/2/2012 10:17 AM, Preacher wrote:
...
> So I force-installed the Debian 7.0 packages with 2.0.15 and finally
> got the server running, I also restarted the whole machine to empty caches.
> But the problem I got was that in the huge folder hierarchy the
> downloaded headers in the individual folders disappeared, some folders
> showed a few very old messages, some none. Also some subfolders
> disappeared.
> I checked this with Outlook and Thunderbird. The difference was that
> Thunderbird shows more messages (but not all) than Outlook in some
> folders, but also none in some others. Outlook brought up a message in
> some cases that the connection timed out, although I set the timeout to
> 60s.
...
> Anyone a clue what's wrong here?

Absolutely. What's wrong is a lack of planning, self education, and patience on the part of the admin.

Dovecot gets its speed from its indexes. How long do you think it takes Dovecot to index 37GB of maildir messages, many thousands per directory, hundreds of directories, millions of files total? Until those indexes are built you will not see a complete folder tree and all kinds of stuff will be missing.

For your education: Dovecot indexes every message and these indexes are the key to its speed. Normally indexing occurs during delivery when using deliver or lmtp, so the index updates are small and incremental, keeping performance high. You tried to do this and expected Dovecot to instantly process it all:

http://www.youtube.com/watch?v=THVz5aweqYU

If you don't know, that's a coal train car being dumped. 100 tons of coal in a few seconds. Visuals are always good teaching tools. I think this drives the point home rather well.
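If you're not willing to wait for on-demand indexing, you can also build the indexes up front -- a rough sketch, assuming your 2.0 build's doveadm ships the index command and the archive account is simply named 'archive':

doveadm index -u archive '*'

Run it once after the migration, let it grind through the 37GB, and folder listings should be fast from then on.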
-- Stan

From mpapet at yahoo.com Tue Jan 3 08:48:15 2012
From: mpapet at yahoo.com (Michael Papet)
Date: Mon, 2 Jan 2012 22:48:15 -0800 (PST)
Subject: [Dovecot] Newbie: LDA Isn't Logging
Message-ID: <1325573295.74202.YahooMailClassic@web125405.mail.ne1.yahoo.com>

Hi,

I'm a newbie having some trouble getting deliver to log anything. Related to this, there are no return values unless the -d is missing. I'm using LDAP to store virtual domain and user account information.

Test #1: /usr/lib/dovecot/deliver -e -f mpapet at yahoo.com -d zed at mailswansong.dom < bad.mail
Expected result: supposed to fail, since there's no zed account via ldap lookup, and supposed to get a return code per the wiki at http://wiki2.dovecot.org/LDA. Supposed to log too.
Actual result: nothing gets delivered, no return code, nothing is logged.

Test #2: /usr/lib/dovecot/deliver -f mpapet at yahoo.com -d dude at mailswansong.dom < good.mail
Expected result: deliver to dude and return 0.
Actual result: delivers, but no return code. Nothing logged.

The wiki is vague about the difficulties of getting deliver LDA to log, but I thought I had it covered in my config. I even opened permissions up wide (777) on the log files specified below. Nothing gets logged. The ONLY thing changed in 15-lda.conf is as follows.

protocol lda {
# Space separated list of plugins to load (default is global mail_plugins).
#mail_plugins = $mail_plugins
log_path = /var/log/dovecot/lda.log
info_log_path = /var/log/dovecot/lda-info.log
service auth {
unix_listener auth-client {
mode = 0600
user = vmail
}
}

I'm running plain Debian Testing and used dovecot from Debian's repository. The end goal is to write a qpsmtpd queue plugin, but I need to figure out what's the matter first. Thanks in advance.

mpapet

From janfrode at tanso.net Tue Jan 3 10:14:49 2012
From: janfrode at tanso.net (Jan-Frode Myklebust)
Date: Tue, 3 Jan 2012 09:14:49 +0100
Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs)
In-Reply-To: <4EFEBFB8.1070301@hardwarefreak.com>
References: <20111224152050.GA3958@dibs.tanso.net> <20111229084916.GA5895@dibs.tanso.net> <4EFC6453.8020304@hardwarefreak.com> <20111230144124.GA3936@dibs.tanso.net> <4EFE5984.9080905@hardwarefreak.com> <20111231065649.GA19046@dibs.tanso.net> <4EFEBFB8.1070301@hardwarefreak.com>
Message-ID: <20120103081449.GA26269@dibs.tanso.net>

On Sat, Dec 31, 2011 at 01:54:32AM -0600, Stan Hoeppner wrote:
> Nice setup. I've mentioned GPFS for cluster use on this list before,
> but I think you're the only operator to confirm using it. I'm sure
> others would be interested in hearing of your first hand experience:
> pros, cons, performance, etc. And a ball park figure on the licensing
> costs, whether one can only use GPFS on IBM storage or if storage from
> other vendors is allowed in the GPFS pool.

I used to work for IBM, so I've been a bit uneasy about pushing GPFS too hard publicly, for risk of being accused of being biased. But I changed jobs in November, so now I'm only a satisfied customer :-)

Pros:

Extremely simple to configure and manage. Assuming root on all nodes can ssh freely, and port 1191/tcp is open between the nodes, these are the commands to create the cluster, create an NSD (network shared disk), and create a filesystem:

# echo hostname1:manager-quorum > NodeFile   # "manager" means this node can be selected as filesystem manager
# echo hostname2:manager-quorum >> NodeFile  # "quorum" means this node has a vote in the quorum selection
# echo hostname3:manager-quorum >> NodeFile  # all my nodes are usually the same, so they all have the same roles
# mmcrcluster -n NodeFile -p $(hostname) -A

### sdb1 is either a local disk on hostname1 (in which case the other nodes will access it over tcp to
### hostname1), or a SAN-disk that they can access directly over FC/iSCSI.
# echo sdb1:hostname1::dataAndMetadata:: > DescFile  # This disk can be used for both data and metadata
# mmcrnsd -F DescFile

# mmstartup -A   # starts GPFS services on all nodes
# mmcrfs /gpfs1 gpfs1 -F DescFile
# mount /gpfs1

You can add and remove disks from the filesystem, and change most settings, without downtime. You can scale out your workload by adding more nodes (SAN attached or not), and scale out your disk performance by adding more disks on the fly. (IBM uses GPFS to create scale-out NAS solutions http://www-03.ibm.com/systems/storage/network/sonas/ , which highlights a few of the features available with GPFS.)

There's no problem running GPFS on other vendors' disk systems. I've used Nexsan SATAboy earlier, for a HPC cluster. One can easily move from one disk system to another without downtime.

Cons:

It has its own page cache, statically configured. So you don't get the "all available memory used for page caching" behaviour as you normally do on Linux.
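The pool size is just a config parameter, so you decide up front how much memory GPFS gets for caching -- e.g. something like this (4G purely as an illustration):

# mmchconfig pagepool=4G   # takes effect the next time GPFS is started on the nodes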
There is a kernel module that needs to be rebuilt on every upgrade. It's a simple process, but it needs to be done and means we can't just run "yum update ; reboot" to upgrade.

% export SHARKCLONEROOT=/usr/lpp/mmfs/src
% cp /usr/lpp/mmfs/src/config/site.mcr.proto /usr/lpp/mmfs/src/config/site.mcr
% vi /usr/lpp/mmfs/src/config/site.mcr  # correct GPFS_ARCH, LINUX_DISTRIBUTION and LINUX_KERNEL_VERSION
% cd /usr/lpp/mmfs/src/ ; make clean ; make World
% su - root
# export SHARKCLONEROOT=/usr/lpp/mmfs/src
# cd /usr/lpp/mmfs/src/ ; make InstallImages

> > To this point IIRC everyone here doing clusters is using NFS, GFS, or
> OCFS. Each has its downsides, mostly because everyone is using maildir.
> NFS has locking issues with shared dovecot index files. GFS and OCFS
> have filesystem metadata performance issues. How does GPFS perform with
> your maildir workload?

Maildir is likely a worst case type workload for filesystems. Millions of tiny-tiny files, making all IO random, and getting minimal controller read cache utilized (unless you can cache all active files). So I've concluded that our performance issues are mostly design errors (and the fact that there were no better mail storage formats than maildir at the time these servers were implemented). I expect moving to mdbox will fix all our performance issues. I *think* GPFS is as good as it gets for maildir storage on clusterfs, but have no numbers to back that up ... Would be very interesting if we could somehow compare numbers for a few clusterfs'.

I believe our main limitation in this setup is the iops we can get from the backend storage system. It's hard to balance the IO over enough RAID arrays (the fs is spread over 11 RAID5 arrays of 5 disks each), and we're always having hotspots. Right now two arrays are doing <100 iops, while others are doing 4-500 iops. Would very much like to replace it by something smarter where we can utilize SSDs for active data and something slower for stale data. GPFS can manage this by itself through its ILM interface, but we don't have the very fast storage to put in as tier-1.

-jf

From tss at iki.fi Tue Jan 3 10:49:27 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 3 Jan 2012 10:49:27 +0200
Subject: [Dovecot] Compressing existing maildirs
In-Reply-To: <4EFEBFB8.1070301@hardwarefreak.com>
References: <20111224152050.GA3958@dibs.tanso.net> <20111229084916.GA5895@dibs.tanso.net> <4EFC6453.8020304@hardwarefreak.com> <20111230144124.GA3936@dibs.tanso.net> <4EFE5984.9080905@hardwarefreak.com> <20111231065649.GA19046@dibs.tanso.net> <4EFEBFB8.1070301@hardwarefreak.com>
Message-ID:

On 31.12.2011, at 9.54, Stan Hoeppner wrote:
> Timo, is there any technical or sanity based upper bound on mdbox size?
> Anything wrong with using 64MB, 128MB, or even larger for
> mdbox_rotate_size?

Should be fine. The only issue is the extra disk I/O required to recreate the files during doveadm purge.
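For example:

doveadm purge -u username

The larger the mdbox files, the more data gets copied around when files containing expunged mails are rewritten.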
From ludek.finstrle at pzkagis.cz Mon Jan 2 20:20:15 2012
From: ludek.finstrle at pzkagis.cz (Ludek Finstrle)
Date: Mon, 2 Jan 2012 19:20:15 +0100
Subject: [Dovecot] Small LOGIN_MAX_INBUF_SIZE for GSSAPI with samba4 (AD)
Message-ID: <20120102182014.GA20872@pzkagis.cz>

Hello, I faced a problem with samba (AD) + mutt (gssapi) + dovecot (imap). From the dovecot log:

Jan 2 17:58:42 server dovecot: imap-login: Disconnected: Input buffer full (no auth attempts): rip=192.167.14.16, lip=192.167.14.16, secured

My situation:
CentOS 6.2
IMAP: dovecot --version: 2.0.9 (CentOS 6.2)
MUA: mutt 1.5.20 (CentOS 6.2)
Kerberos: samba4 4.0.0alpha17 as AD PDC

$ klist -e
Ticket cache: FILE:/tmp/krb5cc_1002_Mmg2Rc
Default principal: luf at TEST
Valid starting     Expires            Service principal
01/02/12 15:56:16  01/03/12 01:56:16  krbtgt/TEST at TEST
        renew until 01/03/12 01:56:16, Etype (skey, tkt): arcfour-hmac, arcfour-hmac
01/02/12 16:33:19  01/03/12 01:56:16  imap/server.test at TEST
        Etype (skey, tkt): arcfour-hmac, arcfour-hmac

I fixed this problem by enlarging LOGIN_MAX_INBUF_SIZE. I also read about wrong lower/uppercase, but that's definitely not my problem (I tried all possibilities of lower/uppercase in the login). I sniffed the plain communication and the "a0000 AUTHENTICATE GSSAPI" line has around 1873 chars. When I enlarged LOGIN_MAX_INBUF_SIZE to 2048 the problem disappeared and I'm now able to log in to dovecot using gssapi in the mutt client.

I also use thunderbird (on windows with sspi) and it works ok with LOGIN_MAX_INBUF_SIZE = 1024.

Does anybody have any idea why it's so large, or how to fix it another way? It's terrible to patch each version of the dovecot rpm package. Or is there any possibility to change the constant? I have no idea how much this would affect memory usage.

The simple patch I have to use is attached.

Please cc: to me (luf at pzkagis dot cz) as I'm not a member of this list.

Best regards,

Ludek Finstrle

-------------- next part --------------
diff -cr dovecot-2.0.9.orig/src/login-common/client-common.h dovecot-2.0.9/src/login-common/client-common.h
*** dovecot-2.0.9.orig/src/login-common/client-common.h 2012-01-02 18:09:53.371909782 +0100
--- dovecot-2.0.9/src/login-common/client-common.h 2012-01-02 18:30:58.057787619 +0100
***************
*** 10,16 ****
   IMAP: Max. length of a single parameter
   POP3: Max. length of a command line (spec says 512 would be enough) */
! #define LOGIN_MAX_INBUF_SIZE 1024
  /* max. size of output buffer. if it gets full, the client is
   disconnected. SASL authentication gives the largest output. */
  #define LOGIN_MAX_OUTBUF_SIZE 4096
--- 10,16 ----
   IMAP: Max. length of a single parameter
   POP3: Max. length of a command line (spec says 512 would be enough) */
! #define LOGIN_MAX_INBUF_SIZE 2048
  /* max. size of output buffer. if it gets full, the client is
   disconnected. SASL authentication gives the largest output. */
  #define LOGIN_MAX_OUTBUF_SIZE 4096

From tss at iki.fi Tue Jan 3 13:16:29 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 03 Jan 2012 13:16:29 +0200
Subject: [Dovecot] Small LOGIN_MAX_INBUF_SIZE for GSSAPI with samba4 (AD)
In-Reply-To: <20120102182014.GA20872@pzkagis.cz>
References: <20120102182014.GA20872@pzkagis.cz>
Message-ID: <1325589389.6987.55.camel@innu>

On Mon, 2012-01-02 at 19:20 +0100, Ludek Finstrle wrote:
> Jan 2 17:58:42 server dovecot: imap-login: Disconnected: Input buffer full (no auth attempts): rip=192.167.14.16, lip=192.167.14.16, secured
..
> I fixed this problem by enlarging LOGIN_MAX_INBUF_SIZE. I also read about wrong lower/uppercase,
> but that's definitely not my problem (I tried all possibilities of lower/uppercase in the login).
>
> I sniffed the plain communication and the "a0000 AUTHENTICATE GSSAPI" line has around 1873 chars.
> When I enlarged LOGIN_MAX_INBUF_SIZE to 2048 the problem disappeared and I'm now able to log in
> to dovecot using gssapi in the mutt client.
There was already code that allowed 16 kB SASL messages, but that didn't work for the initial SASL response with the IMAP SASL-IR extension.

> I also use thunderbird (on windows with sspi) and it works ok with LOGIN_MAX_INBUF_SIZE = 1024.

TB probably doesn't support SASL-IR.

> Does anybody have any idea why it's so large, or how to fix it another way? It's terrible to
> patch each version of the dovecot rpm package. Or is there any possibility to change the constant?
> I have no idea how much this would affect memory usage.
>
> The simple patch I have to use is attached.

I increased it to 4 kB: http://hg.dovecot.org/dovecot-2.0/rev/d06061408f6d

From tss at iki.fi Tue Jan 3 13:29:36 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 03 Jan 2012 13:29:36 +0200
Subject: [Dovecot] error bad file number with compressed mbox files
In-Reply-To: <77e69f67dbffe67a6205ed1de7d2d0df@imapproxy.hrz>
References: <77e69f67dbffe67a6205ed1de7d2d0df@imapproxy.hrz>
Message-ID: <1325590176.6987.57.camel@innu>

On Mon, 2012-01-02 at 15:33 +0100, Jürgen Obermann wrote:
> can dsync convert from compressed mbox to compressed mdbox format?
>
> When I use compressed mbox files, either with gzip or with bzip2, I can
> read the mails as usual, but I find the following errors in dovecot's log
> file:
>
> imap(userxy): Error: nfs_flush_fcntl:
> fcntl(/home/hrz/userxy/Mail/mymbox.gz, F_RDLCK) failed: Bad file number
> imap(userxy): Error: nfs_flush_fcntl:
> fcntl(/home/hrz/userxy/Mail/mymbox.bz2, F_RDLCK) failed: Bad file number

This happens because of the mail_nfs_* settings. You can either ignore those errors, or disable the settings. Those settings are useful only if you attempt to access the same mailbox from multiple servers at the same time, which is randomly going to fail even with those settings, so they aren't hugely useful.
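With a single server accessing the mails you can simply turn them off:

mail_nfs_index = no
mail_nfs_storage = no

(Or just remove the lines -- "no" is the default for both.)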
From tss at iki.fi Tue Jan 3 13:42:13 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 03 Jan 2012 13:42:13 +0200
Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration
In-Reply-To: <4F01D886.6070905@gmx.net>
References: <4F01D886.6070905@gmx.net>
Message-ID: <1325590933.6987.59.camel@innu>

On Mon, 2012-01-02 at 17:17 +0100, Preacher wrote:
> So I force-installed the Debian 7.0 packages with 2.0.15 and finally
> got the server running, I also restarted the whole machine to empty caches.
> But the problem I got was that in the huge folder hierarchy the
> downloaded headers in the individual folders disappeared, some folders
> showed a few very old messages, some none. Also some subfolders disappeared.
> I checked this with Outlook and Thunderbird. The difference was that
> Thunderbird shows more messages (but not all) than Outlook in some
> folders, but also none in some others. Outlook brought up a message in
> some cases that the connection timed out, although I set the timeout to
> 60s.

Did you run the Courier migration script? http://wiki2.dovecot.org/Migration/Courier

Also explicitly setting mail_location would be a good idea.

From tss at iki.fi Tue Jan 3 13:52:12 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 03 Jan 2012 13:52:12 +0200
Subject: [Dovecot] Multiple Maildirs per Virtual User
In-Reply-To: <4F010936.7080107@gmail.com>
References: <4F010936.7080107@gmail.com>
Message-ID: <1325591532.6987.60.camel@innu>

On Sun, 2012-01-01 at 20:32 -0500, Ruslan Nabioullin wrote:
> How would it be possible to configure dovecot (2.0.16) in such a way
> that it would serve several maildirs (e.g., INBOX, INBOX.Drafts,
> INBOX.Sent, forum_email, [Gmail].Trash, etc.) per virtual user?
>
> I am only able to specify a single maildir, but I want all maildirs in
> /home/my-username/mail/account1/ to be served.

Sounds like you want LAYOUT=fs rather than the default LAYOUT=maildir++. http://wiki2.dovecot.org/MailboxFormat/Maildir#Directory_Structure

From tss at iki.fi Tue Jan 3 13:55:01 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 03 Jan 2012 13:55:01 +0200
Subject: [Dovecot] dsync / separator / namespace config-problem
In-Reply-To: <20111229200345.GA17871@dibs.tanso.net>
References: <20111229111455.GA9344@dibs.tanso.net> <3F4112A3-FF46-4ABA-9EC5-E04651D50E87@iki.fi> <20111229134234.GB11809@dibs.tanso.net> <20111229200345.GA17871@dibs.tanso.net>
Message-ID: <1325591701.6987.62.camel@innu>

On Thu, 2011-12-29 at 21:03 +0100, Jan-Frode Myklebust wrote:
> On Thu, Dec 29, 2011 at 03:49:57PM +0200, Timo Sirainen wrote:
> > > > With mdbox the internal separator is '/', but it's not valid to have "INBOX." prefix then (it should be "INBOX/").
> > >
> > > But how should this be handled in the migration phase from maildir to
> > > mdbox then? Can we have different namespaces for users with maildirs vs.
> > > mdboxes? (..or am i misunderstanding something?)
> >
> > You'll most likely want to keep the '.' separator with mdbox, at
> > least initially. Some clients don't like if the separator changes.
> > Perhaps in future if you want to allow users to use '.' character in
> > mailbox names you could change it, or possibly make it a per-user
> > setting.
>
> Sorry for being so dense, but I don't quite get it still. Do you suggest
> dropping the trailing dot from prefix=INBOX. ? I.e.
>
> namespace {
> inbox = yes
> location =
> prefix = INBOX
> type = private
> separator = .
> }
>
> when we do the migration to mdbox? And this should work without issues
> for both current maildir users, and mdbox users ?

With that setup you can't even start up Dovecot. The prefix must end with the separator. So initially just do it like above, but with "prefix=INBOX."

> Ideally I don't want to use the . as a separator, since it's causing
> problems for our users who expect to be able to use them in folder
> names. But I don't understand if I can change them without causing
> problems to existing users.. or how these problems will appear to the
> users.

It's going to be problematic to change the separator for existing users. Clients can become confused.

From tss at iki.fi Tue Jan 3 14:00:08 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 03 Jan 2012 14:00:08 +0200
Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name
In-Reply-To: <20120101195907.GA21500@dibs.tanso.net>
References: <20120101195907.GA21500@dibs.tanso.net>
Message-ID: <1325592008.6987.63.camel@innu>

On Sun, 2012-01-01 at 20:59 +0100, Jan-Frode Myklebust wrote:
> I'm in the processes of running our first dsync backup of all users
> (from maildir to mdbox on remote server), and one problem I'm hitting
> that dsync will work fine on first run for some users, and then
> reliably fail whenever I try a new run:
>
> $ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net
> $ sudo dsync -u janfrode at example.net backup ssh -q mailbackup at repo1.example.net dsync -u janfrode at example.net
> dsync-remote(janfrode at example.net): Error: Can't delete mailbox directory INBOX/a: Mailbox has children, delete them first
>
> The problem here seems to be that this user has a maildir named
> ".a.b". On the backup side I see this as "a/b/".
> > So dsync doesn't quite seem to agree with itself for how to handle > folders with dot in the name. So here on source you have namespace separator '.' and in destination you have separator '/'? Maybe that's the problem? Try with both having '.' separator. From janfrode at tanso.net Tue Jan 3 14:12:22 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Tue, 3 Jan 2012 13:12:22 +0100 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <1325592008.6987.63.camel@innu> References: <20120101195907.GA21500@dibs.tanso.net> <1325592008.6987.63.camel@innu> Message-ID: <20120103121222.GA30793@dibs.tanso.net> On Tue, Jan 03, 2012 at 02:00:08PM +0200, Timo Sirainen wrote: > > So here on source you have namespace separator '.' and in destination > you have separator '/'? Maybe that's the problem? Try with both having > '.' separator. I added this namespace on the destination: namespace { inbox = yes location = prefix = INBOX. separator = . type = private } and am getting the same error: dsync-remote(janfrode at tanso.net): Error: Can't delete mailbox directory INBOX.a: Mailbox has children, delete them first This was with a freshly created .a.b folder on source. With no messages in .a.b and also no plain .a folder on source: $ find /usr/local/atmail/users/j/a/janfrode at tanso.net/.a* /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/maildirfolder /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/cur /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/new /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/tmp /usr/local/atmail/users/j/a/janfrode at tanso.net/.a.b/dovecot-uidlist -jf From tss at iki.fi Tue Jan 3 14:15:45 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 03 Jan 2012 14:15:45 +0200 Subject: [Dovecot] Newbie: LDA Isn't Logging In-Reply-To: <1325573295.74202.YahooMailClassic@web125405.mail.ne1.yahoo.com> References: <1325573295.74202.YahooMailClassic@web125405.mail.ne1.yahoo.com> Message-ID: <1325592945.6987.70.camel@innu> On Mon, 2012-01-02 at 22:48 -0800, Michael Papet wrote: > Hi, > > I'm a newbie having some trouble getting deliver to log anything. Related to this, there are no return values unless the -d is missing. I'm using LDAP to store virtual domain and user account information. > > Test #1: /usr/lib/dovecot/deliver -e -f mpapet at yahoo.com -d zed at mailswansong.dom < bad.mail > Expected result: supposed to fail, there's no zed account via ldap lookup and supposed to get a return code per the wiki at http://wiki2.dovecot.org/LDA. Supposed to log too. > Actual result: nothing gets delivered, no return code, nothing is logged. As in return code is 0? Something's definitely wrong there then. First check that deliver at least reads the config file. Add something broken in there, such as: "foo=bar" at the beginning of dovecot.conf. Does deliver fail now? Also running deliver via strace could show something useful. 
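For example, something like this (reusing your test #1):

strace -f -o /tmp/deliver.trace /usr/lib/dovecot/deliver -e -f mpapet at yahoo.com -d zed at mailswansong.dom < bad.mail; echo $?

The trace should show which config file deliver opens, and where it tries (or doesn't try) to write the log files.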
From tss at iki.fi Tue Jan 3 14:34:59 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 03 Jan 2012 14:34:59 +0200
Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name
In-Reply-To: <20120103121222.GA30793@dibs.tanso.net>
References: <20120101195907.GA21500@dibs.tanso.net> <1325592008.6987.63.camel@innu> <20120103121222.GA30793@dibs.tanso.net>
Message-ID: <1325594099.6987.71.camel@innu>

On Tue, 2012-01-03 at 13:12 +0100, Jan-Frode Myklebust wrote:
> dsync-remote(janfrode at tanso.net): Error: Can't delete mailbox directory INBOX.a: Mailbox has children, delete them first

Oh, this happens only with dsync backup, and only with Maildir++ -> FS layout change. You can simply ignore this error, or patch with http://hg.dovecot.org/dovecot-2.0/rev/69c6d7436f7f that hides it.

From tss at iki.fi Tue Jan 3 14:36:52 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 03 Jan 2012 14:36:52 +0200
Subject: [Dovecot] lmtp-postlogin ?
In-Reply-To: <20111230130804.GA2107@dibs.tanso.net>
References: <20111230090053.GA30820@dibs.tanso.net> <16B30E6C-AE5E-44CB-8F48-66274FEAB357@iki.fi> <20111230130804.GA2107@dibs.tanso.net>
Message-ID: <1325594212.6987.73.camel@innu>

On Fri, 2011-12-30 at 14:08 +0100, Jan-Frode Myklebust wrote:
> > Maybe create a new plugin for this using notify plugin.
>
> Is there any documentation for this plugin? I've tried searching both
> this list, and the wikis.

Nope. You could look at mail-log and http://dovecot.org/patches/2.0/touch-plugin.c and write something based on them.

From janfrode at tanso.net Tue Jan 3 14:54:10 2012
From: janfrode at tanso.net (Jan-Frode Myklebust)
Date: Tue, 3 Jan 2012 13:54:10 +0100
Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name
In-Reply-To: <1325594099.6987.71.camel@innu>
References: <20120101195907.GA21500@dibs.tanso.net> <1325592008.6987.63.camel@innu> <20120103121222.GA30793@dibs.tanso.net> <1325594099.6987.71.camel@innu>
Message-ID: <20120103125410.GA2966@dibs.tanso.net>

On Tue, Jan 03, 2012 at 02:34:59PM +0200, Timo Sirainen wrote:
> On Tue, 2012-01-03 at 13:12 +0100, Jan-Frode Myklebust wrote:
> > dsync-remote(janfrode at tanso.net): Error: Can't delete mailbox directory INBOX.a: Mailbox has children, delete them first
>
> Oh, this happens only with dsync backup, and only with Maildir++ -> FS
> layout change. You can simply ignore this error, or patch with
> http://hg.dovecot.org/dovecot-2.0/rev/69c6d7436f7f that hides it.

Oh, it was so quick to fail that I didn't realize it had successfully updated the remote mailboxes :-) Thanks!

But isn't it a bug that users are allowed to create folders named .a.b, or that dovecot creates this as a folder named .a.b instead of .a/.b when the separator is "." ?

-jf

From mikko at woima.fi Tue Jan 3 16:54:11 2012
From: mikko at woima.fi (Mikko Lampikoski)
Date: Tue, 3 Jan 2012 16:54:11 +0200
Subject: [Dovecot] What is normal CPU usage of dovecot imap?
Message-ID: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi>

I got a Dual Core Intel Xeon CPU 3.00GHz, over 1000 mailboxes and almost 1 dovecot login / second (peak time). Server stats say that load is continually over 2 and cpu usage is 60%. top says that imap is making this load. Virtual users are in a mysql database and mysqld is running on another server (this server is ok).

Do I need a better CPU or is there something going on that I do not understand?
# dovecot -n # 1.1.11: /etc/dovecot/dovecot.conf # OS: Linux 2.6.32-4-pve i686 Ubuntu 9.10 nfs log_timestamp: %Y-%m-%d %H:%M:%S protocols: imap imaps pop3 pop3s listen: *, [::] ssl_ca_file: /etc/ssl/**********.crt ssl_cert_file: /etc/ssl/**********.crt ssl_key_file: /etc/ssl/**********.key ssl_key_password: ********** disable_plaintext_auth: no verbose_ssl: yes shutdown_clients: no login_dir: /var/run/dovecot/login login_executable(default): /usr/lib/dovecot/imap-login login_executable(imap): /usr/lib/dovecot/imap-login login_executable(pop3): /usr/lib/dovecot/pop3-login login_greeting_capability(default): yes login_greeting_capability(imap): yes login_greeting_capability(pop3): no login_process_size: 128 login_processes_count: 10 login_max_processes_count: 2048 mail_max_userip_connections(default): 10 mail_max_userip_connections(imap): 10 mail_max_userip_connections(pop3): 3 first_valid_uid: 99 last_valid_uid: 100 mail_privileged_group: mail mail_location: maildir:/var/vmail/%d/%n:INDEX=/var/indexes/%d/%n fsync_disable: yes mail_nfs_storage: yes mbox_write_locks: fcntl mbox_min_index_size: 4 mail_executable(default): /usr/lib/dovecot/imap mail_executable(imap): /usr/lib/dovecot/imap mail_executable(pop3): /usr/lib/dovecot/pop3 mail_process_size: 2048 mail_plugin_dir(default): /usr/lib/dovecot/modules/imap mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3 imap_client_workarounds(default): outlook-idle imap_client_workarounds(imap): outlook-idle imap_client_workarounds(pop3): pop3_client_workarounds(default): pop3_client_workarounds(imap): pop3_client_workarounds(pop3): outlook-no-nuls auth default: mechanisms: plain login cram-md5 cache_size: 1024 passdb: driver: sql args: /etc/dovecot/dovecot-sql.conf userdb: driver: static args: uid=99 gid=99 home=/var/vmail/%d/%n allow_all_users=yes socket: type: listen client: path: /var/spool/postfix/private/auth-client mode: 432 user: postfix group: postfix master: path: /var/run/dovecot/auth-master mode: 384 user: vmail From tss at iki.fi Tue Jan 3 17:08:14 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 3 Jan 2012 17:08:14 +0200 Subject: [Dovecot] Dsync fails on second sync for folders with dot in the name In-Reply-To: <20120103125410.GA2966@dibs.tanso.net> References: <20120101195907.GA21500@dibs.tanso.net> <1325592008.6987.63.camel@innu> <20120103121222.GA30793@dibs.tanso.net> <1325594099.6987.71.camel@innu> <20120103125410.GA2966@dibs.tanso.net> Message-ID: <9D352B5F-77C3-4473-92E1-9ED2AFB5FFFB@iki.fi> On 3.1.2012, at 14.54, Jan-Frode Myklebust wrote: > But isn't it a bug that users are allowed to create folders named .a.b, The folder name is "a.b", it just exists in filesystem with Maildir++ as ".a.b". > or that dovecot creates this as a folder named .a.b instead of .a/.b > when the separator is "." ? The separator is the IMAP separator, not the filesystem separator. From tss at iki.fi Tue Jan 3 17:12:34 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 3 Jan 2012 17:12:34 +0200 Subject: [Dovecot] What is normal CPU usage of dovecot imap? In-Reply-To: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi> References: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi> Message-ID: On 3.1.2012, at 16.54, Mikko Lampikoski wrote: > I got Dual Core Intel Xeon CPU 3.00GHz, over 1000 mailbox and almost 1 dovecot login / second (peak time). > Server stats says that load is continually over 2 and cpu usage is 60%. top says that imap is making this load. You mean an actual "imap" process? 
Or more than one imap process? Or something else, e.g. an "imap-login" process? If there's one long-running IMAP process eating CPU, it might have simply gone into an infinite loop, and upgrading could help.

> Virtual users are in a mysql database and mysqld is running on another server (this server is ok).
>
> Do I need a better CPU or is there something going on that I do not understand?

Your CPU usage should probably be closer to 0%.

> login_process_size: 128
> login_processes_count: 10
> login_max_processes_count: 2048

Switching to http://wiki2.dovecot.org/LoginProcess#High-performance_mode may be helpful.

> mail_nfs_storage: yes

Do you have more than one Dovecot server? This setting doesn't work reliably anyway. If you've only one server accessing mails, you can set this to "no".

From stan at hardwarefreak.com Tue Jan 3 17:20:28 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Tue, 03 Jan 2012 09:20:28 -0600
Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs)
In-Reply-To: <20120103081449.GA26269@dibs.tanso.net>
References: <20111224152050.GA3958@dibs.tanso.net> <20111229084916.GA5895@dibs.tanso.net> <4EFC6453.8020304@hardwarefreak.com> <20111230144124.GA3936@dibs.tanso.net> <4EFE5984.9080905@hardwarefreak.com> <20111231065649.GA19046@dibs.tanso.net> <4EFEBFB8.1070301@hardwarefreak.com> <20120103081449.GA26269@dibs.tanso.net>
Message-ID: <4F031CBC.60302@hardwarefreak.com>

On 1/3/2012 2:14 AM, Jan-Frode Myklebust wrote:
> On Sat, Dec 31, 2011 at 01:54:32AM -0600, Stan Hoeppner wrote:
>> Nice setup. I've mentioned GPFS for cluster use on this list before,
>> but I think you're the only operator to confirm using it. I'm sure
>> others would be interested in hearing of your first hand experience:
>> pros, cons, performance, etc. And a ball park figure on the licensing
>> costs, whether one can only use GPFS on IBM storage or if storage from
>> other vendors is allowed in the GPFS pool.
>
> I used to work for IBM, so I've been a bit uneasy about pushing GPFS too
> hard publicly, for risk of being accused of being biased. But I changed jobs in
> November, so now I'm only a satisfied customer :-)
(IBM uses GPFS to create > scale-out NAS solutions http://www-03.ibm.com/systems/storage/network/sonas/ , > which highlights a few of the features available with GPFS) > > There's no problem running GPFS on other vendors disk systems. I've used Nexsan > SATAboy earlier, for a HPC cluster. One can easily move from one disksystem to > another without downtime. That's good to know. The only FC SAN arrays I've installed/used are IBM FasTt 600 and Nexsan SataBlade/Boy. I much prefer the web management interface on the Nexsan units, much more intuitive, more flexible. The FasTt is obviously much more suitable to random IOPS workloads with its 15k FC disks vs 7.2K SATA disks in the Nexsan units (although Nexsan has offered 15K SAS disks and SSDs for a while now). > Cons: > It has it's own page cache, staticly configured. So you don't get the "all > available memory used for page caching" behaviour as you normally do on linux. Yep, that's ugly. > There is a kernel module that needs to be rebuilt on every > upgrade. It's a simple process, but it needs to be done and means we > can't just run "yum update ; reboot" to upgrade. > > % export SHARKCLONEROOT=/usr/lpp/mmfs/src > % cp /usr/lpp/mmfs/src/config/site.mcr.proto /usr/lpp/mmfs/src/config/site.mcr > % vi /usr/lpp/mmfs/src/config/site.mcr # correct GPFS_ARCH, LINUX_DISTRIBUTION and LINUX_KERNEL_VERSION > % cd /usr/lpp/mmfs/src/ ; make clean ; make World > % su - root > # export SHARKCLONEROOT=/usr/lpp/mmfs/src > # cd /usr/lpp/mmfs/src/ ; make InstallImages So is this, but it's totally expected since this is proprietary code and not in mainline. >> To this point IIRC everyone here doing clusters is using NFS, GFS, or >> OCFS. Each has its downsides, mostly because everyone is using maildir. >> NFS has locking issues with shared dovecot index files. GFS and OCFS >> have filesystem metadata performance issues. How does GPFS perform with >> your maildir workload? > > Maildir is likely a worst case type workload for filesystems. Millions > of tiny-tiny files, making all IO random, and getting minimal controller > read cache utilized (unless you can cache all active files). So I've Yep. Which is the reason I've stuck with mbox everywhere I can over the years, minor warts and all, and will be moving to mdbox at some point. IMHO maildir solved one set of problems but created a bigger problem. Many sites hailed maildir as a savior in many ways, then decried it as their user base and IO demands exceeded their storage, scrambling for budget money for fix an "unforeseen" problem, which is absolutely clear from day one. At least for anyone with more than a cursory knowledge of filesystem design and hardware performance. > concluded that our performance issues are mostly design errors (and the > fact that there were no better mail storage formats than maildir at the > time these servers were implemented). I expect moving to mdbox will > fix all our performance issues. Yeah, it should decrease FS IOPS by a couple orders or magnitude, especially if you go with large mdbox files. The larger the better. > I *think* GPFS is as good as it gets for maildir storage on clusterfs, > but have no number to back that up ... Would be very interesting if we > could somehow compare numbers for a few clusterfs'. Apparently no one (vendor) with the resources to do so has the desire to do so. > I believe our main limitation in this setup is the iops we can get from > the backend storage system. 
> It's hard to balance the IO over enough
> RAID arrays (the fs is spread over 11 RAID5 arrays of 5 disks each),
> and we're always having hotspots. Right now two arrays are doing <100 iops,
> while others are doing 4-500 iops. Would very much like to replace
> it by something smarter where we can utilize SSDs for active data and
> something slower for stale data. GPFS can manage this by itself through
> its ILM interface, but we don't have the very fast storage to put in as
> tier-1.

Obviously not news to you, balancing mail workload IO across large filesystems and wide disk farms will always be a problem, due to which users are logged in at a given moment, and the fact you can't stripe all users' small mail files across all disks. And this is true of all mailbox formats to one degree or another, obviously worst with maildir. A properly engineered XFS can get far closer to linear IO distribution across arrays than most filesystems due to its allocation group design, but it still won't be perfect.

Simply getting away from maildir, with its extraneous metadata IOs, is a huge win for decreasing clusterFS and SAN IOPs. I'm anxious to see your report on your SAN IOPs after you've converted to mdbox, especially if you go with 16/32MB or larger mdbox files.

-- Stan

From mikko at woima.fi Tue Jan 3 17:38:48 2012
From: mikko at woima.fi (Mikko Lampikoski)
Date: Tue, 3 Jan 2012 17:38:48 +0200
Subject: [Dovecot] What is normal CPU usage of dovecot imap?
In-Reply-To:
References: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi>
Message-ID: <3B6D056C-1D1E-46F4-AB56-FDD5B98BC669@woima.fi>

On 3.1.2012, at 17.12, Timo Sirainen wrote:
> On 3.1.2012, at 16.54, Mikko Lampikoski wrote:
>
>> I got a Dual Core Intel Xeon CPU 3.00GHz, over 1000 mailboxes and almost 1 dovecot login / second (peak time).
>> Server stats say that load is continually over 2 and cpu usage is 60%. top says that imap is making this load.
>
> You mean an actual "imap" process? Or more than one imap process? Or something else, e.g. an "imap-login" process? If there's one long-running IMAP process eating CPU, it might have simply gone into an infinite loop, and upgrading could help.

It is an "imap" process, and the process takes CPU for like 10-30 seconds and then the PID changes to another imap process (the process also takes 10% of memory = 150MB). Restarting dovecot does not help.

>> Virtual users are in a mysql database and mysqld is running on another server (this server is ok).
>> Do I need a better CPU or is there something going on that I do not understand?
>
> Your CPU usage should probably be closer to 0%.

I think so too, but I ran out of good ideas. If someone has lots of mails in their mailbox, can it have an effect like this?

>> login_process_size: 128
>> login_processes_count: 10
>> login_max_processes_count: 2048
>
> Switching to http://wiki2.dovecot.org/LoginProcess#High-performance_mode may be helpful.

This loses much of the security benefits, no thanks.

>> mail_nfs_storage: yes
>
> Do you have more than one Dovecot server? This setting doesn't work reliably anyway. If you've only one server accessing mails, you can set this to "no".

Trying this too, but I think it's not going to help..

From tss at iki.fi Tue Jan 3 17:44:21 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 3 Jan 2012 17:44:21 +0200
Subject: [Dovecot] What is normal CPU usage of dovecot imap?
In-Reply-To: <3B6D056C-1D1E-46F4-AB56-FDD5B98BC669@woima.fi>
References: <6FD1B169-1409-40BF-9B2F-53598B1300CB@woima.fi> <3B6D056C-1D1E-46F4-AB56-FDD5B98BC669@woima.fi>
Message-ID: <8188AE59-6646-4686-9320-F11D25A42F5D@iki.fi>

On 3.1.2012, at 17.38, Mikko Lampikoski wrote:
> On 3.1.2012, at 17.12, Timo Sirainen wrote:
>> On 3.1.2012, at 16.54, Mikko Lampikoski wrote:
>>> I got a Dual Core Intel Xeon CPU 3.00GHz, over 1000 mailboxes and almost 1 dovecot login / second (peak time).
>>> Server stats say that load is continually over 2 and cpu usage is 60%. top says that imap is making this load.
>>
>> You mean an actual "imap" process? Or more than one imap process? Or something else, e.g. an "imap-login" process? If there's one long-running IMAP process eating CPU, it might have simply gone into an infinite loop, and upgrading could help.
>
> It is an "imap" process, and the process takes CPU for like 10-30 seconds and then the PID changes to another imap process (the process also takes 10% of memory = 150MB).
> Restarting dovecot does not help.

Is the IMAP process always for the same user (or the same few users)? verbose_proctitle=yes shows the username in ps output.

> If someone has lots of mails in their mailbox, can it have an effect like this?

Possibly. maildir_very_dirty_syncs=yes is helpful with huge maildirs (I don't remember if it exists in v1.1 yet).
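If your version has them, both are plain global settings in dovecot.conf:

verbose_proctitle = yes
maildir_very_dirty_syncs = yes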
From preacher_net at gmx.net Tue Jan 3 18:47:23 2012
From: preacher_net at gmx.net (Preacher)
Date: Tue, 03 Jan 2012 17:47:23 +0100
Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration
In-Reply-To: <4F020328.7090303@hardwarefreak.com>
References: <4F01D886.6070905@gmx.net> <4F020328.7090303@hardwarefreak.com>
Message-ID: <4F03311B.8030109@gmx.net>

Actually I took a look inside the folders right after starting up and waited for two hours to let Dovecot work. Saving the whole Maildir into a tar on the same partition also took only 2 hours before. But nothing changed, and when looking at activities with top, the server was idle, dovecot not indexing. Also I wasn't able to drag new messages to the folder hierarchy. With Courier it takes no more than 5 seconds to download the headers in a folder containing more than 3.000 messages.

Stan Hoeppner schrieb:
> On 1/2/2012 10:17 AM, Preacher wrote:
> ...
>> So I force-installed the Debian 7.0 packages with 2.0.15 and finally
>> got the server running, I also restarted the whole machine to empty caches.
>> But the problem I got was that in the huge folder hierarchy the
>> downloaded headers in the individual folders disappeared, some folders
>> showed a few very old messages, some none. Also some subfolders
>> disappeared.
>> I checked this with Outlook and Thunderbird. The difference was that
>> Thunderbird shows more messages (but not all) than Outlook in some
>> folders, but also none in some others. Outlook brought up a message in
>> some cases that the connection timed out, although I set the timeout to
>> 60s.
> ...
>> Anyone a clue what's wrong here?
>
> Absolutely. What's wrong is a lack of planning, self education, and
> patience on the part of the admin.
>
> Dovecot gets its speed from its indexes. How long do you think it takes
> Dovecot to index 37GB of maildir messages, many thousands per directory,
> hundreds of directories, millions of files total? Until those indexes
> are built you will not see a complete folder tree and all kinds of stuff
> will be missing.
>
> For your education: Dovecot indexes every message and these indexes are
> the key to its speed. Normally indexing occurs during delivery when
> using deliver or lmtp, so the index updates are small and incremental,
> keeping performance high. You tried to do this and expected Dovecot to
> instantly process it all:
>
> http://www.youtube.com/watch?v=THVz5aweqYU
>
> If you don't know, that's a coal train car being dumped. 100 tons of
> coal in a few seconds. Visuals are always good teaching tools. I think
> this drives the point home rather well.

From preacher_net at gmx.net Tue Jan 3 18:50:51 2012
From: preacher_net at gmx.net (Preacher)
Date: Tue, 03 Jan 2012 17:50:51 +0100
Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration
In-Reply-To: <1325590933.6987.59.camel@innu>
References: <4F01D886.6070905@gmx.net> <1325590933.6987.59.camel@innu>
Message-ID: <4F0331EB.2000006@gmx.net>

Yes I did, I followed the guide you mentioned; it said that it found the 3 mailboxes I have set up in total, and the conversion took only a few moments. I guess the mail location was automatically set correctly, as the folder hierarchy was displayed correctly.

Timo Sirainen schrieb:
> On Mon, 2012-01-02 at 17:17 +0100, Preacher wrote:
>> So I force-installed the Debian 7.0 packages with 2.0.15 and finally
>> got the server running, I also restarted the whole machine to empty caches.
>> But the problem I got was that in the huge folder hierarchy the
>> downloaded headers in the individual folders disappeared, some folders
>> showed a few very old messages, some none. Also some subfolders disappeared.
>> I checked this with Outlook and Thunderbird. The difference was that
>> Thunderbird shows more messages (but not all) than Outlook in some
>> folders, but also none in some others. Outlook brought up a message in
>> some cases that the connection timed out, although I set the timeout to
>> 60s.
> Did you run the Courier migration script?
> http://wiki2.dovecot.org/Migration/Courier
>
> Also explicitly setting mail_location would be a good idea.

From rnabioullin at gmail.com Tue Jan 3 19:33:31 2012
From: rnabioullin at gmail.com (Ruslan Nabioullin)
Date: Tue, 03 Jan 2012 12:33:31 -0500
Subject: [Dovecot] Multiple Maildirs per Virtual User
In-Reply-To: <1325591532.6987.60.camel@innu>
References: <4F010936.7080107@gmail.com> <1325591532.6987.60.camel@innu>
Message-ID: <4F033BEB.4070103@gmail.com>

On 01/03/2012 06:52 AM, Timo Sirainen wrote:
> On Sun, 2012-01-01 at 20:32 -0500, Ruslan Nabioullin wrote:
>> How would it be possible to configure dovecot (2.0.16) in such a way
>> that it would serve several maildirs (e.g., INBOX, INBOX.Drafts,
>> INBOX.Sent, forum_email, [Gmail].Trash, etc.) per virtual user?
>>
>> I am only able to specify a single maildir, but I want all maildirs in
>> /home/my-username/mail/account1/ to be served.
>
> Sounds like you want LAYOUT=fs rather than the default LAYOUT=maildir++.
> http://wiki2.dovecot.org/MailboxFormat/Maildir#Directory_Structure

I changed /etc/dovecot/passwd to:
my-username_account1:{PLAIN}password:my-username:my-group::::userdb_mail=maildir:/home/my-username/mail/account1:LAYOUT=fs

Dovecot creates {tmp,new,cur} dirs within account1 (the root), apparently not recognizing the maildirs within the root (e.g., /home/my-username/mail/account1/INBOX/{tmp,new,cur}).

-- Ruslan Nabioullin rnabioullin at gmail.com

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 900 bytes Desc: OpenPGP digital signature URL: From tss at iki.fi Tue Jan 3 19:47:52 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 3 Jan 2012 19:47:52 +0200 Subject: [Dovecot] Multiple Maildirs per Virtual User In-Reply-To: <4F033BEB.4070103@gmail.com> References: <4F010936.7080107@gmail.com> <1325591532.6987.60.camel@innu> <4F033BEB.4070103@gmail.com> Message-ID: <7543064C-1F66-49D1-8694-4793958CCFD8@iki.fi> On 3.1.2012, at 19.33, Ruslan Nabioullin wrote: > I changed /etc/dovecot/passwd to: > my-username_account1:{PLAIN}password:my-username:my-group::::userdb_mail=maildir:/home/my-username/mail/account1:LAYOUT=fs > > Dovecot creates {tmp,new,cur} dirs within account1 (the root), > apparently not recognizing the maildirs within the root (e.g., > /home/my-username/mail/account1/INBOX/{tmp,new,cur}). Your client probably only shows subscribed folders, and none are subscribed. From stan at hardwarefreak.com Tue Jan 3 19:55:27 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Tue, 03 Jan 2012 11:55:27 -0600 Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration In-Reply-To: <4F03311B.8030109@gmx.net> References: <4F01D886.6070905@gmx.net> <4F020328.7090303@hardwarefreak.com> <4F03311B.8030109@gmx.net> Message-ID: <4F03410F.6080404@hardwarefreak.com> On 1/3/2012 10:47 AM, Preacher wrote: > Actually I took a look inside the folders right after starting up and > waited for two hours to let Dovecot work. So two hours after clicking on an IMAP folder the contents of that folder were still not displayed correctly? > Saving the whole Maildir into a tar on the same partition also took only > 2 hours before. This doesn't have any relevance. > But nothing did change and when looking at activities with top, the > server was idle, dovecot not indexing. > Also I wasn't able to drag new messages to the folder hierachy. Then something is seriously wrong. The fact that you "forced" the Wheezy Dovecot package into a Squeeze system may have something, if not everything, to do with your problem (somehow I missed this fact in your previous email). Debian testing/sid packages are compiled against newer system libraries. If you check various logs you'll likely see problems related to this. This is also why the Backports project exists--TESTING packages are compiled against the STABLE libraries so newer application revs can be used on the STABLE distribution. Currently there is no Dovecot 2.x backport for Squeeze. I would suggest you thoroughly remove the Wheezy 2.0.15 package and install the 1.2.15-7 STABLE package. Read the documentation for 1.2.x and configure it properly. Then things will likely work as they should. -- Stan From Ralf.Hildebrandt at charite.de Tue Jan 3 20:09:29 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Tue, 3 Jan 2012 19:09:29 +0100 Subject: [Dovecot] Deliver all addresses to the same mdbox:? 
Message-ID: <20120103180929.GA21651@charite.de> For archiving purposes I'm delivering all addresses to the same mdbox: like this: passdb { driver = passwd-file args = username_format=%u /etc/dovecot/passwd } userdb { driver = static args = uid=1000 gid=1000 home=/home/copymail allow_all_users=yes } Yet I'm getting this: Jan 3 19:03:27 mail postfix/lmtp[29378]: 3THjg02wfWzFvmL: to=, relay=mail.charite.de[private/dovecot-lmtp], conn_use=20, delay=323, delays=323/0/0/0, dsn=4.1.1, status=SOFTBOUNCE (host mail.charite.de[private/dovecot-lmtp] said: 550 5.1.1 <"firstname.lastname at charite.de"@backup.invalid> User doesn't exist: "firstname.lastname at charite.de"@backup.invalid (in reply to RCPT TO command)) (using soft_bounce = yes here in Postfix) In short: backup.invalid is delivered to dovecot by means of LMTP (local socket). I thought my static mapping in userdb would enable the lmtp listener to accept ALL recipients and map their $home to /home/copymail - why is that not working? -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From tss at iki.fi Tue Jan 3 20:34:11 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 3 Jan 2012 20:34:11 +0200 Subject: [Dovecot] Deliver all addresses to the same mdbox:? In-Reply-To: <20120103180929.GA21651@charite.de> References: <20120103180929.GA21651@charite.de> Message-ID: On 3.1.2012, at 20.09, Ralf Hildebrandt wrote: > For archiving purposes I'm delivering all addresses to the same mdbox: > like this: > > passdb { > driver = passwd-file > args = username_format=%u /etc/dovecot/passwd > } > > userdb { > driver = static > args = uid=1000 gid=1000 home=/home/copymail allow_all_users=yes > } allow_all_users=yes is used only when the passdb is incapable of telling if the user exists or not. > Yet I'm getting this: > > Jan 3 19:03:27 mail postfix/lmtp[29378]: 3THjg02wfWzFvmL: to=, > relay=mail.charite.de[private/dovecot-lmtp], conn_use=20, delay=323, delays=323/0/0/0, dsn=4.1.1, status=SOFTBOUNCE (host > mail.charite.de[private/dovecot-lmtp] said: 550 5.1.1 <"firstname.lastname at charite.de"@backup.invalid> User doesn't exist: "firstname.lastname at charite.de"@backup.invalid (in reply to RCPT TO > command)) Fails because user doesn't exist in passwd-file, I guess. Maybe use passdb static? If you also need authentication to work, put passdb static in protocol lmtp {} and passdb passwd-file in protocol !lmtp {} From Ralf.Hildebrandt at charite.de Tue Jan 3 20:43:38 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Tue, 3 Jan 2012 19:43:38 +0100 Subject: [Dovecot] Deliver all addresses to the same mdbox:? In-Reply-To: References: <20120103180929.GA21651@charite.de> Message-ID: <20120103184338.GC21651@charite.de> * Timo Sirainen : > On 3.1.2012, at 20.09, Ralf Hildebrandt wrote: > > > For archiving purposes I'm delivering all addresses to the same mdbox: > > like this: > > > > passdb { > > driver = passwd-file > > args = username_format=%u /etc/dovecot/passwd > > } > > > > userdb { > > driver = static > > args = uid=1000 gid=1000 home=/home/copymail allow_all_users=yes > > } > > allow_all_users=yes is used only when the passdb is incapable of telling if the user exists or not. Ah, damn :| > Fails because user doesn't exist in passwd-file, I guess. Indeed. > Maybe use passdb static?
Right now I simply solved it by using + addressing like this: Jan 3 19:42:49 mail postfix/lmtp[2728]: 3THkfd20f1zFvlF: to=, relay=mail.charite.de[private/dovecot-lmtp], delay=0.01, delays=0.01/0/0/0, dsn=2.0.0, status=sent (250 2.0.0 IHdDM9VLA0/aCwAAY73zkw Saved) Call me lazy :) > If you also need authentication to work, put passdb static in protocol > lmtp {} and passdb passwd-file in protocol !lmtp {} Ah yes, good idea. -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From djonas at vitalwerks.com Tue Jan 3 21:14:28 2012 From: djonas at vitalwerks.com (David Jonas) Date: Tue, 03 Jan 2012 11:14:28 -0800 Subject: [Dovecot] Maildir migration and uids In-Reply-To: <81E45F76-34A4-4666-9F10-7566B7BD496C@iki.fi> References: <4EF28D7B.8050601@vitalwerks.com> <81E45F76-34A4-4666-9F10-7566B7BD496C@iki.fi> Message-ID: <4F035394.8090701@vitalwerks.com> On 12/29/11 5:35 AM, Timo Sirainen wrote: > On 22.12.2011, at 3.52, David Jonas wrote: > >> I'm in the process of migrating a large number of maildirs to a 3rd >> party dovecot server (from a dovecot server). Tests have shown that >> using imap to sync the accounts doesn't preserve the uidl for pop3 access. >> >> My current attempt is to convert the maildir to mbox and add an X-UIDL >> header in the process. Run a second dovecot that serves the converted >> mbox. But dovecot's docs say, "None of these headers are sent to >> IMAP/POP3 clients when they read the mail". > > That's rather complex. Thanks, Timo. Unfortunately I don't have shell access at the new dovecot servers. They have a migration tool that doesn't keep the uids intact when I sync via imap. Looks like I'm going to have to sync twice, once with POP3 (which maintains uids) and once with imap skipping the inbox. Ugh. >> Is there any way to sync these maildirs to the new server and maintain >> the uids? > > What Dovecot versions? dsync could do this easily. You could simply install the dsync binary even if you're using Dovecot v1.x. Good idea with dsync though, I had forgotten about that. Perhaps they'll do something custom for me. > You could also log in with POP3 and get the UIDL list and write a script to add them to dovecot-uidlist. > From CMarcus at Media-Brokers.com Tue Jan 3 22:40:02 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 03 Jan 2012 15:40:02 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? Message-ID: <4F0367A2.1000807@Media-Brokers.com> Hi everyone, Was just perusing this article about how trivial it is to decrypt passwords that are stored using most (standard) encryption methods (like MD5), and was wondering - is it possible to use bcrypt with dovecot+postfix+mysql (or postgres)? -- Best regards, Charles From CMarcus at Media-Brokers.com Tue Jan 3 22:59:39 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 03 Jan 2012 15:59:39 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F0367A2.1000807@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> Message-ID: <4F036C3B.5080904@Media-Brokers.com> On 2012-01-03 3:40 PM, Charles Marcus wrote: > Hi everyone, > > Was just perusing this article about how trivial it is to decrypt > passwords that are stored using most (standard) encryption methods (like > MD5), and was wondering - is it possible to use bcrypt with > dovecot+postfix+mysql (or postgres)? Ooop... forgot the link: http://codahale.com/how-to-safely-store-a-password/ But after perusing the wiki: http://wiki2.dovecot.org/Authentication/PasswordSchemes it appears not? Timo - any chance for adding support for it? Or is that web page incorrect? -- Best regards, Charles From david at blue-labs.org Tue Jan 3 23:03:34 2012 From: david at blue-labs.org (David Ford) Date: Tue, 03 Jan 2012 16:03:34 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F036C3B.5080904@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> Message-ID: <4F036D26.9010409@blue-labs.org> md5 is deprecated, *nix has used sha1 for a while now From bill-dovecot at carpenter.org Wed Jan 4 00:10:13 2012 From: bill-dovecot at carpenter.org (WJCarpenter) Date: Tue, 03 Jan 2012 14:10:13 -0800 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F036C3B.5080904@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> Message-ID: <4F037CC5.9030900@carpenter.org> >> Was just perusing this article about how trivial it is to decrypt >> passwords that are stored using most (standard) encryption methods (like >> MD5), and was wondering - is it possible to use bcrypt with >> dovecot+postfix+mysql (or postgres)? > > Ooop... forgot the link: > > http://codahale.com/how-to-safely-store-a-password/ AFAIK, that web page is correct in a relative sense, but getting bcrypt support might not be the most urgent priority. In his description, he uses the example of passwords which are "lowercase, alphanumeric, and 6 characters long" (and in another place the example is "lowercase, alphabetic passwords which are ≤7 characters", I guess to illustrate that things have gotten faster). If you are allowing your users to create such weak passwords, using bcrypt will not save you/them. Attackers will just be wasting more of your CPU time making attempts. If they get a copy of your hashed passwords, they'll likely be wasting their own CPU time, but they have plenty of that, too. There are plenty of recommendations for what makes a good password / passphrase. If you are not already enforcing such rules (perhaps also with a lookaside to one or more of the leaked tables of passwords floating around), then IMHO that's much more urgent. (One of the best twists I read somewhere [sorry, I forget where] was to require at least one uppercase and one digit, but to not count them as fulfilling the requirement if they were used as the first or last character.) Side note, but for the sake of precision ... attackers are not literally decrypting passwords. They are guessing passwords and then performing a one-way hash to see if they guessed correctly. As a practical matter, that means that you have to ask your users to update their passwords any time you change the password storage scheme. (I don't know enough about bcrypt to know if that would be required if you wanted to simply increase the work factor.)
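A minimal sketch of what a salted, stronger scheme can look like in the dovecot+MySQL setup under discussion, assuming Dovecot 2.0's doveadm pw; the table and column names below are hypothetical, and BLF-CRYPT (bcrypt) availability depends on whether the system's crypt() supports Blowfish:

    # generate a salted SHA512-CRYPT hash; the salt differs on every run
    $ doveadm pw -s SHA512-CRYPT -p 'correct horse battery staple'
    {SHA512-CRYPT}$6$l7euDYh1Dm9Dx8Ez$...   (hash shortened here)

    -- store the whole {SCHEME}hash string in the password column
    -- (hypothetical table/column names); the {SCHEME} prefix then
    -- overrides default_pass_scheme in dovecot-sql.conf:
    UPDATE users SET password = '{SHA512-CRYPT}$6$l7euDYh1Dm9Dx8Ez$...'
     WHERE userid = 'someuser';

One trade-off worth noting: once only a one-way hash is stored, shared-secret mechanisms such as CRAM-MD5 can no longer be offered, whichever hash scheme is chosen.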
From CMarcus at Media-Brokers.com Wed Jan 4 00:27:16 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 03 Jan 2012 17:27:16 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F036D26.9010409@blue-labs.org> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F036D26.9010409@blue-labs.org> Message-ID: <4F0380C4.8040205@Media-Brokers.com> On 2012-01-03 4:03 PM, David Ford wrote: > md5 is deprecated, *nix has used sha1 for a while now That link lumps sha1 in with MD5 and others: "Why Not {MD5, SHA1, SHA256, SHA512, SHA-3, etc}?" -- Best regards, Charles From CMarcus at Media-Brokers.com Wed Jan 4 00:30:30 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 03 Jan 2012 17:30:30 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F037CC5.9030900@carpenter.org> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> Message-ID: <4F038186.3030505@Media-Brokers.com> On 2012-01-03 5:10 PM, WJCarpenter wrote: > In his description, he uses the example of passwords which are > "lowercase, alphanumeric, and 6 characters long" (and in another place > the example is "lowercase, alphabetic passwords which are ≤7 > characters", I guess to illustrate that things have gotten faster). If > you are allowing your users to create such weak passwords, using bcrypt > will not save you/them. Attackers will just be wasting more of your CPU > time making attempts. If they get a copy of your hashed passwords, > they'll likely be wasting their own CPU time, but they have plenty of > that, too. I require strong passwords of 15 characters in length. What's more, they are assigned (by me), and the user cannot change it. But, he isn't talking about brute force attacks against the server. He is talking about if someone gained access to the SQL database where the passwords are stored (as has happened countless times in the last few years), and then had the luxury of brute forcing an attack locally (on their own systems) against your password database. -- Best regards, Charles From david at blue-labs.org Wed Jan 4 00:35:14 2012 From: david at blue-labs.org (David Ford) Date: Tue, 03 Jan 2012 17:35:14 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F038186.3030505@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> Message-ID: <4F0382A2.9010105@blue-labs.org> On 01/03/2012 05:30 PM, Charles Marcus wrote: > On 2012-01-03 5:10 PM, WJCarpenter wrote: >> In his description, he uses the example of passwords which are >> "lowercase, alphanumeric, and 6 characters long" (and in another place >> the example is "lowercase, alphabetic passwords which are ≤7 >> characters", I guess to illustrate that things have gotten faster). If >> you are allowing your users to create such weak passwords, using bcrypt >> will not save you/them. Attackers will just be wasting more of your CPU >> time making attempts. If they get a copy of your hashed passwords, >> they'll likely be wasting their own CPU time, but they have plenty of >> that, too. > > I require strong passwords of 15 characters in length. What's more, > they are assigned (by me), and the user cannot change it. But, he > isn't talking about brute force attacks against the server.
He is > talking about if someone gained access to the SQL database where the > passwords are stored (as has happened countless times in the last few > years), and then had the luxury of brute forcing an attack locally (on > their own systems) against your password database. when it comes to brute force, passwords like "51k$jh#21hiaj2" are significantly weaker than "wePut85umbrellasIn2shoes". they are also considerably more difficult for humans, which makes users far more likely to write them on a sticky note and keep it handily available. From simon.brereton at buongiorno.com Wed Jan 4 00:38:36 2012 From: simon.brereton at buongiorno.com (Simon Brereton) Date: Tue, 3 Jan 2012 17:38:36 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F038186.3030505@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> Message-ID: On 3 January 2012 17:30, Charles Marcus wrote: > On 2012-01-03 5:10 PM, WJCarpenter wrote: >> >> In his description, he uses the example of passwords which are >> "lowercase, alphanumeric, and 6 characters long" (and in another place >> the example is "lowercase, alphabetic passwords which are ≤7 >> characters", I guess to illustrate that things have gotten faster). If >> you are allowing your users to create such weak passwords, using bcrypt >> will not save you/them. Attackers will just be wasting more of your CPU >> time making attempts. If they get a copy of your hashed passwords, >> they'll likely be wasting their own CPU time, but they have plenty of >> that, too. > > > I require strong passwords of 15 characters in length. What's more, they are > assigned (by me), and the user cannot change it. But, he isn't talking about > brute force attacks against the server. He is talking about if someone > gained access to the SQL database where the passwords are stored (as has > happened countless times in the last few years), and then had the luxury of > brute forcing an attack locally (on their own systems) against your password > database. 24+ would be better... http://xkcd.com/936/ Simon From dg at dguhl.org Wed Jan 4 00:48:05 2012 From: dg at dguhl.org (Dennis Guhl) Date: Tue, 3 Jan 2012 23:48:05 +0100 Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration In-Reply-To: <4F03410F.6080404@hardwarefreak.com> References: <4F01D886.6070905@gmx.net> <4F020328.7090303@hardwarefreak.com> <4F03311B.8030109@gmx.net> <4F03410F.6080404@hardwarefreak.com> Message-ID: <20120103224804.GA16434@laptop-dg.leere.eu> On Tue, Jan 03, 2012 at 11:55:27AM -0600, Stan Hoeppner wrote: [..] > I would suggest you thoroughly remove the Wheezy 2.0.15 package and Not to use the Wheezy package might be wise. > install the 1.2.15-7 STABLE package. Read the documentation for 1.2.x Alternatively you could use Stephan Bosch's repository: deb http://xi.rename-it.nl/debian/ stable-auto/dovecot-2.0 main Despite the warning at http://wiki2.dovecot.org/PrebuiltBinaries#Automatically_Built_Packages they work very stably. > and configure it properly. Then things will likely work as they should. Dennis From bill-dovecot at carpenter.org Wed Jan 4 01:12:50 2012 From: bill-dovecot at carpenter.org (WJCarpenter) Date: Tue, 03 Jan 2012 15:12:50 -0800 Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> Message-ID: <4F038B72.1000003@carpenter.org> On 1/3/2012 2:38 PM, Simon Brereton wrote: > http://xkcd.com/936/ As the saying goes, entropy ain't what it used to be. https://www.grc.com/haystack.htm However, both links actually illustrate the same point: once you get past dictionary attacks, the length of the password is the dominant factor in the strength of the password against brute force attack. From gedalya at gedalya.net Wed Jan 4 01:59:28 2012 From: gedalya at gedalya.net (Gedalya) Date: Tue, 03 Jan 2012 18:59:28 -0500 Subject: [Dovecot] Problem with huge IMAP Archive after Courier migration In-Reply-To: <20120103224804.GA16434@laptop-dg.leere.eu> References: <4F01D886.6070905@gmx.net> <4F020328.7090303@hardwarefreak.com> <4F03311B.8030109@gmx.net> <4F03410F.6080404@hardwarefreak.com> <20120103224804.GA16434@laptop-dg.leere.eu> Message-ID: <4F039660.1010903@gedalya.net> On 01/03/2012 05:48 PM, Dennis Guhl wrote: > On Tue, Jan 03, 2012 at 11:55:27AM -0600, Stan Hoeppner wrote: > > [..] > >> I would suggest you thoroughly remove the Wheezy 2.0.15 package and > Not to use the Wheezy package might be wise. > >> install the 1.2.15-7 STABLE package. Read the documentation for 1.2.x > Alternatively you could use Stephan Bosch's repository: > > deb http://xi.rename-it.nl/debian/ stable-auto/dovecot-2.0 main > > Despite the warning at > http://wiki2.dovecot.org/PrebuiltBinaries#Automatically_Built_Packages > they work very stably. > >> and configure it properly. Then things will likely work as they should. > Dennis See http://www.prato.linux.it/~mnencia/debian/dovecot-squeeze/ and http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=592959 I have the packages from this repository running in production on a squeeze system, working fine. From CMarcus at Media-Brokers.com Wed Jan 4 03:25:02 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 03 Jan 2012 20:25:02 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F038B72.1000003@carpenter.org> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> Message-ID: <4F03AA6E.30003@Media-Brokers.com> On 2012-01-03 6:12 PM, WJCarpenter wrote: > On 1/3/2012 2:38 PM, Simon Brereton wrote: >> http://xkcd.com/936/ > > As the saying goes, entropy ain't what it used to be. > > https://www.grc.com/haystack.htm > > However, both links actually illustrate the same point: once you get > past dictionary attacks, the length of the password is the dominant factor > in the strength of the password against brute force attack. I think y'all are missing the point... not sure, because I'm still not completely sure that this is saying what I think it is saying (that's why I asked)... I'm not worried about *active* brute force attacks against my server using the standard smtp or imap protocols - fail2ban takes care of those in a hurry. What I'm worried about is the worst case scenario of someone getting ahold of the entire user database of *stored* passwords, where they can then take their time and brute force them at their leisure, on *their* *own* systems, without having to hammer my server over smtp/imap and without the automated limit of *my* fail2ban getting in their way. As for people writing their passwords down...
our policy is that it is a potentially *firable* *offense* (never even encountered one case of anyone posting their password, and I'm on these systems off and on all the time) if they do post these anywhere that is not under lock and key. Also, I always set up their email clients for them (on their workstations and on their phones) - and of course tell it to remember the password, so they basically never have to enter it. -- Best regards, Charles From david at blue-labs.org Wed Jan 4 03:37:21 2012 From: david at blue-labs.org (David Ford) Date: Tue, 03 Jan 2012 20:37:21 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F03AA6E.30003@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> Message-ID: <4F03AD51.7080506@blue-labs.org> On 01/03/2012 08:25 PM, Charles Marcus wrote: > > I think y'all are missing the point... not sure, because I'm still not > completely sure that this is saying what I think it is saying (that's > why I asked)... > > I'm not worried about *active* brute force attacks against my server > using the standard smtp or imap protocols - fail2ban takes care of > those in a hurry. > > What I'm worried about is the worst case scenario of someone getting > ahold of the entire user database of *stored* passwords, where they > can then take their time and brute force them at their leisure, on > *their* *own* systems, without having to hammer my server over > smtp/imap and without the automated limit of *my* fail2ban getting in > their way. > > As for people writing their passwords down... our policy is that it is > a potentially *firable* *offense* (never even encountered one case of > anyone posting their password, and I'm on these systems off and on all > the time) if they do post these anywhere that is not under lock and > key. Also, I always set up their email clients for them (on their > workstations and on their phones) - and of course tell it to remember > the password, so they basically never have to enter it. perhaps. part of my point, along with brute force resistance, is that when security becomes onerous to the typical user, such as requiring non-repeat passwords of "10 characters including punctuation and mixed case", even stalwart policy followers start tending toward avoiding it. if anyone has a stressful job, spends a lot of time working, missing sleep, is thereby prone to memory lapse, it's almost a sure guarantee they *will* write it down/store it somewhere -- usually not in a password safe. or, they'll export their saved passwords to make a backup plain text copy, and leave it in their Desktop folder but coyly named and prefixed with a few random emails to grandma, so mr. sysadmin doesn't notice it. on a tangent, you should worry about active brute force attacks. fail2ban and iptables heuristics become meaningless when the brute forcing is done by botnets, which is more and more common than single-host attacks these days. one IP per attempt in a 10-20 minute window will probably never trigger any of these methods. From michael at orlitzky.com Wed Jan 4 03:58:51 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Tue, 03 Jan 2012 20:58:51 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F03AA6E.30003@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> Message-ID: <4F03B25B.2020309@orlitzky.com> On 01/03/2012 08:25 PM, Charles Marcus wrote: > > What I'm worried about is the worst case scenario of someone getting > ahold of the entire user database of *stored* passwords, where they can > then take their time and brute force them at their leisure, on *their* > *own* systems, without having to hammer my server over smtp/imap and > without the automated limit of *my* fail2ban getting in their way. To prevent rainbow table attacks, salt your passwords. You can make them a little bit more difficult in plenty of ways, but salt is the /solution/. > As for people writing their passwords down... our policy is that it is a > potentially *firable* *offense* (never even encountered one case of > anyone posting their password, and I'm on these systems off and on all > the time) if they do post these anywhere that is not under lock and key. > Also, I always set up their email clients for them (on their > workstations and on their phones) - and of course tell it to remember the > password, so they basically never have to enter it. You realize they're just walking around with a $400 post-it note with the password written on it, right? From bill-dovecot at carpenter.org Wed Jan 4 05:07:47 2012 From: bill-dovecot at carpenter.org (WJCarpenter) Date: Tue, 03 Jan 2012 19:07:47 -0800 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F03AA6E.30003@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> Message-ID: <4F03C283.6070005@carpenter.org> On 1/3/2012 5:25 PM, Charles Marcus wrote: > I think y'all are missing the point... not sure, because I'm still not > completely sure that this is saying what I think it is saying (that's > why I asked)... I'm sure I'm not missing the point. My comment was that password length and complexity are probably more important than bcrypt versus sha1, and you've already addressed those. Given that you use strong 15-character passwords, pretty much all hash functions are already out of reach for brute force. bcrypt is probably better in the same sense that it's harder to drive a car to Saturn than it is to drive to Mars. From list at airstreamcomm.net Wed Jan 4 08:09:39 2012 From: list at airstreamcomm.net (list at airstreamcomm.net) Date: Wed, 04 Jan 2012 00:09:39 -0600 Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs) Message-ID: <20120104060924.0E7C727659@osmtp-1.airstreamcomm.net> Great information, thank you. Could you remark on GPFS services hosting mail storage over a WAN between two geographically separated data centers? ----- Reply message ----- From: "Jan-Frode Myklebust" To: "Stan Hoeppner" Cc: "Timo Sirainen" , Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs) Date: Tue, Jan 3, 2012 2:14 am On Sat, Dec 31, 2011 at 01:54:32AM -0600, Stan Hoeppner wrote: > Nice setup. I've mentioned GPFS for cluster use on this list before, > but I think you're the only operator to confirm using it.
I'm sure > others would be interested in hearing of your first-hand experience: > pros, cons, performance, etc. And a ballpark figure on the licensing > costs, whether one can only use GPFS on IBM storage or if storage from > other vendors is allowed in the GPFS pool. I used to work for IBM, so I've been a bit uneasy about pushing GPFS too hard publicly, for risk of being accused of being biased. But I changed jobs in November, so now I'm only a satisfied customer :-) Pros: Extremely simple to configure and manage. Assuming root on all nodes can ssh freely, and port 1191/tcp is open between the nodes, these are the commands to create the cluster, create an NSD (network shared disk), and create a filesystem: # echo hostname1:manager-quorum > NodeFile # "manager" means this node can be selected as filesystem manager # echo hostname2:manager-quorum >> NodeFile # "quorum" means this node has a vote in the quorum selection # echo hostname3:manager-quorum >> NodeFile # all my nodes are usually the same, so they all have the same roles. # mmcrcluster -n NodeFile -p $(hostname) -A ### sdb1 is either a local disk on hostname1 (in which case the other nodes will access it over tcp to ### hostname1), or a SAN-disk that they can access directly over FC/iSCSI. # echo sdb1:hostname1::dataAndMetadata:: > DescFile # This disk can be used for both data and metadata # mmcrnsd -F DescFile # mmstartup -A # starts GPFS services on all nodes # mmcrfs /gpfs1 gpfs1 -F DescFile # mount /gpfs1 You can add and remove disks from the filesystem, and change most settings without downtime. You can scale out your workload by adding more nodes (SAN attached or not), and scale out your disk performance by adding more disks on the fly. (IBM uses GPFS to create scale-out NAS solutions http://www-03.ibm.com/systems/storage/network/sonas/ , which highlights a few of the features available with GPFS) There's no problem running GPFS on other vendors' disk systems. I've used Nexsan SATAboy earlier, for an HPC cluster. One can easily move from one disksystem to another without downtime. Cons: It has its own page cache, statically configured. So you don't get the "all available memory used for page caching" behaviour as you normally do on Linux. There is a kernel module that needs to be rebuilt on every upgrade. It's a simple process, but it needs to be done and means we can't just run "yum update ; reboot" to upgrade. % export SHARKCLONEROOT=/usr/lpp/mmfs/src % cp /usr/lpp/mmfs/src/config/site.mcr.proto /usr/lpp/mmfs/src/config/site.mcr % vi /usr/lpp/mmfs/src/config/site.mcr # correct GPFS_ARCH, LINUX_DISTRIBUTION and LINUX_KERNEL_VERSION % cd /usr/lpp/mmfs/src/ ; make clean ; make World % su - root # export SHARKCLONEROOT=/usr/lpp/mmfs/src # cd /usr/lpp/mmfs/src/ ; make InstallImages > > To this point IIRC everyone here doing clusters is using NFS, GFS, or > OCFS. Each has its downsides, mostly because everyone is using maildir. > NFS has locking issues with shared dovecot index files. GFS and OCFS > have filesystem metadata performance issues. How does GPFS perform with > your maildir workload? Maildir is likely a worst-case type of workload for filesystems. Millions of tiny-tiny files, making all IO random, and getting minimal controller read cache utilized (unless you can cache all active files). So I've concluded that our performance issues are mostly design errors (and the fact that there were no better mail storage formats than maildir at the time these servers were implemented).
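A minimal sketch of what such a move off maildir can look like, assuming Dovecot 2.0's dsync and an mdbox location under each user's home (the username and paths are hypothetical):

    # convert one user's mail to mdbox; dsync preserves UIDs and flags,
    # unlike an IMAP-level copy
    $ dsync -u someuser mirror mdbox:~/mdbox

    # once all users are converted, point dovecot.conf at the new format:
    mail_location = mdbox:~/mdbox

The same command can be scripted over a whole user base (for example, iterating over the output of doveadm user '*') and safely re-run, since the mirroring is incremental.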
I expect moving to mdbox will fix all our performance issues. I *think* GPFS is as good as it gets for maildir storage on clusterfs, but I have no numbers to back that up ... Would be very interesting if we could somehow compare numbers for a few clusterfs'. I believe our main limitation in this setup is the iops we can get from the backend storage system. It's hard to balance the IO over enough RAID arrays (the fs is spread over 11 RAID5 arrays of 5 disks each), and we're always having hotspots. Right now two arrays are doing <100 iops, while others are doing 4-500 iops. Would very much like to replace it by something smarter where we can utilize SSDs for active data and something slower for stale data. GPFS can manage this by itself through its ILM interface, but we don't have the very fast storage to put in as tier-1. -jf From janfrode at tanso.net Wed Jan 4 09:33:55 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 4 Jan 2012 08:33:55 +0100 Subject: [Dovecot] GPFS for mail-storage (Was: Re: Compressing existing maildirs) In-Reply-To: <20120104060924.0E7C727659@osmtp-1.airstreamcomm.net> References: <20120104060924.0E7C727659@osmtp-1.airstreamcomm.net> Message-ID: <20120104073355.GA20482@dibs.tanso.net> On Wed, Jan 04, 2012 at 12:09:39AM -0600, list at airstreamcomm.net wrote: > Could you remark on GPFS services hosting mail storage over a WAN between two geographically separated data centers? I haven't tried that, but know the theory quite well. There are 2 or 3 options: 1 - shared SAN between the data centers. Should work the same as a single data center, but you'd want to use disk quorum or a quorum node on a third site to avoid split brain. 2 - different SANs on the two sites. Disks on SAN1 would belong to failure group 1 and disks on SAN2 would belong to failure group 2. GPFS will write every block to disks in different failure groups. Nodes on location 1 will use SAN1 directly, and write to SAN2 via tcp/ip to nodes on location 2 (and vice versa). It's configurable if you want to return success when the first block is written (asynchronous replication), or if you need both replicas to be written. Ref: mmcrfs -K: http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r4.gpfs300.doc%2Fbl1adm_mmcrfs.html With asynchronous replication it will try to allocate both replicas, but if it fails you can re-establish the replication level later using "mmrestripefs". Reading will happen from a direct attached disk if possible, and over tcp/ip if there is no local replica of the needed block. Again you'll need a quorum node on a third site to avoid split brain. 3 - GPFS multi-cluster. Separate GPFS clusters on the two locations. Let them mount each other's filesystems over IP, and access disks over either SAN or IP network. Each cluster is managed locally; if one site goes down the other site also loses access to the fs. I don't have any experience with this kind of config, but believe it's quite popular to use to share fs between HPC-sites.
http://www.ibm.com/developerworks/systems/library/es-multiclustergpfs/index.html http://www.cisl.ucar.edu/hss/ssg/presentations/storage/NCAR-GPFS_Elahi.pdf -jf From Juergen.Obermann at hrz.uni-giessen.de Wed Jan 4 11:40:25 2012 From: Juergen.Obermann at hrz.uni-giessen.de (Jürgen Obermann) Date: Wed, 04 Jan 2012 10:40:25 +0100 Subject: [Dovecot] error bad file number with compressed mbox files In-Reply-To: <1325590176.6987.57.camel@innu> References: <77e69f67dbffe67a6205ed1de7d2d0df@imapproxy.hrz> <1325590176.6987.57.camel@innu> Message-ID: <20120104104025.13503j06dkxnqg08@webmail.hrz.uni-giessen.de> > On Mon, 2012-01-02 at 15:33 +0100, Jürgen Obermann wrote: > >> can dsync convert from compressed mbox to compressed mdbox format? >> >> When I use compressed mbox files, either with gzip or with bzip2, I can >> read the mails as usual, but I find the following errors in Dovecot's log >> file: >> >> imap(userxy): Error: nfs_flush_fcntl: >> fcntl(/home/hrz/userxy/Mail/mymbox.gz, F_RDLCK) failed: Bad file number >> imap(userxy): Error: nfs_flush_fcntl: >> fcntl(/home/hrz/userxy/Mail/mymbox.bz2, F_RDLCK) failed: Bad file number > > This happens because of mail_nfs_* settings. You can either ignore those > errors, or disable the settings. Those settings are useful only if you > attempt to access the same mailbox from multiple servers at the same > time, which is randomly going to fail even with those settings, so they > aren't hugely useful. > > > After removing the mail_nfs_* settings this problem went away. Thank you, Timo. Greetings, Jürgen From openbsd at e-solutions.re Wed Jan 4 15:08:35 2012 From: openbsd at e-solutions.re (Wesley M.) Date: Wed, 04 Jan 2012 17:08:35 +0400 Subject: [Dovecot] migrate dovecot files 1.2.16 to 2.0.13 (OpenBSD 5.0) Message-ID: <9183836c1dc45b710712d4985f04df81@localhost> Hi, I have a mailserver (Postfix+MySQL) on OpenBSD 4.9 with Dovecot 1.2.16, all works fine. Now I want to do the same but on OpenBSD 5.0. I am running into problems using Dovecot 2.0.13 on OpenBSD 5.0. Some tests (on the box): telnet 127.0.0.1 110 Trying 127.0.0.1... Connected to 127.0.0.1. Escape character is '^]'. Connection closed by foreign host. telnet 127.0.0.1 143 Trying 127.0.0.1... Connected to 127.0.0.1. Escape character is '^]'. Connection closed by foreign host. It seems that pop3/imap don't work. 'netstat -anf inet' tcp 0 0 *.993 *.* LISTEN tcp 0 0 *.143 *.* LISTEN tcp 0 0 *.995 *.* LISTEN tcp 0 0 *.110 *.* LISTEN Therefore, ports are open. When I use Roundcube webmail, I get errors: IMAP connection error. If someone can help me, thank you very much.
Files to migrate (already tried to modify them) : dovecot.conf / dovecot-sql.conf / and 'dovecot -n ' ###############::::::::dovecot.conf:::::::::::################################# base_dir = /var/dovecot/ protocols = imap pop3 ssl_cert = /etc/ssl/dovecotcert.pem ssl_key = /etc/ssl/private/dovecot.pem ssl_cipher_list = HIGH:MEDIUM:+TLSv1:!SSLv2:+SSLv3 disable_plaintext_auth = yes default_login_user = _dovecot default_internal_user = _dovecot login_process_per_connection = no login_process_size = 64 mail_location = maildir:/var/mailserv/mail/%d/%n first_valid_uid = 1000 mmap_disable = yes protocol imap { mail_plugins = quota imap_quota autocreate imap_client_workarounds = delay-newmail } protocol pop3 { pop3_uidl_format = %08Xv%08Xu mail_plugins = quota pop3_client_workarounds = outlook-no-nuls oe-ns-eoh } protocol lda { mail_plugins = sieve quota postmaster_address = postmaster at mailr130.localdomain sendmail_path = /usr/sbin/sendmail auth_socket_path = /var/run/dovecot-auth-master } auth default { mechanisms = plain login digest-md5 cram-md5 apop passdb { driver=sql args = /etc/dovecot/dovecot-sql.conf } userdb { driver=sql args = /etc/dovecot/dovecot-sql.conf } user = root socket listen { client { path = /var/spool/postfix/private/auth mode = 0660 user = _postfix group = _postfix } master { path = /var/run/dovecot-auth-master mode = 0600 user = _dovecot # User running Dovecot LDA group = _dovecot # Or alternatively mode 0660 + LDA user in this group } } } plugin { sieve=~/.dovecot.sieve sieve_storage=~/sieve } plugin { quota = maildir quota_rule = *:storage=5G quota_rule2 = Trash:storage=100M quota_warning = storage=95%% /usr/local/bin/quota-warning.sh 95 quota_warning2 = storage=80%% /usr/local/bin/quota-warning.sh 80 } plugin { autocreate = Trash autocreate2 = Spam autocreate3 = Sent autocreate4 = Drafts autosubscribe = Trash autosubscribe2 = Spam autosubscribe3 = Sent autosubscribe4 = Drafts } plugin { antispam_signature = X-Spam-Flag antispam_signature_missing = move # move silently without training antispam_trash = trash;Trash;Deleted Items; Deleted Messages antispam_spam = SPAM;Spam;spam;Junk;junk antispam_mail_sendmail = /usr/local/bin/sa-learn antispam_mail_sendmail_args = --username=%u antispam_mail_spam = --spam antispam_mail_notspam = --ham antispam_mail_tmpdir = /tmp } ###############::::::::dovecot-sql.conf:::::::################################## driver = mysql connect = host=localhost dbname=mail user=postfix password=postfix default_pass_scheme = PLAIN password_query = SELECT email as user, password FROM users WHERE email = '%u' user_query = SELECT id as uid, id as gid, home, concat('*:storage=', quota, 'M') AS quota_rule FROM users WHERE email = '%u' ################### dovecot -n######################################## # 2.0.13: /etc/dovecot/dovecot.conf # OS: OpenBSD 5.0 i386 ffs auth_mechanisms = plain login digest-md5 cram-md5 apop base_dir = /var/dovecot/ default_internal_user = _dovecot default_login_user = _dovecot first_valid_uid = 1000 mail_location = maildir:/var/mailserv/mail/%d/%n mmap_disable = yes passdb { args = /etc/dovecot/dovecot-sql.conf driver = sql } plugin { antispam_mail_notspam = --ham antispam_mail_sendmail = /usr/local/bin/sa-learn antispam_mail_sendmail_args = --username=%u antispam_mail_spam = --spam antispam_mail_tmpdir = /tmp antispam_signature = X-Spam-Flag antispam_signature_missing = move antispam_spam = SPAM;Spam;spam;Junk;junk antispam_trash = trash;Trash;Deleted Items; Deleted Messages autocreate = Trash autocreate2 = Spam 
autocreate3 = Sent autocreate4 = Drafts autosubscribe = Trash autosubscribe2 = Spam autosubscribe3 = Sent autosubscribe4 = Drafts quota = maildir quota_rule = *:storage=5G quota_rule2 = Trash:storage=100M quota_warning = storage=95%% /usr/local/bin/quota-warning.sh 95 quota_warning2 = storage=80%% /usr/local/bin/quota-warning.sh 80 sieve = ~/.dovecot.sieve sieve_storage = ~/sieve } protocols = imap pop3 service auth { unix_listener /var/run/dovecot-auth-master { group = _dovecot mode = 0600 user = _dovecot } unix_listener /var/spool/postfix/private/auth { group = _postfix mode = 0660 user = _postfix } user = root } service imap-login { service_count = 0 vsz_limit = 64 M } service pop3-login { service_count = 0 vsz_limit = 64 M } ssl_cert = /etc/ssl/dovecotcert.pem ssl_cipher_list = HIGH:MEDIUM:+TLSv1:!SSLv2:+SSLv3 ssl_key = /etc/ssl/private/dovecot.pem userdb { args = /etc/dovecot/dovecot-sql.conf driver = sql } protocol imap { imap_client_workarounds = delay-newmail mail_plugins = quota imap_quota autocreate } protocol pop3 { mail_plugins = quota pop3_client_workarounds = outlook-no-nuls oe-ns-eoh pop3_uidl_format = %08Xv%08Xu } protocol lda { auth_socket_path = /var/run/dovecot-auth-master mail_plugins = sieve quota postmaster_address = postmaster at mailr130.localdomain sendmail_path = /usr/sbin/sendmail } Cheers, Wesley. M www.mouedine.net From Ralf.Hildebrandt at charite.de Wed Jan 4 16:06:40 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Wed, 4 Jan 2012 15:06:40 +0100 Subject: [Dovecot] doveadm move from one user's mailbox to another user's mailbox? Message-ID: <20120104140640.GT5536@charite.de> Is something along the lines: doveadm move -u sourceuser destinationuser:/inbox search_query possible with 2.0.16? I want to move mails from a backup mailbox (which has no valid password assigned) to a "restore" mailbox (which *HAS* a password assigned to it). -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From tss at iki.fi Wed Jan 4 16:11:26 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 04 Jan 2012 16:11:26 +0200 Subject: [Dovecot] doveadm move from one user's mailbox to another user's mailbox? In-Reply-To: <20120104140640.GT5536@charite.de> References: <20120104140640.GT5536@charite.de> Message-ID: <1325686286.6987.82.camel@innu> On Wed, 2012-01-04 at 15:06 +0100, Ralf Hildebrandt wrote: > Is something along the lines: > doveadm move -u sourceuser destinationuser:/inbox search_query > possible with 2.0.16? > > I want to move mails from a backup mailbox (which has no valid > password assigned) to a "restore" mailbox (which *HAS* a password > assigned to it). I guess: doveadm import -u dest maildir:/source/Maildir "" search_query There's no direct command to move mails between users. Or you could create a shared namespace... From Ralf.Hildebrandt at charite.de Wed Jan 4 16:33:01 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Wed, 4 Jan 2012 15:33:01 +0100 Subject: [Dovecot] doveadm move from one user's mailbox to another user's mailbox?
In-Reply-To: <1325686286.6987.82.camel@innu> References: <20120104140640.GT5536@charite.de> <1325686286.6987.82.camel@innu> Message-ID: <20120104143301.GU5536@charite.de> * Timo Sirainen : > On Wed, 2012-01-04 at 15:06 +0100, Ralf Hildebrandt wrote: > > Is something along the lines: > > doveadm move -u sourceuser destinationuser:/inbox search_query > > possible with 2.0.16? > > > > I want to move mails from a backup mailbox (which has no valid > > password assigned) to a "restore" mailbox (which *HAS* a password > > assigned to it). > > I guess: > > doveadm import -u dest maildir:/source/Maildir "" search_query Yes, just the other way round. It's even better, since the MOVE operation would have REMOVED the mails from the backup. IMPORT instead only copies what it needs. -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From ludek.finstrle at pzkagis.cz Wed Jan 4 17:11:41 2012 From: ludek.finstrle at pzkagis.cz (Ludek Finstrle) Date: Wed, 4 Jan 2012 16:11:41 +0100 Subject: [Dovecot] Small LOGIN_MAX_INBUF_SIZE for GSSAPI with samba4 (AD) In-Reply-To: <1325589389.6987.55.camel@innu> References: <20120102182014.GA20872@pzkagis.cz> <1325589389.6987.55.camel@innu> Message-ID: <20120104151141.GA5755@pzkagis.cz> Hi Timo, Tue, Jan 03, 2012 at 01:16:29PM +0200, Timo Sirainen napsal(a): > On Mon, 2012-01-02 at 19:20 +0100, Ludek Finstrle wrote: > > > Jan 2 17:58:42 server dovecot: imap-login: Disconnected: Input buffer full (no auth attempts): rip=192.167.14.16, lip=192.167.14.16, secured > .. > > I fixed this problem with enlarging LOGIN_MAX_INBUF_SIZE. I also read about wrong lower/uppercase > > but it's definitely not my problem (I tried all possibilities of lower/uppercase in login). > > > > I sniffed the plain communication and the "a0000 AUTHENTICATE GSSAPI" line has around 1873 chars. > > When I enlarged the LOGIN_MAX_INBUF_SIZE to 2048 the problem disappeared and I'm now able to login > > to dovecot using gssapi in mutt client. > > There was already code that allowed 16kB SASL messages, but that didn't > work for the initial SASL response with the IMAP SASL-IR extension. > > > The simple patch I have to use is attached. > > I increased it to 4 kB: > http://hg.dovecot.org/dovecot-2.0/rev/d06061408f6d thank you very much for such a fast reaction and such a good piece of SW. Luf From bra at fsn.hu Wed Jan 4 17:19:33 2012 From: bra at fsn.hu (Attila Nagy) Date: Wed, 04 Jan 2012 16:19:33 +0100 Subject: [Dovecot] assertion failed in mail-index.c Message-ID: <4F046E05.8070000@fsn.hu> Hi, I have this: Jan 04 15:55:21 pop3(jfm47): Panic: file mail-index.c: line 257 (mail_index_keyword_lookup_or_create): assertion failed: (*keyword != '\0') Jan 04 15:55:21 master: Error: service(pop3): child 3391 killed with signal 6 (core not dumped - set service pop3 { drop_priv_before_exec=yes }) I don't know why this happened, but wouldn't the "self healing" (seen in the wiki I think :) kick in here? I mean it's even better to completely remove the index than dying and making the mailbox inaccessible.
Thanks, From tss at iki.fi Wed Jan 4 17:28:15 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 04 Jan 2012 17:28:15 +0200 Subject: [Dovecot] assertion failed in mail-index.c In-Reply-To: <4F046E05.8070000@fsn.hu> References: <4F046E05.8070000@fsn.hu> Message-ID: <1325690895.6987.88.camel@innu> On Wed, 2012-01-04 at 16:19 +0100, Attila Nagy wrote: > Hi, > > I have this: > Jan 04 15:55:21 pop3(jfm47): Panic: file mail-index.c: line 257 > (mail_index_keyword_lookup_or_create): assertion failed: (*keyword != '\0') > Jan 04 15:55:21 master: Error: service(pop3): child 3391 killed with > signal 6 (core not dumped - set service pop3 { drop_priv_before_exec=yes }) > I don't know why this happened, but wouldn't the "self healing" (seen > in the wiki I think :) kick in here? > I mean it's even better to completely remove the index than dying and > making the mailbox inaccessible. See if http://hg.dovecot.org/dovecot-2.0/raw-rev/5ef791398c8c helps. If not, I'd need a gdb backtrace to find out what is causing it: http://dovecot.org/bugreport.html From sottilette at rfx.it Wed Jan 4 19:08:52 2012 From: sottilette at rfx.it (sottilette at rfx.it) Date: Wed, 4 Jan 2012 18:08:52 +0100 (CET) Subject: [Dovecot] POP3 problems Message-ID: Migrated a 1.0.2 server to 2.0.16 (same old box). IMAP seems to be working OK. POP3 gives problems with some clients (Outlook 2010 and Thunderbird reported). Seems to be an authentication problem. Below my doveconf -n (debug enabled, but no answer found to the problems). Any hints? Thanks, P. # doveconf -n # 2.0.16: /etc/dovecot/dovecot.conf doveconf: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:791: add auth_ prefix to all settings inside auth {} and remove the auth {} section completely doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:864: passdb {} has been replaced by passdb { driver= } doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:935: userdb passwd {} has been replaced by userdb { driver=passwd } doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:998: auth_user has been replaced by service auth { user } doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:1131: ssl_disable has been renamed to ssl # OS: Linux 2.6.9-42.0.10.ELsmp i686 CentOS release 4.9 (Final) auth_debug = yes auth_debug_passwords = yes auth_mechanisms = plain login auth_verbose = yes disable_plaintext_auth = no info_log_path = /var/log/mail/dovecot.info.log listen = * log_path = /var/log/mail/dovecot.log mail_full_filesystem_access = yes mail_location = mbox:~/:INBOX=/var/mail/%u mbox_read_locks = dotlock fcntl passdb { driver = pam } protocols = pop3 imap service auth { user = root } ssl = no ssl_cert = /etc/pki/dovecot/certs/dovecot.pem ssl_key = /etc/pki/dovecot/private/dovecot.pem userdb { driver = passwd } userdb { driver = passwd } protocol lda { postmaster_address = postmaster at example.com } From wgillespie+dovecot at es2eng.com Wed Jan 4 19:16:08 2012 From: wgillespie+dovecot at es2eng.com (Willie Gillespie) Date: Wed, 04 Jan 2012 10:16:08 -0700 Subject: [Dovecot] POP3 problems In-Reply-To: References: Message-ID: <4F048958.5070208@es2eng.com> On 01/04/2012 10:08 AM, sottilette at rfx.it wrote: > Migrated a 1.0.2 server to 2.0.16 (same old box).
Some of the configuration settings changed between 1.x and 2.x. > doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:791: add auth_ prefix to all settings inside auth {} and remove the auth {} section completely > doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:864: passdb {} has been replaced by passdb { driver= } > doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:935: userdb passwd {} has been replaced by userdb { driver=passwd } > doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:998: auth_user has been replaced by service auth { user } > doveconf: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:1131: ssl_disable has been renamed to ssl You'll probably want to make sure everything is correct as per a 2.x config. From tss at iki.fi Wed Jan 4 19:24:28 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 4 Jan 2012 19:24:28 +0200 Subject: [Dovecot] POP3 problems In-Reply-To: References: Message-ID: <55076E79-BD81-455B-BD68-9ABCFE53ED22@iki.fi> On 4.1.2012, at 19.08, sottilette at rfx.it wrote: > Migrated a 1.0.2 server to 2.0.16 (same old box). > IMAP seems to be working OK. > POP3 gives problems with some clients (Outlook 2010 and Thunderbird reported). > Seems to be an authentication problem. > Below my doveconf -n (debug enabled, but no answer found to the problems). What do the logs say when a client logs in? The debug logs should tell everything. > doveconf: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf You should do this and replace your old dovecot.conf with the new generated one. > userdb { > driver = passwd > } > userdb { > driver = passwd > } Also remove the duplicated userdb passwd. From sottilette at rfx.it Wed Jan 4 20:11:47 2012 From: sottilette at rfx.it (sottilette at rfx.it) Date: Wed, 4 Jan 2012 19:11:47 +0100 (CET) Subject: [Dovecot] POP3 problems In-Reply-To: <55076E79-BD81-455B-BD68-9ABCFE53ED22@iki.fi> References: <55076E79-BD81-455B-BD68-9ABCFE53ED22@iki.fi> Message-ID: On Wed, 4 Jan 2012, Timo Sirainen wrote: >> Migrated a 1.0.2 server to 2.0.16 (same old box). >> IMAP seems to be working OK. >> POP3 gives problems with some clients (Outlook 2010 and Thunderbird reported). >> Seems to be an authentication problem. >> Below my doveconf -n (debug enabled, but no answer found to the problems). > > What do the logs say when a client logs in? The debug logs should tell > everything. Yes, but my problem is that this is a production server with a really fast-growing log, so (with my limited dovecot skills) I have some difficulty selecting interesting rows from it (I hoped this period was less busy, but my customers don't have the same idea ... ;-) ) Thanks for hints in selecting interesting rows. >> doveconf: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf > > You should do this and replace your old dovecot.conf with the new generated one. > >> userdb { >> driver = passwd >> } >> userdb { >> driver = passwd >> } > > Also remove the duplicated userdb passwd. This was an experimental config manually derived from the old 1.0.2 one (mix of old working and new). If I replace it with a new config (below), authentication seems OK, but fetching mail from the client is very slow (compared with old 1.0.2). Thanks for your very fast support ;-) P.
# doveconf -n # 2.0.16: /etc/dovecot/dovecot.conf # OS: Linux 2.6.9-42.0.10.ELsmp i686 CentOS release 4.9 (Final) auth_mechanisms = plain login disable_plaintext_auth = no info_log_path = /var/log/mail/dovecot.info.log log_path = /var/log/mail/dovecot.log mail_full_filesystem_access = yes mail_location = mbox:~/:INBOX=/var/mail/%u mbox_read_locks = dotlock fcntl passdb { driver = pam } protocols = imap pop3 ssl_cert = /etc/pki/dovecot/certs/dovecot.pem ssl_key = /etc/pki/dovecot/private/dovecot.pem userdb { driver = passwd } protocol pop3 { pop3_uidl_format = %08Xu%08Xv } References: <55076E79-BD81-455B-BD68-9ABCFE53ED22@iki.fi> Message-ID: <4F04B6B2.1030903@enas.net> Am 04.01.2012 19:11, schrieb sottilette at rfx.it: > On Wed, 4 Jan 2012, Timo Sirainen wrote: > >>> Migrated a 1.0.2 server to 2.0.16 (same old box). >>> IMAP seems to be working OK. >>> POP3 gives problems with some clients (Outlook 2010 and Thunderbird >>> reported). >>> Seems to be an authentication problem. >>> Below my doveconf -n (debug enabled, but no answer found to the >>> problems). >> >> What do the logs say when a client logs in? The debug logs should >> tell everything. > > Yes, but my problem is that this is a production server with a really > fast-growing log, so (with my limited dovecot skills) I have some > difficulty selecting interesting rows from it (I hoped this period was > less busy, but my customers don't have the same idea ... ;-) ) > Thanks for hints in selecting interesting rows. Try to run "tail -f $MAILLOG | grep $USERNAME" until the user logs in and tries to fetch his emails. $MAILLOG = logfile to which dovecot logs all info $USERNAME = Username of your client which has the problems > >>> doveconf: Warning: NOTE: You can get a new clean config file with: >>> doveconf -n > dovecot-new.conf >> >> You should do this and replace your old dovecot.conf with the new >> generated one. >> >>> userdb { >>> driver = passwd >>> } >>> userdb { >>> driver = passwd >>> } >> >> Also remove the duplicated userdb passwd. > > > This was an experimental config manually derived from the old 1.0.2 one (mix > of old working and new). > > If I replace it with a new config (below), authentication seems OK, > but fetching mail from the client is very slow (compared with old 1.0.2). > > Thanks for your very fast support ;-) > > P. > > > > # doveconf -n > # 2.0.16: /etc/dovecot/dovecot.conf > # OS: Linux 2.6.9-42.0.10.ELsmp i686 CentOS release 4.9 (Final) > auth_mechanisms = plain login > disable_plaintext_auth = no > info_log_path = /var/log/mail/dovecot.info.log > log_path = /var/log/mail/dovecot.log > mail_full_filesystem_access = yes > mail_location = mbox:~/:INBOX=/var/mail/%u > mbox_read_locks = dotlock fcntl > passdb { > driver = pam > } > protocols = imap pop3 > ssl_cert = /etc/pki/dovecot/certs/dovecot.pem > ssl_key = /etc/pki/dovecot/private/dovecot.pem > userdb { > driver = passwd > } > protocol pop3 { > pop3_uidl_format = %08Xu%08Xv > } > From dmiller at amfes.com Thu Jan 5 02:24:40 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Wed, 04 Jan 2012 16:24:40 -0800 Subject: [Dovecot] Possible mdbox corruption Message-ID: I thought I had cleared out the corruption I had before - perhaps I was mistaken. What steps should I take to help locate these issues? Currently using 2.1rc1.
From dmiller at amfes.com Thu Jan 5 02:24:40 2012
From: dmiller at amfes.com (Daniel L. Miller)
Date: Wed, 04 Jan 2012 16:24:40 -0800
Subject: [Dovecot] Possible mdbox corruption
Message-ID:

I thought I had cleared out the corruption I had before - perhaps I was
mistaken. What steps should I take to help locate these issues?
Currently using 2.1rc1.

I see the following errors in my logs, including out-of-memory and
message-size issues (at 15:30):

Jan 4 05:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Error: Raw backtrace:
/usr/local/lib/dovecot/libdovecot.so.0(+0x3ed0a) [0x7f6e17cbfd0a] ->
/usr/local/lib/dovecot/libdovecot.so.0(+0x3ed56) [0x7f6e17cbfd56] ->
/usr/local/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7f6e17c98d08] ->
/usr/local/lib/dovecot/libdovecot.so.0(+0x4f310) [0x7f6e17cd0310] ->
/usr/local/lib/dovecot/libdovecot.so.0(+0x3b965) [0x7f6e17cbc965] ->
/usr/local/lib/dovecot/libdovecot.so.0(buffer_write+0x7c) [0x7f6e17cbd0ec] ->
/usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3292) [0x7f6e164b7292] ->
/usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3a97) [0x7f6e164b7a97] ->
/usr/local/lib/dovecot/lib20_fts_plugin.so(fts_backend_update_set_build_key+0x2c) [0x7f6e166c4abc] ->
/usr/local/lib/dovecot/lib20_fts_plugin.so(fts_build_mail+0x2d1) [0x7f6e166c5561] ->
/usr/local/lib/dovecot/lib20_fts_plugin.so(+0xc630) [0x7f6e166ca630] ->
dovecot/indexer-worker [user1 at domain.com Sent - 5500/7510]() [0x40245f] ->
dovecot/indexer-worker [user1 at domain.com Sent - 5500/7510]() [0x4027dd] ->
/usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7f6e17ccc0f6] ->
/usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f6e17ccd17f] ->
/usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f6e17ccc098] ->
/usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f6e17cb9123] ->
dovecot/indexer-worker [user1 at domain.com Sent - 5500/7510](main+0x109) [0x401f29] ->
/lib/libc.so.6(__libc_start_main+0xfe) [0x7f6e1791cd8e] ->
dovecot/indexer-worker [user1 at domain.com Sent - 5500/7510]() [0x401d19]
Jan 4 05:17:17 bubba dovecot: indexer: Error: Indexer worker disconnected, discarding 1 requests for user1 at domain.com
Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it))

[The same failure then repeated every hour, from 06:17 through 16:17.
Each occurrence logged "indexer-worker(user1 at domain.com): Fatal:
pool_system_realloc(134217728): Out of memory" followed by a backtrace
identical to the one above except for the load addresses, the worker
PID (10896 up through 22927) and the indexing progress counter in the
process title ("Sent - 5500/7510" rising to "Sent - 5500/7517"), and
then the same "Indexer worker disconnected" and "returned error 83
(Out of memory (vsz_limit=256 MB, you may need to increase it))" lines.]

At 15:30 the cache/size errors appeared for another user:

Jan 4 15:30:48 bubba dovecot: imap(user2 at domain.com): Error: Cached message size smaller than expected (822 < 1493)
Jan 4 15:30:48 bubba dovecot: imap(user2 at domain.com): Error: Corrupted index cache file /var/mail/amfes.com/lmiller/mdbox/mailboxes/Sent/dbox-Mails/dovecot.index.cache: Broken physical size for mail UID 1786
Jan 4 15:30:48 bubba dovecot: imap(user2 at domain.com): Error: read(/var/mail/amfes.com/lmiller/mdbox/storage/m.208) failed: Input/output error (FETCH for mailbox Sent UID 1786)

The last round:

Jan 4 16:17:20 bubba dovecot: master: Error: service(indexer-worker): child 22927 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it))
Jan 4 16:17:20 bubba dovecot: indexer: Error: Indexer worker disconnected, discarding 1 requests for user1 at domain.com

--
Daniel
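The error text itself suggests the immediate workaround: raise the indexer-worker's address-space limit so fts-solr can buffer large messages while indexing. A minimal sketch of that change (the 1 GB figure is an example, not a value recommended anywhere in this thread):

  service indexer-worker {
    # default vsz_limit is 256 MB; raise it if fts indexing of large
    # mailboxes keeps hitting "Out of memory (vsz_limit=256 MB ...)"
    vsz_limit = 1G
  }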
From user+dovecot at localhost.localdomain.org Thu Jan 5 03:19:37 2012
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Date: Thu, 05 Jan 2012 02:19:37 +0100
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F0367A2.1000807@Media-Brokers.com>
References: <4F0367A2.1000807@Media-Brokers.com>
Message-ID: <4F04FAA9.3020908@localhost.localdomain.org>

On 01/03/2012 09:40 PM Charles Marcus wrote:
> Hi everyone,
>
> Was just perusing this article about how trivial it is to decrypt
> passwords that are stored using most (standard) encryption methods
> (like MD5), and was wondering - is it possible to use bcrypt with
> dovecot+postfix+mysql (or postgres)?

Yes, it is possible to use bcrypt with Dovecot. Currently you would
have to write your own password-scheme plugin. The bcrypt algorithm is
described at http://en.wikipedia.org/wiki/Bcrypt.

If you are using Dovecot >= 2.0, 'doveadm pw' supports these schemes:
  *BSD: Blowfish-Crypt
  Linux (since glibc 2.7): SHA-256-Crypt and SHA-512-Crypt
Some distributions have also added support for Blowfish-Crypt.
See also: doveadm-pw(1)

If you are using Dovecot < 2.0 you can also use any of the algorithms
supported by your system's libc. But then you have to prefix the hashes
with {CRYPT} - not {{BLF,SHA256,SHA512}-CRYPT}.

Regards,
Pascal
--
The trapper recommends today: deadbeef.1200501 at localdomain.org
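A quick sketch of what Pascal describes, on a Dovecot >= 2.0 box. The hash output is truncated here for illustration, and BLF-CRYPT only works where the system crypt() supports Blowfish, as he notes:

  $ doveadm pw -s SHA512-CRYPT
  Enter new password:
  Retype new password:
  {SHA512-CRYPT}$6$...

  $ doveadm pw -s BLF-CRYPT    # *BSD, or Linux distros that added Blowfish
  {BLF-CRYPT}$2a$...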
From noel.butler at ausics.net Thu Jan 5 03:59:12 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 11:59:12 +1000
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F03B25B.2020309@orlitzky.com>
Message-ID: <1325728752.9555.8.camel@tardis>

On Tue, 2012-01-03 at 20:58 -0500, Michael Orlitzky wrote:

> To prevent rainbow table attacks, salt your passwords. You can make
> them a little bit more difficult in plenty of ways, but salt is the
> /solution/.

Agreed... We use Crypt::PasswdMD5 - unix_md5_crypt() - for all general
password storage including mail/ftp etc, except for web, where we need
to use apache_md5_crypt().

-------------- next part --------------
A non-text attachment was scrubbed... (signature.asc, application/pgp-signature, 490 bytes)

From patrickdk at patrickdk.com Thu Jan 5 04:06:44 2012
From: patrickdk at patrickdk.com (Patrick Domack)
Date: Wed, 04 Jan 2012 21:06:44 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325728752.9555.8.camel@tardis>
Message-ID: <20120104210644.Horde.YEJENpLnE6FPBQW0C1KEd8A@kishi.patrickdk.com>

Quoting Noel Butler :

> On Tue, 2012-01-03 at 20:58 -0500, Michael Orlitzky wrote:
>> To prevent rainbow table attacks, salt your passwords. You can make
>> them a little bit more difficult in plenty of ways, but salt is the
>> /solution/.
>
> Agreed... We use Crypt::PasswdMD5 - unix_md5_crypt() - for all general
> password storage including mail/ftp etc, except for web, where we need
> to use apache_md5_crypt().

But still, the results are all the same: if they get the hash, it can
be broken, given time. Using more CPU-expensive methods makes it take
longer (like adding salt, or a more complex hash), but the end result
is they will have it if they want it.

The only solution is to use two-factor authentication, or rotate your
passwords quicker than they can be broken.

From user+dovecot at localhost.localdomain.org Thu Jan 5 04:26:59 2012
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Date: Thu, 05 Jan 2012 03:26:59 +0100
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325728752.9555.8.camel@tardis>
Message-ID: <4F050A73.7090300@localhost.localdomain.org>

On 01/05/2012 02:59 AM Noel Butler wrote:
> We use Crypt::PasswdMD5 - unix_md5_crypt() - for all general password
> storage including mail/ftp etc, except for web, where we need to use
> apache_md5_crypt().

Huh, why do you need to store passwords in Apache's md5 crypt() format?

,--[ Apache config ]--
| AuthType Basic
| AuthName "bla ?"
| AuthBasicProvider dbm
| AuthDBMUserFile /path/2/.htpasswd
| Require valid-user
| Order allow,deny
| Allow from 203.0.113.0/24 2001:db8::/32
| Satisfy any
`--

,--[ stdin/stdout ]--
| user at localhost ~ $ python
| Python 2.5.4 (r254:67916, Feb 17 2009, 20:16:45)
| [GCC 4.3.3] on linux2
| Type "help", "copyright", "credits" or "license" for more information.
| >>> import anydbm
| >>> dbm = anydbm.open('/path/2/.htpasswd')
| >>> dbm['user']
| '$6$Rn6L.3hT2x6dnX0t$d0/Tx.Ps3KSRxxm.ggFBYqum54/k8JmDzUcpoCXre88cBEXK8WB.Vdb1YzN.8fOvz3fJU4uLgW0/AlTiB9Ui2.::Real Name'
| >>>
`--

Regards,
Pascal
--
The trapper recommends today: deadbeef.1200503 at localdomain.org
From noel.butler at ausics.net Thu Jan 5 04:31:37 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 12:31:37 +1000
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <20120104210644.Horde.YEJENpLnE6FPBQW0C1KEd8A@kishi.patrickdk.com>
Message-ID: <1325730697.9555.15.camel@tardis>

On Wed, 2012-01-04 at 21:06 -0500, Patrick Domack wrote:

> But still, the results are all the same: if they get the hash, it can
> be broken, given time. Using more CPU-expensive methods makes it take
> longer (like adding salt, or a more complex hash), but the end result
> is they will have it if they want it.
>
> The only solution is to use two-factor authentication, or rotate your
> passwords quicker than they can be broken.

Yup, anything can be broken, given time and resources, no matter what.
But using crypted MD5 is better than using plain MD5 (as sadly way too
many do) and having easy rainbow attacks succeed in mere seconds. No
matter how good your database security is, always assume the worst:
too many think that a DB compromise just can't happen to them, and as
Murphy's law shows, they're usually the ones it happens to.

-------------- next part --------------
A non-text attachment was scrubbed... (signature.asc, application/pgp-signature, 490 bytes)

From noel.butler at ausics.net Thu Jan 5 04:36:38 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 12:36:38 +1000
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F050A73.7090300@localhost.localdomain.org>
Message-ID: <1325730998.9555.21.camel@tardis>

On Thu, 2012-01-05 at 03:26 +0100, Pascal Volk wrote:
> On 01/05/2012 02:59 AM Noel Butler wrote:
>> We use Crypt::PasswdMD5 - unix_md5_crypt() - for all general password
>> storage including mail/ftp etc, except for web, where we need to use
>> apache_md5_crypt().
>
> Huh, why do you need to store passwords in Apache's md5 crypt() format?

Because with multiple servers, we store them all in (replicated)
MySQL :) (the same with postfix/dovecot). And as I'm sure you are
aware, Apache does not understand standard crypted MD5, hence why
there is the second option of apache_md5_crypt().

> ,--[ Apache config ]--
> | AuthType Basic
> | AuthName "bla ?"
> | AuthBasicProvider dbm
> | AuthDBMUserFile /path/2/.htpasswd
> | Require valid-user
> | Order allow,deny
> | Allow from 203.0.113.0/24 2001:db8::/32
> | Satisfy any
> `--

-------------- next part --------------
Two non-text attachments were scrubbed... (face-smile.png, 873 bytes; signature.asc, application/pgp-signature, 490 bytes)
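The incompatibility Noel is pointing at can be seen from the shell. This is a sketch for illustration: the hash values are the ones quoted later in this thread, the htpasswd invocation is an assumption, and a real htpasswd run picks a random salt rather than the one shown:

  # Standard crypted MD5 ($1$ prefix), what unix_md5_crypt() produces:
  $ openssl passwd -1 -salt e3a.f3uW foobartilly
  $1$e3a.f3uW$SYRQiMlEhC5XlnSxtxiNC/

  # Apache's own MD5 variant ($apr1$ prefix), what apache_md5_crypt() produces:
  $ htpasswd -nbm someuser foobartilly
  someuser:$apr1$yKxk.DrQ$ybcmM8mC1qD5t5FvoY9820

Same password, same MD5 building block, but the two encodings are not interchangeable, which is why a web-facing password store may need the $apr1$ form.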
From user+dovecot at localhost.localdomain.org Thu Jan 5 05:05:53 2012
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Date: Thu, 05 Jan 2012 04:05:53 +0100
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325730998.9555.21.camel@tardis>
Message-ID: <4F051391.404@localhost.localdomain.org>

On 01/05/2012 03:36 AM Noel Butler wrote:
> Because with multiple servers, we store them all in (replicated)
> MySQL :) (the same with postfix/dovecot). And as I'm sure you are
> aware, Apache does not understand standard crypted MD5, hence why
> there is the second option of apache_md5_crypt().

Oh, let me guess: you are using Windows, Netware or TPF as the OS for
your web servers? ;-)

man htpasswd | grep -- '-d '
  -d  Use crypt() encryption for passwords. This is not supported by
      the httpd server on Windows and Netware and TPF.

As you may have seen in my previous mail, the password is generated
using crypt(). HTTP authentication works with that password hash, even
with the httpd from the ASF.

Regards,
Pascal
--
The trapper recommends today: cafefeed.1200504 at localdomain.org

From david at blue-labs.org Thu Jan 5 05:16:15 2012
From: david at blue-labs.org (David Ford)
Date: Wed, 04 Jan 2012 22:16:15 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325730998.9555.21.camel@tardis>
Message-ID: <4F0515FF.9050101@blue-labs.org>

> Because with multiple servers, we store them all in (replicated)
> MySQL :) (the same with postfix/dovecot). And as I'm sure you are
> aware, Apache does not understand standard crypted MD5, hence why
> there is the second option of apache_md5_crypt().

With multiple servers, we use PAM & NSS with a replicated LDAP backend.
This serves all auth requests for all services, and no service cares
whether it is SHA, MD5, or a crypt method.

-d
From noel.butler at ausics.net Thu Jan 5 09:19:10 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 17:19:10 +1000
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F051391.404@localhost.localdomain.org>
Message-ID: <1325747950.5349.31.camel@tardis>

On Thu, 2012-01-05 at 04:05 +0100, Pascal Volk wrote:
> Oh, let me guess: you are using Windows, Netware or TPF as the OS for
> your web servers? ;-)
>
> man htpasswd | grep -- '-d '
>   -d  Use crypt() encryption for passwords. This is not supported by
>       the httpd server on Windows and Netware and TPF.
>
> As you may have seen in my previous mail, the password is generated
> using crypt(). HTTP authentication works with that password hash, even
> with the httpd from the ASF.

I think you need to do some homework, and although I now have 3.25 days
of holidays remaining, I don't intend to waste them educating anybody,
hehe.

Assuming you even know what I'm talking about, which I suspect you
don't, since you keep using console commands and things like htpasswd,
which does not write to a MySQL db: you don't seem to have comprehended
that I do not work with flat files nor locally, so that is irrelevant.
I use Perl scripts for all systems management, so I hope you are not
going to suggest that I should make a system call when I can do it
natively in Perl.

But please, by all means, create a MySQL db using a system-crypted MD5
password. I'll even help ya: openssl passwd -1 foobartilly

$1$e3a.f3uW$SYRQiMlEhC5XlnSxtxiNC/

Pop the entry into the db and go for your life trying to authenticate.
And when you've gone through half a bottle of bourbon trying to figure
out why it's not working, try the Apache crypted MD5 version:
$apr1$yKxk.DrQ$ybcmM8mC1qD5t5FvoY9820

If you stop and think about what I've said, you just might wake up to
what I've been saying.

Cheers

PS: me use windaz? wash your bloody mouth out mister! ;)
(Slackware all the way)

-------------- next part --------------
Two non-text attachments were scrubbed... (face-wink.png, 876 bytes; signature.asc, application/pgp-signature, 490 bytes)

From noel.butler at ausics.net Thu Jan 5 09:28:10 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Thu, 05 Jan 2012 17:28:10 +1000
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F0515FF.9050101@blue-labs.org>
Message-ID: <1325748490.5349.37.camel@tardis>

On Wed, 2012-01-04 at 22:16 -0500, David Ford wrote:
> With multiple servers, we use PAM & NSS with a replicated LDAP backend.

Publicly accessible mode :P oh don't start me on that, but luckily I'm
not subjected to its dangers... and telling Pascal about bourbon made
me realise it's time to head out for the last couple of nights of
freedom and have a few.

-------------- next part --------------
Two non-text attachments were scrubbed... (face-raspberry.png, 865 bytes; signature.asc, application/pgp-signature, 490 bytes)

From openbsd at e-solutions.re Thu Jan 5 09:45:06 2012
From: openbsd at e-solutions.re (Wesley M.)
Date: Thu, 05 Jan 2012 11:45:06 +0400
Subject: [Dovecot] dovecot-lda error
Message-ID:

Hi,

I use Dovecot 2.0.13 on OpenBSD 5.0. When I try to send emails I have
the following error in /var/log/maillog:
Jan 5 11:23:49 mail50 postfix/pipe[29423]: D951842244C: to=, relay=dovecot, delay=0.02, delays=0.01/0/0/0.01, dsn=5.3.0, status=bounced (command line usage error. Command output: deliver: unknown option -- n Usage: dovecot-lda [-c ] [-a ] [-d ] [-p ] [-f ] [-m ] [-e] [-k] )
Jan 5 11:23:49 mail50 postfix/qmgr[13787]: D951842244C: removed

In my /etc/postfix/master.cf:

# Dovecot LDA
dovecot   unix  -       n       n       -       -       pipe
  flags=ODR user=_dovecot:_dovecot
  argv=/usr/local/libexec/dovecot/deliver -f ${sender} -d ${user}@${nexthop} -n -m ${extension}

How can I resolve that? Thank you very much for your replies.

Cheers,

Wesley.

From e-frog at gmx.de Thu Jan 5 10:39:50 2012
From: e-frog at gmx.de (e-frog)
Date: Thu, 05 Jan 2012 09:39:50 +0100
Subject: [Dovecot] dovecot-lda error
In-Reply-To:
References:
Message-ID: <4F0561D6.4020300@gmx.de>

On 05.01.2012 08:45, Wesley M. wrote:
> I use Dovecot 2.0.13 on OpenBSD 5.0. When I try to send emails I have
> the following error in /var/log/maillog:
>
> Jan 5 11:23:49 mail50 postfix/pipe[29423]: D951842244C: to=,
> relay=dovecot, dsn=5.3.0, status=bounced (command line usage error.
> Command output: deliver: unknown option -- n ...)

Look at the bottom of this page: http://wiki2.dovecot.org/Upgrading/2.0
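For the archives: the change behind this error, as I understand the 2.0 upgrade notes, is that the LDA's -n option (don't autocreate mailboxes) was removed; autocreation is now controlled by the lda_mailbox_autocreate setting instead. A sketch of the corrected master.cf entry under that assumption - the log above shows the deliver binary itself still runs, so only the flag needs to go:

  # Dovecot LDA (2.0 style: no -n flag)
  dovecot   unix  -       n       n       -       -       pipe
    flags=ODR user=_dovecot:_dovecot
    argv=/usr/local/libexec/dovecot/deliver -f ${sender} -d ${user}@${nexthop} -m ${extension}

If the old 1.x default (autocreate) is wanted, the Dovecot-side setting would be:

  lda_mailbox_autocreate = yes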
From CMarcus at Media-Brokers.com Thu Jan 5 13:24:26 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 05 Jan 2012 06:24:26 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F03AD51.7080506@blue-labs.org>
Message-ID: <4F05886A.4080907@Media-Brokers.com>

On 2012-01-03 8:37 PM, David Ford wrote:
> part of my point, along that of brute-force resistance, is that when
> security becomes onerous to the typical user, such as requiring
> non-repeat passwords of "10 characters including punctuation and mixed
> case", even stalwart policy followers start tending toward avoiding it.

Our policy is that we also don't force password changes unless/until
there is a reason (an account is hacked/abused). I've been managing
this mail system for 11+ years now, and this has *never* happened
(knock wood). I'm not saying we're immune, or that it can never happen;
I'm simply saying it has never happened, so our policy is working as
far as I'm concerned.

> if anyone has a stressful job, spends a lot of time working, missing
> sleep, and is thereby prone to memory lapse, it's almost a sure
> guarantee they *will* write it down/store it somewhere -- usually not
> in a password safe.

Again - there is no *need* for them to write it down. Once their
workstation/home computer/phone is set up, it remembers the password
for them.

> or, they'll export their saved passwords to make a backup plain-text
> copy, and leave it in their Desktop folder but coyly named and prefixed
> with a few random emails to grandma, so mr. sysadmin doesn't notice it.

And if I don't notice it, no one else will either, most likely. There
is *no* perfect way, but ours works and has been working for 11+ years.

> on a tangent, you should worry about active brute force attacks.
> fail2ban and iptables heuristics become meaningless when the brute
> forcing is done by botnets, which is more and more common than
> single-host attacks these days. one IP per attempt in a 10-20 minute
> window will probably never trigger any of these methods.

Nor will it ever be successful in brute-forcing a strong password,
because a botnet has to try the same user with different passwords, and
it is easy to monitor for an excessive number of failures (of the same
user's login attempts) and notify the sysadmin (me) well in advance of
the hack attempt being successful.

--
Best regards, Charles

From CMarcus at Media-Brokers.com Thu Jan 5 13:26:17 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 05 Jan 2012 06:26:17 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F03B25B.2020309@orlitzky.com>
Message-ID: <4F0588D9.1030709@Media-Brokers.com>

On 2012-01-03 8:58 PM, Michael Orlitzky wrote:
> On 01/03/2012 08:25 PM, Charles Marcus wrote:
>> What I'm worried about is the worst-case scenario of someone getting
>> hold of the entire user database of *stored* passwords, where they can
>> then take their time and brute-force them at their leisure, on *their*
>> *own* systems, without having to hammer my server over smtp/imap and
>> without the automated limit of *my* fail2ban getting in their way.
> To prevent rainbow table attacks, salt your passwords. You can make
> them a little bit more difficult in plenty of ways, but salt is the
> /solution/.

Go read that link (you obviously didn't yet), because he claims that
salting passwords is next to *useless*...

>> As for people writing their passwords down... our policy is that it is
>> a potentially *firable* *offense* (I've never even encountered one
>> case of anyone posting their password, and I'm on these systems off
>> and on all the time) if they post these anywhere that is not under
>> lock and key. Also, I always set up their email clients for them (on
>> their workstations and on their phones - and of course tell it to
>> remember the password), so they basically never have to enter it.
> You realize they're just walking around with a $400 post-it note with
> the password written on it, right?

Nope, you are wrong - as I have patiently explained before. They do not
*need* to write their password down.

--
Best regards, Charles
From CMarcus at Media-Brokers.com Thu Jan 5 13:31:32 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 05 Jan 2012 06:31:32 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F04FAA9.3020908@localhost.localdomain.org>
References: <4F0367A2.1000807@Media-Brokers.com> <4F04FAA9.3020908@localhost.localdomain.org>
Message-ID: <4F058A14.2060303@Media-Brokers.com>

On 2012-01-04 8:19 PM, Pascal Volk wrote:
> On 01/03/2012 09:40 PM Charles Marcus wrote:
>> Was just perusing this article about how trivial it is to decrypt
>> passwords that are stored using most (standard) encryption methods
>> (like MD5), and was wondering - is it possible to use bcrypt with
>> dovecot+postfix+mysql (or postgres)?
> Yes, it is possible to use bcrypt with Dovecot. Currently you would
> have to write your own password-scheme plugin. The bcrypt algorithm is
> described at http://en.wikipedia.org/wiki/Bcrypt.
>
> If you are using Dovecot >= 2.0, 'doveadm pw' supports these schemes:
>   *BSD: Blowfish-Crypt
>   Linux (since glibc 2.7): SHA-256-Crypt and SHA-512-Crypt
> Some distributions have also added support for Blowfish-Crypt.
> See also: doveadm-pw(1)
>
> If you are using Dovecot < 2.0 you can also use any of the algorithms
> supported by your system's libc. But then you have to prefix the hashes
> with {CRYPT} - not {{BLF,SHA256,SHA512}-CRYPT}.

Hmmm... thanks very much Pascal. I think that gets me half-way to an
answer (but since ianap, this is mostly Greek to me, and so it is not
quite a solution I can implement yet)...

You said above that 'yes, I can use it with dovecot' - but what about
postfix and mysql... where/how do they fit into this mix? My thought
was that there are two issues here:

1. Storing them in bcrypted form, and
2. The clients must support *decrypting* them...

So, since I use postfixadmin, I'm guessing that for #1 it will have to
support encrypting them in bcrypt form, and then I have to worry about
dovecot - and since I'm planning on using postfix+dovecot-sasl, once
dovecot supports it, postfix will too... Is that about right?

Thanks again,

--
Best regards, Charles
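Roughly how the pieces fit, as far as this thread establishes: only Dovecot ever verifies the stored hash; Postfix delegating SMTP AUTH to Dovecot's SASL socket never sees the password scheme at all, and PostfixAdmin only needs to *write* hashes in the chosen scheme. A sketch of the Dovecot side, with hypothetical table and column names, not a drop-in config:

  # dovecot-sql.conf.ext (sketch)
  driver = mysql
  connect = host=localhost dbname=mail user=mail
  # hashes stored with an inline prefix like {CRYPT}$2a$... name their
  # own scheme; otherwise declare the scheme of bare hashes here:
  default_pass_scheme = CRYPT
  password_query = SELECT username AS user, password \
    FROM mailbox WHERE username = '%u'

The CRYPT scheme hands the stored string to the system crypt(), which picks the algorithm from the salt prefix ($1$, $2a$, $6$, ...), so this is where the platform support Pascal mentions matters.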
From patrickdk at patrickdk.com Thu Jan 5 16:53:38 2012
From: patrickdk at patrickdk.com (Patrick Domack)
Date: Thu, 05 Jan 2012 09:53:38 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <1325747950.5349.31.camel@tardis>
Message-ID: <20120105095338.Horde.6Wa7KJLnE6FPBbly7kZFh-A@kishi.patrickdk.com>

Quoting Noel Butler :

> I think you need to do some homework [...]
>
> But please, by all means, create a MySQL db using a system-crypted MD5
> password. I'll even help ya: openssl passwd -1 foobartilly
>
> $1$e3a.f3uW$SYRQiMlEhC5XlnSxtxiNC/
>
> Pop the entry into the db and go for your life trying to authenticate.
> And when you've gone through half a bottle of bourbon trying to figure
> out why it's not working, try the Apache crypted MD5 version:
> $apr1$yKxk.DrQ$ybcmM8mC1qD5t5FvoY9820

MySQL supports crypt() right in it, so you can just submit the password
to the MySQL crypt function. Perl, of course, supports it as well.

The first thing I did when I was hired was to convert the password
database from MD5 to $6$. After that, I secured the machines where I
could and greatly limited which of them could get access to the list.
About a month or two after this, we had about a thousand accounts
compromised. So someone had obviously obtained the list as the old
system stored it, since every compromised password contained only
lowercase letters and was less than 8 characters long.

I won't say anything salted is bad, but keep the salt lengths up; start
with at least 8 bytes. crypt's new option to support rounds also makes
it a lot of fun; too bad I haven't seen consistent support for it yet,
so I haven't been able to make use of that option.
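A sketch of the "crypt right in MySQL" idea Patrick mentions. MySQL's ENCRYPT() hands the salt string to the system crypt(), so a $6$ salt yields an SHA-512-crypt hash on glibc >= 2.7; this assumes a Linux/glibc server, and the table and column names are hypothetical:

  -- generate and store a salted SHA-512-crypt hash server-side
  UPDATE mailbox
     SET password = ENCRYPT('newsecret',
                            CONCAT('$6$', SUBSTRING(MD5(RAND()), 1, 16)))
   WHERE username = 'someuser@example.com';

The SUBSTRING(MD5(RAND()), 1, 16) piece is one way to get the longer salts Patrick recommends (16 characters here, well past his 8-byte minimum).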
From michael at orlitzky.com Thu Jan 5 17:28:26 2012
From: michael at orlitzky.com (Michael Orlitzky)
Date: Thu, 05 Jan 2012 10:28:26 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F0588D9.1030709@Media-Brokers.com>
Message-ID: <4F05C19A.4030603@orlitzky.com>

On 01/05/12 06:26, Charles Marcus wrote:
>> To prevent rainbow table attacks, salt your passwords. You can make
>> them a little bit more difficult in plenty of ways, but salt is the
>> /solution/.
>
> Go read that link (you obviously didn't yet), because he claims that
> salting passwords is next to *useless*...

He doesn't claim that, but he's a crackpot anyway. Use a slow algorithm
(others already mentioned bcrypt) to prevent brute-force search, and
use salt to prevent pre-computed lookups. Anyone who tells you
otherwise can probably be ignored. Extraordinary claims require
extraordinary evidence.

>> You realize they're just walking around with a $400 post-it note with
>> the password written on it, right?
>
> Nope, you are wrong - as I have patiently explained before. They do
> not *need* to write their password down.

They have them written down on their phones. If someone gets hold of
the phone, he can just read the password off of it.

From michael at orlitzky.com Thu Jan 5 17:32:32 2012
From: michael at orlitzky.com (Michael Orlitzky)
Date: Thu, 05 Jan 2012 10:32:32 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <20120104210644.Horde.YEJENpLnE6FPBQW0C1KEd8A@kishi.patrickdk.com>
Message-ID: <4F05C290.5020308@orlitzky.com>

On 01/04/12 21:06, Patrick Domack wrote:
> But still, the results are all the same: if they get the hash, it can
> be broken, given time. Using more CPU-expensive methods makes it take
> longer (like adding salt, or a more complex hash), but the end result
> is they will have it if they want it.

Unless someone breaks either the math or the hash algorithm, this is
false. My password will be of little use to you in 10^20 years.

From michael at orlitzky.com Thu Jan 5 17:46:23 2012
From: michael at orlitzky.com (Michael Orlitzky)
Date: Thu, 05 Jan 2012 10:46:23 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F05C19A.4030603@orlitzky.com>
Message-ID: <4F05C5CF.7010804@orlitzky.com>

On 01/05/12 10:28, Michael Orlitzky wrote:
>> Nope, you are wrong - as I have patiently explained before. They do
>> not *need* to write their password down.
>
> They have them written down on their phones. If someone gets hold of
> the phone, he can just read the password off of it.

I should point out, I don't think this is a bad thing!

From CMarcus at Media-Brokers.com Thu Jan 5 18:14:20 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 05 Jan 2012 11:14:20 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F05C19A.4030603@orlitzky.com>
Message-ID: <4F05CC5C.7020807@Media-Brokers.com>

On 2012-01-05 10:28 AM, Michael Orlitzky wrote:
> On 01/05/12 06:26, Charles Marcus wrote:
>>> To prevent rainbow table attacks, salt your passwords. You can make
>>> them a little bit more difficult in plenty of ways, but salt is the
>>> /solution/.
>> Go read that link (you obviously didn't yet), because he claims that
>> salting passwords is next to *useless*...
> He doesn't claim that,

Ummm... yes, he does... from tfa:

"Salts Will Not Help You

It's important to note that salts are useless for preventing dictionary
attacks or brute force attacks. You can use huge salts or many salts or
hand-harvested, shade-grown, organic Himalayan pink salt. It doesn't
affect how fast an attacker can try a candidate password, given the
hash and the salt from your database.

Salt or no, if you're using a general-purpose hash function designed
for speed you're well and truly effed."

> but he's a crackpot anyway.

Why? I asked because I'm genuinely unsure (I don't know enough about
the innards of the different encryption methods), and that's why I
asked. Simply saying he's a crackpot means nothing.

Also...

> Use a slow algorithm (others already mentioned bcrypt) to prevent
> brute-force search,

Actually, that (bcrypt) is precisely what *the author of the article*
(the one you are saying is a crackpot) is suggesting to use - I guess
you didn't even bother to read it, or else you'd know that, so why
bother commenting?

> and use salt to prevent pre-computed lookups. Anyone who tells you
> otherwise can probably be ignored. Extraordinary claims require
> extraordinary evidence.

I don't see it as an extraordinary claim, and anyone who goes around
claiming someone else is a crackpot without evidence to support the
claim is just yammering.

>> You realize they're just walking around with a $400 post-it note with
>> the password written on it, right?
>
> They have them written down on their phones. If someone gets hold of
> the phone, he can just read the password off of it.

No, they don't; your claim is baseless and without merit.

Most people have never even known what their password *is*, much less
written it down, because as I said (more than once), *I* set up their
email clients (workstations, home computers and phones) *for them*.

--
Best regards, Charles

From wgillespie at es2eng.com Thu Jan 5 18:21:45 2012
From: wgillespie at es2eng.com (Willie Gillespie)
Date: Thu, 05 Jan 2012 09:21:45 -0700
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F05CC5C.7020807@Media-Brokers.com>
Message-ID: <4F05CE19.8030204@es2eng.com>

On 1/5/2012 9:14 AM, Charles Marcus wrote:
> No, they don't; your claim is baseless and without merit.
>
> Most people have never even known what their password *is*, much less
> written it down, because as I said (more than once), *I* set up their
> email clients (workstations, home computers and phones) *for them*.

If the phone knows the password and I have the phone, then I have the
password. Similarly, if I compromise the workstation that knows the
password, then I also have the password.

Even if the user doesn't know the password, the phone/workstation does,
and it has to be stored in a retrievable way. That's what he was trying
to say when he talked about a "$400 post-it note."

From michael at orlitzky.com Thu Jan 5 18:31:17 2012
From: michael at orlitzky.com (Michael Orlitzky)
Date: Thu, 05 Jan 2012 11:31:17 -0500
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F05CC5C.7020807@Media-Brokers.com>
Message-ID: <4F05D055.7020305@orlitzky.com>

On 01/05/12 11:14, Charles Marcus wrote:
> Ummm... yes, he does... from tfa:
>
> "Salts Will Not Help You
>
> It's important to note that salts are useless for preventing dictionary
> attacks or brute force attacks. [...]
>
> Salt or no, if you're using a general-purpose hash function designed
> for speed you're well and truly effed."

Ugh, sorry. I went to the link that someone else quoted:

https://www.grc.com/haystack.htm

The article you posted is correct. Salt will not prevent brute-force
search, but it isn't meant to. Salt is meant to prevent the attacker
from using precomputed tables of hashed passwords, called rainbow
tables. To prevent brute-force search, you use a better algorithm, like
the author says.

>> but he's a crackpot anyway.

Gibson *is* a renowned crackpot.

> Why? I asked because I'm genuinely unsure (I don't know enough about
> the innards of the different encryption methods), and that's why I
> asked. Simply saying he's a crackpot means nothing.
>
> Also...
>
>> Use a slow algorithm (others already mentioned bcrypt) to prevent
>> brute-force search,
>
> Actually, that (bcrypt) is precisely what *the author of the article*
> is suggesting to use - I guess you didn't even bother to read it, or
> else you'd know that, so why bother commenting?

Again, sorry, I don't always know how to work my email client.

> I don't see it as an extraordinary claim, and anyone who goes around
> claiming someone else is a crackpot without evidence to support the
> claim is just yammering.

Your article is fine, but you should always be skeptical, because for
every article like the one you posted there are 100 like Gibson's.

> No, they don't; your claim is baseless and without merit.
>
> Most people have never even known what their password *is*, much less
> written it down, because as I said (more than once), *I* set up their
> email clients (workstations, home computers and phones) *for them*.

The password is on the phone, in plain text. If I have the phone, I can
read it as easily as if it were written in Sharpie.

From yubao.liu at gmail.com Thu Jan 5 20:23:56 2012
From: yubao.liu at gmail.com (Yubao Liu)
Date: Fri, 06 Jan 2012 02:23:56 +0800
Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs
Message-ID: <4F05EABC.7070309@gmail.com>

Hi all,

I have no idea about that message; here is my configuration. What's
wrong?
From michael at orlitzky.com Thu Jan 5 17:46:23 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Thu, 05 Jan 2012 10:46:23 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F05C19A.4030603@orlitzky.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> Message-ID: <4F05C5CF.7010804@orlitzky.com> On 01/05/12 10:28, Michael Orlitzky wrote: >> >> Nope, you are wrong - as I have patiently explained before. They do not >> *need* to write their password down. >> > > They have them written down on their phones. If someone gets a hold of > the phone, he can just read the password off of it. I should point out, I don't think this is a bad thing! From CMarcus at Media-Brokers.com Thu Jan 5 18:14:20 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Thu, 05 Jan 2012 11:14:20 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F05C19A.4030603@orlitzky.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> Message-ID: <4F05CC5C.7020807@Media-Brokers.com> On 2012-01-05 10:28 AM, Michael Orlitzky wrote: > On 01/05/12 06:26, Charles Marcus wrote: >>> To prevent rainbow table attacks, salt your passwords. You can make them >>> a little bit more difficult in plenty of ways, but salt is the >>> /solution/. >> Go read that link (you obviously didn't yet, because he claims that >> salting passwords is next to *useless*... > He doesn't claim that, Ummm... yes, he does... from tfa: "Salts Will Not Help You It's important to note that salts are useless for preventing dictionary attacks or brute force attacks. You can use huge salts or many salts or hand-harvested, shade-grown, organic Himalayan pink salt. It doesn't affect how fast an attacker can try a candidate password, given the hash and the salt from your database. Salt or no, if you're using a general-purpose hash function designed for speed you're well and truly effed." > but he's a crackpot anyway. Why? I asked because I'm genuinely unsure (don't know enough about the innards of the different encryption methods), and that's why I asked. Simply saying he's a crackpot means nothing. Also... > Use a slow algorithm (others already mentioned bcrypt) to prevent > brute-force search, Actually, that (bcrypt) is precisely what *the author of the article* (the one who you are saying is a crackpot) is suggesting to use - I guess you didn't even bother to read it or else you'd know that, so why bother commenting? > and use salt to prevent pre-computed lookups. Anyone who tells you > otherwise can probably be ignored. Extraordinary claims require > extraordinary evidence. I don't see it as an extraordinary claim, and anyone who goes around claiming someone else is a crackpot without evidence to support the claim is just yammering. >>> You realize they're just walking around with a $400 post-it note with >>> the password written on it, right? >> Nope, you are wrong - as I have patiently explained before. They do not >> *need* to write their password down. > They have them written down on their phones. If someone gets a hold of > the phone, he can just read the password off of it. No, they don't, your claim is baseless and without merit. Most people have never even known what their password *is*, much less written it down, because as I said (more than once), *I* set up their email clients (workstations, home computers and phones) *for them*. -- Best regards, Charles From wgillespie at es2eng.com Thu Jan 5 18:21:45 2012 From: wgillespie at es2eng.com (Willie Gillespie) Date: Thu, 05 Jan 2012 09:21:45 -0700 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F05CC5C.7020807@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> Message-ID: <4F05CE19.8030204@es2eng.com> On 1/5/2012 9:14 AM, Charles Marcus wrote: > On 2012-01-05 10:28 AM, Michael Orlitzky wrote: >> On 01/05/12 06:26, Charles Marcus wrote: >>>> You realize they're just walking around with a $400 post-it note with >>>> the password written on it, right? > >>> Nope, you are wrong - as I have patiently explained before. They do not >>> *need* to write their password down. > >> They have them written down on their phones. If someone gets a hold of >> the phone, he can just read the password off of it. > > No, they don't, your claim is baseless and without merit. > > Most people have never even known what their password *is*, much less > written it down, because as I said (more than once), *I* set up their > email clients (workstations, home computers and phones) *for them*. If the phone knows the password and I have the phone, then I have the password. Similarly, if I compromise the workstation that knows the password, then I also have the password. Even if the user doesn't know the password, the phone/workstation does. And it has to be stored in a retrievable way. That's what he's trying to say when he was talking about a "$400 post-it note." From michael at orlitzky.com Thu Jan 5 18:31:17 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Thu, 05 Jan 2012 11:31:17 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F05CC5C.7020807@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> Message-ID: <4F05D055.7020305@orlitzky.com> On 01/05/12 11:14, Charles Marcus wrote: > > Ummm... yes, he does... from tfa: > > "Salts Will Not Help You > > It's important to note that salts are useless for preventing dictionary > attacks or brute force attacks. You can use huge salts or many salts or > hand-harvested, shade-grown, organic Himalayan pink salt. It doesn't > affect how fast an attacker can try a candidate password, given the hash > and the salt from your database. > > Salt or no, if you're using a general-purpose hash function designed for > speed you're well and truly effed." Ugh, sorry. I went to the link that someone else quoted: https://www.grc.com/haystack.htm The article you posted is correct. Salt will not prevent brute-force search, but it isn't meant to. Salt is meant to prevent the attacker from using precomputed tables of hashed passwords, called rainbow tables. To prevent brute-force search, you use a better algorithm, like the author says. >> but he's a crackpot anyway. Gibson *is* a renowned crackpot. > Why? I asked because I'm genuinely unsure (don't know enough about the > innards of the different encryption methods), and that's why I asked. > Simply saying he's a crackpot means nothing. > > Also... > >> Use a slow algorithm (others already mentioned bcrypt) to prevent >> brute-force search, > > Actually, that (bcrypt) is precisely what *the author of the article* > (the one who you are saying is a crackpot) is suggesting to use - I > guess you didn't even bother to read it or else you'd know that, so why > bother commenting? Again, sorry, I don't always know how to work my email client. > > I don't see it as an extraordinary claim, and anyone who goes around > claiming someone else is a crackpot without evidence to support the > claim is just yammering. > Your article is fine, but you should always be skeptical because for every article like the one you posted, there are 100 like Gibson's. > > No, they don't, your claim is baseless and without merit. > > Most people have never even known what their password *is*, much less > written it down, because as I said (more than once), *I* set up their > email clients (workstations, home computers and phones) *for them*. > The password is on the phone, in plain text. If I have the phone, I can read it as easily as if it was written in sharpie. From yubao.liu at gmail.com Thu Jan 5 20:23:56 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Fri, 06 Jan 2012 02:23:56 +0800 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs Message-ID: <4F05EABC.7070309@gmail.com> Hi all, I have no idea about that message, here is my configuration, what's wrong?
Debian testing, Dovecot 2.0.15 $ doveconf -n # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid auth_default_realm = corp.example.com auth_krb5_keytab = /etc/dovecot.keytab auth_master_user_separator = * auth_mechanisms = gssapi digest-md5 auth_realms = corp.example.com auth_username_format = %n first_valid_gid = 1000 first_valid_uid = 1000 mail_location = mdbox:/srv/mail/%u/Mail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave passdb { args = /etc/dovecot/master-users driver = passwd-file master = yes pass = yes } passdb { driver = pam } plugin { sieve = /srv/mail/%u/.dovecot.sieve sieve_dir = /srv/mail/%u/sieve } protocols = " imap lmtp sieve" service auth { unix_listener auth-client { group = Debian-exim mode = 0660 } } ssl_cert = References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> <4F05CE19.8030204@es2eng.com> Message-ID: <4F05ED9B.10901@Media-Brokers.com> On 2012-01-05 11:21 AM, Willie Gillespie wrote: > If the phone knows the password and I have the phone, then I have the > password. Similarly, if I compromise the workstation that knows the > password, then I also have the password. Interesting... I thought they were stored encrypted. I definitely use a (strong) Master Password in Thunderbird to protect the passwords, so it would take some doing on the workstations. > Even if the user doesn't know the password, the phone/workstation does. > And it has to be stored in a retrievable way. Yes, if an attacker has unfettered physical access to the workstation/phone, it can be compromised... > That's what he's trying to say when he was talking about a "$400 post-it > note." Got it... As I said, there is no perfect system... but ours has worked well in the 11+ years we've been doing it this way. -- Best regards, Charles From CMarcus at Media-Brokers.com Thu Jan 5 20:37:58 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Thu, 05 Jan 2012 13:37:58 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F05D055.7020305@orlitzky.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> <4F05D055.7020305@orlitzky.com> Message-ID: <4F05EE06.8070302@Media-Brokers.com> On 2012-01-05 11:31 AM, Michael Orlitzky wrote: > Ugh, sorry. I went to the link that someone else quoted: > > https://www.grc.com/haystack.htm > Gibson*is* a renowned crackpot. Don't know about that, but I do know from long experience Spinrite rocks! Maybe -- Best regards, Charles From david at blue-labs.org Thu Jan 5 20:47:58 2012 From: david at blue-labs.org (David Ford) Date: Thu, 05 Jan 2012 13:47:58 -0500 Subject: [Dovecot] Storing passwords encrypted... bcrypt? 
In-Reply-To: <4F05EE06.8070302@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> <4F05D055.7020305@orlitzky.com> <4F05EE06.8070302@Media-Brokers.com> Message-ID: <4F05F05E.60104@blue-labs.org> On 01/05/2012 01:37 PM, Charles Marcus wrote: > On 2012-01-05 11:31 AM, Michael Orlitzky wrote: >> Ugh, sorry. I went to the link that someone else quoted: >> >> https://www.grc.com/haystack.htm > >> Gibson*is* a renowned crackpot. > > Don't know about that, but I do know from long experience Spinrite rocks! > > Maybe he often piggybacks on common sense but makes it into an elaborate grandiose presentation. A lot of his topics tend to wander out to left field come half-time. -d From wgillespie+dovecot at es2eng.com Thu Jan 5 21:22:47 2012 From: wgillespie+dovecot at es2eng.com (Willie Gillespie) Date: Thu, 05 Jan 2012 12:22:47 -0700 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F05ED9B.10901@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F036C3B.5080904@Media-Brokers.com> <4F037CC5.9030900@carpenter.org> <4F038186.3030505@Media-Brokers.com> <4F038B72.1000003@carpenter.org> <4F03AA6E.30003@Media-Brokers.com> <4F03B25B.2020309@orlitzky.com> <4F0588D9.1030709@Media-Brokers.com> <4F05C19A.4030603@orlitzky.com> <4F05CC5C.7020807@Media-Brokers.com> <4F05CE19.8030204@es2eng.com> <4F05ED9B.10901@Media-Brokers.com> Message-ID: <4F05F887.70204@es2eng.com> On 01/05/2012 11:36 AM, Charles Marcus wrote: > On 2012-01-05 11:21 AM, Willie Gillespie wrote: >> If the phone knows the password and I have the phone, then I have the >> password. Similarly, if I compromise the workstation that knows the >> password, then I also have the password. > > Interesting... I thought they were stored encrypted. I definitely use a > (strong) Master Password in Thunderbird to protect the passwords, so it > would take some doing on the workstations. True. If you are using a master password, they are encrypted. From user+dovecot at localhost.localdomain.org Fri Jan 6 00:28:27 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Thu, 05 Jan 2012 23:28:27 +0100 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F058A14.2060303@Media-Brokers.com> References: <4F0367A2.1000807@Media-Brokers.com> <4F04FAA9.3020908@localhost.localdomain.org> <4F058A14.2060303@Media-Brokers.com> Message-ID: <4F06240B.2040101@localhost.localdomain.org> On 01/05/2012 12:31 PM Charles Marcus wrote: > ? > You said above that 'yes, I can use it with dovecot' - but what about > postfix and mysql... where/how do they fit into this mix? My thought was > that there are two issues here: > > 1. Storing them in bcrypted form, and For MySQL the bcrypted password is just a varchar. > 2. The clients must support *decrypting* them... Sorry, I don't know if clients need to know anything about the used password scheme. The used password scheme is mostly relevant for Dovecot. Don't mix password scheme and authentication scheme. > So, since I use postfixadmin, I'm guessing that for #1, it will have to > support encrypting them in bcrypt form, and then I have to worry about > dovecot - and since I'm planning on using postfix+dovecot-sasl, once > dovecot supports it, postfix will too... > > Is that about right? I think that's correct. Postfix uses Dovecot for the authentication stuff. If I'm wrong, please let me know. Regards, Pascal -- The trapper recommends today: c01dcafe.1200523 at localdomain.org
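A small illustration of that separation (a sketch only: the table layout is hypothetical, and SHA512-CRYPT merely stands in for whatever scheme, bcrypt included, your Dovecot build supports). Generate the hash with doveadm, store it in an ordinary varchar column, and tell Dovecot the scheme; the IMAP/POP3 client never sees any of it:

    $ doveadm pw -s SHA512-CRYPT -p secret
    {SHA512-CRYPT}$6$...        (hash elided; note the salt is embedded in it)

    -- MySQL neither knows nor cares which scheme produced the string
    CREATE TABLE users (userid VARCHAR(128) PRIMARY KEY,
                        password VARCHAR(128) NOT NULL);

    # dovecot-sql.conf.ext: only Dovecot interprets the scheme, either from
    # a {SCHEME} prefix kept in the column or from this setting
    default_pass_scheme = SHA512-CRYPT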
From Ralf.Hildebrandt at charite.de Fri Jan 6 12:09:53 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Fri, 6 Jan 2012 11:09:53 +0100 Subject: [Dovecot] Deduplication active - but how good does it perform? Message-ID: <20120106100953.GV24134@charite.de> I have deduplication active in my first mdbox: type mailbox, but how do I find out how well the deduplication works? Is there a way of finding out how much disk space I saved (if I saved some :) )? -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From nick+dovecot at bunbun.be Fri Jan 6 12:52:34 2012 From: nick+dovecot at bunbun.be (Nick Rosier) Date: Fri, 06 Jan 2012 11:52:34 +0100 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <4F05EABC.7070309@gmail.com> References: <4F05EABC.7070309@gmail.com> Message-ID: <4F06D272.5010200@bunbun.be> Yubao Liu wrote: > Hi all, > > I have no idea about that message, here is my configuration, what's wrong? You have 2 passdb entries; 1 with a file and 1 with pam. I'm pretty sure PAM doesn't support DIGEST-MD5 authentication. Could be the cause of the problem. > Debian testing, Dovecot 2.0.15 > > $ doveconf -n > # 2.0.15: /etc/dovecot/dovecot.conf > # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid > auth_default_realm = corp.example.com > auth_krb5_keytab = /etc/dovecot.keytab > auth_master_user_separator = * > auth_mechanisms = gssapi digest-md5 > auth_realms = corp.example.com > auth_username_format = %n > first_valid_gid = 1000 > first_valid_uid = 1000 > mail_location = mdbox:/srv/mail/%u/Mail > managesieve_notify_capability = mailto > managesieve_sieve_capability = fileinto reject envelope > encoded-character vacation subaddress comparator-i;ascii-numeric > relational regex imap4flags copy include variables body enotify > environment mailbox date ihave > passdb { > args = /etc/dovecot/master-users > driver = passwd-file > master = yes > pass = yes > } > passdb { > driver = pam > } > plugin { > sieve = /srv/mail/%u/.dovecot.sieve > sieve_dir = /srv/mail/%u/sieve > } > protocols = " imap lmtp sieve" > service auth { > unix_listener auth-client { > group = Debian-exim > mode = 0660 > } > } > ssl_cert = ssl_key = userdb { > args = home=/srv/mail/%u > driver = passwd > } > protocol lmtp { > mail_plugins = " sieve" > } > protocol lda { > mail_plugins = " sieve" > } > > # cat /etc/dovecot/master-users > xxx at corp.example.com:zzzzzzzz > > The zzzzz is obtained by "doveadm pw -s digest-md5 -u > xxx at corp.example.com", > I tried to add prefix "{DIGEST-MD5}" before the generated hash and/or add > "scheme=DIGEST-MD5" to the passwd-file passdb's "args" option, both > don't help.
> > The error message: > dovecot: master: Dovecot v2.0.15 starting up (core dumps disabled) > dovecot: auth: Fatal: DIGEST-MD5 mechanism can't be supported with given > passdbs > gold dovecot: master: Error: service(auth): command startup failed, > throttling > > I opened debug auth log, it showed dovecot read /etc/dovecot/master-users > and parsed one line, then the error occurred. Doesn't passwd-file > passdb support > digest-md5 password scheme? If it doesn't support, how do I configure > digest-md5 auth > mechanism with digest-md5 password scheme for virtual users? > > Regards, > Yubao Liu > Rgds, N. From tss at iki.fi Fri Jan 6 12:54:19 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 12:54:19 +0200 Subject: [Dovecot] Deduplication active - but how good does it perform? In-Reply-To: <20120106100953.GV24134@charite.de> References: <20120106100953.GV24134@charite.de> Message-ID: On 6.1.2012, at 12.09, Ralf Hildebrandt wrote: > I have deduplication active in my first mdbox: type mailbox, but how > do I find out how well the deduplication works? Is there a way of > finding out how much disk space I saved (if I saved some :) )? You could look at the files in the attachments directory, and see how many links they have. Each file has 2 initially. Each additional link has saved you that file's size in bytes. From tss at iki.fi Fri Jan 6 12:55:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 12:55:49 +0200 Subject: [Dovecot] Possible mdbox corruption In-Reply-To: References: Message-ID: On 5.1.2012, at 2.24, Daniel L. Miller wrote: > I thought I had cleared out the corruption I had before - perhaps I was mistaken. What steps should I take to help locate these issues? Currently using 2.1rc1. I see the following errors in my logs, including out of memory and message size issues (at 15:30): .. > Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it)) > Jan 4 06:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory The problem is clearly that indexer-worker's vsz_limit is too low. Increase it (or default_vsz_limit). From tss at iki.fi Fri Jan 6 12:57:43 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 12:57:43 +0200 Subject: [Dovecot] Possible mdbox corruption In-Reply-To: References: Message-ID: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> On 6.1.2012, at 12.55, Timo Sirainen wrote: >> Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it)) >> Jan 4 06:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory > > The problem is clearly that indexer-worker's vsz_limit is too low. Increase it (or default_vsz_limit). Although the source of the out-of-memory /usr/local/lib/dovecot/libdovecot.so.0(buffer_write+0x7c) [0x7f0ec1a550ec] -> /usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3292) [0x7f0ec024f292] -> is something that shouldn't really be happening. I guess the Solr plugin wastes memory unnecessarily, I'll see what I can do about it. But for now just increase vsz limit. From nick+dovecot at bunbun.be Fri Jan 6 13:04:51 2012 From: nick+dovecot at bunbun.be (Nick Rosier) Date: Fri, 06 Jan 2012 12:04:51 +0100 Subject: [Dovecot] Deduplication active - but how good does it perform? In-Reply-To: <20120106100953.GV24134@charite.de> References: <20120106100953.GV24134@charite.de> Message-ID: <4F06D553.2010605@bunbun.be> Ralf Hildebrandt wrote: > I have deduplication active in my first mdbox: type mailbox, but how > do I find out how well the deduplication works? Is there a way of > finding out how much disk space I saved (if I saved some :) )? You could check how much disk space all the mail uses (or the mail of a user) and compare it to the quota dovecot reports. But I think you would need quotas activated for this. E.g. on my small server the used disk space is 2GB where doveadm quota reports all users use 3.1GB.
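A quick way to make that comparison for a single user (a sketch: the user and the maildir path are hypothetical, GNU du is assumed, and the quota plugin must be enabled):

    # physical bytes on disk
    $ du -sb /srv/mail/jane/Mail
    # logical bytes Dovecot accounts for; with mdbox deduplication the
    # du figure should come out smaller, and the gap is your savings
    $ doveadm quota get -u jane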
From adrian.minta at gmail.com Fri Jan 6 13:07:05 2012 From: adrian.minta at gmail.com (Adrian Minta) Date: Fri, 06 Jan 2012 13:07:05 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? Message-ID: <4F06D5D9.20001@gmail.com> Hello, is it possible to disable indexing on dovecot-lda ? Right now postfix delivers the mail directly to the nfs server without any problems. If I switch to dovecot-lda the system crashes due to the high I/O and locking. Indexing on lda is not very useful because the number of IMAP logins is less than 5% of that of incoming mails, so a user could wait for 3 sec to get his mail index, but a new mail can't. Dovecot version 1.2.15 mail_nfs_storage = yes mail_nfs_index = yes Thank you! From tss at iki.fi Fri Jan 6 13:27:41 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 13:27:41 +0200 Subject: [Dovecot] Possible mdbox corruption In-Reply-To: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> References: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> Message-ID: <1325849261.17774.0.camel@hurina> On Fri, 2012-01-06 at 12:57 +0200, Timo Sirainen wrote: > On 6.1.2012, at 12.55, Timo Sirainen wrote: > >> Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it)) > >> Jan 4 06:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory > > > > The problem is clearly that indexer-worker's vsz_limit is too low. Increase it (or default_vsz_limit). > > Although the source of the out-of-memory > > /usr/local/lib/dovecot/libdovecot.so.0(buffer_write+0x7c) [0x7f0ec1a550ec] -> /usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3292) [0x7f0ec024f292] -> > > is something that shouldn't really be happening. I guess the Solr plugin wastes memory unnecessarily, I'll see what I can do about it. But for now just increase vsz limit. I don't see any obvious reason why it would be using a lot of memory, unless you have a message that has huge (MIME) headers. See if http://hg.dovecot.org/dovecot-2.1/rev/380b0667e0a5 helps / logs a warning about it. From tss at iki.fi Fri Jan 6 13:39:44 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 13:39:44 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <4F06D5D9.20001@gmail.com> References: <4F06D5D9.20001@gmail.com> Message-ID: <1325849985.17774.10.camel@hurina> On Fri, 2012-01-06 at 13:07 +0200, Adrian Minta wrote: > Hello, > is it possible to disable indexing on dovecot-lda ? protocol lda { mail_location = whatever-you-have-now:INDEX=MEMORY } > Right now postfix delivers the mail directly to the nfs server without > any problems. If I switch to dovecot-lda the system crashes due to the > high I/O and locking. Disabling indexing won't disable writing to dovecot-uidlist file.
So I don't know if disabling indexes actually helps. From alexis.lelion at gmail.com Fri Jan 6 13:36:15 2012 From: alexis.lelion at gmail.com (Alexis Lelion) Date: Fri, 6 Jan 2012 12:36:15 +0100 Subject: [Dovecot] ACL with IMAP proxying Message-ID: Hello, I'm trying to use ACLs to restrict subscription on public mailboxes, but I ran into trouble. My setup is made of two servers, and users are shared between them via a proxy. User authentication is done with LDAP, and credentials aren't shared between the mailservers. Instead, the proxies are using a master password. The thing is that when the ACLs are checked, it actually doesn't give the user login, but the master login, which is useless. Is there a way to use the first part of destuser as it is done when fetching info from the userdb? Any help is appreciated, Thanks! Alexis -------------------------------------------------- ACL bug logs : 104184 Jan 6 12:09:35 mail02 dovecot: imap(user at domain): Debug: acl: acl username = proxy 104185 Jan 6 12:09:35 mail02 dovecot: imap(user at domain): Debug: acl: owner = 0 104186 Jan 6 12:09:35 mail02 dovecot: imap(user at domain): Debug: acl vfile: Global ACL directory: (none) 104187 Jan 6 12:09:35 mail02 dovecot: imap(user at domain): Debug: Namespace : type=public, prefix=Shared., sep=., inbox=no, hidden=no, list=yes, subscriptions=no location=maildir:/var/vmail/domain/Shared -------------------------------------------------- Output of "dovecot -n" # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 2.6.32-5-amd64 x86_64 Debian 6.0.3 ext3 auth_debug = yes auth_master_user_separator = * auth_socket_path = /var/run/dovecot/auth-userdb auth_verbose = yes first_valid_uid = 150 lmtp_proxy = yes login_trusted_networks = mail01.ip mail_debug = yes mail_location = maildir:/var/vmail/%d/%n mail_nfs_storage = yes mail_plugins = acl mail_privileged_group = mail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave namespace { inbox = yes location = maildir:/var/vmail/%d/%n prefix = separator = . type = private } namespace { location = maildir:/var/vmail/domain/Shared prefix = Shared. separator = . subscriptions = no type = public } passdb { args = /etc/dovecot/master-users driver = passwd-file master = yes } passdb { args = /etc/dovecot/dovecot-ldap.conf driver = ldap } plugin { acl = vfile:/etc/dovecot/global-acls:cache_secs=300 recipient_delimiter = + sieve_after = /var/lib/dovecot/sieve/after.d/ sieve_before = /var/lib/dovecot/sieve/pre.d/ sieve_dir = /var/vmail/%d/%n/sieve sieve_global_path = /var/lib/dovecot/sieve/default.sieve } postmaster_address = user at domain protocols = " imap lmtp sieve" service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0666 user = postfix } unix_listener auth-userdb { group = mail mode = 0600 user = vmail } } service lmtp { inet_listener lmtp { address = mail02.ip port = 24 } unix_listener /var/spool/postfix/private/dovecot-lmtp { group = postfix mode = 0660 user = postfix } } ssl = required ssl_cert = References: Message-ID: <1325850528.17774.13.camel@hurina> On Fri, 2012-01-06 at 12:36 +0100, Alexis Lelion wrote: > The thing is that when the ACLs are checked, it actually doesn't give > the user login, but the master login, which is useless. Yes, this is intentional.
> Is there a way to use the first part of destuser as it is done when > fetching info from the userdb? You should be able to work around this with modifying userdb's query: user_query = select '%n' AS master_user, ... From stan at hardwarefreak.com Fri Jan 6 13:50:13 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Fri, 06 Jan 2012 05:50:13 -0600 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <4F06D5D9.20001@gmail.com> References: <4F06D5D9.20001@gmail.com> Message-ID: <4F06DFF5.40707@hardwarefreak.com> On 1/6/2012 5:07 AM, Adrian Minta wrote: > Hello, > is it possible to disable indexing on dovecot-lda ? > > Right now postfix delivers the mail directly to the nfs server without > any problems. If I switch to dovecot-lda the system crashes do to the > high I/O and locking. > Indexing on lda is not very useful because the number of of imap logins > is less than 5% that of incoming mails, so an user could wait for 3 sec > to get his mail index, but a new mail can't. Then why bother with Dovecot LDA w/disabled indexing (the main reason for using it in the first place) instead of simply sticking with Postfix Local(8)? -- Stan From CMarcus at Media-Brokers.com Fri Jan 6 13:58:16 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Fri, 06 Jan 2012 06:58:16 -0500 Subject: [Dovecot] Deduplication active - but how good does it perform? In-Reply-To: References: <20120106100953.GV24134@charite.de> Message-ID: <4F06E1D8.7090507@Media-Brokers.com> On 2012-01-06 5:54 AM, Timo Sirainen wrote: > On 6.1.2012, at 12.09, Ralf Hildebrandt wrote: >> I have deduplication active in my first mdbox: type mailbox, but how >> do I find out how well the deduplication works? Is there a way of >> finding out how much disk space I saved (if I saved some :) )? > You could look at the files in the attachments directory, and see how > many links they have. Each file has 2 initially. Each additional link > has saved you bytes of space. Maybe there could be a doveadm command for this? That would be really useful for some kind of stats applications... especially for promoting its use in environments where large attachments are common... -- Best regards, Charles From CMarcus at Media-Brokers.com Fri Jan 6 14:09:05 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Fri, 06 Jan 2012 07:09:05 -0500 Subject: [Dovecot] Deduplication active - but how good does it perform? In-Reply-To: <4F06E1D8.7090507@Media-Brokers.com> References: <20120106100953.GV24134@charite.de> <4F06E1D8.7090507@Media-Brokers.com> Message-ID: <4F06E461.3010906@Media-Brokers.com> On 2012-01-06 6:58 AM, Charles Marcus wrote: > On 2012-01-06 5:54 AM, Timo Sirainen wrote: >> On 6.1.2012, at 12.09, Ralf Hildebrandt wrote: >>> I have deduplication active in my first mdbox: type mailbox, but how >>> do I find out how well the deduplication works? Is there a way of >>> finding out how much disk space I saved (if I saved some :) )? > >> You could look at the files in the attachments directory, and see how >> many links they have. Each file has 2 initially. Each additional link >> has saved you bytes of space. > > Maybe there could be a doveadm command for this? Incidentally, I use rsnapshot (which is simply a wrapper script for rsync) for my disk based backups. 
It uses hard links so that you can have hourly/daily/weekly/monthly (or whatever naming scheme you want) snapshots of your backups, but each snapshot simply contains hardlinks to the previous snapshots, so you can literally have hundreds of snapshots that only consume a little more space than one single whole snapshot. Anyway, rsnapshot has to leverage the du command to determine the amount of disk space each snapshot uses (when considered as a separate/standalone snapshot), or how much *actual* space each snapshot consumes (ie, only the files that are *not* hardlinked against a previous backup)... Maybe this could be a starting point for how to do this... http://rsnapshot.org/rsnapshot.html#usage and scroll down to the rsnapshot du command... -- Best regards, Charles From alexis.lelion at gmail.com Fri Jan 6 14:22:02 2012 From: alexis.lelion at gmail.com (Alexis Lelion) Date: Fri, 6 Jan 2012 13:22:02 +0100 Subject: [Dovecot] ACL with IMAP proxying In-Reply-To: <1325850528.17774.13.camel@hurina> References: <1325850528.17774.13.camel@hurina> Message-ID: Hi Timo, Thanks for your prompt answer, I wasn't expecting an answer that soon ;-) I just tried your workaround, and actually, master_user is properly set to the username, but then is overridden with the proxy login again: Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: mail=maildir:/var/vmail/domain/user Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/quota=dirsize:storage=0 Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/master_user=user Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: plugin/master_user=proxy Is there any other flag I can set to avoid this? (Something like Y for the password)? Alexis On Fri, Jan 6, 2012 at 12:48 PM, Timo Sirainen wrote: > On Fri, 2012-01-06 at 12:36 +0100, Alexis Lelion wrote: > > The thing is that when the ACLs are checked, it actually doesn't give > > the user login, but the master login, which is useless. > > Yes, this is intentional. > > > Is there a way to use the first part of destuser as it is done when > > fetching info from the userdb? > > You should be able to work around this with modifying userdb's query: > > user_query = select '%n' AS master_user, ... > > > From tss at iki.fi Fri Jan 6 14:26:37 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 14:26:37 +0200 Subject: [Dovecot] doveadm + dsync merging In-Reply-To: <4EFC76F0.2050705@localhost.localdomain.org> References: <20111229125326.GA2295@state-of-mind.de> <4EFC76F0.2050705@localhost.localdomain.org> Message-ID: <1325852800.17774.17.camel@hurina> On Thu, 2011-12-29 at 15:19 +0100, Pascal Volk wrote: > >> b) Don't have the dsync prefix: > >> > >> dsync mirror -> doveadm mirror > >> dsync backup -> doveadm backup > >> dsync server -> doveadm dsync-server (could be hidden from the doveadm commands list) I did this now, with mirror -> sync. > I'd prefer doveadm commands with the dsync prefix. (a)) Because: > > * doveadm already has other 'command groups' like mailbox, director ... > * that's the way to avoid command clashes (w/o hiding anything) There are already many mail related commands that don't have any prefix. For example I think "doveadm import" and "doveadm backup" are quite related. Also "dsync" is perhaps more about the internal implementation, so in future it's possible that sync/backup works some other way..
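Timo's link-counting suggestion from the deduplication sub-thread can be scripted the same way rsnapshot's du trick works. A rough sketch with GNU find and awk (the directory is hypothetical - use whatever your mail_attachment_dir points at):

    $ find /srv/mail/attachments -type f -links +2 -printf '%n %s\n' \
        | awk '{ saved += ($1 - 2) * $2 } END { print saved, "bytes saved" }'

Every attachment file starts out with two links, so each link beyond the second represents one deduplicated copy of that file's size.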
From tss at iki.fi Fri Jan 6 14:30:12 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 14:30:12 +0200 Subject: [Dovecot] ACL with IMAP proxying In-Reply-To: References: <1325850528.17774.13.camel@hurina> Message-ID: <1325853012.17774.19.camel@hurina> On Fri, 2012-01-06 at 13:22 +0100, Alexis Lelion wrote: > Thanks for your prompt answer, I wasn't expecting an answer that soon ;-) > I just tried your workaround, and actually, master_user is properly set to > the username, but then is overriden with the proxy login again : > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > mail=maildir:/var/vmail/domain/user > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > plugin/quota=dirsize:storage=0 > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > plugin/master_user=user > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > plugin/master_user=proxy I thought it would have been the other way around.. See if http://hg.dovecot.org/dovecot-2.0/raw-rev/684381041dc4 helps? > Is there any other flag I can set to avoid this? (Something like Y for the > password)? Nope. From alexis.lelion at gmail.com Fri Jan 6 14:55:03 2012 From: alexis.lelion at gmail.com (Alexis Lelion) Date: Fri, 6 Jan 2012 13:55:03 +0100 Subject: [Dovecot] ACL with IMAP proxying In-Reply-To: <1325853012.17774.19.camel@hurina> References: <1325850528.17774.13.camel@hurina> <1325853012.17774.19.camel@hurina> Message-ID: Thanks Timo. I'm actually using a packaged version of Dovecot 2.0 from Debian, so I can't apply the patch easily right now. I'll try do build dovecot this weekend and see if it solves the issue. Cheers Alexis On Fri, Jan 6, 2012 at 1:30 PM, Timo Sirainen wrote: > On Fri, 2012-01-06 at 13:22 +0100, Alexis Lelion wrote: > > > Thanks for your prompt answer, I wasn't expecting an answer that soon ;-) > > I just tried your workaround, and actually, master_user is properly set > to > > the username, but then is overriden with the proxy login again : > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > mail=maildir:/var/vmail/domain/user > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > plugin/quota=dirsize:storage=0 > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > plugin/master_user=user > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > plugin/master_user=proxy > > I thought it would have been the other way around.. See if > http://hg.dovecot.org/dovecot-2.0/raw-rev/684381041dc4 helps? > > > Is there any other flag I can set to avoid this? (Something like Y for > the > > password)? > > Nope. > > > From tss at iki.fi Fri Jan 6 14:57:24 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 14:57:24 +0200 Subject: [Dovecot] ACL with IMAP proxying In-Reply-To: References: <1325850528.17774.13.camel@hurina> <1325853012.17774.19.camel@hurina> Message-ID: <1325854644.17774.20.camel@hurina> Another possibility: http://wiki2.dovecot.org/PostLoginScripting and set MASTER_USER environment. On Fri, 2012-01-06 at 13:55 +0100, Alexis Lelion wrote: > Thanks Timo. > I'm actually using a packaged version of Dovecot 2.0 from Debian, so I > can't apply the patch easily right now. > I'll try do build dovecot this weekend and see if it solves the issue. 
From adrian.minta at gmail.com Fri Jan 6 15:01:52 2012 From: adrian.minta at gmail.com (Adrian Minta) Date: Fri, 06 Jan 2012 15:01:52 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <1325849985.17774.10.camel@hurina> References: <4F06D5D9.20001@gmail.com> <1325849985.17774.10.camel@hurina> Message-ID: <4F06F0C0.30906@gmail.com> On 01/06/12 13:39, Timo Sirainen wrote: > On Fri, 2012-01-06 at 13:07 +0200, Adrian Minta wrote: >> Hello, >> is it possible to disable indexing on dovecot-lda ? > protocol lda { > mail_location = whatever-you-have-now:INDEX=MEMORY > } > >> Right now postfix delivers the mail directly to the nfs server without >> any problems. If I switch to dovecot-lda the system crashes due to the >> high I/O and locking. > Disabling indexing won't disable writing to dovecot-uidlist file. So I > don't know if disabling indexes actually helps. > I don't have mail_location under "protocol lda": protocol lda { # Address to use when sending rejection mails. postmaster_address = postmaster at xxx sendmail_path = /usr/lib/sendmail auth_socket_path = /var/run/dovecot/auth-master mail_plugins = quota syslog_facility = mail } The mail_location is present only at the global level. What to do then ? From adrian.minta at gmail.com Fri Jan 6 15:02:31 2012 From: adrian.minta at gmail.com (Adrian Minta) Date: Fri, 06 Jan 2012 15:02:31 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <4F06DFF5.40707@hardwarefreak.com> References: <4F06D5D9.20001@gmail.com> <4F06DFF5.40707@hardwarefreak.com> Message-ID: <4F06F0E7.904@gmail.com> On 01/06/12 13:50, Stan Hoeppner wrote: > On 1/6/2012 5:07 AM, Adrian Minta wrote: >> Hello, >> is it possible to disable indexing on dovecot-lda ? >> >> Right now postfix delivers the mail directly to the nfs server without >> any problems. If I switch to dovecot-lda the system crashes due to the >> high I/O and locking. >> Indexing on lda is not very useful because the number of IMAP logins >> is less than 5% of that of incoming mails, so a user could wait for 3 sec >> to get his mail index, but a new mail can't. > Then why bother with Dovecot LDA w/disabled indexing (the main reason > for using it in the first place) instead of simply sticking with Postfix > Local(8)? > Because of sieve and quota support. Another possible advantage will be the support for hashed mailbox directories.
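Concretely, combining Adrian's existing block with Timo's earlier suggestion gives something like the following (the maildir path is the one Adrian's later follow-up confirms):

    protocol lda {
      postmaster_address = postmaster at xxx
      sendmail_path = /usr/lib/sendmail
      auth_socket_path = /var/run/dovecot/auth-master
      mail_plugins = quota
      syslog_facility = mail
      mail_location = maildir:/var/virtual/%d/%u:INDEX=MEMORY
    }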
From tss at iki.fi Fri Jan 6 15:08:26 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 15:08:26 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <4F06F0C0.30906@gmail.com> References: <4F06D5D9.20001@gmail.com> <1325849985.17774.10.camel@hurina> <4F06F0C0.30906@gmail.com> Message-ID: <1325855306.17774.21.camel@hurina> On Fri, 2012-01-06 at 15:01 +0200, Adrian Minta wrote: > > protocol lda { > > mail_location = whatever-you-have-now:INDEX=MEMORY > > } > > > I don't have mail_location under "protocol lda": Just add it there. From alexis.lelion at gmail.com Fri Jan 6 15:20:26 2012 From: alexis.lelion at gmail.com (Alexis Lelion) Date: Fri, 6 Jan 2012 14:20:26 +0100 Subject: [Dovecot] ACL with IMAP proxying In-Reply-To: <1325854644.17774.20.camel@hurina> References: <1325850528.17774.13.camel@hurina> <1325853012.17774.19.camel@hurina> <1325854644.17774.20.camel@hurina> Message-ID: It worked! Thanks a lot for your help and have a wonderful day! On Fri, Jan 6, 2012 at 1:57 PM, Timo Sirainen wrote: > Another possibility: http://wiki2.dovecot.org/PostLoginScripting > > and set MASTER_USER environment. > > On Fri, 2012-01-06 at 13:55 +0100, Alexis Lelion wrote: > > Thanks Timo. > > I'm actually using a packaged version of Dovecot 2.0 from Debian, so I > > can't apply the patch easily right now. > > I'll try do build dovecot this weekend and see if it solves the issue. > > > > Cheers > > > > Alexis > > > > On Fri, Jan 6, 2012 at 1:30 PM, Timo Sirainen wrote: > > > > > On Fri, 2012-01-06 at 13:22 +0100, Alexis Lelion wrote: > > > > > > > Thanks for your prompt answer, I wasn't expecting an answer that > soon ;-) > > > > I just tried your workaround, and actually, master_user is properly > set > > > to > > > > the username, but then is overriden with the proxy login again : > > > > > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > > mail=maildir:/var/vmail/domain/user > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > > plugin/quota=dirsize:storage=0 > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > > plugin/master_user=user > > > > Jan 6 13:14:19 mail01 dovecot: imap: Debug: Added userdb setting: > > > > plugin/master_user=proxy > > > > > > I thought it would have been the other way around.. See if > > > http://hg.dovecot.org/dovecot-2.0/raw-rev/684381041dc4 helps? > > > > > > > Is there any other flag I can set to avoid this? (Something like Y > for > > > the > > > > password)? > > > > > > Nope. > > > > > > > > > > > > From adrian.minta at gmail.com Fri Jan 6 15:25:11 2012 From: adrian.minta at gmail.com (Adrian Minta) Date: Fri, 06 Jan 2012 15:25:11 +0200 Subject: [Dovecot] howto disable indexing on dovecot-lda ? In-Reply-To: <1325855306.17774.21.camel@hurina> References: <4F06D5D9.20001@gmail.com> <1325849985.17774.10.camel@hurina> <4F06F0C0.30906@gmail.com> <1325855306.17774.21.camel@hurina> Message-ID: <4F06F637.3070504@gmail.com> On 01/06/12 15:08, Timo Sirainen wrote: > On Fri, 2012-01-06 at 15:01 +0200, Adrian Minta wrote: >>> protocol lda { >>> mail_location = whatever-you-have-now:INDEX=MEMORY >>> } >>> >> I don't have mail_location under "protocol lda": > Just add it there. > Thank you ! 
Dovecot didn't complain after restart and the "dovecot -a" reports it correctly: lda: postmaster_address: postmaster at xxx sendmail_path: /usr/lib/sendmail auth_socket_path: /var/run/dovecot/auth-master mail_plugins: quota syslog_facility: mail mail_location: maildir:/var/virtual/%d/%u:INDEX=MEMORY I will do a test with this. From yubao.liu at gmail.com Fri Jan 6 18:15:55 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Sat, 07 Jan 2012 00:15:55 +0800 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <4F06D272.5010200@bunbun.be> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> Message-ID: <4F071E3B.2060405@gmail.com> On 01/06/2012 06:52 PM, Nick Rosier wrote: > Yubao Liu wrote: >> Hi all, >> >> I have no idea about that message, here is my configuration, what's wrong? > You have 2 passdb entries; 1 with a file and 1 with pam. I'm pretty sure > PAM doesn't support DIGEST-MD5 authentication. Could be the cause of the > problem. > Thanks, that is indeed the cause. http://hg.dovecot.org/dovecot-2.0/file/684381041dc4/src/auth/auth.c 121 static bool auth_passdb_list_have_lookup_credentials(struct auth *auth) 122 { 123 struct auth_passdb *passdb; 124 125 for (passdb = auth->passdbs; passdb != NULL; passdb = passdb->next) { 126 if (passdb->passdb->iface.lookup_credentials != NULL) 127 return TRUE; 128 } 129 return FALSE; 130 } I don't know why this function doesn't check auth->masterdbs; if I insert these lines after line 128, that error goes away, and dovecot's imap-login process happily does DIGEST-MD5 authentication [1]. In my configuration, "masterdbs" contains "passdb passwd-file", "passdbs" contains "passdb pam". for (passdb = auth->masterdbs; passdb != NULL; passdb = passdb->next) { if (passdb->passdb->iface.lookup_credentials != NULL) return TRUE; } [1] But the authentication for "user*master" always fails; I realized master users can't log in as other users by DIGEST-MD5 or CRAM-MD5 auth mechanisms because these authentication mechanisms use "user*master" as the username in the hash algorithm, not just "master".
Regards, Yubao Liu >> Debian testing, Dovecot 2.0.15 >> >> $ doveconf -n >> # 2.0.15: /etc/dovecot/dovecot.conf >> # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid >> auth_default_realm = corp.example.com >> auth_krb5_keytab = /etc/dovecot.keytab >> auth_master_user_separator = * >> auth_mechanisms = gssapi digest-md5 >> auth_realms = corp.example.com >> auth_username_format = %n >> first_valid_gid = 1000 >> first_valid_uid = 1000 >> mail_location = mdbox:/srv/mail/%u/Mail >> managesieve_notify_capability = mailto >> managesieve_sieve_capability = fileinto reject envelope >> encoded-character vacation subaddress comparator-i;ascii-numeric >> relational regex imap4flags copy include variables body enotify >> environment mailbox date ihave >> passdb { >> args = /etc/dovecot/master-users >> driver = passwd-file >> master = yes >> pass = yes >> } >> passdb { >> driver = pam >> } >> plugin { >> sieve = /srv/mail/%u/.dovecot.sieve >> sieve_dir = /srv/mail/%u/sieve >> } >> protocols = " imap lmtp sieve" >> service auth { >> unix_listener auth-client { >> group = Debian-exim >> mode = 0660 >> } >> } >> ssl_cert => ssl_key => userdb { >> args = home=/srv/mail/%u >> driver = passwd >> } >> protocol lmtp { >> mail_plugins = " sieve" >> } >> protocol lda { >> mail_plugins = " sieve" >> } >> >> # cat /etc/dovecot/master-users >> xxx at corp.example.com:zzzzzzzz >> >> The zzzzz is obtained by "doveadm pw -s digest-md5 -u >> xxx at corp.example.com", >> I tried to add prefix "{DIGEST-MD5}" before the generated hash and/or add >> "scheme=DIGEST-MD5" to the passwd-file passdb's "args" option, both >> don't help. >> >> The error message: >> dovecot: master: Dovecot v2.0.15 starting up (core dumps disabled) >> dovecot: auth: Fatal: DIGEST-MD5 mechanism can't be supported with given >> passdbs >> gold dovecot: master: Error: service(auth): command startup failed, >> throttling >> >> I opened debug auth log, it showed dovecot read /etc/dovecot/master-users >> and parsed one line, then the error occurred. Doesn't passwd-file >> passdb support >> digest-md5 password scheme? If it doesn't support, how do I configure >> digest-md5 auth >> mechanism with digest-md5 password scheme for virtual users? >> >> Regards, >> Yubao Liu >> > Rgds, > N. From tss at iki.fi Fri Jan 6 18:41:48 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 18:41:48 +0200 Subject: [Dovecot] v2.0.17 released Message-ID: <1325868113.17774.28.camel@hurina> http://dovecot.org/releases/2.0/dovecot-2.0.17.tar.gz http://dovecot.org/releases/2.0/dovecot-2.0.17.tar.gz.sig Among other changes: + Proxying now supports sending SSL client certificate to server with ssl_client_cert/key settings. + doveadm dump: Added support for dumping dbox headers/metadata. - Fixed memory leaks in login processes with SSL connections - vpopmail support was broken in v2.0.16 From tss at iki.fi Fri Jan 6 18:42:07 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 18:42:07 +0200 Subject: [Dovecot] v2.1.rc2 released Message-ID: <1325868127.17774.29.camel@hurina> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz.sig Lots of fixes since rc1. Some of the changes were larger than I wanted at RC stage, but they had to be done now.. Hopefully it's all over now, and we can have v2.1.0 soon. :) Some of the more important changes: * dsync was merged into doveadm. There is still "dsync" symlink pointing to "doveadm", which you can use the old way for now. 
The preferred ways to run dsync are "doveadm sync" (for old "dsync mirror") and "doveadm backup". + IMAP SPECIAL-USE extension to describe mailboxes + Added mailbox {} sections, which deprecate autocreate plugin + lib-fs: Added "mode" parameter to "posix" backend to specify mode for created files/dirs (for mail_attachment_dir). + inet_listener names are now used to figure out what type the socket is when useful. For example naming service auth { inet_listener } to auth-client vs. auth-userdb has different behavior. + Added pop3c (= POP3 client) storage backend. - LMTP proxying code was simplified, hopefully fixing its problems. - dsync: Don't remove user's subscriptions for subscriptions=no namespaces. From tss at iki.fi Fri Jan 6 18:44:44 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 18:44:44 +0200 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <4F071E3B.2060405@gmail.com> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> Message-ID: <1325868288.17774.30.camel@hurina> On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote: > I don't know why this function doesn't check auth->masterdbs, if I > insert these lines after line 128, that error goes away, and dovecot's > imap-login process happily does DIGEST-MD5 authentication [1]. > In my configuration, "masterdbs" contains "passdb passwd-file", > "passdbs" contains " passdb pam". So .. you want DIGEST-MD5 authentication for the master users, but not for anyone else? I hadn't really thought anyone would want that.. From lists at luigirosa.com Fri Jan 6 19:13:20 2012 From: lists at luigirosa.com (Luigi Rosa) Date: Fri, 06 Jan 2012 18:13:20 +0100 Subject: [Dovecot] v2.1.rc2 released In-Reply-To: <1325868127.17774.29.camel@hurina> References: <1325868127.17774.29.camel@hurina> Message-ID: <4F072BB0.7040507@luigirosa.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Timo Sirainen said the following on 06/01/12 17:42: > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz Making all in doveadm make[3]: Entering directory `/usr/src/dovecot-2.1.rc2/src/doveadm' Making all in dsync make[4]: Entering directory `/usr/src/dovecot-2.1.rc2/src/doveadm/dsync' gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../../src/lib -I../../../src/lib-test - -I../../../src/lib-settings -I../../../src/lib-master -I../../../src/lib-mail - -I../../../src/lib-imap -I../../../src/lib-index -I../../../src/lib-storage - -I../../../src/doveadm -std=gnu99 -g -O2 -Wall -W -Wmissing-prototypes - -Wmissing-declarations -Wpointer-arith -Wchar-subscripts -Wformat=2 - -Wbad-function-cast -Wstrict-aliasing=2 -I/usr/kerberos/include -MT doveadm-dsync.o -MD -MP -MF .deps/doveadm-dsync.Tpo -c -o doveadm-dsync.o doveadm-dsync.c doveadm-dsync.c:17:27: error: doveadm-dsync.h: No such file or directory doveadm-dsync.c:386: warning: no previous prototype for ?doveadm_dsync_main? make[4]: *** [doveadm-dsync.o] Error 1 make[4]: Leaving directory `/usr/src/dovecot-2.1.rc2/src/doveadm/dsync' make[3]: *** [all-recursive] Error 1 make[3]: Leaving directory `/usr/src/dovecot-2.1.rc2/src/doveadm' make[2]: *** [all-recursive] Error 1 make[2]: Leaving directory `/usr/src/dovecot-2.1.rc2/src' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/usr/src/dovecot-2.1.rc2' make: *** [all] Error 2 In fact the file doveadm-dsync.h is not in the tarball Ciao, luigi - -- / +--[Luigi Rosa]-- \ Non cercare di vincere mai un gatto in testardaggine. --Robert A. 
Heinlein -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk8HK68ACgkQ3kWu7Tfl6ZRCkgCgwUGMxj12NBI3p8FO0W2AIBwW uSAAn3YuEAtm5ulsvWaPuPeylK2e/Vpc =kzD0 -----END PGP SIGNATURE----- From yubao.liu at gmail.com Fri Jan 6 19:29:14 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Sat, 07 Jan 2012 01:29:14 +0800 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <1325868288.17774.30.camel@hurina> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> Message-ID: <4F072F6A.8050801@gmail.com> On 01/07/2012 12:44 AM, Timo Sirainen wrote: > On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote: > >> I don't know why this function doesn't check auth->masterdbs, if I >> insert these lines after line 128, that error goes away, and dovecot's >> imap-login process happily does DIGEST-MD5 authentication [1]. >> In my configuration, "masterdbs" contains "passdb passwd-file", >> "passdbs" contains " passdb pam". > So .. you want DIGEST-MD5 authentication for the master users, but not > for anyone else? I hadn't really thought anyone would want that.. > I hope users use GSSAPI authentication from native MUA, but RoundCube webmail doesn't support that, so that I have to use DIGEST-MD5/CRAM-MD5/ PLAIN/LOGIN for authentication between RoundCube and Dovecot, and let RoundCube login as master user for normal user. I really don't like to transfer password as plain text, so I prefer DIGEST-MD5 and CRAM-MD5 for both auth mechanisms and password schemes. My last email is partially wrong, DIGEST-MD5 can't be used for master users because 'real_user*master_user' is used to calculate digest in IMAP client, this can't be consistent with digest in passdb because only 'master_user' is used to calculate digest. But CRAM-MD5 doesn't use user name to calculate digest, I just tried it successfully with my rude patch to src/auth/auth.c in my previous email:-) # doveadm pw -s CRAM-MD5 -u webmail (use 123456 as passwd) # cat > /etc/dovecot/master-users webmail:{CRAM-MD5}dd59f669267e9bb13d42a1ba57c972c5b13a4b2ae457c9ada8035dc7d8bae41b ^D $ gsasl --imap imap.corp.example.com --verbose -m CRAM-MD5 -a 'dieken*webmail at corp.example.com' -p 123456 Trying `gold.corp.example.com'... * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS LOGINDISABLED AUTH=GSSAPI AUTH=DIGEST-MD5 AUTH=CRAM-MD5] Dovecot ready. . CAPABILITY * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS LOGINDISABLED AUTH=GSSAPI AUTH=DIGEST-MD5 AUTH=CRAM-MD5 . OK Pre-login capabilities listed, post-login capabilities have more. . STARTTLS . OK Begin TLS negotiation now. . CAPABILITY * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE AUTH=GSSAPI AUTH=DIGEST-MD5 AUTH=CRAM-MD5 . OK Pre-login capabilities listed, post-login capabilities have more. . AUTHENTICATE CRAM-MD5 + PDM1OTIzODgxNjgyNzUxMjUuMTMyNTg3MDQwMkBnb2xkPg== ZGlla2VuKndlYm1haWxAY29ycC5leGFtcGxlLmNvbSBkYjRlZWJlMTUwZGZjZjg5NTVkODZhNDBlMGJiZmQzNA== * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS Client authentication finished (server trusted)... 
Enter application data (EOF to finish): It's also OK to use "-a 'dieken*webmail'" instead of "-a 'dieken*webmail at corp.example.com'. # doveconf -n # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid auth_debug = yes auth_debug_passwords = yes auth_default_realm = corp.example.com auth_krb5_keytab = /etc/dovecot.keytab auth_master_user_separator = * auth_mechanisms = gssapi digest-md5 cram-md5 auth_realms = corp.example.com auth_username_format = %n auth_verbose = yes auth_verbose_passwords = plain first_valid_gid = 1000 first_valid_uid = 1000 mail_debug = yes mail_location = mdbox:/srv/mail/%u/Mail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave passdb { args = /etc/dovecot/master-users driver = passwd-file master = yes } passdb { driver = pam } plugin { sieve = /srv/mail/%u/.dovecot.sieve sieve_dir = /srv/mail/%u/sieve } protocols = " imap lmtp sieve" service auth { unix_listener auth-client { group = Debian-exim mode = 0660 } } ssl_cert = References: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> Message-ID: On 1/6/2012 2:57 AM, Timo Sirainen wrote: > On 6.1.2012, at 12.55, Timo Sirainen wrote: > >>> Jan 4 05:17:17 bubba dovecot: master: Error: service(indexer-worker): child 10896 returned error 83 (Out of memory (vsz_limit=256 MB, you may need to increase it)) >>> Jan 4 06:17:17 bubba dovecot: indexer-worker(user1 at domain.com): Fatal: pool_system_realloc(134217728): Out of memory >> The problem is clearly that index-worker's vsz_limit is too low. Increase it (or default_vsz_limit). > Although the source of the out-of-memory > > /usr/local/lib/dovecot/libdovecot.so.0(buffer_write+0x7c) [0x7f0ec1a550ec] -> /usr/local/lib/dovecot/lib21_fts_solr_plugin.so(+0x3292) [0x7f0ec024f292] -> > > is something that shouldn't really be happening. I guess the Solr plugin wastes memory unnecessarily, I'll see what I can do about it. But for now just increase vsz limit. > I set default_vsz_limit = 1024M. Those errors appear gone - but I do have messages like: Jan 6 09:22:42 bubba dovecot: indexer-worker(user1 at domain.com): Error: fts_solr: Indexing failed: 400 Illegal character ((CTRL-CHAR, code 18)) at [row,col {unknown-source}]: [482765,16] Jan 6 09:22:42 bubba dovecot: indexer-worker: Error: Google seems to indicate that Solr cannot handle "invalid" characters - and that it is the responsibility of the calling program to strip out such. A quick search shows me a both an individual character comparison in Java and a regex used for the purpose. Is there any "illegal character protection" in the Dovecot Solr plugin? -- Daniel From dmiller at amfes.com Fri Jan 6 19:35:34 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Fri, 06 Jan 2012 09:35:34 -0800 Subject: [Dovecot] FTS-Solr plugin Message-ID: Solr plugin appears to break when mailbox names have an ampersand in the name. The messages appear to indicate '&' gets translated to '&--'. -- Daniel From tss at iki.fi Fri Jan 6 19:36:41 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 19:36:41 +0200 Subject: [Dovecot] Possible mdbox corruption In-Reply-To: References: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> Message-ID: <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> On 6.1.2012, at 19.30, Daniel L. 
Miller wrote: > Jan 6 09:22:42 bubba dovecot: indexer-worker(user1 at domain.com): Error: fts_solr: Indexing failed: 400 Illegal character ((CTRL-CHAR, code 18)) at [row,col {unknown-source}]: [482765,16] > Jan 6 09:22:42 bubba dovecot: indexer-worker: Error: > > Google seems to indicate that Solr cannot handle "invalid" characters - and that it is the responsibility of the calling program to strip out such. A quick search shows me a both an individual character comparison in Java and a regex used for the purpose. Is there any "illegal character protection" in the Dovecot Solr plugin? Yes, there is. So I'm not really sure what it's complaining about. Are you using the "solr" or "solr_old" backend? From yubao.liu at gmail.com Fri Jan 6 19:45:15 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Sat, 07 Jan 2012 01:45:15 +0800 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <1325868288.17774.30.camel@hurina> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> Message-ID: <4F07332B.70708@gmail.com> On 01/07/2012 12:44 AM, Timo Sirainen wrote: > On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote: > >> I don't know why this function doesn't check auth->masterdbs, if I >> insert these lines after line 128, that error goes away, and dovecot's >> imap-login process happily does DIGEST-MD5 authentication [1]. >> In my configuration, "masterdbs" contains "passdb passwd-file", >> "passdbs" contains " passdb pam". > So .. you want DIGEST-MD5 authentication for the master users, but not > for anyone else? I hadn't really thought anyone would want that.. > Is there any special reason that master passdb isn't taken into account in src/auth/auth.c:auth_passdb_list_have_lookup_credentials() ? I feel master passdb is also a kind of passdb. http://wiki2.dovecot.org/PasswordDatabase > You can use multiple databases, so if the password doesn't match > in the first database, Dovecot checks the next one. This can be useful > if you want to easily support having both virtual users and also local > system users (see Authentication/MultipleDatabases ). This is exactly my use case, I use Kerberos for system users, I'm curious why master passdb isn't used to check "have_lookup_credentials" ability. http://wiki2.dovecot.org/Authentication/MultipleDatabases > Currently the fallback works only with the PLAIN authentication mechanism. I hope this limitation can be relaxed. Regards, Yubao Liu From tss at iki.fi Fri Jan 6 19:51:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 19:51:49 +0200 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <4F07332B.70708@gmail.com> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> <4F07332B.70708@gmail.com> Message-ID: On 6.1.2012, at 19.45, Yubao Liu wrote: > On 01/07/2012 12:44 AM, Timo Sirainen wrote: >> On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote: >> >>> I don't know why this function doesn't check auth->masterdbs, if I >>> insert these lines after line 128, that error goes away, and dovecot's >>> imap-login process happily does DIGEST-MD5 authentication [1]. >>> In my configuration, "masterdbs" contains "passdb passwd-file", >>> "passdbs" contains " passdb pam". >> So .. you want DIGEST-MD5 authentication for the master users, but not >> for anyone else? 
I hadn't really thought anyone would want that.. >> > Is there any special reason that master passdb isn't taken into > account in src/auth/auth.c:auth_passdb_list_have_lookup_credentials() ? > I feel master passdb is also a kind of passdb. I guess it could be changed. It wasn't done intentionally that way. > This is exactly my use case, I use Kerberos for system users, > I'm curious why master passdb isn't used to check "have_lookup_credentials" ability > http://wiki2.dovecot.org/Authentication/MultipleDatabases > > Currently the fallback works only with the PLAIN authentication mechanism. > > I hope this limitation can be relaxed. It might already be .. I don't remember. In any case you have only PAM passdb, so it shouldn't matter. GSSAPI isn't a passdb. From tss at iki.fi Fri Jan 6 21:40:44 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 06 Jan 2012 21:40:44 +0200 Subject: [Dovecot] v2.1.rc3 released Message-ID: <1325878845.17774.38.camel@hurina> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc3.tar.gz http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc3.tar.gz.sig Whops, rc2 was missing a file. I always run "make distcheck", which should catch these, but recently it has always failed due to clang static checking giving one "error" that I didn't really want to fix. Because of that the distcheck didn't finish and didn't check for the missing file. So, anyway, I've made clang happy again, and now that I see how bad idea it is to just ignore the failed distcheck, I won't do that again in future. :) From mailinglist at ngong.de Fri Jan 6 18:37:22 2012 From: mailinglist at ngong.de (mailinglist) Date: Fri, 06 Jan 2012 17:37:22 +0100 Subject: [Dovecot] change initial permissions on creation of mail folder Message-ID: <4F072342.1090901@ngong.de> Installed dovcot from Debian .deb file. Creating a new account for system users sets permission for user-only. Where to change initial permissions on creation of mail folder and other subdirectories. Installed dovecot using "apt-get install dovecot-imapd dovecot-pop3d". Any time when I create a new account in my mail client for a system user, Dovecot tries to create ~/mail/.imap/INBOX. The permissions for mail and .imap are set to 0700. By this permissions INBOX can not be created leading to an error message in log files. When I manualy change the permissions to 0770, INBOX is created From doctor at doctor.nl2k.ab.ca Fri Jan 6 22:12:56 2012 From: doctor at doctor.nl2k.ab.ca (The Doctor) Date: Fri, 6 Jan 2012 13:12:56 -0700 Subject: [Dovecot] v2.1.rc2 released In-Reply-To: <1325868127.17774.29.camel@hurina> References: <1325868127.17774.29.camel@hurina> Message-ID: <20120106201255.GA20598@doctor.nl2k.ab.ca> On Fri, Jan 06, 2012 at 06:42:07PM +0200, Timo Sirainen wrote: > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz.sig > > Lots of fixes since rc1. Some of the changes were larger than I wanted > at RC stage, but they had to be done now.. Hopefully it's all over now, > and we can have v2.1.0 soon. :) > > Some of the more important changes: > > * dsync was merged into doveadm. There is still "dsync" symlink > pointing to "doveadm", which you can use the old way for now. > The preferred ways to run dsync are "doveadm sync" (for old "dsync > mirror") and "doveadm backup". 
> > + IMAP SPECIAL-USE extension to describe mailboxes > + Added mailbox {} sections, which deprecate autocreate plugin > + lib-fs: Added "mode" parameter to "posix" backend to specify mode > for created files/dirs (for mail_attachment_dir). > + inet_listener names are now used to figure out what type the socket > is when useful. For example naming service auth { inet_listener } to > auth-client vs. auth-userdb has different behavior. > + Added pop3c (= POP3 client) storage backend. > - LMTP proxying code was simplified, hopefully fixing its problems. > - dsync: Don't remove user's subscriptions for subscriptions=no > namespaces. > Suggestion: Get rid of the --as-needed ld flag. This is a show stopper for me. Also, Making all in doveadm Making all in dsync gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../../src/lib -I../../../src/lib-test -I../../../src/lib-settings -I../../../src/lib-master -I../../../src/lib-mail -I../../../src/lib-imap -I../../../src/lib-index -I../../../src/lib-storage -I../../../src/doveadm -std=gnu99 -g -O2 -Wall -W -Wmissing-prototypes -Wmissing-declarations -Wpointer-arith -Wchar-subscripts -Wformat=2 -Wbad-function-cast -I/usr/contrib/include -MT doveadm-dsync.o -MD -MP -MF .deps/doveadm-dsync.Tpo -c -o doveadm-dsync.o doveadm-dsync.c doveadm-dsync.c:17:27: doveadm-dsync.h: No such file or directory doveadm-dsync.c:386: warning: no previous prototype for `doveadm_dsync_main' *** Error code 1 Stop. *** Error code 1 Stop. *** Error code 1 Stop. *** Error code 1 Stop. *** Error code 1 Looks like rc3 needed . -- Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca God, Queen and country! Never Satan President Republic! Beware AntiChrist rising! https://www.fullyfollow.me/rootnl2k Merry Christmas 2011 and Happy New Year 2012 ! From doctor at doctor.nl2k.ab.ca Fri Jan 6 22:19:14 2012 From: doctor at doctor.nl2k.ab.ca (The Doctor) Date: Fri, 6 Jan 2012 13:19:14 -0700 Subject: [Dovecot] v2.1.rc2 released In-Reply-To: <20120106201255.GA20598@doctor.nl2k.ab.ca> References: <1325868127.17774.29.camel@hurina> <20120106201255.GA20598@doctor.nl2k.ab.ca> Message-ID: <20120106201914.GC20598@doctor.nl2k.ab.ca> On Fri, Jan 06, 2012 at 01:12:56PM -0700, The Doctor wrote: > On Fri, Jan 06, 2012 at 06:42:07PM +0200, Timo Sirainen wrote: > > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz > > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc2.tar.gz.sig > > > > Lots of fixes since rc1. Some of the changes were larger than I wanted > > at RC stage, but they had to be done now.. Hopefully it's all over now, > > and we can have v2.1.0 soon. :) > > > > Some of the more important changes: > > > > * dsync was merged into doveadm. There is still "dsync" symlink > > pointing to "doveadm", which you can use the old way for now. > > The preferred ways to run dsync are "doveadm sync" (for old "dsync > > mirror") and "doveadm backup". > > > > + IMAP SPECIAL-USE extension to describe mailboxes > > + Added mailbox {} sections, which deprecate autocreate plugin > > + lib-fs: Added "mode" parameter to "posix" backend to specify mode > > for created files/dirs (for mail_attachment_dir). > > + inet_listener names are now used to figure out what type the socket > > is when useful. For example naming service auth { inet_listener } to > > auth-client vs. auth-userdb has different behavior. > > + Added pop3c (= POP3 client) storage backend. > > - LMTP proxying code was simplified, hopefully fixing its problems. 
> > - dsync: Don't remove user's subscriptions for subscriptions=no > > namespaces. > > > > > Suggestion: > > Get rid of the --as-needed ld flag. This is a show stopper for me. > > Also, > > Making all in doveadm > Making all in dsync > gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../../src/lib -I../../../src/lib-test -I../../../src/lib-settings -I../../../src/lib-master -I../../../src/lib-mail -I../../../src/lib-imap -I../../../src/lib-index -I../../../src/lib-storage -I../../../src/doveadm -std=gnu99 -g -O2 -Wall -W -Wmissing-prototypes -Wmissing-declarations -Wpointer-arith -Wchar-subscripts -Wformat=2 -Wbad-function-cast -I/usr/contrib/include -MT doveadm-dsync.o -MD -MP -MF .deps/doveadm-dsync.Tpo -c -o doveadm-dsync.o doveadm-dsync.c > doveadm-dsync.c:17:27: doveadm-dsync.h: No such file or directory > doveadm-dsync.c:386: warning: no previous prototype for `doveadm_dsync_main' > *** Error code 1 > > Stop. > *** Error code 1 > > Stop. > *** Error code 1 > > Stop. > *** Error code 1 > > Stop. > *** Error code 1 > > > Looks like rc3 needed . > Just noted your rc3 notice. Can you get an rc4 going where the above 2 mentions are fixed? > -- > Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca > God, Queen and country! Never Satan President Republic! Beware AntiChrist rising! > https://www.fullyfollow.me/rootnl2k > Merry Christmas 2011 and Happy New Year 2012 ! -- Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca God, Queen and country! Never Satan President Republic! Beware AntiChrist rising! https://www.fullyfollow.me/rootnl2k Merry Christmas 2011 and Happy New Year 2012 ! From tss at iki.fi Fri Jan 6 22:24:45 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 22:24:45 +0200 Subject: [Dovecot] v2.1.rc2 released In-Reply-To: <20120106201914.GC20598@doctor.nl2k.ab.ca> References: <1325868127.17774.29.camel@hurina> <20120106201255.GA20598@doctor.nl2k.ab.ca> <20120106201914.GC20598@doctor.nl2k.ab.ca> Message-ID: <01600D7A-F1E9-4DD9-8182-B3A5CB9A2859@iki.fi> On 6.1.2012, at 22.19, The Doctor wrote: >> doveadm-dsync.c:17:27: doveadm-dsync.h: No such file or directory >> doveadm-dsync.c:386: warning: no previous prototype for `doveadm_dsync_main' >> *** Error code 1 >> Looks like rc3 needed . >> > > Just noted your rc3 notice. > > Can you get an rc4 going where the above 2 mentions are fixed? rc3 fixes these. From dmiller at amfes.com Fri Jan 6 22:32:54 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Fri, 06 Jan 2012 12:32:54 -0800 Subject: [Dovecot] Possible mdbox corruption In-Reply-To: <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> References: <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> Message-ID: On 1/6/2012 9:36 AM, Timo Sirainen wrote: > On 6.1.2012, at 19.30, Daniel L. Miller wrote: > >> Jan 6 09:22:42 bubba dovecot: indexer-worker(user1 at domain.com): Error: fts_solr: Indexing failed: 400 Illegal character ((CTRL-CHAR, code 18)) at [row,col {unknown-source}]: [482765,16] >> Jan 6 09:22:42 bubba dovecot: indexer-worker: Error: >> >> Google seems to indicate that Solr cannot handle "invalid" characters - and that it is the responsibility of the calling program to strip out such. A quick search shows me a both an individual character comparison in Java and a regex used for the purpose. Is there any "illegal character protection" in the Dovecot Solr plugin? > Yes, there is. So I'm not really sure what it's complaining about. 
Are you using the "solr" or "solr_old" backend? > > "Solr". plugin { fts = solr fts_solr = url=http://localhost:8983/solr/ } -- Daniel From david at paperclipsystems.com Fri Jan 6 22:44:51 2012 From: david at paperclipsystems.com (David Egbert) Date: Fri, 06 Jan 2012 13:44:51 -0700 Subject: [Dovecot] failed: Too many levels of symbolic links Message-ID: <4F075D43.8090706@paperclipsystems.com> All, My dovecot install works great except for one error I keep seeing this in my logs. The folder has 7138 messages in it. I am informed the user they needed to reduce the number of messages in the folder and believe this will fix the problem. My question is about where the problem lies. Is the problem related to an internal limit with Dovecot v2.0.15 or with my Debian (3.1.0-1-amd64)? Thanks --- dovecot: imap(XXXXX at XXXXX.com): Error: readdir(/XXXX/XXXX/XXXXXXXXX/XXXXX/XXXXXXX/XXXXXXXXXXXXXXXXXXX/XXX) failed: Too many levels of symbolic links David Egbert Paperclip Systems, LLC --- This message, its contents, and attachments are confidential and are only authorized for the intended recipient. Disclosure, re-distribution, or use of said information is strictly prohibited, and may be excluded from disclosure by applicable law. If you are not the intended recipient, or their intermediary, please notify the sender and delete this message. From tss at iki.fi Fri Jan 6 23:16:33 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 23:16:33 +0200 Subject: [Dovecot] failed: Too many levels of symbolic links In-Reply-To: <4F075D43.8090706@paperclipsystems.com> References: <4F075D43.8090706@paperclipsystems.com> Message-ID: <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> On 6.1.2012, at 22.44, David Egbert wrote: > dovecot: imap(XXXXX at XXXXX.com): Error: readdir(/XXXX/XXXX/XXXXXXXXX/XXXXX/XXXXXXX/XXXXXXXXXXXXXXXXXXX/XXX) failed: Too many levels of symbolic links You have a symlink loop. Either a symlink that points to itself or one of the parent directories. From e-frog at gmx.de Fri Jan 6 23:25:49 2012 From: e-frog at gmx.de (e-frog) Date: Fri, 06 Jan 2012 22:25:49 +0100 Subject: [Dovecot] 2.1.rc1 (056934abd2ef): virtual plugin mailbox search pattern In-Reply-To: <4EF4BB6C.3050902@gmx.de> References: <4EF4BB6C.3050902@gmx.de> Message-ID: <4F0766DD.1060805@gmx.de> ON 23.12.2011 18:33, wrote e-frog: > Hello Timo, > > With dovecot 2.1.rc1 (056934abd2ef) there seems to be something wrong > with virtual plugin mailbox search patterns. > > I'm using a virtual mailbox 'unread' with the following dovecot-virtual > file > > $ cat dovecot-virtual > * > unseen > > For testing propose I created the following folders with each containing > one unread message > > INBOX, INBOX/level1 and INBOX/level1/level2 > > 2.1.rc1 (056934abd2ef) > > 1 LIST "" "*" > * LIST (\HasChildren) "/" "INBOX" > * LIST (\HasChildren) "/" "INBOX/level1" > * LIST (\HasNoChildren) "/" "INBOX/level1/level2" > * LIST (\HasChildren) "/" "virtual" > * LIST (\HasNoChildren) "/" "virtual/unread" > 1 OK List completed. > 2 STATUS "INBOX" (UNSEEN) > * STATUS "INBOX" (UNSEEN 1) > 2 OK Status completed. > 3 STATUS "INBOX/level1" (UNSEEN) > * STATUS "INBOX/level1" (UNSEEN 1) > 3 OK Status completed. > 4 STATUS "INBOX/level1/level2" (UNSEEN) > * STATUS "INBOX/level1/level2" (UNSEEN 1) > 4 OK Status completed. > 5 STATUS "virtual/unread" (UNSEEN) > * STATUS "virtual/unread" (UNSEEN 1) > 5 OK Status completed. > > Result: virtual/unread shows only 1 unseen message. Further tests showed > it's the one from INBOX. 
The mails from the deeper levels are not found. > > Downgrading to 2.0.16 restores the correct behavior: > > 1 LIST "" "*" > * LIST (\HasChildren) "/" "INBOX" > * LIST (\HasChildren) "/" "INBOX/level1" > * LIST (\HasNoChildren) "/" "INBOX/level1/level2" > * LIST (\HasChildren) "/" "virtual" > * LIST (\HasNoChildren) "/" "virtual/unread" > 1 OK List completed. > 2 STATUS "INBOX" (UNSEEN) > * STATUS "INBOX" (UNSEEN 1) > 2 OK Status completed. > 3 STATUS "INBOX/level1" (UNSEEN) > * STATUS "INBOX/level1" (UNSEEN 1) > 3 OK Status completed. > 4 STATUS "INBOX/level1/level2" (UNSEEN) > * STATUS "INBOX/level1/level2" (UNSEEN 1) > 4 OK Status completed. > 5 STATUS "virtual/unread" (UNSEEN) > * STATUS "virtual/unread" (UNSEEN 3) > 5 OK Status completed. > > Result: virtual/unread shows 3 unseen messages as it should > > The namespace configuration is as following > > namespace { > hidden = no > inbox = yes > list = yes > location = > prefix = > separator = / > subscriptions = yes > type = private > } > namespace { > location = virtual:~/virtual > prefix = virtual/ > separator = / > subscriptions = no > type = private > } > > I've also tried this with location = virtual:~/virtual:LAYOUT=maildir++ > leading to the same result. > > Thanks, > e-frog Just tested this on 2.1.rc3 and this still doesn't work like in v2.0. It seems like the search stops at the first hierarchy separator. Is there anything in addition I can do to help fix this issue? Thanks, e-frog From david at paperclipsystems.com Fri Jan 6 23:41:04 2012 From: david at paperclipsystems.com (David Egbert) Date: Fri, 06 Jan 2012 14:41:04 -0700 Subject: [Dovecot] failed: Too many levels of symbolic links In-Reply-To: <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> Message-ID: <4F076A70.3090905@paperclipsystems.com> On 1/6/2012 2:16 PM, Timo Sirainen wrote: > On 6.1.2012, at 22.44, David Egbert wrote: > >> dovecot: imap(XXXXX at XXXXX.com): Error: readdir(/XXXX/XXXX/XXXXXXXXX/XXXXX/XXXXXXX/XXXXXXXXXXXXXXXXXXX/XXX) failed: Too many levels of symbolic links > You have a symlink loop. Either a symlink that points to itself or one of the parent directories. > I thought that might have been the case, but I checked and there are no symlinks in that directory, or any of the directories above it in the path. All of the directories and files were created by dovecot. I didn't notice this in the logs until recently. The files are stored on an NFS Raid if that makes any difference. --- David Egbert From tss at iki.fi Fri Jan 6 23:51:41 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 6 Jan 2012 23:51:41 +0200 Subject: [Dovecot] failed: Too many levels of symbolic links In-Reply-To: <4F076A70.3090905@paperclipsystems.com> References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> <4F076A70.3090905@paperclipsystems.com> Message-ID: On 6.1.2012, at 23.41, David Egbert wrote: > On 1/6/2012 2:16 PM, Timo Sirainen wrote: >> On 6.1.2012, at 22.44, David Egbert wrote: >> >>> dovecot: imap(XXXXX at XXXXX.com): Error: readdir(/XXXX/XXXX/XXXXXXXXX/XXXXX/XXXXXXX/XXXXXXXXXXXXXXXXXXX/XXX) failed: Too many levels of symbolic links >> You have a symlink loop. Either a symlink that points to itself or one of the parent directories. >> > I thought that might have been the case, but I checked and there are no symlinks in that directory, or any of the directories above it in the path. All of the directories and files were created by dovecot. 
I didn't notice this in the logs until recently. The files are stored on an NFS Raid if that makes any difference. Well, then.. You have a bit too many Xes in there for me to guess which readdir() is the one failing. I guess it's /new or /cur for a Maildir? Anyway, readdir() is failing with ELOOP. Does it always fail with "Too many levels of symbolic links" or is it sometimes different? This sounds like a bug in Linux NFS client code. You can reproduce this always with this one user's Maildir? Can you do "ls" in the directory? From david at paperclipsystems.com Sat Jan 7 00:10:32 2012 From: david at paperclipsystems.com (David Egbert) Date: Fri, 06 Jan 2012 15:10:32 -0700 Subject: [Dovecot] failed: Too many levels of symbolic links In-Reply-To: References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> <4F076A70.3090905@paperclipsystems.com> Message-ID: <4F077158.4000500@paperclipsystems.com> On 1/6/2012 2:51 PM, Timo Sirainen wrote: > On 6.1.2012, at 23.41, David Egbert wrote: > >> On 1/6/2012 2:16 PM, Timo Sirainen wrote: >>> On 6.1.2012, at 22.44, David Egbert wrote: >>> >>>> dovecot: imap(XXXXX at XXXXX.com): Error: readdir(/XXXX/XXXX/XXXXXXXXX/XXXXX/XXXXXXX/XXXXXXXXXXXXXXXXXXX/XXX) failed: Too many levels of symbolic links >>> You have a symlink loop. Either a symlink that points to itself or one of the parent directories. >>> >> I thought that might have been the case, but I checked and there are no symlinks in that directory, or any of the directories above it in the path. All of the directories and files were created by dovecot. I didn't notice this in the logs until recently. The files are stored on an NFS Raid if that makes any difference. > Well, then.. You have a bit too many Xes in there for me to guess which readdir() is the one failing. I guess it's /new or /cur for a Maildir? > > Anyway, readdir() is failing with ELOOP. Does it always fail with "Too many levels of symbolic links" or is it sometimes different? This sounds like a bug in Linux NFS client code. You can reproduce this always with this one user's Maildir? Can you do "ls" in the directory? > Sorry about the X's... it is a client directory. We support many domains and their privacy is paramount. You are correct it is in the /cur directory. I can LS all of directories without problems. This user has 10+Gb in his mail box spread across 352 subscribed folders. As for the logs it is always the directory, always the same error. David Egbert From tss at iki.fi Sat Jan 7 00:30:37 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 7 Jan 2012 00:30:37 +0200 Subject: [Dovecot] failed: Too many levels of symbolic links In-Reply-To: <4F077158.4000500@paperclipsystems.com> References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> <4F076A70.3090905@paperclipsystems.com> <4F077158.4000500@paperclipsystems.com> Message-ID: <4A0E9695-E78A-487F-AE53-888D27981EF1@iki.fi> On 7.1.2012, at 0.10, David Egbert wrote: >> Anyway, readdir() is failing with ELOOP. Does it always fail with "Too many levels of symbolic links" or is it sometimes different? This sounds like a bug in Linux NFS client code. You can reproduce this always with this one user's Maildir? Can you do "ls" in the directory? >> > Sorry about the X's... it is a client directory. We support many domains and their privacy is paramount. You are correct it is in the /cur directory. I can LS all of directories without problems. This user has 10+Gb in his mail box spread across 352 subscribed folders. 
As for the logs it is always the directory, always the same error. Try the attached test program. Run it as: ./readdir /path/to/Maildir/cur Does it also give non-zero error? -------------- next part -------------- A non-text attachment was scrubbed... Name: readdir.c Type: application/octet-stream Size: 271 bytes Desc: not available URL: From yubao.liu at gmail.com Sat Jan 7 05:36:27 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Sat, 07 Jan 2012 11:36:27 +0800 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> <4F07332B.70708@gmail.com> Message-ID: <4F07BDBB.3060204@gmail.com> On 01/07/2012 01:51 AM, Timo Sirainen wrote: > On 6.1.2012, at 19.45, Yubao Liu wrote: >> On 01/07/2012 12:44 AM, Timo Sirainen wrote: >>> On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote: >>>> I don't know why this function doesn't check auth->masterdbs, if I >>>> insert these lines after line 128, that error goes away, and dovecot's >>>> imap-login process happily does DIGEST-MD5 authentication [1]. >>>> In my configuration, "masterdbs" contains "passdb passwd-file", >>>> "passdbs" contains " passdb pam". >>> So .. you want DIGEST-MD5 authentication for the master users, but not >>> for anyone else? I hadn't really thought anyone would want that.. >> Is there any special reason that master passdb isn't taken into >> account in src/auth/auth.c:auth_passdb_list_have_lookup_credentials() ? >> I feel master passdb is also a kind of passdb. > I guess it could be changed. It wasn't done intentionally that way. > I guess this change broke old way: http://hg.dovecot.org/dovecot-2.0/rev/b05793c609ac In old version, "auth->passdbs" contains all passdbs, this revision changes "auth->passdbs" to only contain non-master passdbs. I'm not sure which fix is better or even my proposal is correct or fully: a) in src/auth/auth.c:auth_passdb_preinit(), insert master passdb to auth->passdbs too, and remove duplicate code for masterdbs in auth_init() and auth_deinit(). b) add similar code for masterdbs in auth_passdb_list_have_verify_plain(), auth_passdb_list_have_lookup_credentials(), auth_passdb_list_have_set_credentials(). >> This is exactly my use case, I use Kerberos for system users, >> I'm curious why master passdb isn't used to check "have_lookup_credentials" ability >> http://wiki2.dovecot.org/Authentication/MultipleDatabases >>> Currently the fallback works only with the PLAIN authentication mechanism. >> I hope this limitation can be relaxed. > It might already be .. I don't remember. In any case you have only PAM passdb, so it shouldn't matter. GSSAPI isn't a passdb. If the fix above is added, then I can use CRAM-MD5 with master passwd-file passdb and normal pam passdb, else imap-login process can't startup due to check in auth_mech_list_verify_passdb(). Attached two patches against dovecot-2.0 branch for the two schemes, the first is cleaner but may affect other logics in other source files. 
Another related question is "pass" option in master passdb, if I set it to "yes", the authentication fails: Jan 7 11:26:00 gold dovecot: auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011secured#011lip=127.0.1.1#011rip=127.0.0.1#011lport=143#011rport=51771 Jan 7 11:26:00 gold dovecot: auth: Debug: client out: CONT#0111#011PDk4NjcwMDY1MTU3NzI3MjguMTMyNTkwNjc2MEBnb2xkPg== Jan 7 11:26:00 gold dovecot: auth: Debug: client in: CONT#0111#011ZGlla2VuKndlYm1haWwgYmNkMzFiMWE1YjQ1OWQ0OGRkZWQ4ZmIzZDhmMjVhZTc= Jan 7 11:26:00 gold dovecot: auth: Debug: auth(webmail,127.0.0.1,master): Master user lookup for login: dieken Jan 7 11:26:00 gold dovecot: auth: Debug: passwd-file(webmail,127.0.0.1,master): lookup: user=webmail file=/etc/dovecot/master-users Jan 7 11:26:00 gold dovecot: auth: passdb(webmail,127.0.0.1,master): Master user logging in as dieken Jan 7 11:26:00 gold dovecot: auth: Error: passdb(dieken,127.0.0.1): No passdbs support skipping password verification - pass=yes can't be used in master passdb Jan 7 11:26:00 gold dovecot: auth: Debug: password(dieken,127.0.0.1): passdb doesn't support credential lookups My normal passdb is a PAM passdb, it doesn't support credential lookups, that's reasonable, but I feel the comment for "pass" option is confusing: $ less /etc/dovecot/conf.d/auth-master.conf.ext .... # Example master user passdb using passwd-file. You can use any passdb though. passdb { driver = passwd-file master = yes args = /etc/dovecot/master-users # Unless you're using PAM, you probably still want the destination user to # be looked up from passdb that it really exists. pass=yes does that. pass = yes } According the comment, it's to check whether the real user exists, why not to check userdb but another passdb? Even it must check against passdb, in this case, it's obvious not necessary to lookup credentials, it's enough to to lookup user name only. Regards, Yubao Liu -------------- next part -------------- A non-text attachment was scrubbed... Name: schemeA-count-master-passdb-as-passdb-too.patch Type: text/x-patch Size: 1357 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: schemeB-also-check-against-master-passdbs.patch Type: text/x-patch Size: 1187 bytes Desc: not available URL: From phil at kernick.org Sat Jan 7 02:21:53 2012 From: phil at kernick.org (Phil Kernick) Date: Sat, 07 Jan 2012 10:51:53 +1030 Subject: [Dovecot] Attribute Cache flush errors on FreeBSD 8.2 Message-ID: <4F079021.4090001@kernick.org> I'm running dovecot 2.0.16 on FreeBSD 8.2 with the mail spool and indexes on an NFS server. Lines like the following keep appearing in syslog for access to each mailbox: Error: nfs_flush_attr_cache_fd_locked: fchown(/home/philk/Mail/Deleted) failed: Bad file descriptor This is coming from nfs-workarounds.c line 210, which tracing back seems to be coming from the call to mbox_lock on lib-storage/index/mbox/mbox-lock.c line 774. I have /home mounted with options acregmin=0,acregmax=0,acdirmin=0,acdirmax=0 (as FreeBSD doesn't have a noac option), but it throws the same error either way. The output of dovecot -n is below. Phil. 
# 2.0.16: /usr/local/etc/dovecot/dovecot.conf # OS: FreeBSD 8.2-RELEASE-p3 i386 auth_mechanisms = plain login auth_username_format = %Lu disable_plaintext_auth = no first_valid_gid = 1000 first_valid_uid = 1000 listen = *, [::] mail_fsync = always mail_location = mbox:~/Mail/:INBOX=/var/mail/%u mail_nfs_index = yes mail_nfs_storage = yes mail_privileged_group = mail mmap_disable = yes passdb { args = session=yes dovecot driver = pam } protocols = imap pop3 service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix } user = root } ssl_cert = Hi *, I am currently in the planning stage for a "new and improved" mail system at my university. Right now, everything is on one big backend server but this is causing me increasing amounts of pain, beginning with the time a full backup takes. So naturally, I want to split this big server into smaller ones. To keep things simple, I want to pin a user to a server so I can avoid things like NFS or cluster aware filesystems. The mapping for each account is then inserted into the LDAP object for each user and the frontend proxy (perdition at the moment) then uses this information to route each access to the correct backend storage server running dovecot. So far this has been working nice with my test setup. But: I also have to provide shared folders for users. Thankfully users don't have the right to share their own folders, which makes things easier (I hope). Right now, the setup works like this, using Courier: - complete virtual mail setup - global shared folders configured in /etc/courier/shared/index - inside /home/shared-folder-name/Maildir/courierimapacl specific user get access to a folder - each folder a user has access is mapped to the namespace #shared like #shared.shared-folder-name Now, if I split my backend storage server into multiple ones and user-A is on server-1 and user-B is on server-2, but both need to access the same shared folder, I have a problem. I could of course move all users needing access to a shared folder to the same server, but in the end, this will be a nightmare for me, because I forsee having to move users around on a daily basis. Right now, I am pondering with using an additional server with just the shared folders on it and using NFS (or a cluster FS) to mount the shared folder filesystem to each backend storage server, so each user has potential access to a shared folders data. Ideas? Suggestions? Nudges in the right direction? Gr??e, Sven. -- Sigmentation fault. Core dumped. From stan at hardwarefreak.com Sun Jan 8 02:35:37 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Sat, 07 Jan 2012 18:35:37 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <68fd4hi9kbv8@mids.svenhartge.de> References: <68fd4hi9kbv8@mids.svenhartge.de> Message-ID: <4F08E4D9.1090203@hardwarefreak.com> On 1/7/2012 4:20 PM, Sven Hartge wrote: > Hi *, > > I am currently in the planning stage for a "new and improved" mail > system at my university. > > Right now, everything is on one big backend server but this is causing > me increasing amounts of pain, beginning with the time a full backup > takes. You failed to mention your analysis and diagnosis identifying the source of the slow backup, and other issues your eluded to but didn't mention specifically. 
You also didn't mention how you're doing this full backup (tar, IMAP; D2D or tape), where the backup bottleneck is, what mailbox storage format you're using, total mailbox count and filesystem space occupied. What is your disk storage configuration? Direct attach? Hardware or software RAID? What RAID level? How many disks? SAS or SATA? It's highly likely your problems can be solved without the drastic architecture change, and new problems it will introduce, that you describe below. > So naturally, I want to split this big server into smaller ones. Naturally? Many OPs spend significant x/y/z resources trying to avoid the "shared nothing" storage backend setup below. > To keep things simple, I want to pin a user to a server so I can avoid > things like NFS or cluster aware filesystems. The mapping for each > account is then inserted into the LDAP object for each user and the > frontend proxy (perdition at the moment) then uses this information to > route each access to the correct backend storage server running dovecot. Splitting the IMAP workload like this isn't keeping things simple, but increases complexity, on many levels. And there's nothing wrong with NFS and cluster filesystems if they are used correctly. > So far this has been working nice with my test setup. > > But: I also have to provide shared folders for users. Thankfully users > don't have the right to share their own folders, which makes things > easier (I hope). > > Right now, the setup works like this, using Courier: > > - complete virtual mail setup > - global shared folders configured in /etc/courier/shared/index > - inside /home/shared-folder-name/Maildir/courierimapacl specific user > get access to a folder > - each folder a user has access is mapped to the namespace #shared > like #shared.shared-folder-name > > Now, if I split my backend storage server into multiple ones and user-A > is on server-1 and user-B is on server-2, but both need to access the > same shared folder, I have a problem. Yes, you do. > I could of course move all users needing access to a shared folder to > the same server, but in the end, this will be a nightmare for me, > because I forsee having to move users around on a daily basis. See my comments above. > Right now, I am pondering with using an additional server with just the > shared folders on it and using NFS (or a cluster FS) to mount the shared > folder filesystem to each backend storage server, so each user has > potential access to a shared folders data. So you're going to implement a special case of what you're desperately trying to avoid? This makes no sense. > Ideas? Suggestions? Nudges in the right direction? Yes. We need more real information. Please provide: 1. Mailbox count, total maildir file count and size 2. Average/peak concurrent user connections 3. CPU type/speed/total core count, total RAM, free RAM (incl buffers) 4. Storage configuration--total spindles, RAID level, hard or soft RAID 5. Filesystem type 6. Backup software/method 7. Operating system Instead of telling us what you think the solution to your unidentified bottleneck is and then asking "yeah or nay", tell us what the problem is and allow us to recommend solutions. This way you'll get some education and multiple solutions that may very well be a better fit, will perform better, and possibly cost less in capital outlay and administration time/effort. 
-- Stan From sven at svenhartge.de Sun Jan 8 03:55:28 2012 From: sven at svenhartge.de (Sven Hartge) Date: Sun, 8 Jan 2012 02:55:28 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> Message-ID: <78fdevu9kbv8@mids.svenhartge.de> Stan Hoeppner wrote: > It's highly likely your problems can be solved without the drastic > architecture change, and new problems it will introduce, that you > describe below. The main reason is I need to replace the hardware as its service contract ends this year and I am not able to extend it further. The box so far is fine, there are normally no problems during normal operations with speed or responsiveness towards the end-user. Sometimes, higher peak loads tend to strain the system a bit and this is starting to occur more often. First thought was to move this setup into our VMware cluster (yeah, I know, spare me the screams), since the hardware used there is way more powerfull than the hardware used now and I wouldn't have to buy new servers for my mail system (which is kind of painful to do in an universitary environment, especially in Germany, if you want to invest an amount of money above a certain amount). But then I thought about the problems with VMs this size and got to the idea with the distributed setup, splitting the one server into 4 or 6 backend servers. As I said: "idea". Other ideas making my life easier are more than welcome. >> Ideas? Suggestions? Nudges in the right direction? > Yes. We need more real information. Please provide: > 1. Mailbox count, total maildir file count and size about 10,000 Maildir++ boxes 900GB for 1300GB used, "df -i" says 11 million inodes used I know, this is very _tiny_ compared to the systems ISPs are using. > 2. Average/peak concurrent user connections IMAP: Average 800 concurrent user connections, peaking at about 1400. POP3: Average 300 concurrent user connections, peaking at about 600. > 3. CPU type/speed/total core count, total RAM, free RAM (incl buffers) Currently dual-core AMD Opteron 2210, 1.8GHz. Right now, in the middle of the night (2:30 AM here) on a Sunday, thus a low point in the usage pattern: total used free shared buffers cached Mem: 12335820 9720252 2615568 0 53112 680424 -/+ buffers/cache: 8986716 3349104 Swap: 5855676 10916 5844760 System reaches its 7 year this summer which is the end of its service contract. > 4. Storage configuration--total spindles, RAID level, hard or soft RAID RAID 6 with 12 SATA1.5 disks, external 4Gbit FC Back in 2005, a SAS enclosure was way to expensive for us to afford. > 5. Filesystem type XFS in a LVM to allow snapshots for backup I of course aligned the partions on the RAID correctly and of course created a filesystem with the correct parameters wrt. spindels, chunk size, etc. > 6. Backup software/method Full backup with Bacula, taking about 24 hours right now. Because of this, I switched to virtual full backups, only ever doing incremental and differental backups off of the real system and creating synthetic full backups inside Bacula. Works fine though, incremental taking 2 hours, differential about 4 hours. The main problem of the backup time is Maildir++. During a test, I copied the mail storage to a spare box, converted it to mdbox (50MB file size) and the backup was lightning fast compared to the Maildir++ format. 
Additonally compressing the mails inside the mdbox and not having Bacula compress them for me reduce the backup time further (and speeding up the access through IMAP and POP3). So this is the way to go, I think, regardless of which way I implement the backend mail server. > 7. Operating system Debian Linux Lenny, currently with kernel 2.6.39 > Instead of telling us what you think the solution to your unidentified > bottleneck is and then asking "yeah or nay", tell us what the problem is > and allow us to recommend solutions. I am not asking for "yay or nay", I just pointed out my idea, but I am open to other suggestions. If the general idea is to buy a new big single storage system, I am more than happy to do just this, because this will prevent any problems I might have with a distributed one before they even can occur. Maybe two HP DL180s (one for production and one as test/standby-system) with an SAS attached enclosure for storage? Keeping in mind the new system has to work for some time (again 5 to 7 years) I have to be able to extend the storage space without to much hassle. Gr??e, S? -- Sigmentation fault. Core dumped. From yubao.liu at gmail.com Sun Jan 8 04:56:33 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Sun, 08 Jan 2012 10:56:33 +0800 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <4F07BDBB.3060204@gmail.com> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> <4F07332B.70708@gmail.com> <4F07BDBB.3060204@gmail.com> Message-ID: <4F0905E1.9090603@gmail.com> Hi Timo, Did you review the patches in previous email? I tested two patches against my configuration(pasted in this thread too), they both work well. I prefer the first patch, but I'm not sure whether it breaks something else. Regards, Yubao Liu On 01/07/2012 11:36 AM, Yubao Liu wrote: > On 01/07/2012 01:51 AM, Timo Sirainen wrote: >> On 6.1.2012, at 19.45, Yubao Liu wrote: >>> On 01/07/2012 12:44 AM, Timo Sirainen wrote: >>>> On Sat, 2012-01-07 at 00:15 +0800, Yubao Liu wrote: >>>>> I don't know why this function doesn't check auth->masterdbs, if I >>>>> insert these lines after line 128, that error goes away, and >>>>> dovecot's >>>>> imap-login process happily does DIGEST-MD5 authentication [1]. >>>>> In my configuration, "masterdbs" contains "passdb passwd-file", >>>>> "passdbs" contains " passdb pam". >>>> So .. you want DIGEST-MD5 authentication for the master users, but not >>>> for anyone else? I hadn't really thought anyone would want that.. >>> Is there any special reason that master passdb isn't taken into >>> account in src/auth/auth.c:auth_passdb_list_have_lookup_credentials() ? >>> I feel master passdb is also a kind of passdb. >> I guess it could be changed. It wasn't done intentionally that way. >> > I guess this change broke old way: > http://hg.dovecot.org/dovecot-2.0/rev/b05793c609ac > > In old version, "auth->passdbs" contains all passdbs, this revision > changes "auth->passdbs" to only contain non-master passdbs. > > I'm not sure which fix is better or even my proposal is correct or fully: > a) in src/auth/auth.c:auth_passdb_preinit(), insert master passdb to > auth->passdbs too, and remove duplicate code for masterdbs > in auth_init() and auth_deinit(). > > b) add similar code for masterdbs in > auth_passdb_list_have_verify_plain(), > auth_passdb_list_have_lookup_credentials(), > auth_passdb_list_have_set_credentials(). 
>>> This is exactly my use case, I use Kerberos for system users, >>> I'm curious why master passdb isn't used to check >>> "have_lookup_credentials" ability >>> http://wiki2.dovecot.org/Authentication/MultipleDatabases >>>> Currently the fallback works only with the PLAIN authentication >>>> mechanism. >>> I hope this limitation can be relaxed. >> It might already be .. I don't remember. In any case you have only >> PAM passdb, so it shouldn't matter. GSSAPI isn't a passdb. > If the fix above is added, then I can use CRAM-MD5 with master > passwd-file passdb > and normal pam passdb, else imap-login process can't startup due to > check in > auth_mech_list_verify_passdb(). > > Attached two patches against dovecot-2.0 branch for the two schemes, > the first is cleaner but may affect other logics in other source files. > > > Another related question is "pass" option in master passdb, if I set > it to "yes", > the authentication fails: > Jan 7 11:26:00 gold dovecot: auth: Debug: client in: > AUTH#0111#011CRAM-MD5#011service=imap#011secured#011lip=127.0.1.1#011rip=127.0.0.1#011lport=143#011rport=51771 > Jan 7 11:26:00 gold dovecot: auth: Debug: client out: > CONT#0111#011PDk4NjcwMDY1MTU3NzI3MjguMTMyNTkwNjc2MEBnb2xkPg== > Jan 7 11:26:00 gold dovecot: auth: Debug: client in: > CONT#0111#011ZGlla2VuKndlYm1haWwgYmNkMzFiMWE1YjQ1OWQ0OGRkZWQ4ZmIzZDhmMjVhZTc= > Jan 7 11:26:00 gold dovecot: auth: Debug: > auth(webmail,127.0.0.1,master): Master user lookup for login: dieken > Jan 7 11:26:00 gold dovecot: auth: Debug: > passwd-file(webmail,127.0.0.1,master): lookup: user=webmail > file=/etc/dovecot/master-users > Jan 7 11:26:00 gold dovecot: auth: passdb(webmail,127.0.0.1,master): > Master user logging in as dieken > Jan 7 11:26:00 gold dovecot: auth: Error: passdb(dieken,127.0.0.1): > No passdbs support skipping password verification - pass=yes can't be > used in master passdb > Jan 7 11:26:00 gold dovecot: auth: Debug: password(dieken,127.0.0.1): > passdb doesn't support credential lookups > > My normal passdb is a PAM passdb, it doesn't support credential > lookups, that's > reasonable, but I feel the comment for "pass" option is confusing: > > $ less /etc/dovecot/conf.d/auth-master.conf.ext > .... > # Example master user passdb using passwd-file. You can use any passdb > though. > passdb { > driver = passwd-file > master = yes > args = /etc/dovecot/master-users > > # Unless you're using PAM, you probably still want the destination > user to > # be looked up from passdb that it really exists. pass=yes does that. > pass = yes > } > > According the comment, it's to check whether the real user exists, why > not > to check userdb but another passdb? Even it must check against passdb, > in this case, it's obvious not necessary to lookup credentials, it's > enough to > to lookup user name only. > > Regards, > Yubao Liu > From stan at hardwarefreak.com Sun Jan 8 15:09:00 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Sun, 08 Jan 2012 07:09:00 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <78fdevu9kbv8@mids.svenhartge.de> References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> Message-ID: <4F09956C.1030109@hardwarefreak.com> On 1/7/2012 7:55 PM, Sven Hartge wrote: > Stan Hoeppner wrote: > >> It's highly likely your problems can be solved without the drastic >> architecture change, and new problems it will introduce, that you >> describe below. 
> > The main reason is I need to replace the hardware as its service > contract ends this year and I am not able to extend it further. > > The box so far is fine, there are normally no problems during normal > operations with speed or responsiveness towards the end-user. > > Sometimes, higher peak loads tend to strain the system a bit and this is > starting to occur more often. ... > First thought was to move this setup into our VMware cluster (yeah, I > know, spare me the screams), since the hardware used there is way more > powerfull than the hardware used now and I wouldn't have to buy new > servers for my mail system (which is kind of painful to do in an > universitary environment, especially in Germany, if you want to invest > an amount of money above a certain amount). What's wrong with moving it onto VMware? This actually seems like a smart move given your description of the node hardware. It also gives you much greater backup flexibility with VCB (or whatever they call it today). You can snapshot the LUN over the SAN during off peak hours to a backup server and do the actual backup to the library at your leisure. Forgive me if the software names have changed as I've not used VMware since ESX3 back in 07. > But then I thought about the problems with VMs this size and got to the > idea with the distributed setup, splitting the one server into 4 or 6 > backend servers. Not sure what you mean by "VMs this size". Do you mean memory requirements or filesystem size? If the nodes have enough RAM that's no issue. And surely you're not thinking of using a .vmdk for the mailbox storage. You'd use an RDM SAN LUN. In fact you should be able to map in the existing XFS storage LUN and use it as is. Assuming it's not going into retirement as well. If an individual VMware node don't have sufficient RAM you could build a VM based Dovecot cluster, run these two VMs on separate nodes, and thin out the other VMs allowed to run on these nodes. Since you can't directly share XFS, build a tiny Debian NFS server VM and map the XFS LUN to it, export the filesystem to the two Dovecot VMs. You could install the Dovecot director on this NFS server VM as well. Converting from maildir to mdbox should help eliminate the NFS locking problems. I would do the conversion before migrating to this VM setup with NFS. Also, run the NFS server VM on the same physical node as one of the Dovecot servers. The NFS traffic will be a memory-memory copy instead of going over the GbE wire, decreasing IO latency and increasing performance for that Dovecot server. If it's possible to have Dovecot director or your fav load balancer weight more connections to one Deovecot node, funnel 10-15% more connections to this one. (I'm no director guru, in fact haven't use it yet). Assuming the CPUs in the VMware cluster nodes are clocked a decent amount higher than 1.8GHz I wouldn't monkey with configuring virtual smp for these two VMs, as they'll be IO bound not CPU bound. > As I said: "idea". Other ideas making my life easier are more than > welcome. I hope my suggestions contribute to doing so. :) >>> Ideas? Suggestions? Nudges in the right direction? > >> Yes. We need more real information. Please provide: > >> 1. Mailbox count, total maildir file count and size > > about 10,000 Maildir++ boxes > > 900GB for 1300GB used, "df -i" says 11 million inodes used Converting to mdbox will take a large burden off your storage, as you've seen. 
With ~1.3TB consumed of ~15TB you should have plenty of space to convert to mdbox while avoiding filesystem fragmentation. With maildir you likely didn't see heavy fragmentation due to small file sizes. With mdbox, especially at 50MB, you'll likely start seeing more fragmentation. Use this to periodically check the fragmentation level: $ xfs_db -c frag [device] -r e.g. $ xfs_db -c frag /dev/sda7 -r actual 76109, ideal 75422, fragmentation factor 0.90% I'd recommend running xfs_fsr when frag factor exceeds ~20-30%. The XFS developers recommend against running xfs_fsr too often as it can actually increases free space fragmentation while it decreases file fragmentation, especially on filesystems that are relatively full. Having heavily fragmented free space is worse than having fragmented files, as newly created files will automatically be fragged. > I know, this is very _tiny_ compared to the systems ISPs are using. Not everyone is an ISP, including me. :) >> 2. Average/peak concurrent user connections > > IMAP: Average 800 concurrent user connections, peaking at about 1400. > POP3: Average 300 concurrent user connections, peaking at about 600. > >> 3. CPU type/speed/total core count, total RAM, free RAM (incl buffers) > > Currently dual-core AMD Opteron 2210, 1.8GHz. Heheh, yeah, a bit long in the tooth, but not horribly underpowered for 1100 concurrent POP/IMAP users. Though this may be the reason for the sluggishness when you hit that 2000 concurrent user peak. Any chance you have some top output for the peak period? > Right now, in the middle of the night (2:30 AM here) on a Sunday, thus a > low point in the usage pattern: > > total used free shared buffers cached > Mem: 12335820 9720252 2615568 0 53112 680424 > -/+ buffers/cache: 8986716 3349104 > Swap: 5855676 10916 5844760 Ugh... "-m" and "-g" options exist for a reason. :) So this box has 12GB RAM, currently ~2.5GB free during off peak hours. It would be interesting to see free RAM and swap usage values during peak. That would tell use whether we're CPU or RAM starved. If both turned up clean then we'd need to look at iowait. If you're not RAM starved then moving to VMware nodes with 16/24/32GB RAM should work fine, as long as you don't stack many other VMs on top. Enabling memory dedup may help a little. > System reaches its 7 year this summer which is the end of its service > contract. Enjoy your retirement old workhorse. :) >> 4. Storage configuration--total spindles, RAID level, hard or soft RAID > > RAID 6 with 12 SATA1.5 disks, external 4Gbit FC I assume this means a LUN on a SAN array somewhere on the other end of that multi-mode cable, yes? Can you tell us what brand/model the box is? > Back in 2005, a SAS enclosure was way to expensive for us to afford. How one affords an FC SAN array but not a less expensive direct attach SAS enclosure is a mystery... :) >> 5. Filesystem type > > XFS in a LVM to allow snapshots for backup XFS is the only way to fly, IMNSHO. > I of course aligned the partions on the RAID correctly and of course > created a filesystem with the correct parameters wrt. spindels, chunk > size, etc. Which is critical for mitigating the RMW penalty of parity RAID. Speaking of which, why RAID6 for maildir? Given that your array is 90% vacant, why didn't you go with RAID10 for 3-5 times the random write performance? >> 6. Backup software/method > > Full backup with Bacula, taking about 24 hours right now. 
Because of > this, I switched to virtual full backups, only ever doing incremental > and differental backups off of the real system and creating synthetic > full backups inside Bacula. Works fine though, incremental taking 2 > hours, differential about 4 hours. Move to VMware and use VCB. You'll fall in love. > The main problem of the backup time is Maildir++. During a test, I > copied the mail storage to a spare box, converted it to mdbox (50MB > file size) and the backup was lightning fast compared to the Maildir++ > format. Well of course. You were surprised by this? How long has it been since you used mbox? mbox backs up even faster than mdbox. Why? Larger files and fewer of them. Which means the disks can actually do streaming reads, and don't have to beat their heads to death jumping all over the platters to read maildir files, which are scattered all over the place when created. Which is while maildir is described as a "random" IO workload. > Additonally compressing the mails inside the mdbox and not having Bacula > compress them for me reduce the backup time further (and speeding up the > access through IMAP and POP3). Again, no surprise here. When files exist on disk already compressed it takes less IO bandwidth to read the file data for a given actual file size. So if you have say 10MB files that compress down to 5MB, you can read twice as many files when the pipe is saturated, twice as much file data. > So this is the way to go, I think, regardless of which way I implement > the backend mail server. Which is why I asked my questions. :) mdbox would have been one of my recommendations, but you already discovered it. >> 7. Operating system > > Debian Linux Lenny, currently with kernel 2.6.39 :) Debian, XFS, Dovecot, FC SAN storage--I like your style. Lenny with 2.6.39? Is that a backport or rolled kernel? Not Squeeze? Interesting. I'm running Squeeze with rolled vanilla 2.6.38.6. It's been about 6 months so it's 'bout time I roll a new one. :) >> Instead of telling us what you think the solution to your unidentified >> bottleneck is and then asking "yeah or nay", tell us what the problem is >> and allow us to recommend solutions. > > I am not asking for "yay or nay", I just pointed out my idea, but I am > open to other suggestions. I think you've already discovered the best suggestions on your own. > If the general idea is to buy a new big single storage system, I am more > than happy to do just this, because this will prevent any problems I might > have with a distributed one before they even can occur. One box is definitely easier to administer and troubleshoot. Though I must say that even though it's more complex, I think the VM architecture I described is worth a serious look. If your current 12x1.5TB SAN array is being retired as well, you could piggy back onto the array(s) feeding the VMware farm, or expand them if necessary/possible. Adding drives is usually much cheaper than buying a new populated array chassis. Given your service contract comments it's unlikely you're the type to build your own servers. Being a hardwarefreak, I nearly always build my servers and storage from scratch. This may be worth a look merely for educational purposes. 
I just happened to have finished spec'ing out a new high volume 20TB IMAP
server recently which should handle 5000 concurrent users without breaking a
sweat, for only ~$7500 USD:

Full parts list:
http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=17069985

Summary:
2GHz 8-core 12MB L3 cache Magny Cours Opteron
SuperMicro MBD-H8SGL-O w/32GB qualified quad channel reg ECC DDR3/1333
dual Intel 82574 GbE ports
LSI 512MB PCIe 2.0 x8 RAID, 24 port SAS expander, 20x1TB 7.2k WD RE4
20 bay SAS/SATA 6G hot swap Norco chassis

Create a RAID1 pair for /boot, the root filesystem, a swap partition of say
8GB, and a 2GB partition for an external XFS log; you should have ~900GB left
for utilitarian purposes. Configure two spares. Configure the remaining 16
drives as RAID10 with a 64KB stripe size (8KB, i.e. 16 sector, strip size),
yielding 8TB net for the XFS backed mdbox mailstore. Enable the BBWC write
cache (dang, forgot the battery module, +$175). This should yield
approximately 8*150 = 1200 IOPS peak to/from disk, many thousands to BBWC,
more than plenty for 5000 concurrent users given the IO behavior of most
MUAs. Channel bond the NICs to the switch, or round robin DNS the two IPs if
you want path redundancy.

What's that? You want to support 10K users? Simply drop in another 4 sticks
of the 8GB Kingston Reg ECC RAM for 64GB total, and plug one of these into
the external SFF8088 port on the LSI card:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816133047

populated with 18 of the 1TB RE4 drives. Configure 16 drives the same as the
primary array, grow it into your existing XFS. Since you have two identical
arrays comprising the filesystem, sunit/swidth values are still valid so you
don't need to add mount options. Configure 2 drives as hot spares. The
additional 16 drive RAID10 doubles our disk IOPS to ~2400, maintaining our
concurrent user to IOPS ratio at ~4:1, and doubles our mail storage to ~16TB.
This expansion hardware will run an additional ~$6200.

Grand total to support ~10K concurrent users (maybe more) with a quality DIY
build is just over $14K USD, or ~$1.40 per mailbox. Not too bad for an
8-core, 64GB server with 32TB (raw) of hardware RAID10 mailbox storage and 38
total 1TB disks.

I haven't run the numbers for a comparable HP system, but an educated guess
says it would be quite a bit more expensive, not the server so much, but the
storage. HP's disk drive prices are outrageous, though not approaching
anywhere near the level of larceny EMC commits with its drive sales. $2400
for a $300 Seagate drive wearing an EMC cape? Please....

> Maybe two HP DL180s (one for production and one as test/standby-system)
> with an SAS attached enclosure for storage?

If you're hooked on 1U chassis (I hate em) go with the DL165 G7. If not I'd
go 2U, the DL385 G7. Magny Cours gives you more bang for the buck in this
class of machines. The performance is excellent, and, if everybody buys
Intel, AMD goes bankrupt, and then Chipzilla charges whatever it desires.
They've already been sanctioned and fined by the FTC at least twice. They
paid Intergraph $800 million in an antitrust settlement in 2000 after they
forced them out of the hardware business. They recently paid AMD $1 billion
in an antitrust settlement. They're just like Microsoft, putting competitors
out of business by any and all means necessary, even if their conduct is
illegal. Yes, I'd much rather give AMD my business, given they had superior
CPUs to Intel for many years, and their current chips are still more than
competitive. /end rant. ;)
> Keeping in mind the new system has to work for some time (again 5 to 7
> years) I have to be able to extend the storage space without too much
> hassle.

Given you're currently only using ~1.3TB of ~15TB do you really see this as
an issue? Will you be changing your policy or quotas? Will the university
double its enrollment? If not, I would think a new 12-16TB raw array would be
more than plenty. If you really want growth potential, get a SATABeast and
start with 14 2TB SATA drives. You'll still have 28 empty SAS/SATA slots in
the 4U chassis, 42 total. Max capacity is 84TB. You get dual 8Gb/s FC LC
ports and dual GbE iSCSI ports per controller, all ports active, two
controllers max. The really basic SKU runs about $20-25K USD with a single
controller and a few small drives, before institutional/educational
discounts. www.nexsan.com/satabeast

I've used the SATABlade and SATABoy models (8 and 14 drives) and really like
the simplicity of the design and the HTTP management interface. Good
products, and one of the least expensive and most feature rich in this class.

Sorry this was so windy. I am the hardwarefreak after all. :)

--
Stan

From xamiw at arcor.de Sun Jan 8 17:37:10 2012
From: xamiw at arcor.de (xamiw at arcor.de)
Date: Sun, 8 Jan 2012 16:37:10 +0100 (CET)
Subject: [Dovecot] uid / gid and systemusers
Message-ID: <1809497881.1135529.1326037030206.JavaMail.ngmail@webmail10.arcor-online.net>

Hi all,

I'm facing a problem when users (q and g in this example) log into dovecot.
Can anybody give me a hint? Thanks in advance.

George

/var/log/mail.log:
...
Jan 8 16:18:28 test dovecot: User q is missing UID (see mail_uid setting)
Jan 8 16:18:28 test dovecot: imap-login: Internal login failure (auth
failed, 1 attempts): user=, method=PLAIN, rip=AAA.BBB.CCC.DDD,
lip=EEE.FFF.GGG.HHH TLS <--- edited by me
Jan 8 16:18:28 test dovecot: dovecot: User g is missing UID (see mail_uid
setting)
Jan 8 16:18:28 test dovecot: imap-login: Internal login failure (auth
failed, 1 attempts): user=, method=PLAIN, rip=AAA.BBB.CCC.DDD,
lip=EEE.FFF.GGG.HHH TLS <--- edited by me

/etc/dovecot/dovecot.conf:
protocols = imaps
disable_plaintext_auth = yes
shutdown_clients = yes
log_timestamp = "%Y-%m-%d %H:%M:%S "
ssl = yes
ssl_cert_file = /etc/ssl/certs/dovecot.pem
ssl_key_file = /etc/ssl/private/dovecot.pem
mail_location = mbox:~/mail:INBOX=/var/mail/%u
mail_privileged_group = mail
mbox_write_locks = fnctl dotlock
auth default {
  mechanisms = plain
  passdb shadow {
  }
}

/etc/passwd:
...
g:x:1000:1000:test1,,,:/home/g:/bin/bash
q:x:1001:1001:test2,,,:/home/q:/bin/bash

/etc/group:
...
g:x:1000:
q:x:1001:

From sven at svenhartge.de Sun Jan 8 17:39:45 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Sun, 8 Jan 2012 16:39:45 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com>
Message-ID: <88fev069kbv8@mids.svenhartge.de>

Stan Hoeppner wrote:
> On 1/7/2012 7:55 PM, Sven Hartge wrote:
>> Stan Hoeppner wrote:
>>
>>> It's highly likely your problems can be solved without the drastic
>>> architecture change, and new problems it will introduce, that you
>>> describe below.
>>
>> The main reason is I need to replace the hardware as its service
>> contract ends this year and I am not able to extend it further.
>>
>> The box so far is fine; there are normally no problems during normal
>> operations with speed or responsiveness towards the end-user.
>>
>> Sometimes, higher peak loads tend to strain the system a bit, and this
>> is starting to occur more often.
> ...
>> First thought was to move this setup into our VMware cluster (yeah, I
>> know, spare me the screams), since the hardware used there is way more
>> powerful than the hardware used now and I wouldn't have to buy new
>> servers for my mail system (which is kind of painful to do in a
>> university environment, especially in Germany, if you want to invest an
>> amount of money above a certain threshold).

> What's wrong with moving it onto VMware? This actually seems like a
> smart move given your description of the node hardware. It also gives
> you much greater backup flexibility with VCB (or whatever they call it
> today). You can snapshot the LUN over the SAN during off peak hours to
> a backup server and do the actual backup to the library at your leisure.
> Forgive me if the software names have changed as I've not used VMware
> since ESX3 back in 07.

VCB as it was back in the day is dead. But yes, one of the reasons to use a
VM was to be able to easily back up the whole shebang.

>> But then I thought about the problems with VMs this size and got to the
>> idea with the distributed setup, splitting the one server into 4 or 6
>> backend servers.

> Not sure what you mean by "VMs this size". Do you mean memory
> requirements or filesystem size? If the nodes have enough RAM that's no
> issue.

Memory size. I am a bit hesitant to deploy a VM with 16GB of RAM. My cluster
nodes each have 48GB, so no problem on this side though.

> And surely you're not thinking of using a .vmdk for the mailbox
> storage. You'd use an RDM SAN LUN.

No, I was not planning to use a VMDK-backed disk for this.

> In fact you should be able to map in the existing XFS storage LUN and
> use it as is. Assuming it's not going into retirement as well.

It is going to be retired as well, as it is as old as the server. It is also
not connected to any SAN, only locally attached to the backend server. And
our VMware SAN is iSCSI based, so there is no way to plug FC-based storage
into it.

> If an individual VMware node doesn't have sufficient RAM you could build
> a VM based Dovecot cluster, run these two VMs on separate nodes, and thin
> out the other VMs allowed to run on these nodes. Since you can't
> directly share XFS, build a tiny Debian NFS server VM and map the XFS
> LUN to it, export the filesystem to the two Dovecot VMs. You could
> install the Dovecot director on this NFS server VM as well. Converting
> from maildir to mdbox should help eliminate the NFS locking problems. I
> would do the conversion before migrating to this VM setup with NFS.
> Also, run the NFS server VM on the same physical node as one of the
> Dovecot servers. The NFS traffic will be a memory-memory copy instead
> of going over the GbE wire, decreasing IO latency and increasing
> performance for that Dovecot server. If it's possible to have Dovecot
> director or your fav load balancer weight more connections to one
> Dovecot node, funnel 10-15% more connections to this one. (I'm no
> director guru, in fact haven't used it yet.)

So, this reads like my idea in the first place. Only you place all the mails
on the NFS server, whereas my idea was to just share the shared folders from
a central point and keep the normal user dirs local to the different nodes,
thus reducing the network impact for the far more common user access.
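To sketch the central export I have in mind (the hostnames are invented;
/srv/shared matches the mount point in my config draft later in this thread):

# /etc/exports on the central shared-folder VM
/srv/shared  backend1.uni.example(rw,sync,no_subtree_check)
/srv/shared  backend2.uni.example(rw,sync,no_subtree_check)

# and on each Dovecot backend, in /etc/fstab:
sharedsrv.uni.example:/srv/shared  /srv/shared  nfs  hard,intr  0  0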
> Assuming the CPUs in the VMware cluster nodes are clocked a decent
> amount higher than 1.8GHz I wouldn't monkey with configuring virtual smp
> for these two VMs, as they'll be IO bound not CPU bound.

2.3GHz for most VMware nodes.

>>>> Ideas? Suggestions? Nudges in the right direction?
>>
>>> Yes. We need more real information. Please provide:
>>
>>> 1. Mailbox count, total maildir file count and size
>>
>> about 10,000 Maildir++ boxes
>>
>> 900GB for 1300GB used, "df -i" says 11 million inodes used

> Converting to mdbox will take a large burden off your storage, as you've
> seen. With ~1.3TB consumed of ~15TB you should have plenty of space to
> convert to mdbox while avoiding filesystem fragmentation.

You got the numbers wrong. And I got a word wrong. ;) Should have read "900GB
_of_ 1300GB used": I am using 900GB of 1300GB. The disks are SATA1.5 (not
SATA3 or SATA6), as in data transfer rate. The disks each are 150GB in size,
so the maximum storage size of my underlying VG is 1500GB.

root at ms1:~# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  vg01   1   6   0 wz--n- 70.80G  40.80G
  vg02   1   1   0 wz--n-  1.45T 265.00G
  vg03   1   1   0 wz--n-  1.09T       0

Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg02-home_lv    1.2T  867G  357G  71% /home
/dev/mapper/vg03-backup_lv  1.1T  996G  122G  90% /backup

So not much wiggle room left.

But modifications to our systems have been made which allow me to
temp-disable a user, convert and move his mailbox, and re-enable him. This
allows me to move users one at a time from the old system to the new one,
without losing a mail or disrupting service too long or too often.

>> Right now, in the middle of the night (2:30 AM here) on a Sunday, thus a
>> low point in the usage pattern:
>>
>>              total       used       free     shared    buffers     cached
>> Mem:      12335820    9720252    2615568          0      53112     680424
>> -/+ buffers/cache:    8986716    3349104
>> Swap:      5855676      10916    5844760

> Ugh... "-m" and "-g" options exist for a reason. :) So this box has
> 12GB RAM, currently ~2.5GB free during off-peak hours. It would be
> interesting to see free RAM and swap usage values during peak. That
> would tell us whether we're CPU or RAM starved. If both turned up
> clean then we'd need to look at iowait. If you're not RAM starved then
> moving to VMware nodes with 16/24/32GB RAM should work fine, as long as
> you don't stack many other VMs on top. Enabling memory dedup may help a
> little.

Well, peak hours are somewhere between 10:00 and 14:00 o'clock. Will check
then.

>> System reaches its 7th year this summer, which is the end of its service
>> contract.

> Enjoy your retirement, old workhorse. :)

>>> 4. Storage configuration--total spindles, RAID level, hard or soft RAID
>>
>> RAID 6 with 12 SATA1.5 disks, external 4Gbit FC

> I assume this means a LUN on a SAN array somewhere on the other end of
> that multi-mode cable, yes? Can you tell us what brand/model the box is?

This is a Transtec Provigo 610, a 24-disk enclosure: 12 disks with 150GB
(7.2k rpm) each for the main mail storage in RAID6, and another 10 disks with
150GB (5.4k rpm) for a backup LUN. I rsnapshot my /home daily onto this local
backup (20 days of retention), because it is easier to restore from than
firing up Bacula, which has the long retention time of 90 days. But most
users need a restore of mails from $yesterday or $the_day_before.

>> Back in 2005, a SAS enclosure was way too expensive for us to afford.

> How one affords an FC SAN array but not a less expensive direct attach
> SAS enclosure is a mystery... :)

Well, it was either Parallel-SCSI or FC back then, as far as I can remember.
The price difference between the U320 version and the FC one was not that
big, and I wanted to avoid having to route those big SCSI-U320 cables through
my racks.

>>> 5. Filesystem type
>>
>> XFS in a LVM to allow snapshots for backup

> XFS is the only way to fly, IMNSHO.

>> I of course aligned the partitions on the RAID correctly and of course
>> created a filesystem with the correct parameters wrt. spindles, chunk
>> size, etc.

> Which is critical for mitigating the RMW penalty of parity RAID.
> Speaking of which, why RAID6 for maildir? Given that your array is 90%
> vacant, why didn't you go with RAID10 for 3-5 times the random write
> performance?

See above: not 1500GB disks, but 150GB ones. RAID6, because I wanted the
double security. I have been kind of burned by the previous system, and I
tend to get nervous when thinking about data loss in my mail storage, because
I know my users _will_ give me hell if that happens.

>>> 6. Backup software/method
>>
>> Full backup with Bacula, taking about 24 hours right now. Because of
>> this, I switched to virtual full backups, only ever doing incremental
>> and differential backups off of the real system and creating synthetic
>> full backups inside Bacula. Works fine though, incremental taking 2
>> hours, differential about 4 hours.

> Move to VMware and use VCB. You'll fall in love.

>> The main problem of the backup time is Maildir++. During a test, I
>> copied the mail storage to a spare box, converted it to mdbox (50MB
>> file size) and the backup was lightning fast compared to the Maildir++
>> format.

> Well of course. You were surprised by this?

No, I was not surprised by the speedup; I _knew_ mdbox would back up faster.
Just not by how much. That a backup of 100 big files is faster than a backup
of 100,000 little files is not exactly rocket science.

> How long has it been since you used mbox? mbox backs up even faster
> than mdbox. Why? Larger files and fewer of them. Which means the
> disks can actually do streaming reads, and don't have to beat their
> heads to death jumping all over the platters to read maildir files,
> which are scattered all over the place when created. Which is why
> maildir is described as a "random" IO workload.

I never used mbox as an admin. The box before the box before this one used
uw-imapd with mbox; I experienced that system as a user and it was horrific.
Most users back then had never heard of IMAP folders and just stored their
mails inside INBOX, which of course got huge. If one of those users with a
big mbox then deleted mails, it would literally lock the box up for everyone,
as uw-imapd was copying (for example) a 600MB mbox file around to delete one
mail.

Of course, this was mostly because of the crappy uw-imapd, and secondly
because of some poor design choices in the server itself (underpowered RAID
controller with too small a cache, a RAID5 setup, low RAM in the server).

So the first thing we did back then, in 2004, was to change to Courier and
convert from mbox to maildir, which made the mail system fly again, even on
the same hardware; only the disk setup changed, to RAID10.

Then we bought new hardware (the one previous to the current one), this time
with more RAM, a better RAID controller, and a smarter disk setup. We outgrew
this one really fast and a disk upgrade was not possible; it lasted only 2
years. So the next one got this external 24-disk array with 12 disks used at
deployment.
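For the record, the alignment mentioned above boils down to one mkfs option
pair. For a 12-disk RAID6 (10 data spindles) with 64KiB chunks it would look
something like this (values illustrative, not my exact ones):

$ mkfs.xfs -d su=64k,sw=10 /dev/mapper/vg02-home_lv

su being the RAID chunk size and sw the number of data-bearing spindles, so
XFS can align its allocations to full stripes.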
But Courier is showing its age and things like Sieve are only possible with
great pain, so I want to avoid it.

>> So this is the way to go, I think, regardless of which way I implement
>> the backend mail server.

> Which is why I asked my questions. :) mdbox would have been one of my
> recommendations, but you already discovered it.

And this is why I am kind of holding this upgrade back until Dovecot 2.1 is
released, as it has some optimizations here.

>>> 7. Operating system
>>
>> Debian Linux Lenny, currently with kernel 2.6.39

> :) Debian, XFS, Dovecot, FC SAN storage--I like your style. Lenny with
> 2.6.39? Is that a backport or rolled kernel? Not Squeeze?

That is a BPO kernel. Not yet Squeeze. I admin over 150 different systems
here, plus I am the main VMware and SAN admin, so upgrades take some time, at
least until I grow an extra pair of eyes and arms. ;)

And since I have been planning to re-implement the mail system for some time
now, I held the update of the storage backends back. No use in disrupting
service for the end user if I'm going to replace the whole thing with a new
one in the end.

>>> Instead of telling us what you think the solution to your unidentified
>>> bottleneck is and then asking "yeah or nay", tell us what the problem is
>>> and allow us to recommend solutions.
>>
>> I am not asking for "yay or nay", I just pointed out my idea, but I am
>> open to other suggestions.

> I think you've already discovered the best suggestions on your own.

>> If the general idea is to buy a new big single storage system, I am more
>> than happy to do just this, because this will prevent any problems I might
>> have with a distributed one before they even can occur.

> One box is definitely easier to administer and troubleshoot. Though I
> must say that even though it's more complex, I think the VM architecture
> I described is worth a serious look. If your current 12x1.5TB SAN array
> is being retired as well, you could piggyback onto the array(s) feeding
> the VMware farm, or expand them if necessary/possible. Adding drives is
> usually much cheaper than buying a new populated array chassis. Given
> your service contract comments it's unlikely you're the type to build
> your own servers. Being a hardwarefreak, I nearly always build my
> servers and storage from scratch.

Naa, I have been doing this for too long. While I am perfectly capable of
building such a server myself, I am now the kind of guy who wants to "yell"
at a vendor when their hardware fails. Which does not mean I use any
"express" package or preconfigured server; I still read the specs, pick the
parts which make the most sense for a job, and then have that one custom
built by HP or IBM or Dell or ...

PCs and servers personally built out of single parts have been nothing but a
nightmare for me. And: my coworkers need to be able to service them as well
while I am not available, and they are not the hardware aficionados I am. So
"professional" hardware with a 5 to 7 year support contract is the way to go
for me.

> If you're hooked on 1U chassis (I hate em) go with the DL165 G7. If not
> I'd go 2U, the DL385 G7. Magny Cours gives you more bang for the buck
> in this class of machines.

I have plenty of space for 2U systems and already use DL385 G7s. I am not
fixed on Intel or AMD; I'll gladly use whichever is the better fit for a
given job.

Grüße,
Sven

--
Sigmentation fault. Core dumped.
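P.S.: The per-user convert-and-move step I mentioned would look roughly like
this with dsync, once Dovecot 2.x is in place (the paths are invented for
illustration):

# temp-disable the user's logins first, then convert Maildir++ to mdbox
$ dsync -u someuser mirror mdbox:/srv/newstore/someuser/mdbox
# verify, point the user's mail_location at the new mdbox, re-enable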
From sven at svenhartge.de Sun Jan 8 22:15:22 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Sun, 8 Jan 2012 21:15:22 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de>
Message-ID:

Sven Hartge wrote:
> Stan Hoeppner wrote:

>> If an individual VMware node doesn't have sufficient RAM you could build
>> a VM based Dovecot cluster, run these two VMs on separate nodes, and thin
>> out the other VMs allowed to run on these nodes. Since you can't
>> directly share XFS, build a tiny Debian NFS server VM and map the XFS
>> LUN to it, export the filesystem to the two Dovecot VMs. You could
>> install the Dovecot director on this NFS server VM as well. Converting
>> from maildir to mdbox should help eliminate the NFS locking problems. I
>> would do the conversion before migrating to this VM setup with NFS.
>> Also, run the NFS server VM on the same physical node as one of the
>> Dovecot servers. The NFS traffic will be a memory-memory copy instead
>> of going over the GbE wire, decreasing IO latency and increasing
>> performance for that Dovecot server. If it's possible to have Dovecot
>> director or your fav load balancer weight more connections to one
>> Dovecot node, funnel 10-15% more connections to this one. (I'm no
>> director guru, in fact haven't used it yet.)

> So, this reads like my idea in the first place.

> Only you place all the mails on the NFS server, whereas my idea was to
> just share the shared folders from a central point and keep the normal
> user dirs local to the different nodes, thus reducing the network impact
> for the far more common user access.

To be a bit more concrete on this one:

a) X backend servers to which my frontend (being perdition or dovecot
director) redirects users; fixed, no random redirects.

I might start with 4 backend servers, but I can easily scale them, either
vertically by adding more RAM or vCPUs, or horizontally by adding more VMs
and reshuffling some mailboxes during the night.

Why 4 and not 2? If I'm going to build a cluster, I already have to do the
work to implement this, and with 4 backends I can distribute the load even
further without much additional administrative overhead. And the load impact
on each node gets lower with more nodes, if I am able to evenly spread my
users across those nodes (like md5'ing the username and using the first 2
bits from that to determine which node the user resides on).

b) 1 backend server for the public shared mailboxes, exporting them via NFS
to the user backend servers.

Configuration like this, from http://wiki2.dovecot.org/SharedMailboxes/Public:

,----
| # User's private mail location
| mail_location = mdbox:~/mdbox
|
| # When creating any namespaces, you must also have a private namespace:
| namespace {
|   type = private
|   separator = .
|   prefix = INBOX.
|   # location defaults to mail_location.
|   inbox = yes
| }
|
| namespace {
|   type = public
|   separator = .
|   prefix = #shared.
|   location = mdbox:/srv/shared/
|   subscriptions = no
| }
`----

With /srv/shared being the NFS mountpoint from my central public shared
mailbox server.

This setup would keep the amount of data transferred via NFS small (only a
tiny fraction of my 10,000 users have access to a shared folder, mostly
users in the IT team or in the administration of the university).

Wouldn't such a setup be the "Best of Both Worlds"? Having the main traffic
going to local disks (being RDMs), while also being able to provide shared
folders to every user who needs them, without the need to move those users
onto one server?

Grüße,
Sven.

--
Sigmentation fault. Core dumped.

From sven at svenhartge.de Sun Jan 8 23:07:11 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Sun, 8 Jan 2012 22:07:11 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de>
Message-ID:

Sven Hartge wrote:
> Sven Hartge wrote:
>> Stan Hoeppner wrote:

>>> If an individual VMware node doesn't have sufficient RAM you could build
>>> a VM based Dovecot cluster, run these two VMs on separate nodes, and
>>> thin out the other VMs allowed to run on these nodes. Since you can't
>>> directly share XFS, build a tiny Debian NFS server VM and map the XFS
>>> LUN to it, export the filesystem to the two Dovecot VMs. You could
>>> install the Dovecot director on this NFS server VM as well. Converting
>>> from maildir to mdbox should help eliminate the NFS locking problems. I
>>> would do the conversion before migrating to this VM setup with NFS.
>>> Also, run the NFS server VM on the same physical node as one of the
>>> Dovecot servers. The NFS traffic will be a memory-memory copy instead
>>> of going over the GbE wire, decreasing IO latency and increasing
>>> performance for that Dovecot server. If it's possible to have Dovecot
>>> director or your fav load balancer weight more connections to one
>>> Dovecot node, funnel 10-15% more connections to this one. (I'm no
>>> director guru, in fact haven't used it yet.)

>> So, this reads like my idea in the first place.

>> Only you place all the mails on the NFS server, whereas my idea was to
>> just share the shared folders from a central point and keep the normal
>> user dirs local to the different nodes, thus reducing the network impact
>> for the far more common user access.

> To be a bit more concrete on this one:

> a) X backend servers to which my frontend (being perdition or dovecot
> director) redirects users; fixed, no random redirects.

> I might start with 4 backend servers, but I can easily scale them,
> either vertically by adding more RAM or vCPUs, or horizontally by
> adding more VMs and reshuffling some mailboxes during the night.

> Why 4 and not 2? If I'm going to build a cluster, I already have to do
> the work to implement this, and with 4 backends I can distribute the
> load even further without much additional administrative overhead.
> And the load impact on each node gets lower with more nodes, if I am
> able to evenly spread my users across those nodes (like md5'ing the
> username and using the first 2 bits from that to determine which node
> the user resides on).

Ah, I forgot: I _already_ have the mechanisms in place to statically
redirect/route access for users to different backends, since some of the
users are already redirected to a different mail system at another location
of my university. So using this mechanism to also redirect/route users
internal to _my_ location is no big deal.

This is what got me to the idea of several independent backend storages,
without the need to share the _whole_ storage, but just the shared folders
for some users.

(Are my words making any sense? I got the feeling I'm writing German with
English words and nobody is really understanding anything ...)

Grüße,
Sven.

--
Sigmentation fault. Core dumped.
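P.S.: The "md5 the username, take 2 bits" mapping could be as dumb as this
sketch (four backends and the naming scheme are assumptions):

#!/bin/sh
# print the backend for a login, derived from the md5 of the username
user="$1"
nibble=$(printf '%s' "$user" | md5sum | cut -c1)   # first hex digit
echo "backend$(( 0x$nibble % 4 ))"                 # 0..3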
From dmiller at amfes.com Mon Jan 9 01:40:48 2012
From: dmiller at amfes.com (Daniel L. Miller)
Date: Sun, 08 Jan 2012 15:40:48 -0800
Subject: [Dovecot] Possible mdbox corruption
In-Reply-To: <4F075A76.1040807@amfes.com>
References: <4F04EDC8.6060809@amfes.com> <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com>
Message-ID:

On 1/6/2012 12:32 PM, Daniel L. Miller wrote:
> On 1/6/2012 9:36 AM, Timo Sirainen wrote:
>> On 6.1.2012, at 19.30, Daniel L. Miller wrote:
>>
>>> Jan 6 09:22:42 bubba dovecot: indexer-worker(user1 at domain.com):
>>> Error: fts_solr: Indexing failed: 400 Illegal character ((CTRL-CHAR,
>>> code 18)) at [row,col {unknown-source}]: [482765,16]
>>> Jan 6 09:22:42 bubba dovecot: indexer-worker: Error:
>>>
>>> Google seems to indicate that Solr cannot handle "invalid" characters,
>>> and that it is the responsibility of the calling program to strip them
>>> out. A quick search shows me both an individual character comparison
>>> in Java and a regex used for the purpose. Is there any "illegal
>>> character protection" in the Dovecot Solr plugin?
>> Yes, there is. So I'm not really sure what it's complaining about.
>> Are you using the "solr" or "solr_old" backend?
>>
>>
> "Solr".
>
> plugin {
>   fts = solr
>   fts_solr = url=http://localhost:8983/solr/
> }
>

Now seeing:

Jan 8 15:40:09 bubba dovecot: imap(user1 at domain.com): Error: fts_solr:
Lookup failed: 400 undefined field CC
Jan 8 15:40:09 bubba dovecot: imap: Error:

--
Daniel

From dmiller at amfes.com Mon Jan 9 01:48:29 2012
From: dmiller at amfes.com (Daniel L. Miller)
Date: Sun, 08 Jan 2012 15:48:29 -0800
Subject: [Dovecot] Solr plugin
In-Reply-To: <4F0A2980.7050003@amfes.com>
References: <4F04EDC8.6060809@amfes.com> <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com> <4F0A2980.7050003@amfes.com>
Message-ID:

On 1/8/2012 3:40 PM, Daniel L. Miller wrote:
> On 1/6/2012 12:32 PM, Daniel L. Miller wrote:
>> On 1/6/2012 9:36 AM, Timo Sirainen wrote:
>>> On 6.1.2012, at 19.30, Daniel L. Miller wrote:
>>>
>
> Jan 8 15:40:09 bubba dovecot: imap(user1 at domain.com): Error: fts_solr:
> Lookup failed: 400 undefined field CC
> Jan 8 15:40:09 bubba dovecot: imap: Error:
>
>
Looking at the Solr output, it looks like the CC parameter is being
capitalized while all the other field names are lowercase.

--
Daniel

From Ralf.Hildebrandt at charite.de Mon Jan 9 09:40:57 2012
From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt)
Date: Mon, 9 Jan 2012 08:40:57 +0100
Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well?
Message-ID: <20120109074057.GC22506@charite.de>

Today I encountered these errors:

Jan 9 08:30:06 mail dovecot: lmtp(31174, backup at backup.invalid): Error:
Log synchronization error at seq=858,offset=44672 for
/home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID 282388,
but next_uid = 282389
Jan 9 08:30:06 mail dovecot: lmtp(31819, backup at backup.invalid): Error:
Log synchronization error at seq=858,offset=44672 for
/home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID 282388,
but next_uid = 282389
Jan 9 08:30:06 mail dovecot: lmtp(32148, backup at backup.invalid): Error:
Log synchronization error at seq=858,offset=44672 for
/home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID 282388,
but next_uid = 282389

After that, the SAVEDON date for all mails was reset to today:

mail:~# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-09 | wc -l
75650
mail:~# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-08 | wc -l
0
mail:~# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-07 | wc -l
0

Before, I was running this:

vorgestern=`date -d "-2 day" +"%Y-%m-%d"`
doveadm expunge -u backup at backup.invalid mailbox INBOX SAVEDBEFORE $vorgestern
doveadm purge -u backup at backup.invalid

Is there a way of restoring the SAVEDON info?

--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebrandt at charite.de | http://www.charite.de

From junk4 at klunky.co.uk Mon Jan 9 11:16:58 2012
From: junk4 at klunky.co.uk (J4K)
Date: Mon, 09 Jan 2012 10:16:58 +0100
Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts)
Message-ID: <4F0AB08A.7050605@klunky.co.uk>

Morning everyone,

On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired. I
replaced it with a new one on the 9th of Jan. I tested this with
Thunderbird and all is well.

This morning people tell me they cannot get their email using their mobile
telephones: K9 Mail.

I have reverted the SSL cert back to the old one just in case. Thunderbird
still works.

Dovecot 1:1.2.15-7 running on Debian 6

The messages in the logs are:

Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth attempts):
rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth attempts):
rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected

In dovecot.conf I have this set:

disable_plaintext_auth = no

And the auth default mechanisms are set to:

mechanisms = plain login

What is strange is the only item that changed is the SSL cert, which has
since been changed back to the old one (which has expired... ^^).

Any ideas where I may look or change?

Regards, S

From robert at schetterer.org Mon Jan 9 11:27:26 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Mon, 09 Jan 2012 10:27:26 +0100
Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts)
In-Reply-To: <4F0AB08A.7050605@klunky.co.uk>
References: <4F0AB08A.7050605@klunky.co.uk>
Message-ID: <4F0AB2FE.8000306@schetterer.org>

Am 09.01.2012 10:16, schrieb J4K:
> Morning everyone,
>
> On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired. I
> replaced it with a new one on the 9th of Jan. I tested this with
> Thunderbird and all is well.
>
> This morning people tell me they cannot get their email using their
> mobile telephones: K9 Mail.
>
> I have reverted the SSL cert back to the old one just in case.
> Thunderbird still works.
>
> Dovecot 1:1.2.15-7 running on Debian 6
>
> The messages in the logs are:
>
> Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth
> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
> Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth
> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
>
> In dovecot.conf I have this set:
>
> disable_plaintext_auth = no
>
> And the auth default mechanisms are set to:
> mechanisms = plain login
>
> What is strange is the only item that changed is the SSL cert, which has
> since been changed back to the old one (which has expired... ^^).
>
> Any ideas where I may look or change?
>
> Regards, S

If you only changed the crt etc., and you're sure you did everything right,
perhaps you forgot to add a needed intermediate cert?

Read here:
http://www.trustico.co.uk/install/how-to-install-ssl-certificate.php

Required Intermediate Certificates (CA Certificates)

To successfully install your SSL Certificate you may be required to
install an Intermediate CA Certificate. Please review the above
installation instructions carefully to determine if an Intermediate CA
Certificate is required, how to obtain it and correctly import it into
your system. For more information please Contact Us.
Alternatively, and for systems not covered by the above installation
instructions, please use our Intermediate Certificate Wizard to find the
correct CA Certificate or Root Bundle that is required for your SSL
Certificate to function correctly. Find Out More Information

--
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria

From junk4 at klunky.co.uk Mon Jan 9 11:39:24 2012
From: junk4 at klunky.co.uk (J4K)
Date: Mon, 09 Jan 2012 10:39:24 +0100
Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts)
In-Reply-To: <4F0AB2FE.8000306@schetterer.org>
References: <4F0AB08A.7050605@klunky.co.uk> <4F0AB2FE.8000306@schetterer.org>
Message-ID: <4F0AB5CC.108@klunky.co.uk>

On 09/01/12 10:27, Robert Schetterer wrote:
> Am 09.01.2012 10:16, schrieb J4K:
>> Morning everyone,
>>
>> On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired. I
>> replaced it with a new one on the 9th of Jan. I tested this with
>> Thunderbird and all is well.
>>
>> This morning people tell me they cannot get their email using their
>> mobile telephones: K9 Mail.
>>
>> I have reverted the SSL cert back to the old one just in case.
>> Thunderbird still works.
>>
>> Dovecot 1:1.2.15-7 running on Debian 6
>>
>> The messages in the logs are:
>>
>> Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth
>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
>> Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth
>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
>>
>> In dovecot.conf I have this set:
>>
>> disable_plaintext_auth = no
>>
>> And the auth default mechanisms are set to:
>> mechanisms = plain login
>>
>> What is strange is the only item that changed is the SSL cert, which has
>> since been changed back to the old one (which has expired... ^^).
>>
>> Any ideas where I may look or change?
>>
>> Regards, S
> If you only changed the crt etc., and you're sure you did everything
> right, perhaps you forgot to add a needed intermediate cert?
>
> Read here:
> http://www.trustico.co.uk/install/how-to-install-ssl-certificate.php
>
> Required Intermediate Certificates (CA Certificates)
>
> To successfully install your SSL Certificate you may be required to
> install an Intermediate CA Certificate. Please review the above
> installation instructions carefully to determine if an Intermediate CA
> Certificate is required, how to obtain it and correctly import it into
> your system. For more information please Contact Us.
> Alternatively, and for systems not covered by the above installation
> instructions, please use our Intermediate Certificate Wizard to find the
> correct CA Certificate or Root Bundle that is required for your SSL
> Certificate to function correctly. Find Out More Information

You may have some email problems on the mobile phones because of the
certificates, while Thunderbird and webmail are fine. You have only to accept
its complaint about an unknown certificate. I am working on the certificate
problem.

From robert at schetterer.org Mon Jan 9 11:41:18 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Mon, 09 Jan 2012 10:41:18 +0100
Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts)
In-Reply-To: <4F0AB556.3040103@klunky.co.uk>
References: <4F0AB08A.7050605@klunky.co.uk> <4F0AB2FE.8000306@schetterer.org> <4F0AB556.3040103@klunky.co.uk>
Message-ID: <4F0AB63E.4020205@schetterer.org>

Am 09.01.2012 10:37, schrieb Simon Loewenthal:
> On 09/01/12 10:27, Robert Schetterer wrote:
>> Am 09.01.2012 10:16, schrieb J4K:
>>> Morning everyone,
>>>
>>> On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired. I
>>> replaced it with a new one on the 9th of Jan. I tested this with
>>> Thunderbird and all is well.
>>>
>>> This morning people tell me they cannot get their email using their
>>> mobile telephones: K9 Mail.
>>>
>>> I have reverted the SSL cert back to the old one just in case.
>>> Thunderbird still works.
>>>
>>> Dovecot 1:1.2.15-7 running on Debian 6
>>>
>>> The messages in the logs are:
>>>
>>> Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth
>>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
>>> Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth
>>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
>>>
>>> In dovecot.conf I have this set:
>>>
>>> disable_plaintext_auth = no
>>>
>>> And the auth default mechanisms are set to:
>>> mechanisms = plain login
>>>
>>> What is strange is the only item that changed is the SSL cert, which has
>>> since been changed back to the old one (which has expired... ^^).
>>>
>>> Any ideas where I may look or change?
>>>
>>> Regards, S
>> If you only changed the crt etc., and you're sure you did everything
>> right, perhaps you forgot to add a needed intermediate cert?
>>
>> Read here:
>> http://www.trustico.co.uk/install/how-to-install-ssl-certificate.php
>>
>> Required Intermediate Certificates (CA Certificates)
>>
>> To successfully install your SSL Certificate you may be required to
>> install an Intermediate CA Certificate. Please review the above
>> installation instructions carefully to determine if an Intermediate CA
>> Certificate is required, how to obtain it and correctly import it into
>> your system. For more information please Contact Us.
>> Alternatively, and for systems not covered by the above installation
>> instructions, please use our Intermediate Certificate Wizard to find the
>> correct CA Certificate or Root Bundle that is required for your SSL
>> Certificate to function correctly. Find Out More Information
> I know that the intermediate certs are messed up, which is why I rolled
> back to the old expired certificate. I did not expect an expired
> certificate to block authentication, and it does not mean that it does
> block. The problem may be elsewhere.

That might be a K9 problem (older versions) or an Android problem in older
versions; is there an ignore-SSL-failure option as a workaround?

What does Thunderbird tell you about the new cert?

But for sure, the problem may be elsewhere.

> --
> PGP is optional: 4BA78604
> simon @ klunky . org
> simon @ klunky . co.uk
> I won't accept your confidentiality
> agreement, and your Emails are kept.
> ~???~
>

--
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria

From junk4 at klunky.co.uk Mon Jan 9 11:52:22 2012
From: junk4 at klunky.co.uk (J4K)
Date: Mon, 09 Jan 2012 10:52:22 +0100
Subject: [Dovecot] dovecot: imap-login: Disconnected (no auth attempts)
In-Reply-To: <4F0AB63E.4020205@schetterer.org>
References: <4F0AB08A.7050605@klunky.co.uk> <4F0AB2FE.8000306@schetterer.org> <4F0AB556.3040103@klunky.co.uk> <4F0AB63E.4020205@schetterer.org>
Message-ID: <4F0AB8D6.8060204@klunky.co.uk>

On 09/01/12 10:41, Robert Schetterer wrote:
> Am 09.01.2012 10:37, schrieb Simon Loewenthal:
>> On 09/01/12 10:27, Robert Schetterer wrote:
>>> Am 09.01.2012 10:16, schrieb J4K:
>>>> Morning everyone,
>>>>
>>>> On the 8th of Jan the TLS/SSL certificate I use with Dovecot expired. I
>>>> replaced it with a new one on the 9th of Jan. I tested this with
>>>> Thunderbird and all is well.
>>>>
>>>> This morning people tell me they cannot get their email using their
>>>> mobile telephones: K9 Mail.
>>>>
>>>> I have reverted the SSL cert back to the old one just in case.
>>>> Thunderbird still works.
>>>>
>>>> Dovecot 1:1.2.15-7 running on Debian 6
>>>>
>>>> The messages in the logs are:
>>>>
>>>> Jan 9 10:11:37 logout dovecot: imap-login: Disconnected (no auth
>>>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
>>>> Jan 9 10:11:38 logout dovecot: imap-login: Disconnected (no auth
>>>> attempts): rip=90.131.95.130, lip=88.119.95.13, TLS: Disconnected
>>>>
>>>> In dovecot.conf I have this set:
>>>>
>>>> disable_plaintext_auth = no
>>>>
>>>> And the auth default mechanisms are set to:
>>>> mechanisms = plain login
>>>>
>>>> What is strange is the only item that changed is the SSL cert, which
>>>> has since been changed back to the old one (which has expired... ^^).
>>>>
>>>> Any ideas where I may look or change?
>>>>
>>>> Regards, S
>>> If you only changed the crt etc., and you're sure you did everything
>>> right, perhaps you forgot to add a needed intermediate cert?
>>> Alternatively, and for systems not covered by the above installation
>>> instructions, please use our Intermediate Certificate Wizard to find the
>>> correct CA Certificate or Root Bundle that is required for your SSL
>>> Certificate to function correctly. Find Out More Information
>> I know that the intermediate certs are messed up, which is why I rolled
>> back to the old expired certificate. I did not expect an expired
>> certificate to block authentication, and it does not mean that it does
>> block. The problem may be elsewhere.
> That might be a K9 problem (older versions) or an Android problem in
> older versions; is there an ignore-SSL-failure option as a workaround?
>
> What does Thunderbird tell you about the new cert?
>
> But for sure, the problem may be elsewhere.
>> --
>> PGP is optional: 4BA78604
>> simon @ klunky . org
>> simon @ klunky . co.uk
>> I won't accept your confidentiality
>> agreement, and your Emails are kept.
>> ~???~
>>

TB says unknown, and I know why: I have set the class 1 and class 2
certificate chain keys to the same value, when these should be different.
Damn, StartCom's certs are difficult to set up.

Workaround for K9 (latest version) is to go to Account Settings -> Fetching
-> Incoming Server, and click Next. It will attempt to authenticate and then
complain about the certificate. One can ignore the warning and accept the
certificate.

Cheers all.

Simon

From janm at transactionware.com Sun Jan 8 10:38:04 2012
From: janm at transactionware.com (Jan Mikkelsen)
Date: Sun, 8 Jan 2012 19:38:04 +1100
Subject: [Dovecot] Building 2.1.rc1 with cluence, but without libstemmer
In-Reply-To: <1324377324.3597.47.camel@innu>
References: <1324377324.3597.47.camel@innu>
Message-ID: <8D81449C-C294-4983-961E-17907EBDBF6A@transactionware.com>

On 20/12/2011, at 9:35 PM, Timo Sirainen wrote:
> [?]
>> and libtextcat is dovecot 2.1.rc1 intended to be used against?
>
> http://www.let.rug.nl/vannoord/TextCat/ probably.. Basically I've just
> used the libstemmer and libtextcat that are in Debian.

Hmm. That seems to have been turned into libtextcat here:
http://software.wise-guys.nl/libtextcat/

Dovecot builds against this version, so I'm hopeful it will work OK.

Thanks for the answers, I'm going to test out 2.1-rc3 tomorrow.

Regards,

Jan.

From mpapet at yahoo.com Mon Jan 9 07:34:31 2012
From: mpapet at yahoo.com (Michael Papet)
Date: Sun, 8 Jan 2012 21:34:31 -0800 (PST)
Subject: [Dovecot] Newbie: LDA Isn't Logging
Message-ID: <1326087271.17295.YahooMailClassic@web125406.mail.ne1.yahoo.com>

I did some testing on a Debian testing VM. I built 2.0.17 from sources and
copied the config straight over from the malfunctioning machine. LDA logging
worked. So it could be something about my system. But running
/usr/lib/dovecot/deliver still doesn't return a value on the command line as
documented on the wiki.

I've attached strace files from both the malfunctioning Debian packages
machine and the built-from-sources VM. Unfortunately, I'm a new strace user,
so I don't know what it all means.

Michael

--- On Tue, 1/3/12, Timo Sirainen wrote:

> From: Timo Sirainen
> Subject: Re: [Dovecot] Newbie: LDA Isn't Logging
> To: "Michael"
> Cc: dovecot at dovecot.org
> Date: Tuesday, January 3, 2012, 4:15 AM
> On Mon, 2012-01-02 at 22:48 -0800, Michael wrote:
> > Hi,
> >
> > I'm a newbie having some trouble getting deliver to log anything.
> > Related to this, there are no return values unless the -d is missing.
> > I'm using LDAP to store virtual domain and user account information.
> > Test #1: /usr/lib/dovecot/deliver -e -f mpapet at yahoo.com -d
> > zed at mailswansong.dom < bad.mail
> > Expected result: supposed to fail, there's no zed account via ldap
> > lookup and supposed to get a return code per the wiki at
> > http://wiki2.dovecot.org/LDA. Supposed to log too.
> > Actual result: nothing gets delivered, no return code, nothing is
> > logged.
>
> As in return code is 0? Something's definitely wrong there then.
>
> First check that deliver at least reads the config file. Add something
> broken in there, such as: "foo=bar" at the beginning of dovecot.conf.
> Does deliver fail now?
>
> Also running deliver via strace could show something useful.
>
>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: sources_2.0.17_strace.txt
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: malfunctioning_debian_strace.txt
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: sources_2.0.17_no-user.txt
URL:

From bind at enas.net Mon Jan 9 12:18:50 2012
From: bind at enas.net (Urban Loesch)
Date: Mon, 09 Jan 2012 11:18:50 +0100
Subject: [Dovecot] Proxy login failures
Message-ID: <4F0ABF0A.1080404@enas.net>

Hi,

I'm using two dovecot pop3/imap proxies in front of our dovecot servers.
For some days now I have been seeing many of the following errors in the
logs of the two proxy servers:

...
dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected:
Connection closed: Connection reset by peer (state=0): user=,
method=PLAIN, rip=remote-ip, lip=localip
...
dovecot: imap-login: Error: proxy: Remote "IPV6-IP":143 disconnected:
Connection closed: Connection reset by peer (state=0): user=,
method=PLAIN, rip=remote-ip, lip=localip
...

When this happens the client gets the following error from the proxy:
-ERR [IN-USE] Account is temporarily unavailable.

System details:
OS: Debian Linux
Proxy: 2.0.5-0~auto+23
Backend: 2.0.13-0~auto+54

Have you any idea what could cause this type of error?

Thanks and regards
Urban Loesch

doveconf -n from one of our backend servers:

# 2.0.13 (02d97fb66047): /etc/dovecot/dovecot.conf
# OS: Linux 2.6.38.8-vs2.3.0.37-rc17-rol-em64t-timerp x86_64 Debian 6.0.2 ext4
auth_cache_negative_ttl = 0
auth_cache_size = 40 M
auth_cache_ttl = 12 hours
auth_mechanisms = plain login
auth_username_format = %Lu
auth_verbose = yes
deliver_log_format = msgid=%m: %$ %p %w
disable_plaintext_auth = no
login_trusted_networks = our Proxy IP's (v4 and v6)
mail_gid = mailstore
mail_location = mdbox:/home/vmail/%d/%n:INDEX=/home/dovecotindex/%d/%n
mail_plugins = " quota mail_log notify zlib"
mail_uid = mailstore
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave imapflags notify
mdbox_rotate_size = 5 M
passdb {
  args = /etc/dovecot/dovecot-sql-account.conf
  driver = sql
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size from
  mail_log_group_events = no
  quota = dict:Storage used::file:%h/dovecot-quota
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  sieve_extensions = +notify +imapflags
  sieve_max_redirects = 10
  zlib_save = gz
  zlib_save_level = 5
}
protocols = imap pop3 lmtp sieve
service imap-login {
  inet_listener imap {
    port = 143
  }
  service_count = 0
  vsz_limit = 256 M
}
service lmtp {
  inet_listener lmtp {
    address = *
    port = 24
  }
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0666
    user = postfix
  }
  vsz_limit = 512 M
}
service pop3-login {
  inet_listener pop3 {
    port = 110
  }
  service_count = 0
  vsz_limit = 256 M
}
ssl = no
ssl_cert = <

References: <4F0AB08A.7050605@klunky.co.uk> <4F0AB2FE.8000306@schetterer.org>
 <4F0AB556.3040103@klunky.co.uk> <4F0AB63E.4020205@schetterer.org>
 <4F0AB8D6.8060204@klunky.co.uk>
Message-ID: <4F0AD221.6060007@arx.net>

> TB says unknown, and I know why: I have set the class 1 and class 2
> certificate chain keys to the same value, when these should be different.
> Damn, StartCom's certs are difficult to set up.

read this:
http://binblog.info/2010/02/02/lengthy-chains/

basically, you start with YOUR cert and work your way up to the root CA with

openssl x509 -in your_servers.{crt|pem} -subject -issuer > server-allinone.crt
openssl x509 -in intermediate_authority.{crt|pem} -subject -issuer >> server-allinone.crt
openssl x509 -in root_ca.{crt|pem} -subject -issuer >> server-allinone.crt

then, in dovecot.conf

---8<---
ssl_cert_file = /path/to/server-allinone.crt
ssl_key_file = /path/to/private.key
---8<---

It works for me but YMMV of course. Androids before 2.2 do not have StartCom
as a trusted CA and will complain anyhow.

Best Regards,
Thanos Chatziathanassiou

> Workaround for K9 (latest version) is to go to Account Settings ->
> Fetching -> Incoming Server, and click Next. It will attempt to
> authenticate and then complain about the certificate. One can ignore the
> warning and accept the certificate.
>
> Cheers all.
>
> Simon

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4271 bytes
Desc: S/MIME
S/MIME URL: From stan at hardwarefreak.com Mon Jan 9 14:28:55 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Mon, 09 Jan 2012 06:28:55 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <88fev069kbv8@mids.svenhartge.de> References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> Message-ID: <4F0ADD87.5080103@hardwarefreak.com> On 1/8/2012 9:39 AM, Sven Hartge wrote: > Memory size. I am a bit hesistant to deploy a VM with 16GB of RAM. My > cluster nodes each have 48GB, so no problem on this side though. Shouldn't be a problem if you're going to spread the load over 2 to 4 cluster nodes. 16/2 = 8GB per VM, 16/4 = 4GB per Dovecot VM. This, assuming you are able to evenly spread user load. > And our VMware SAN is iSCSI based, so no way to plug a FC-based storage > into it. There are standalone FC-iSCSI bridges but they're marketed to bridge FC SAN islands over an IP WAN. Director class SAN switches can connect anything to anything, just buy the cards you need. Both of these are rather pricey. These wouldn't make sense in your environment. I'm just pointing out that it can be done. > So, this reads like my idea in the first place. > > Only you place all the mails on the NFS server, whereas my idea was to > just share the shared folders from a central point and keep the normal > user dirs local to the different nodes, thus reducing network impact for > the way more common user access. To be quite honest, after thinking this through a bit, many traditional advantages of a single shared mail store start to disappear. Whether you use NFS or a clusterFS, or 'local' disk (RDMs), all IO goes to the same array, so the traditional IO load balancing advantage disappears. The other main advantage, replacing a dead hardware node, simply mapping the LUNs to the new one and booting it up, also disappears due to VMware's unique abilities, including vmotion. Efficient use of storage isn't an issue as you can just as easily slice off a small LUN to each of 2/4 Dovecot VMs as a larger one to the NFS VM. So the only disadvantages I see are with the 'local' disk RDM mailstore location. 'manual' connection/mailbox/size balancing, all increasing administrator burden. > 2.3GHz for most VMware nodes. How many total cores per VMware node (all sockets)? > You got the numbers wrong. And I got a word wrong ;) > > Should have read "900GB _of_ 1300GB used". My bad. I misunderstood. > So not much wiggle room left. And that one is retiring anyway as you state below. So do you have plenty of space on your VMware SAN arrays? If not can you add disks or do you need another array chassis? > But modifications to our systems are made, which allow me to > temp-disable a user, convert and move his mailbox and re-enable him, > which allows me to move them one at a time from the old system to the > new one, without losing a mail or disrupting service to long and often. As it should be. > This is a Transtec Provigo 610. This is a 24 disk enclosure, 12 disks > with 150GB (7.200k) each for the main mail storage in RAID6 and another > 10 disks with 150GB (5.400k) for a backup LUN. I daily rsnapshot my > /home onto this local backup (20 days of retention), because it is > easier to restore from than firing up Bacula, which has the long > retention time of 90 days. But must users need a restore of mails from > $yesterday or $the_day_before. 
And your current iSCSI SAN array(s) backing the VMware farm? Total disks? Is it monolithic, or do you have multiple array chassis from one or multiple vendors? > Well, it was either Parallel-SCSI or FC back then, as far as I can > remember. The price difference between the U320 version and the FC one > was not so big and I wanted to avoid having to route those big SCSI-U320 cables > through my racks. Can't blame you there. I take it you hadn't built the iSCSI SAN yet at that point? > See above, not 1500GB disks, but 150GB ones. RAID6, because I wanted the > double security. I have been kind of burned by the previous system and I > tend to get nervous while thinking about data loss in my mail storage, > because I know my users _will_ give me hell if that happens. And as it turns out RAID10 wouldn't have provided you enough bytes. > I never used mbox as an admin. The box before the box before this one > used uw-imapd with mbox and I experienced the system as a user and it > was horrific. Most users back then never heard of IMAP folders and just > stored their mails inside of INBOX, which of course got huge. If one of > those users with a big mbox then deleted mails, it would literally lock > the box up for everyone, as uw-imapd was copying (for example) a 600MB > mbox file around to delete one mail. Yeah, ouch. IMAP with mbox works pretty well when users are marginally smart about organizing their mail, or a POP then delete setup. I'd bet if that was maildir in that era on that box it would have slowed things way down as well. Especially if the filesystem was XFS, which had horrible, abysmal really, unlink performance until 2.6.35 (2010). > Of course, this was mostly because of the crappy uw-imapd and secondly > by some poor design choices in the server itself (underpowered RAID > controller, too small a cache and a RAID5 setup, low RAM in the server). That's a recipe for disaster. > So the first thing we did back then, in 2004, was to change to Courier > and convert from mbox to maildir, which made the mailsystem fly again, > even on the same hardware, only the disk setup changed to RAID10. I wonder how much gain you'd have seen if you stuck with RAID5 instead... > Then we bought new hardware (the one previous to the current one), this > time with more RAM, better RAID controller, smarter disk setup. We > outgrew this one really fast and a disk upgrade was not possible; it > lasted only 2 years. Did you need more space or more spindles? > But Courier is showing its age and things like Sieve are only possible > with great pain, so I want to avoid it. Don't blame ya. Lots of people migrate from Courier to Dovecot for similar reasons. > And this is why I kind of hold this upgrade back until dovecot 2.1 is > released, as it has some optimizations here. Sounds like it's going to be a bit more than an 'upgrade'. ;) > That is a BPO-kernel. Not-yet Squeeze. I admin over 150 different > systems here, plus I am the main VMware and SAN admin. So upgrades take > some time until I grow an extra pair of eyes and arms. ;) /me nods > And since I have been planning to re-implement the mailsystem for some > time now, I held the update to the storage backends back. No use in > disrupting service for the end user if I'm going to replace the whole > thing with a new one in the end. /me nods > Naa, I have been doing this for too long. While I am perfectly capable > of building such a server myself, I am now the kind of guy who wants to > "yell" at a vendor, when their hardware fails.
At your scale it would simply be impractical, and impossible from a time management standpoint. > Personally built PCs and servers out of single parts have been nothing > but a nightmare for me. I've had nothing but good luck with "DIY" systems. My background is probably a bit different than most though. Hardware has been in my blood since I was a teenager in about '86. I used to design and build relatively high end custom -48vdc white box servers and SCSI arrays for telcos back in the day, along with standard 115v servers for SMBs. Also, note the RHS of my email address. ;) That is a nickname given to me about 13 years ago. I decided to adopt it for my vanity domain. > And: my coworkers need to be able to service > them as well while I am not available, and they are not the hardware > aficionado that I am. That's the biggest reason right there. DIY is only really feasible if you run your own show, and will likely continue to be running it for a while. Or if staff is similarly skilled. Most IT folks these days aren't hardware oriented people. > So "professional" hardware with a 5 to 7 year support contract is the > way to go for me. Definitely. > I have plenty of space for 2U systems and already use DL385 G7s, I am not > fixed on Intel or AMD, I'll gladly use the one which is the most fit for > a given job. Just out of curiosity do you have any Power or SPARC systems, or all x86? -- Stan

From stan at hardwarefreak.com Mon Jan 9 15:13:40 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Mon, 09 Jan 2012 07:13:40 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> Message-ID: <4F0AE804.5070002@hardwarefreak.com> On 1/8/2012 2:15 PM, Sven Hartge wrote: > Wouldn't such a setup be the "Best of Both Worlds"? Having the main > traffic going to local disks (being RDMs) and also being able to provide > shared folders to every user who needs them without the need to move > those users onto one server? The only problems I can see at this time are:

1. Some users will have much larger mailboxes than others. Each year ~1/4 of your student population rotates, so if you manually place existing mailboxes now based on current size you have no idea who the big users are in the next freshman class, or the next. So you may have to do manual re-balancing of mailboxes, maybe frequently.

2. If you lose a Dovecot VM guest due to image file or other corruption, or some other rare cause, you can't restart that guest, but will have to build a new image from a template. This could cause either minor or significant downtime for ~1/4 of your mail users w/4 nodes. This is likely rare enough it's not worth consideration.

3. You will consume more SAN volumes and LUNs. Most arrays have a fixed number of each. May or may not be an issue.
-- Stan

From stan at hardwarefreak.com Mon Jan 9 15:38:20 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Mon, 09 Jan 2012 07:38:20 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> Message-ID: <4F0AEDCC.10109@hardwarefreak.com> On 1/8/2012 3:07 PM, Sven Hartge wrote: > Ah, I forgot: I _already_ have the mechanisms in place to statically > redirect/route accesses for users to different backends, since some of > the users are already redirected to a different mailsystem at another > location of my university. I assume you mean IMAP/POP connections, not SMTP. > So using this mechanism to also redirect/route users internal to _my_ > location is no big deal. > > This is what got me into the idea of several independent backend > storages without the need to share the _whole_ storage, but just the > shared folders for some users. > > (Are my words making any sense? I got the feeling I'm writing German with > English words and nobody is really understanding anything ...) You're making perfect sense, and frankly, if not for the .de TLD in your email address, I'd have thought you were an American. Your written English is probably better than mine, and it's my only language. To be fair to the Brits, I speak/write American English. ;) I'm guessing no one else has interest in this thread, or maybe simply lost interest as the replies have been lengthy, and not wholly Dovecot related. I accept some blame for that. -- Stan

From sven at svenhartge.de Mon Jan 9 15:48:22 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 14:48:22 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0ADD87.5080103@hardwarefreak.com> Message-ID: <08fhdkhkv5v8@mids.svenhartge.de> Stan Hoeppner wrote: > On 1/8/2012 9:39 AM, Sven Hartge wrote: >> Memory size. I am a bit hesitant to deploy a VM with 16GB of RAM. My >> cluster nodes each have 48GB, so no problem on this side though. > Shouldn't be a problem if you're going to spread the load over 2 to 4 > cluster nodes. 16/2 = 8GB per VM, 16/4 = 4GB per Dovecot VM. This, > assuming you are able to evenly spread user load. I think I will be able to do that. If I divide my users by using a hash like MD5 or SHA1 over their username, this should give me an even distribution. >> So, this reads like my idea in the first place. >> >> Only you place all the mails on the NFS server, whereas my idea was to >> just share the shared folders from a central point and keep the normal >> user dirs local to the different nodes, thus reducing network impact for >> the way more common user access. > To be quite honest, after thinking this through a bit, many traditional > advantages of a single shared mail store start to disappear. Whether > you use NFS or a clusterFS, or 'local' disk (RDMs), all IO goes to the > same array, so the traditional IO load balancing advantage disappears. > The other main advantage, replacing a dead hardware node, simply mapping > the LUNs to the new one and booting it up, also disappears due to > VMware's unique abilities, including vmotion.
> Efficient use of storage > isn't an issue as you can just as easily slice off a small LUN to each > of 2/4 Dovecot VMs as a larger one to the NFS VM. Yes. Plus I can much more easily increase a LUN's size, if the need arises. > So the only disadvantages I see are with the 'local' disk RDM mailstore > location: 'manual' connection/mailbox/size balancing, all increasing > administrator burden. Well, I don't see size balancing as a problem since I can increase the size of the disk for a node very easily. Load should be fairly even if I distribute the 10,000 users across the nodes. Even if there is a slight imbalance, the systems should have enough power to smooth that out. I could measure the load every user creates and use that as a distribution key, but I believe this to be a wee bit over-engineered for my scenario. Initial placement of a new user will be automatic, during the activation of the account, so no administrative burden there. It seems my initial idea was not so bad after all ;) Now I "just" need to build a little test setup, put some dummy users on it and see if anything bad happens while accessing the shared folders, and how the system reacts should the shared folder server be down. >> 2.3GHz for most VMware nodes. > How many total cores per VMware node (all sockets)? 8 >> You got the numbers wrong. And I got a word wrong ;) >> >> Should have read "900GB _of_ 1300GB used". > My bad. I misunderstood. Here are the memory statistics at 14:30:

                    total       used       free     shared    buffers     cached
Mem:                12046      11199        847          0         88       7926
-/+ buffers/cache:   3185       8861
Swap:                5718         10       5707

>> So not much wiggle room left. > And that one is retiring anyway as you state below. So do you have > plenty of space on your VMware SAN arrays? If not can you add disks > or do you need another array chassis? The SAN has plenty of space. Over 70TiB at this time, with another 70TiB having just arrived and waiting to be connected. >> This is a Transtec Provigo 610. This is a 24 disk enclosure, 12 disks >> with 150GB (7.200k) each for the main mail storage in RAID6 and >> another 10 disks with 150GB (5.400k) for a backup LUN. I daily >> rsnapshot my /home onto this local backup (20 days of retention), >> because it is easier to restore from than firing up Bacula, which has >> the long retention time of 90 days. But most users need a restore of >> mails from $yesterday or $the_day_before. > And your current iSCSI SAN array(s) backing the VMware farm? Total > disks? Is it monolithic, or do you have multiple array chassis from > one or multiple vendors? The iSCSI storage nodes (HP P4500) use 600GB SAS6 at 15k rpm with 12 disks per node, configured in 2 RAID5 sets with 6 disks each. But this is internal to each storage node, which are kind of a blackbox and have to be treated as such. The HP P4500 is a bit unique, since it does not consist of a head node with storage arrays connected to it, but of individual storage nodes forming a self-balancing iSCSI cluster. (The nodes consist of DL320s G2.) So far, I had no performance or other problems with this setup and it scales quite nicely, as you buy as you grow. And again, price was also a factor: deploying a FC-SAN would have cost us more than three times what the deployment of an iSCSI solution did, because the latter is "just" Ethernet, while the former would have needed a lot of totally new components. >> Well, it was either Parallel-SCSI or FC back then, as far as I can >> remember.
>> The price difference between the U320 version and the FC one >> was not so big and I wanted to avoid having to route those big SCSI-U320 cables >> through my racks. > Can't blame you there. I take it you hadn't built the iSCSI SAN yet at > that point? No, at that time (2005/2006) nobody thought of a SAN. That is a fairly "new" idea here, first implemented for the VMware cluster in 2008. >> Then we bought new hardware (the one previous to the current one), >> this time with more RAM, better RAID controller, smarter disk setup. >> We outgrew this one really fast and a disk upgrade was not possible; >> it lasted only 2 years. > Did you need more space or more spindles? More space. The IMAP usage became more prominent, which caused a steep rise in space needed on the mail storage server. But 74GiB SCA drives were expensive and 130GiB SCA drives were not available at that time. >> And this is why I kind of hold this upgrade back until dovecot 2.1 is >> released, as it has some optimizations here. > Sounds like it's going to be a bit more than an 'upgrade'. ;) Well, yes. It is more a re-implementation than an upgrade. >> I have plenty of space for 2U systems and already use DL385 G7s, I am >> not fixed on Intel or AMD, I'll gladly use the one which is the most >> fit for a given job. > Just out of curiosity do you have any Power or SPARC systems, or all > x86? Central IT here these days only uses x86-based systems. There were some Sun SPARC systems, but they have been decommissioned. New SPARC hardware is just too expensive for our scale. And if you want to use virtualization, you can either use only SPARC systems and partition them or use x86-based systems. And then there is the need to virtualize Windows, so x86 is the only option. Most bigger universities in Germany make nearly exclusive use of SPARC systems, but they have had a central IT with big iron (IBM, HP, etc.) since back in the 1960s, so naturally they continue on that path. Regards, Sven. -- Sigmentation fault. Core dumped.

From philip at turmel.org Mon Jan 9 15:50:49 2012 From: philip at turmel.org (Phil Turmel) Date: Mon, 09 Jan 2012 08:50:49 -0500 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <4F0AEDCC.10109@hardwarefreak.com> References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AEDCC.10109@hardwarefreak.com> Message-ID: <4F0AF0B9.7030406@turmel.org> On 01/09/2012 08:38 AM, Stan Hoeppner wrote: > On 1/8/2012 3:07 PM, Sven Hartge wrote: [...] >> (Are my words making any sense? I got the feeling I'm writing German with >> English words and nobody is really understanding anything ...) > > You're making perfect sense, and frankly, if not for the .de TLD in your > email address, I'd have thought you were an American. Your written > English is probably better than mine, and it's my only language. To be > fair to the Brits, I speak/write American English. ;) Concur. My American ear is also perfectly happy. > I'm guessing no one else has interest in this thread, or maybe simply > lost interest as the replies have been lengthy, and not wholly Dovecot > related. I accept some blame for that. I've been following this thread with great interest, but no advice to offer. The content is entirely appropriate, and appreciated. Don't be embarrassed by your enthusiasm, Stan. Sven, a follow-up report when you have it all working as desired would also be appreciated (and appropriate).
Thanks, Phil

From sven at svenhartge.de Mon Jan 9 15:52:27 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 14:52:27 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AEDCC.10109@hardwarefreak.com> Message-ID: <18fhg73kv5v8@mids.svenhartge.de> Stan Hoeppner wrote: > On 1/8/2012 3:07 PM, Sven Hartge wrote: >> Ah, I forgot: I _already_ have the mechanisms in place to statically >> redirect/route accesses for users to different backends, since some >> of the users are already redirected to a different mailsystem at >> another location of my university. > I assume you mean IMAP/POP connections, not SMTP. Yes. perdition uses its popmap feature to redirect users of the other location to the IMAP/POP servers there. So we only need one central mailserver for the users to configure, while we are able to physically store their mails at different datacenters. > I'm guessing no one else has interest in this thread, or maybe simply > lost interest as the replies have been lengthy, and not wholly Dovecot > related. I accept some blame for that. I will open a new thread with more concrete problems/questions after I set up my test setup. This will be more technical and less philosophical, I hope :) Regards, Sven -- Sigmentation fault. Core dumped.

From sven at svenhartge.de Mon Jan 9 16:08:12 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 15:08:12 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AE804.5070002@hardwarefreak.com> Message-ID: <28fhgdvkv5v8@mids.svenhartge.de> Stan Hoeppner wrote: > On 1/8/2012 2:15 PM, Sven Hartge wrote: >> Wouldn't such a setup be the "Best of Both Worlds"? Having the main >> traffic going to local disks (being RDMs) and also being able to provide >> shared folders to every user who needs them without the need to move >> those users onto one server? > The only problems I can see at this time are: > 1. Some users will have much larger mailboxes than others. > Each year ~1/4 of your student population rotates, so if you > manually place existing mailboxes now based on current size > you have no idea who the big users are in the next freshman > class, or the next. So you may have to do manual re-balancing > of mailboxes, maybe frequently. The quota for students is 1GiB here. If I provide each of my 4 nodes with 500GiB of storage space, this gives me 2TiB now, which should be sufficient. If a node fills, I increase its storage space. Only if it fills too fast will I have to rebalance users. And I never wanted to place the users based on their current size. I knew this was not going to work because of the reasons you mentioned. I just want to hash their username and use this as a function to distribute the users, keeping it simple and stupid. > 2. If you lose a Dovecot VM guest due to image file or other > corruption, or some other rare cause, you can't restart that guest, > but will have to build a new image from a template. This could > cause either minor or significant downtime for ~1/4 of your mail > users w/4 nodes. This is likely rare enough it's not worth > consideration. Yes, I know. But right now, if I lose my one and only mail storage server, all users' mailboxes will be offline, until I am either a) able to repair the server, b) move the disks to my identical backup system (or the backup system to the location of the failed one) or c) start the backup system and lose all mails not rsynced since the last rsync-run. It is not easy designing a mail system without a SPoF which still performs under load. For example, once upon a time I had a DRBD (active/passive) setup between the two storage systems. This would allow me to start my standby system without losing (nearly) any mail. But this was awfully slow and sluggish. > 3. You will consume more SAN volumes and LUNs. Most arrays have a > fixed number of each. May or may not be an issue. Not really an issue here. The SAN is exclusive for the VMware cluster, so most LUNs are quite big (1TiB to 2TiB) but there are not many of them. Regards, Sven. -- Sigmentation fault. Core dumped.
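The "hash the username" placement described above is easy to prototype in shell. A minimal sketch, assuming bash with md5sum from coreutils and the four-node split discussed in the thread (the usernames are placeholders):

for user in alice bob carol dave; do
    # first 32 bits of the MD5 digest are plenty for an even split
    h=$(printf '%s' "$user" | md5sum | cut -c1-8)
    echo "$user -> node $(( 0x$h % 4 ))"
done

The same function would of course have to be applied both at account provisioning time and in the proxy's routing map (e.g. perdition's popmap) so that a user always lands on the same backend.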
From tom at elysium.ltd.uk Mon Jan 9 16:41:55 2012 From: tom at elysium.ltd.uk (Tom Clark) Date: Mon, 9 Jan 2012 14:41:55 -0000 Subject: [Dovecot] Resetting a UID Message-ID: <025c01cccedc$d5ccd680$81668380$@elysium.ltd.uk> Hi, We've got a client with a Blackberry that has deleted his emails off his Blackberry device. The BES won't re-download the messages as it believes it has already downloaded them (apparently it matches on UID). Is there any way of resetting a folder's (and its messages') UIDs? I know in Courier you used to be able to touch the directory. Thanks, Tom Clark

From tss at iki.fi Mon Jan 9 16:43:00 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 09 Jan 2012 16:43:00 +0200 Subject: [Dovecot] Postfix user map Message-ID: <4F0AFCF4.1050506@iki.fi> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements "postmap" type sockets, which follow Postfix's tcp_table(5) protocol. So you can ask:

get user at domain

and Dovecot answers one of:

- 200 1
- 500 User not found
- 400 Internal failure

So you can use this with Postfix:

virtual_mailbox_maps = tcp:127.0.0.1:1234

With Dovecot you can enable it with:

service auth {
  inet_listener postmap {
    listen = 127.0.0.1
    port = 1234
  }
}

Anyone have ideas if this could be improved, or used for some other purposes?

From tss at iki.fi Mon Jan 9 16:51:07 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 16:51:07 +0200 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <68fd4hi9kbv8@mids.svenhartge.de> References: <68fd4hi9kbv8@mids.svenhartge.de> Message-ID: Too much text in the rest of this thread so I haven't read it, but: On 8.1.2012, at 0.20, Sven Hartge wrote: > Right now, I am pondering using an additional server with just the > shared folders on it and using NFS (or a cluster FS) to mount the shared > folder filesystem to each backend storage server, so each user has > potential access to a shared folder's data. With NFS you'll run into problems with caching (http://wiki2.dovecot.org/NFS). Some cluster fs might work better. The "proper" solution for this that I've been thinking about would be to use v2.1's imapc backend with master users. So that when user A wants to access user B's shared folder, Dovecot connects to B's IMAP server using master user login, and accesses the mailbox via IMAP. Probably wouldn't be a big job to implement, mainly I'd need to figure out how this should be configured..
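A side note on the postmap listener above: since it speaks the stock tcp_table(5) protocol, it can be exercised without Postfix's SMTP path at all, using the test command the tcp_table man page itself suggests. Assuming the inet_listener example above is enabled on 127.0.0.1:1234 and the address is a placeholder:

$ postmap -q "user@example.com" tcp:127.0.0.1:1234
1

A 200 answer makes postmap print the returned value ("1"); a 500 answer makes it print nothing and exit non-zero, so the same one-liner doubles as a scriptable "does this user exist" check.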
From tss at iki.fi Mon Jan 9 16:57:02 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 16:57:02 +0200 Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well? In-Reply-To: <20120109074057.GC22506@charite.de> References: <20120109074057.GC22506@charite.de> Message-ID: <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi> On 9.1.2012, at 9.40, Ralf Hildebrandt wrote: > Today I encountered these errors: > > Jan 9 08:30:06 mail dovecot: lmtp(31174, backup at backup.invalid): Error: Log synchronization error at seq=858,offset=44672 for > /home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID 282388, but next_uid = 282389 Any idea why this happened? > After that, the SAVEDON date for all mails was reset to today: Yeah. The "save date" is stored only in index. And index rebuild drops all those fields. I guess this could/should be fixed in index rebuild. > Is there a way of restoring the SAVEDON info? Not currently without extra code (and even then you could only restore it to e.g. its received date).

From sven at svenhartge.de Mon Jan 9 16:58:50 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 15:58:50 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> Message-ID: <38fhk6pkv5v8@mids.svenhartge.de> Timo Sirainen wrote: > On 8.1.2012, at 0.20, Sven Hartge wrote: >> Right now, I am pondering using an additional server with just >> the shared folders on it and using NFS (or a cluster FS) to mount the >> shared folder filesystem to each backend storage server, so each user >> has potential access to a shared folder's data. > With NFS you'll run into problems with caching > (http://wiki2.dovecot.org/NFS). Some cluster fs might work better. > The "proper" solution for this that I've been thinking about would be > to use v2.1's imapc backend with master users. So that when user A > wants to access user B's shared folder, Dovecot connects to B's IMAP > server using master user login, and accesses the mailbox via IMAP. > Probably wouldn't be a big job to implement, mainly I'd need to figure > out how this should be configured.. Luckily, in my case, User A does not access anything from User B, but instead both User A and User B access the same public folder, which is different from any folder of User A and User B. Regards, Sven. -- Sigmentation fault. Core dumped.

From tss at iki.fi Mon Jan 9 17:00:21 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 17:00:21 +0200 Subject: [Dovecot] Solr plugin In-Reply-To: References: <4F04EDC8.6060809@amfes.com> <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com> <4F0A2980.7050003@amfes.com> Message-ID: On 9.1.2012, at 1.48, Daniel L. Miller wrote: > On 1/8/2012 3:40 PM, Daniel L. Miller wrote: >> On 1/6/2012 12:32 PM, Daniel L. Miller wrote: >>> On 1/6/2012 9:36 AM, Timo Sirainen wrote: >>>> On 6.1.2012, at 19.30, Daniel L. Miller wrote: >>>> >> >> Jan 8 15:40:09 bubba dovecot: imap(user1 at domain.com): Error: fts_solr: Lookup failed: 400 undefined field CC >> Jan 8 15:40:09 bubba dovecot: imap: Error: >> >> > Looking at the Solr output - looks like the CC parameter is being capitalized while all the other fieldnames are lowercase. Did you look at the input? Looking at the code, it should be lowercased. Maybe Solr just uppercases it for some reason. Are you using a Solr schema that has a "cc" field?
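For anyone hitting the same "undefined field" error: the lookup can only work if the Solr schema declares the header fields Dovecot queries, in lowercase. A "cc" declaration would look roughly like the following (a sketch; the exact field type and attributes are assumptions, so check the schema.xml shipped in Dovecot's doc/ directory against your Solr core):

<field name="cc" type="text" indexed="true" stored="false"/>

If the field is missing or differently cased, Solr rejects the query with exactly the 400 "undefined field" seen in the log above.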
From Ralf.Hildebrandt at charite.de Mon Jan 9 17:02:49 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Mon, 9 Jan 2012 16:02:49 +0100 Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well? In-Reply-To: <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi> References: <20120109074057.GC22506@charite.de> <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi> Message-ID: <20120109150249.GH22506@charite.de> * Timo Sirainen : > On 9.1.2012, at 9.40, Ralf Hildebrandt wrote: > > > Today I encountered these errors: > > > > Jan 9 08:30:06 mail dovecot: lmtp(31174, backup at backup.invalid): Error: Log synchronization error at seq=858,offset=44672 for > > /home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID 282388, but next_uid = 282389 > > Any idea why this happened? I was running those commands:

# new style (dovecot)
vorgestern=`date -d "-2 day" +"%Y-%m-%d"`
doveadm expunge -u backup at backup.invalid mailbox INBOX SAVEDBEFORE $vorgestern
doveadm purge -u backup at backup.invalid

> > After that, the SAVEDON date for all mails was reset to today: > > Yeah. The "save date" is stored only in index. And index rebuild drops > all those fields. I guess this could/should be fixed in index rebuild. It's ok. Right now it only affects my expiry method. -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de

From CMarcus at Media-Brokers.com Mon Jan 9 17:14:37 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Mon, 09 Jan 2012 10:14:37 -0500 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: References: <68fd4hi9kbv8@mids.svenhartge.de> Message-ID: <4F0B045D.1010101@Media-Brokers.com> On 2012-01-09 9:51 AM, Timo Sirainen wrote: > The "proper" solution for this that I've been thinking about would be > to use v2.1's imapc backend with master users. So that when user A > wants to access user B's shared folder, Dovecot connects to B's IMAP > server using master user login, and accesses the mailbox via IMAP. > Probably wouldn't be a big job to implement, mainly I'd need to > figure out how this should be configured. Sounds interesting... would this be the new officially supported method for sharing mailboxes in all cases? Or is this just for shared mailboxes on NFS shares? It sounds like this might be a proper (fully supported without kludges) way to get what I had asked about before, with respect to expanding on the concept of Master users for sharing an entire account with one or more other users... -- Best regards, Charles

From noeldude at gmail.com Mon Jan 9 17:32:01 2012 From: noeldude at gmail.com (Noel) Date: Mon, 09 Jan 2012 09:32:01 -0600 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0AFCF4.1050506@iki.fi> References: <4F0AFCF4.1050506@iki.fi> Message-ID: <4F0B0871.6040500@gmail.com> On 1/9/2012 8:43 AM, Timo Sirainen wrote: > http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements > "postmap" type sockets, which follow Postfix's tcp_table(5) > protocol.
> So you can ask: > > get user at domain > > and Dovecot answers one of: > > - 200 1 > - 500 User not found > - 400 Internal failure > > So you can use this with Postfix: > > virtual_mailbox_maps = tcp:127.0.0.1:1234 > > With Dovecot you can enable it with: > > service auth { > inet_listener postmap { > listen = 127.0.0.1 > port = 1234 > } > } > > Anyone have ideas if this could be improved, or used for some > other purposes? Cool. Does this just check for valid user existence, or can it also check for over-quota (and respond 500 overquota I suppose)? -- Noel Jones

From robert at schetterer.org Mon Jan 9 17:37:32 2012 From: robert at schetterer.org (Robert Schetterer) Date: Mon, 09 Jan 2012 16:37:32 +0100 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0B0871.6040500@gmail.com> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> Message-ID: <4F0B09BC.3010300@schetterer.org> On 09.01.2012 16:32, Noel wrote: > On 1/9/2012 8:43 AM, Timo Sirainen wrote: >> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements >> "postmap" type sockets, which follow Postfix's tcp_table(5) >> protocol. So you can ask: >> >> get user at domain >> >> and Dovecot answers one of: >> >> - 200 1 >> - 500 User not found >> - 400 Internal failure >> >> So you can use this with Postfix: >> >> virtual_mailbox_maps = tcp:127.0.0.1:1234 >> >> With Dovecot you can enable it with: >> >> service auth { >> inet_listener postmap { >> listen = 127.0.0.1 >> port = 1234 >> } >> } >> >> Anyone have ideas if this could be improved, or used for some >> other purposes? > > Cool. > Does this just check for valid user existence, or can it also check > for over-quota (and respond 500 overquota I suppose)? if you use dove lmtp with postfix it already works "like that way" for over quota > > > -- Noel Jones -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria

From noeldude at gmail.com Mon Jan 9 17:46:44 2012 From: noeldude at gmail.com (Noel) Date: Mon, 09 Jan 2012 09:46:44 -0600 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0B09BC.3010300@schetterer.org> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <4F0B09BC.3010300@schetterer.org> Message-ID: <4F0B0BE4.8010907@gmail.com> On 1/9/2012 9:37 AM, Robert Schetterer wrote: > On 09.01.2012 16:32, Noel wrote: >> On 1/9/2012 8:43 AM, Timo Sirainen wrote: >>> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements >>> "postmap" type sockets, which follow Postfix's tcp_table(5) >>> protocol. So you can ask: >>> >>> get user at domain >>> >>> and Dovecot answers one of: >>> >>> - 200 1 >>> - 500 User not found >>> - 400 Internal failure >>> >>> So you can use this with Postfix: >>> >>> virtual_mailbox_maps = tcp:127.0.0.1:1234 >>> >>> With Dovecot you can enable it with: >>> >>> service auth { >>> inet_listener postmap { >>> listen = 127.0.0.1 >>> port = 1234 >>> } >>> } >>> >>> Anyone have ideas if this could be improved, or used for some >>> other purposes? >> >> Cool. >> Does this just check for valid user existence, or can it also check >> for over-quota (and respond 500 overquota I suppose)? > if you use dove lmtp with postfix it already works "like that way" > for over quota That can reject over-quota users during the postfix SMTP conversation?
-- Noel Jones

From robert at schetterer.org Mon Jan 9 17:50:49 2012 From: robert at schetterer.org (Robert Schetterer) Date: Mon, 09 Jan 2012 16:50:49 +0100 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0B0BE4.8010907@gmail.com> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <4F0B09BC.3010300@schetterer.org> <4F0B0BE4.8010907@gmail.com> Message-ID: <4F0B0CD9.3090402@schetterer.org> On 09.01.2012 16:46, Noel wrote: > On 1/9/2012 9:37 AM, Robert Schetterer wrote: >> On 09.01.2012 16:32, Noel wrote: >>> On 1/9/2012 8:43 AM, Timo Sirainen wrote: >>>> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements >>>> "postmap" type sockets, which follow Postfix's tcp_table(5) >>>> protocol. So you can ask: >>>> >>>> get user at domain >>>> >>>> and Dovecot answers one of: >>>> >>>> - 200 1 >>>> - 500 User not found >>>> - 400 Internal failure >>>> >>>> So you can use this with Postfix: >>>> >>>> virtual_mailbox_maps = tcp:127.0.0.1:1234 >>>> >>>> With Dovecot you can enable it with: >>>> >>>> service auth { >>>> inet_listener postmap { >>>> listen = 127.0.0.1 >>>> port = 1234 >>>> } >>>> } >>>> >>>> Anyone have ideas if this could be improved, or used for some >>>> other purposes? >>> >>> Cool. >>> Does this just check for valid user existence, or can it also check >>> for over-quota (and respond 500 overquota I suppose)? >> if you use dove lmtp with postfix it already works "like that way" >> for over quota >> >> That can reject over-quota users during the postfix SMTP conversation? jep, it does. I was glad having/testing this feature in the dove 2 release, avoiding over-quota backscatter etc > > > > -- Noel Jones -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria

From stan at hardwarefreak.com Mon Jan 9 17:56:36 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Mon, 09 Jan 2012 09:56:36 -0600 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <28fhgdvkv5v8@mids.svenhartge.de> References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AE804.5070002@hardwarefreak.com> <28fhgdvkv5v8@mids.svenhartge.de> Message-ID: <4F0B0E34.8080901@hardwarefreak.com> On 1/9/2012 8:08 AM, Sven Hartge wrote: > Stan Hoeppner wrote: > The quota for students is 1GiB here. If I provide each of my 4 nodes > with 500GiB of storage space, this gives me 2TiB now, which should be > sufficient. If a node fills, I increase its storage space. Only if it > fills too fast will I have to rebalance users. That should work. > And I never wanted to place the users based on their current size. I > knew this was not going to work because of the reasons you mentioned. > > I just want to hash their username and use this as a function to > distribute the users, keeping it simple and stupid. My apologies Sven. I just re-read your first messages and you did mention this method. > Yes, I know. But right now, if I lose my one and only mail storage > server, all users' mailboxes will be offline, until I am either a) able > to repair the server, b) move the disks to my identical backup system (or > the backup system to the location of the failed one) or c) start the > backup system and lose all mails not rsynced since the last rsync-run. True. 3/4 of users remaining online is much better than none. :) > It is not easy designing a mail system without a SPoF which still > performs under load. And many other systems for that matter.
> For example, once upon a time I had a DRBD (active/passive) setup between the > two storage systems. This would allow me to start my standby system > without losing (nearly) any mail. But this was awfully slow and sluggish. Eric Rostetter at University of Texas at Austin has reported good performance with his twin Dovecot DRBD cluster. Though in his case he's doing active/active DRBD with GFS2 sitting on top, so there is no failover needed. DRBD is obviously not an option for your current needs. >> 3. You will consume more SAN volumes and LUNs. Most arrays have a >> fixed number of each. May or may not be an issue. > > Not really an issue here. The SAN is exclusive for the VMware cluster, > so most LUNs are quite big (1TiB to 2TiB) but there are not many of > them. I figured this wouldn't be a problem. I'm just trying to be thorough, mentioning anything I can think of that might be an issue. The more I think about your planned architecture the more it reminds me of a "shared nothing" database cluster--even a relatively small one can outrun a well-tuned mainframe, especially doing decision support/data mining workloads (TPC-H). As long as you're prepared for the extra administration, which you obviously are, this setup will yield better performance than the NFS setup I recommended. Performance may not be quite as good as 4 physical hosts with local storage, but you haven't mentioned the details of your SAN storage nor the current load on it, so obviously I can't say with any certainty. If the controller currently has plenty of spare IOPS then the performance difference would be minimal. And using the SAN allows automatic restart of a VM if a physical node dies. As with Phil, I'm anxious to see how well it works in production. When you send an update please CC me directly as sometimes I don't read all the list mail. I hope my participation was helpful to you Sven, even if only to a small degree. Best of luck with the implementation. -- Stan

From sven at svenhartge.de Mon Jan 9 18:16:14 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 17:16:14 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AE804.5070002@hardwarefreak.com> <28fhgdvkv5v8@mids.svenhartge.de> <4F0B0E34.8080901@hardwarefreak.com> Message-ID: <48fhoafkv5v8@mids.svenhartge.de> Stan Hoeppner wrote: > The more I think about your planned architecture the more it reminds > me of a "shared nothing" database cluster--even a relatively small one > can outrun a well-tuned mainframe, especially doing decision > support/data mining workloads (TPC-H). > As long as you're prepared for the extra administration, which you > obviously are, this setup will yield better performance than the NFS > setup I recommended. Performance may not be quite as good as 4 > physical hosts with local storage, but you haven't mentioned the > details of your SAN storage nor the current load on it, so obviously I > can't say with any certainty. If the controller currently has plenty > of spare IOPS then the performance difference would be minimal. This is the beauty of the HP P4500: every node is a controller, load is automagically balanced between all nodes of a storage cluster. The more nodes (up to ten) you add, the more performance you get.
So far, I have not been able to push our current SAN to its limits, even with totally artificial benchmarks, so I am quite confident in its performance for the given task. But if everything fails and the performance is not good, I can still go ahead and buy dedicated hardware for the mailsystem. The only thing left is the NFS problem with caching Timo mentioned, but since the accesses to a central public shared folder will be only a minor portion of a client's accesses, I am hoping the impact will be minimal. Only testing will tell. Regards, Sven. -- Sigmentation fault. Core dumped.

From robert at schetterer.org Mon Jan 9 18:19:20 2012 From: robert at schetterer.org (Robert Schetterer) Date: Mon, 09 Jan 2012 17:19:20 +0100 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0B0CD9.3090402@schetterer.org> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <4F0B09BC.3010300@schetterer.org> <4F0B0BE4.8010907@gmail.com> <4F0B0CD9.3090402@schetterer.org> Message-ID: <4F0B1388.209@schetterer.org> On 09.01.2012 16:50, Robert Schetterer wrote: > On 09.01.2012 16:46, Noel wrote: >> On 1/9/2012 9:37 AM, Robert Schetterer wrote: >>> On 09.01.2012 16:32, Noel wrote: >>>> On 1/9/2012 8:43 AM, Timo Sirainen wrote: >>>>> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements >>>>> "postmap" type sockets, which follow Postfix's tcp_table(5) >>>>> protocol. So you can ask: >>>>> >>>>> get user at domain >>>>> >>>>> and Dovecot answers one of: >>>>> >>>>> - 200 1 >>>>> - 500 User not found >>>>> - 400 Internal failure >>>>> >>>>> So you can use this with Postfix: >>>>> >>>>> virtual_mailbox_maps = tcp:127.0.0.1:1234 >>>>> >>>>> With Dovecot you can enable it with: >>>>> >>>>> service auth { >>>>> inet_listener postmap { >>>>> listen = 127.0.0.1 >>>>> port = 1234 >>>>> } >>>>> } >>>>> >>>>> Anyone have ideas if this could be improved, or used for some >>>>> other purposes? >>>> >>>> Cool. >>>> Does this just check for valid user existence, or can it also check >>>> for over-quota (and respond 500 overquota I suppose)? >>> if you use dove lmtp with postfix it already works "like that way" >>> for over quota >> >> >> That can reject over-quota users during the postfix SMTP conversation?
> > jep, it does. I was glad having/testing this feature in the dove 2 release, > avoiding over-quota backscatter etc I am afraid I wasn't totally correct here: in fact I haven't seen over-quota backscatter on my servers since using dove lmtp with postfix, but I guess there may be cases left in which it could happen; you should ask Timo for the exact technical answer. The postfix answer has always been to write some policy daemon for it (which I found extremely complicated at my try, and stopped it). But I guess it is always a problem comparing the size of a mail with the space left in the mailstore, i.e. with many recipients of one mail etc, whatever technical solution is used. So I should have said dove lmtp is the best/easiest solution for over-quota I know at present, and my problems with it are solved for now > >> >> >> -- Noel Jones > -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria

From tss at iki.fi Mon Jan 9 20:09:46 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:09:46 +0200 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0B0871.6040500@gmail.com> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> Message-ID: <8A109A75-164C-41B4-A13B-19C3F1D01E12@iki.fi> On 9.1.2012, at 17.32, Noel wrote: > On 1/9/2012 8:43 AM, Timo Sirainen wrote: >> http://hg.dovecot.org/dovecot-2.1/rev/f562bcaca215 implements >> "postmap" type sockets, which follow Postfix's tcp_table(5) >> protocol. So you can ask: >> >> get user at domain >> >> and Dovecot answers one of: >> >> - 200 1 >> - 500 User not found >> - 400 Internal failure >> >> Anyone have ideas if this could be improved, or used for some >> other purposes? > > Cool. > Does this just check for valid user existence, or can it also check > for over-quota (and respond 500 overquota I suppose)? Hmm. That looked potentially useful, but Postfix doesn't seem to support it at least that way, since the message to the SMTP client is the same regardless of what I add after the 500 reply. Also that would have required me to move the code somewhere outside the auth process, since auth doesn't know the quota usage. And internally Dovecot would still have had to do the auth lookup separately, so there's really no benefit in doing this vs. having Postfix do two lookups.

From tss at iki.fi Mon Jan 9 20:12:38 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:12:38 +0200 Subject: [Dovecot] Postfix user map In-Reply-To: <4F0B1388.209@schetterer.org> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <4F0B09BC.3010300@schetterer.org> <4F0B0BE4.8010907@gmail.com> <4F0B0CD9.3090402@schetterer.org> <4F0B1388.209@schetterer.org> Message-ID: <70A95B54-98CE-4B4D-8B76-CDA279353202@iki.fi> On 9.1.2012, at 18.19, Robert Schetterer wrote: > I am afraid I wasn't totally correct here: > in fact I haven't seen over-quota backscatter on my servers > since using dove lmtp with postfix LMTP shouldn't matter here. In most configs mails are put to queue first, and only from there they are sent to LMTP, and if LMTP rejects a mail then backscatter is sent. Maybe the difference you're seeing is that it's now Postfix sending the bounce (or perhaps skipping it?) instead of dovecot-lda (unless you gave the -e parameter).
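A note for readers wiring this up: the queue boundary is what decides bounce versus reject. With the common Postfix-to-Dovecot delivery setup, roughly this main.cf line (the socket path is an assumption; it must match the unix_listener in Dovecot's service lmtp block):

virtual_transport = lmtp:unix:private/dovecot-lmtp

an over-quota failure raised by LMTP happens after Postfix has already accepted and queued the mail, so Postfix, not Dovecot, generates any bounce. Only a check made during the SMTP conversation itself (a recipient restriction or a policy service) can refuse the mail before it is queued.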
From tss at iki.fi Mon Jan 9 20:15:00 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:15:00 +0200 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <4F0B045D.1010101@Media-Brokers.com> References: <68fd4hi9kbv8@mids.svenhartge.de> <4F0B045D.1010101@Media-Brokers.com> Message-ID: On 9.1.2012, at 17.14, Charles Marcus wrote: > On 2012-01-09 9:51 AM, Timo Sirainen wrote: >> The "proper" solution for this that I've been thinking about would be >> to use v2.1's imapc backend with master users. So that when user A >> wants to access user B's shared folder, Dovecot connects to B's IMAP >> server using master user login, and accesses the mailbox via IMAP. >> Probably wouldn't be a big job to implement, mainly I'd need to >> figure out how this should be configured. > > Sounds interesting... would this be the new officially supported method for sharing mailboxes in all cases? Or is this just for shared mailboxes on NFS shares? Well, it would be one officially supported way to do it. It would also help when using multiple UIDs.

From sven at svenhartge.de Mon Jan 9 20:25:55 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 19:25:55 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> Message-ID: <68fi0aakv5v8@mids.svenhartge.de> Timo Sirainen wrote: > On 8.1.2012, at 0.20, Sven Hartge wrote: >> Right now, I am pondering using an additional server with just >> the shared folders on it and using NFS (or a cluster FS) to mount the >> shared folder filesystem to each backend storage server, so each user >> has potential access to a shared folder's data. > With NFS you'll run into problems with caching > (http://wiki2.dovecot.org/NFS). Some cluster fs might work better. Can "mmap_disable = yes" and the other NFS options be set per namespace or only globally? Regards, Sven. -- Sigmentation fault. Core dumped.

From tss at iki.fi Mon Jan 9 20:35:20 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:35:20 +0200 Subject: [Dovecot] Providing shared folders with multiple backend servers In-Reply-To: <68fi0aakv5v8@mids.svenhartge.de> References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> Message-ID: On 9.1.2012, at 20.25, Sven Hartge wrote: > Timo Sirainen wrote: >> On 8.1.2012, at 0.20, Sven Hartge wrote: > >>> Right now, I am pondering using an additional server with just >>> the shared folders on it and using NFS (or a cluster FS) to mount the >>> shared folder filesystem to each backend storage server, so each user >>> has potential access to a shared folder's data. > >> With NFS you'll run into problems with caching >> (http://wiki2.dovecot.org/NFS). Some cluster fs might work better. > > Can "mmap_disable = yes" and the other NFS options be set per namespace > or only globally? Currently only globally.
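For reference, since they can only be set globally, the usual NFS-related knobs travel together in a cluster-wide dovecot.conf. A sketch of the settings the wiki page cited above discusses (treat it as a starting point, not a tuned configuration):

mmap_disable = yes
dotlock_use_excl = yes    # OK on NFSv3 and newer
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes

The price is some extra memory when reading dovecot.index.cache files and more fsync traffic, which is why these are not the defaults for local filesystems.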
From tss at iki.fi Mon Jan 9 20:36:36 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:36:36 +0200 Subject: [Dovecot] Resetting a UID In-Reply-To: <025c01cccedc$d5ccd680$81668380$@elysium.ltd.uk> References: <025c01cccedc$d5ccd680$81668380$@elysium.ltd.uk> Message-ID: <002EEBD6-7E83-41EF-B2DC-BAA101FA92D5@iki.fi> On 9.1.2012, at 16.41, Tom Clark wrote: > We've got a client with a Blackberry that has deleted his emails off his > Blackberry device. The BES won't re-download the messages as it believes it > has already downloaded them (apparently it matches on UID). You can delete the dovecot.index* and dovecot-uidlist files. Assuming you're using maildir. > Is there any way of resetting a folder's (and its messages') UIDs? I > know in Courier you used to be able to touch the directory. I doubt Courier would do that without deleting courierimapuiddb.

From tss at iki.fi Mon Jan 9 20:40:01 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:40:01 +0200 Subject: [Dovecot] Proxy login failures In-Reply-To: <4F0ABF0A.1080404@enas.net> References: <4F0ABF0A.1080404@enas.net> Message-ID: <69796D8B-5CFE-48A2-A092-B1A32331BC1F@iki.fi> On 9.1.2012, at 12.18, Urban Loesch wrote: > I'm using two dovecot pop3/imap proxies in front of our dovecot servers. > For some days now I have seen many of the following errors in the logs of the two proxy servers: > > dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0): user=, method=PLAIN, rip=remote-ip, lip=localip > > When this happens the client gets the following error from the proxy: > -ERR [IN-USE] Account is temporarily unavailable. The connection to the remote server dies before authentication finishes. The reason why that happens should be logged by the backend server. Sounds like it crashes. Check for ANY error messages in the backend servers.

From tss at iki.fi Mon Jan 9 20:43:09 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:43:09 +0200 Subject: [Dovecot] Newbie: LDA Isn't Logging In-Reply-To: <1326087271.17295.YahooMailClassic@web125406.mail.ne1.yahoo.com> References: <1326087271.17295.YahooMailClassic@web125406.mail.ne1.yahoo.com> Message-ID: <98056774-CE97-4A39-AFEF-3FB22330D430@iki.fi> On 9.1.2012, at 7.34, Michael Papet wrote: > LDA logging worked. So, it could be something about my system. But, running /usr/lib/dovecot/deliver still doesn't return a value on the command line as documented on the wiki. > > I've attached strace files from both the malfunctioning Debian packages machine and the built from sources VM. Unfortunately, I'm a new strace user, so I don't know what it all means. The last line in the malfunctioning deliver: exit_group(67) = ? So Dovecot exits with value 67, which means EX_NOUSER. Looks like everything is working correctly. Are you maybe running a wrapper script that hides the exit code? Or in some other way checking it wrong..
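The sysexits-style return value is easy to confirm by hand, without any MTA or wrapper in the picture. A minimal check, assuming the Debian path from the report (the user and the message file are placeholders):

$ /usr/lib/dovecot/deliver -d nosuchuser < /tmp/testmail.eml
$ echo $?
67

67 is EX_NOUSER from sysexits.h, matching the exit_group(67) in the strace, so a wrapper swallowing the exit status is the likely culprit.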
From tss at iki.fi Mon Jan 9 20:44:07 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:44:07 +0200 Subject: [Dovecot] uid / gid and systemusers In-Reply-To: <1809497881.1135529.1326037030206.JavaMail.ngmail@webmail10.arcor-online.net> References: <1809497881.1135529.1326037030206.JavaMail.ngmail@webmail10.arcor-online.net> Message-ID: On 8.1.2012, at 17.37, xamiw at arcor.de wrote: > Jan 8 16:18:28 test dovecot: User q is missing UID (see mail_uid setting) > Jan 8 16:18:28 test dovecot: imap-login: Internal login failure (auth failed, 1 attempts): user=, method=PLAIN, rip=AAA.BBB.CCC.DDD, lip=EEE.FFF.GGG.HHH TLS <--- edited by me .. > auth default { > mechanisms = plain > passdb shadow { > } > } You have a passdb, but no userdb. > /etc/passwd: > ... > g:x:1000:1000:test1,,,:/home/g:/bin/bash > q:x:1001:1001:test2,,,:/home/q:/bin/bash To use /etc/passwd as userdb, you need to add userdb passwd {}

From sven at svenhartge.de Mon Jan 9 20:47:09 2012 From: sven at svenhartge.de (Sven Hartge) Date: Mon, 9 Jan 2012 19:47:09 +0100 Subject: [Dovecot] Providing shared folders with multiple backend servers References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> Message-ID: <78fi1g2kv5v8@mids.svenhartge.de> Timo Sirainen wrote: > On 9.1.2012, at 20.25, Sven Hartge wrote: >> Timo Sirainen wrote: >>> On 8.1.2012, at 0.20, Sven Hartge wrote: >>>> Right now, I am pondering using an additional server with just >>>> the shared folders on it and using NFS (or a cluster FS) to mount >>>> the shared folder filesystem to each backend storage server, so >>>> each user has potential access to a shared folder's data. >> >>> With NFS you'll run into problems with caching >>> (http://wiki2.dovecot.org/NFS). Some cluster fs might work better. >> >> Can "mmap_disable = yes" and the other NFS options be set per >> namespace or only globally? > Currently only globally. Ah, too bad. Back to the drawing board then. Implementing my idea in my environment using a cluster filesystem would be a very big pain in the lower back, so I need a different idea to share the shared folders with all nodes but still keep the user-specific mailboxes fixed and local to a node. The imapc-backed namespace you mentioned sounds very interesting, but this is not implemented right now for shared folders, is it? Regards, Sven. -- Sigmentation fault. Core dumped.

From tss at iki.fi Mon Jan 9 20:59:03 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 9 Jan 2012 20:59:03 +0200 Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs In-Reply-To: <4F07BDBB.3060204@gmail.com> References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> <4F07332B.70708@gmail.com> <4F07BDBB.3060204@gmail.com> Message-ID: <491E7C43-2C87-4FD6-8AC0-E79F22E9749F@iki.fi> On 7.1.2012, at 5.36, Yubao Liu wrote: > In the old version, "auth->passdbs" contains all passdbs, this revision > changes "auth->passdbs" to only contain non-master passdbs. > > I'm not sure which fix is better, or even whether my proposal is correct or complete: > a) in src/auth/auth.c:auth_passdb_preinit(), insert master passdb to > auth->passdbs too, and remove duplicate code for masterdbs > in auth_init() and auth_deinit(). Not a good idea. The master passdb needs to be treated specially, otherwise you might accidentally allow regular users logging in as other users. > b) add similar code for masterdbs in auth_passdb_list_have_verify_plain(), > auth_passdb_list_have_lookup_credentials(), auth_passdb_list_have_set_credentials(). Kind of annoying code duplication, but .. I guess it can't really be helped. Added: http://hg.dovecot.org/dovecot-2.0/rev/bed15faedfd4 > Another related question is the "pass" option in the master passdb; if I set it to "yes", > the authentication fails: .. > My normal passdb is a PAM passdb, it doesn't support credential lookups, that's > reasonable, Right. > but I feel the comment for the "pass" option is confusing: > > # Unless you're using PAM, you probably still want the destination user to > # be looked up from passdb that it really exists. pass=yes does that. > pass = yes > } > > According to the comment, it's to check whether the real user exists, why not > to check userdb but another passdb? Well..
It is going to check userdb eventually anyway, so it would still fail, just a bit later and maybe with a different error message. > Even if it must check against passdb, in this case it's obviously not > necessary to look up credentials; it's enough to look up the user name only. There's currently no passdb that supports "does user exist?" lookup, but doesn't support credentials lookup, so this is more of a theoretical issue. (I guess maybe PAM could be abused in some configurations to do the check, but that's rather ugly..)

From noeldude at gmail.com Mon Jan 9 21:04:13 2012 From: noeldude at gmail.com (Noel) Date: Mon, 09 Jan 2012 13:04:13 -0600 Subject: [Dovecot] Postfix user map In-Reply-To: <8A109A75-164C-41B4-A13B-19C3F1D01E12@iki.fi> References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <8A109A75-164C-41B4-A13B-19C3F1D01E12@iki.fi> Message-ID: <4F0B3A2D.4020301@gmail.com> On 1/9/2012 12:09 PM, Timo Sirainen wrote: > On 9.1.2012, at 17.32, Noel wrote: > > Cool. > Does this just check for valid user existence, or can it also check > for over-quota (and respond 500 overquota I suppose)? > Hmm. That looked potentially useful, but Postfix doesn't seem to support it at least that way, since the message to the SMTP client is the same regardless of what I add after the 500 reply. Also that would have required me to move the code somewhere outside the auth process, since auth doesn't know the quota usage. And internally Dovecot would still have had to do the auth lookup separately, so there's really no benefit in doing this vs. having Postfix do two lookups. How about a separate TCP lookup for quota status? This would be really useful for sites that don't have that information in a shared SQL table (or have no SQL in Postfix), and would get rid of kludgy policy services used to check quota status. This would be used with a check_recipient_access table; the response would be something like:

200 DUNNO    quota OK
200 REJECT   user over quota
500          user not found

-- Noel Jones

From david at paperclipsystems.com Mon Jan 9 21:15:36 2012 From: david at paperclipsystems.com (David Egbert) Date: Mon, 09 Jan 2012 12:15:36 -0700 Subject: [Dovecot] failed: Too many levels of symbolic links In-Reply-To: <4A0E9695-E78A-487F-AE53-888D27981EF1@iki.fi> References: <4F075D43.8090706@paperclipsystems.com> <703E9D47-8238-4E52-A60B-A466B77C5CBF@iki.fi> <4F076A70.3090905@paperclipsystems.com> <4F077158.4000500@paperclipsystems.com> <4A0E9695-E78A-487F-AE53-888D27981EF1@iki.fi> Message-ID: <4F0B3CD8.8050501@paperclipsystems.com> On 1/6/2012 3:30 PM, Timo Sirainen wrote: > On 7.1.2012, at 0.10, David Egbert wrote: >>> Anyway, readdir() is failing with ELOOP. Does it always fail with "Too many levels of symbolic links" or is it sometimes different? This sounds like a bug in Linux NFS client code. You can reproduce this always with this one user's Maildir? Can you do "ls" in the directory? >>> >> Sorry about the X's... it is a client directory. We support many domains and their privacy is paramount. You are correct, it is in the /cur directory. I can ls all of the directories without problems. This user has 10+GB in his mailbox spread across 352 subscribed folders. As for the logs, it is always the directory, always the same error. > Try the attached test program. Run it as: ./readdir /path/to/Maildir/cur > > Does it also give non-zero error? > I ran it, and it returned: readdir() errno = 0 The user backed up their data and then removed the folder from the server. The error is now gone, so I am assuming there was some corrupt file in the directory. Thanks for all of the help. David Egbert Paperclip Systems, LLC --- This message, its contents, and attachments are confidential and are only authorized for the intended recipient. Disclosure, re-distribution, or use of said information is strictly prohibited, and may be excluded from disclosure by applicable law. If you are not the intended recipient, or their intermediary, please notify the sender and delete this message.
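Timo's attached test program was scrubbed from the archive. A minimal equivalent that matches the reported output is sketched below; it is an assumption of what the attachment did (walk the directory with readdir(3) and print errno afterwards), not the original code:

#include <stdio.h>
#include <dirent.h>
#include <errno.h>

int main(int argc, char *argv[])
{
	DIR *dir;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <dir>\n", argv[0]);
		return 1;
	}
	dir = opendir(argv[1]);
	if (dir == NULL) {
		perror("opendir");
		return 1;
	}
	errno = 0;
	/* walk every entry; a failing readdir() leaves errno set */
	while (readdir(dir) != NULL)
		;
	printf("readdir() errno = %d\n", errno);
	closedir(dir);
	return 0;
}

Run as ./readdir /path/to/Maildir/cur; a non-zero result (ELOOP is 40 on Linux) would have pointed at the kernel/NFS side rather than at Dovecot.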
Thanks for all of the help.

David Egbert
Paperclip Systems, LLC

--- This message, its contents, and attachments are confidential and are only authorized for the intended recipient. Disclosure, re-distribution, or use of said information is strictly prohibited, and may be excluded from disclosure by applicable law. If you are not the intended recipient, or their intermediary, please notify the sender and delete this message.

From tss at iki.fi Mon Jan 9 21:16:09 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 21:16:09 +0200
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <78fi1g2kv5v8@mids.svenhartge.de>
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de>
Message-ID: 

On 9.1.2012, at 20.47, Sven Hartge wrote:

>>> Can "mmap_disable = yes" and the other NFS options be set per
>>> namespace or only globally?
>
>> Currently only globally.
>
> Ah, too bad.
>
> Back to the drawing board then.

mmap_disable=yes works pretty well even if you're only using it for local filesystems. It just spends some more memory when reading dovecot.index.cache files.

> Implementing my idea in my environment using a cluster filesystem would
> be a very big pain in the lower back, so I need a different idea to
> share the shared folders with all nodes but still keeping the user
> specific mailboxes fixed and local to a node.
>
> The imapc backed namespace you mentioned sounds very interesting, but
> this is not implemented right now for shared folders, is it?

Well.. If you don't need users sharing mailboxes with each other, then you can probably already do this with Dovecot v2.1:

1. Configure the user Dovecots:

namespace {
  type = public
  prefix = Shared/
  location = imapc:~/imapc-shared
}
imapc_host = sharedmails.example.com
imapc_password = master-user-password

# With latest v2.1 hg you can do:
imapc_user = shareduser
imapc_master_user = %u

# With v2.1.rc2 and older you need to do:
imapc_user = shareduser*%u
auth_master_user_separator = *

2. Configure the shared Dovecot:

You need a master passdb that allows all existing users to log in as the "shareduser" user. You can probably simply do (not tested):

passdb {
  driver = static
  args = user=shareduser
  master = yes
}

The "shareduser" owns all of the actual shared mailboxes and has the necessary ACLs set up for individual users. ACLs use the master username (= the real username in this case) to do the ACL checks.

From tss at iki.fi Mon Jan 9 21:19:34 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 21:19:34 +0200
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: 
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de>
Message-ID: <08C9B341-1292-44F4-AB6B-D6D804ED60BE@iki.fi>

On 9.1.2012, at 21.16, Timo Sirainen wrote:

> passdb {
>   driver = static
>   args = user=shareduser

Of course you should also require a password:

args = user=shareduser pass=master-user-password

From tss at iki.fi Mon Jan 9 21:31:00 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 21:31:00 +0200
Subject: [Dovecot] change initial permissions on creation of mail folder
In-Reply-To: <4F072342.1090901@ngong.de>
References: <4F072342.1090901@ngong.de>
Message-ID: <061F5BDF-A47F-40F5-8B86-E42C585B9EBB@iki.fi>

On 6.1.2012, at 18.37, mailinglist wrote:

> Installed Dovecot from a Debian .deb file. Creating a new account for a system user sets permissions to user-only.
> Where do I change the initial permissions on creation of the mail folder and other subdirectories?

Permissions for folders are taken from the mail root directory. http://wiki2.dovecot.org/SharedMailboxes/Permissions has details.

Permissions for a newly created mail root directory are always 0700. If you want something else, create the mail directory with the wanted permissions at the same time as you create the user.

> Installed dovecot using "apt-get install dovecot-imapd dovecot-pop3d". Any time I create a new account in my mail client for a system user, Dovecot tries to create ~/mail/.imap/INBOX. The permissions for mail and .imap are set to 0700. With these permissions INBOX cannot be created, leading to an error message in the log files. When I manually change the permissions to 0770, INBOX is created.

I don't really understand why INBOX couldn't be created. 0700 should be enough for most installations. Unless you have a very good reason you shouldn't use 0770 for mails (sounds more like you've a weirdly configured mail setup).

From tss at iki.fi Mon Jan 9 21:31:51 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 21:31:51 +0200
Subject: [Dovecot] FTS-Solr plugin
In-Reply-To: 
References: 
Message-ID: <51D1C049-8E87-4AB7-9A20-6BDB0748A569@iki.fi>

On 6.1.2012, at 19.35, Daniel L. Miller wrote:

> Solr plugin appears to break when mailbox names have an ampersand in the name. The messages appear to indicate '&' gets translated to '&--'.

What message? With fts=solr (not solr_old) the mailbox name isn't used in Solr at all. It uses mailbox GUIDs.

From sven at svenhartge.de Mon Jan 9 21:31:58 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 20:31:58 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de>
Message-ID: <98fi3r4kv5v8@mids.svenhartge.de>

Timo Sirainen wrote:
> On 9.1.2012, at 20.47, Sven Hartge wrote:

>>>> Can "mmap_disable = yes" and the other NFS options be set per
>>>> namespace or only globally?
>>
>>> Currently only globally.
>>
>> Ah, too bad.
>>
>> Back to the drawing board then.

> mmap_disable=yes works pretty well even if you're only using it for local filesystems. It just spends some more memory when reading dovecot.index.cache files.

>> Implementing my idea in my environment using a cluster filesystem would
>> be a very big pain in the lower back, so I need a different idea to
>> share the shared folders with all nodes but still keeping the user
>> specific mailboxes fixed and local to a node.
>>
>> The imapc backed namespace you mentioned sounds very interesting, but
>> this is not implemented right now for shared folders, is it?

> Well.. If you don't need users sharing mailboxes with each other,

Good heavens, no! If I allowed users to share their mailboxes with other users, hell would break loose. Nononono, just shared folders set up by the admin team, statically assigned to groups of users (for example, the central postmaster@ mail alias ends up in such a shared folder).

> then you can probably already do this with Dovecot v2.1:
> 1. Configure the user Dovecots:
>
> namespace {
>   type = public
>   prefix = Shared/
>   location = imapc:~/imapc-shared
> }
> imapc_host = sharedmails.example.com
> imapc_password = master-user-password
> # With latest v2.1 hg you can do:
> imapc_user = shareduser
> imapc_master_user = %u
> # With v2.1.rc2 and older you need to do:
> imapc_user = shareduser*%u
> auth_master_user_separator = *

So, in my case, this would look like this:

,----
| # User's private mail location
| mail_location = mdbox:~/mdbox
|
| # When creating any namespaces, you must also have a private namespace:
| namespace {
|   type = private
|   separator = .
|   prefix = INBOX.
|   #location defaults to mail_location.
|   inbox = yes
| }
|
| namespace {
|   type = public
|   separator = .
|   prefix = #shared.
|   location = imapc:~/imapc-shared
|   subscriptions = no
| }
|
| imapc_host = m-st-sh-01.foo.bar
| imapc_password = master-user-password
| imapc_user = shareduser
| imapc_master_user = %u
`----

Where do I add "list = children"? In the user-Dovecot's shared namespace
or on the shared-Dovecot's private namespace?

> 2. Configure the shared Dovecot:
>
> You need a master passdb that allows all existing users to log in as the "shareduser" user. You can probably simply do (not tested):
>
> passdb {
>   driver = static
>   args = user=shareduser
>   master = yes
> }
>
> The "shareduser" owns all of the actual shared mailboxes and has the
> necessary ACLs set up for individual users. ACLs use the master
> username (= the real username in this case) to do the ACL checks.

So this is kind of "backwards", since normally the imapc_master_user would be the static user and imapc_user would be dynamic, right?

All in all, a _very_ interesting configuration.

Regards,
Sven.

-- 
Sigmentation fault. Core dumped.

From tss at iki.fi Mon Jan 9 21:38:59 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 21:38:59 +0200
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <98fi3r4kv5v8@mids.svenhartge.de>
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de>
Message-ID: <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi>

On 9.1.2012, at 21.31, Sven Hartge wrote:

> ,----
> | # User's private mail location
> | mail_location = mdbox:~/mdbox
> |
> | # When creating any namespaces, you must also have a private namespace:
> | namespace {
> |   type = private
> |   separator = .
> |   prefix = INBOX.
> |   #location defaults to mail_location.
> |   inbox = yes
> | }
> |
> | namespace {
> |   type = public
> |   separator = .
> |   prefix = #shared.

I'd probably just use "Shared." as the prefix, since it is visible to users. Anyway if you want to use # you need to put the value in "quotes" or it's treated as a comment.

> |   location = imapc:~/imapc-shared
> |   subscriptions = no

list = children here

> | }
> |
> | imapc_host = m-st-sh-01.foo.bar
> | imapc_password = master-user-password
> | imapc_user = shareduser
> | imapc_master_user = %u
> `----
>
> Where do I add "list = children"? In the user-Dovecot's shared namespace
> or on the shared-Dovecot's private namespace?

Shared-Dovecot always has mailboxes (at least INBOX), so list=children would equal list=yes.

>> 2. Configure the shared Dovecot:
>
>> You need a master passdb that allows all existing users to log in as the "shareduser" user.
>> You can probably simply do (not tested):
>
>> passdb {
>>   driver = static
>>   args = user=shareduser pass=master-user-password
>>   master = yes
>> }
>
>> The "shareduser" owns all of the actual shared mailboxes and has the
>> necessary ACLs set up for individual users. ACLs use the master
>> username (= the real username in this case) to do the ACL checks.
>
> So this is kind of "backwards", since normally the imapc_master_user would be
> the static user and imapc_user would be dynamic, right?

Right. Also in this Dovecot you want a regular namespace without a prefix:

namespace inbox {
  separator = /
  list = yes
  inbox = yes
}

You might as well use the proper separator here in case you ever change it for users.

From sven at svenhartge.de Mon Jan 9 21:45:12 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 20:45:12 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi>
Message-ID: 

Timo Sirainen wrote:
> On 9.1.2012, at 21.31, Sven Hartge wrote:

>> ,----
>> | namespace {
>> |   type = public
>> |   separator = .
>> |   prefix = #shared.

> I'd probably just use "Shared." as the prefix, since it is visible to
> users. Anyway if you want to use # you need to put the value in
> "quotes" or it's treated as a comment.

I have to use "#shared.", because this is what Courier uses. Unfortunately I have to stick to the prefixes and separators currently in use.

>> |   location = imapc:~/imapc-shared

What is the syntax of this location? What does "imapc-shared" do in this
case?

>> |   subscriptions = no

> list = children here

>> | }
>> |
>> | imapc_host = m-st-sh-01.foo.bar
>> | imapc_password = master-user-password
>> | imapc_user = shareduser
>> | imapc_master_user = %u
>> `----
>>
>> Where do I add "list = children"? In the user-Dovecot's shared namespace
>> or on the shared-Dovecot's private namespace?

> Shared-Dovecot always has mailboxes (at least INBOX), so list=children would equal list=yes.

OK, seems logical.

>>>> 2. Configure the shared Dovecot:
>>
>>>> You need a master passdb that allows all existing users to log in as the "shareduser" user. You can probably simply do (not tested):
>>
>>>> passdb {
>>>>   driver = static
>>>>   args = user=shareduser pass=master-user-password
>>>>   master = yes
>>>> }
>>
>>>> The "shareduser" owns all of the actual shared mailboxes and has the
>>>> necessary ACLs set up for individual users. ACLs use the master
>>>> username (= the real username in this case) to do the ACL checks.
>>
>> So this is kind of "backwards", since normally the imapc_master_user would be
>> the static user and imapc_user would be dynamic, right?

> Right. Also in this Dovecot you want a regular namespace without a prefix:
>
> namespace inbox {
>   separator = /
>   list = yes
>   inbox = yes
> }
>
> You might as well use the proper separator here in case you ever change it for users.

Is this separator converted to '.' on the frontend? The department supporting our users will give me hell if anything visible changes in the layout of the folders for the end user.

Regards,
Sven.

-- 
Sigmentation fault. Core dumped.
From tss at iki.fi Mon Jan 9 22:05:48 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 22:05:48 +0200
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: 
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi>
Message-ID: 

On 9.1.2012, at 21.45, Sven Hartge wrote:

>>> |   location = imapc:~/imapc-shared
>
> What is the syntax of this location? What does "imapc-shared" do in this
> case?

It's the directory for index files. The backend IMAP server is used as a rather dummy storage, so if for example you do a FETCH 1:* BODYSTRUCTURE command, all of the message bodies are downloaded to the user's Dovecot server, which parses them. But with indexes this is done only once (same as with any other mailbox format). If you want SEARCH BODY to be fast, you'd also need to use some kind of full text search indexes.

If your users share the same UID (or 0666 mode would probably work too), you could share the index files rather than make them per-user. Then you could use imapc:/shared/imapc or something.

BTW. All message flags are shared between users. If you want per-user flags you'd need to modify the code.

>> Right. Also in this Dovecot you want a regular namespace without a prefix:
>
>> namespace inbox {
>>   separator = /
>>   list = yes
>>   inbox = yes
>> }
>
>> You might as well use the proper separator here in case you ever change it for users.
>
> Is this separator converted to '.' on the frontend?

Yes, as long as you explicitly specify the separator setting in the public namespace.

From sven at svenhartge.de Mon Jan 9 22:13:23 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 21:13:23 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi>
Message-ID: 

Timo Sirainen wrote:
> On 9.1.2012, at 21.45, Sven Hartge wrote:

>>>> |   location = imapc:~/imapc-shared
>>
>> What is the syntax of this location? What does "imapc-shared" do in this
>> case?

> It's the directory for index files. The backend IMAP server is used as
> a rather dummy storage, so if for example you do a FETCH 1:*
> BODYSTRUCTURE command, all of the message bodies are downloaded to the
> user's Dovecot server which parses them. But with indexes this is done
> only once (same as with any other mailbox format). If you want SEARCH
> BODY to be fast, you'd also need to use some kind of full text search
> indexes.

The bodies are downloaded but not stored, right? Just the index files are stored locally.

> If your users share the same UID (or 0666 mode would probably work
> too), you could share the index files rather than make them per-user.
> Then you could use imapc:/shared/imapc or something.

Hmm. Yes, this is a fully virtual setup; every user's mail is owned by the virtmail user. Does this sharing of index files have any security or privacy issues?

Not every user sees every shared folder, so an information leak has to be avoided at all costs.

> BTW. All message flags are shared between users. If you want per-user
> flags you'd need to modify the code.

No, I need shared message flags, as this is the reason we introduced shared folders, so one user can see if a mail has already been read or replied to.

>>> Right.
>>> Also in this Dovecot you want a regular namespace without a prefix:
>>
>>> namespace inbox {
>>>   separator = /
>>>   list = yes
>>>   inbox = yes
>>> }
>>
>>> You might as well use the proper separator here in case you ever change it for users.
>>
>> Is this separator converted to '.' on the frontend?

> Yes, as long as you explicitly specify the separator setting in the
> public namespace.

OK, good to know; one for my documentation with an '!' behind it.

Regards,
Sven

-- 
Sigmentation fault. Core dumped.

From tss at iki.fi Mon Jan 9 22:20:44 2012
From: tss at iki.fi (Timo Sirainen)
Date: Mon, 9 Jan 2012 22:20:44 +0200
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: 
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi>
Message-ID: <8002CDFC-88BB-47D2-96D2-8F7EFB26DD86@iki.fi>

On 9.1.2012, at 22.13, Sven Hartge wrote:

> Timo Sirainen wrote:
>> On 9.1.2012, at 21.45, Sven Hartge wrote:
>
>>>>> |   location = imapc:~/imapc-shared
>>>
>>> What is the syntax of this location? What does "imapc-shared" do in this
>>> case?
>
>> It's the directory for index files. The backend IMAP server is used as
>> a rather dummy storage, so if for example you do a FETCH 1:*
>> BODYSTRUCTURE command, all of the message bodies are downloaded to the
>> user's Dovecot server which parses them. But with indexes this is done
>> only once (same as with any other mailbox format). If you want SEARCH
>> BODY to be fast, you'd also need to use some kind of full text search
>> indexes.
>
> The bodies are downloaded but not stored, right? Just the index files
> are stored locally.

Right.

>> If your users share the same UID (or 0666 mode would probably work
>> too), you could share the index files rather than make them per-user.
>> Then you could use imapc:/shared/imapc or something.
>
> Hmm. Yes, this is a fully virtual setup; every user's mail is owned by
> the virtmail user. Does this sharing of index files have any security or
> privacy issues?

There are no privacy issues, at least currently, since there is no per-user data. If you had wanted per-user flags this wouldn't have worked.

> Not every user sees every shared folder, so an information leak has to
> be avoided at all costs.

Oh, that reminds me, it doesn't actually work :) Because Dovecot deletes those directories it doesn't see on the remote server. You might be able to use imapc:~/imapc:INDEX=/shared/imapc though. The nice thing about shared imapc indexes is that each user doesn't have to re-index the messages.

From bind at enas.net Mon Jan 9 22:23:18 2012
From: bind at enas.net (Urban Loesch)
Date: Mon, 09 Jan 2012 21:23:18 +0100
Subject: [Dovecot] Proxy login failures
In-Reply-To: <69796D8B-5CFE-48A2-A092-B1A32331BC1F@iki.fi>
References: <4F0ABF0A.1080404@enas.net> <69796D8B-5CFE-48A2-A092-B1A32331BC1F@iki.fi>
Message-ID: <4F0B4CB6.2080703@enas.net>

On 09.01.2012 19:40, Timo Sirainen wrote:
> On 9.1.2012, at 12.18, Urban Loesch wrote:
>
>> I'm using two dovecot pop3/imap proxies in front of our dovecot servers.
>> For some days now I have seen many of the following errors in the logs of the two proxy servers:
>>
>> dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0): user=, method=PLAIN, rip=remote-ip, lip=localip
>>
>> When this happens the client gets the following error from the proxy:
>> -ERR [IN-USE] Account is temporarily unavailable.

> The connection to the remote server dies before authentication finishes. The reason why that happens should be logged by the backend server. Sounds like it crashes. Check for ANY error messages in the backend servers.

I did check that, but I found nothing in the logs.

The only thing I can think of is that all 7 backend servers are virtual servers (using technology from http://linux-vserver.org) and they are all running on the same physical machine (DELL PER610 with 32GB RAM, RAID 10 SAS - load between 0.5 and 2.0, iowait about 1-5%). So they are sharing the same kernel.

Also, all servers are connected to a mysql server running on a different machine in the same subnet. Could it be that either the kernel needs some TCP tuning, or perhaps that the answers from the remote mysql server could be too slow in some cases?

Now I have switched 2 of the 7 backend servers to the backup mysql slave server. It should be no problem because dovecot is only reading from it. If it helps I will see tomorrow, and I'll let you know.

thanks
Urban

From sven at svenhartge.de Mon Jan 9 22:24:09 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Mon, 9 Jan 2012 21:24:09 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> <68fi0aakv5v8@mids.svenhartge.de> <78fi1g2kv5v8@mids.svenhartge.de> <98fi3r4kv5v8@mids.svenhartge.de> <183FE5C1-811F-44A1-B7C6-BC57AC92C0C2@iki.fi> <8002CDFC-88BB-47D2-96D2-8F7EFB26DD86@iki.fi>
Message-ID: 

Timo Sirainen wrote:
> On 9.1.2012, at 22.13, Sven Hartge wrote:
>> Timo Sirainen wrote:
>>> On 9.1.2012, at 21.45, Sven Hartge wrote:

>>>>>> |   location = imapc:~/imapc-shared
>>>>
>>>> What is the syntax of this location? What does "imapc-shared" do in
>>>> this case?
>>
>>> It's the directory for index files. The backend IMAP server is used
>>> as a rather dummy storage, so if for example you do a FETCH 1:*
>>> BODYSTRUCTURE command, all of the message bodies are downloaded to
>>> the user's Dovecot server which parses them. But with indexes this
>>> is done only once (same as with any other mailbox format). If you
>>> want SEARCH BODY to be fast, you'd also need to use some kind of
>>> full text search indexes.
>>
>>> If your users share the same UID (or 0666 mode would probably work
>>> too), you could share the index files rather than make them
>>> per-user. Then you could use imapc:/shared/imapc or something.
>>
>> Hmm. Yes, this is a fully virtual setup; every user's mail is owned by
>> the virtmail user. Does this sharing of index files have any security
>> or privacy issues?

> There are no privacy issues, at least currently, since there is no
> per-user data. If you had wanted per-user flags this wouldn't have
> worked.

OK. I think I will go with the per-user index files for now and pay the extra in bandwidth and processing power needed. All in all, of 10,000 users, only about 100 use shared folders.

Regards,
Sven.

-- 
Sigmentation fault. Core dumped.
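To recap the recipe that emerged in this thread, here is the whole setup collected into one sketch; it is untested and pieced together exactly from the mails above, with the hostname and master password as placeholders:

# --- user-facing Dovecots ---
namespace {
  type = public
  separator = .
  prefix = "#shared."               # '#' must be quoted or it starts a comment
  location = imapc:~/imapc-shared   # local directory for the imapc index files
  list = children
  subscriptions = no
}
imapc_host = m-st-sh-01.foo.bar
imapc_user = shareduser
imapc_master_user = %u
imapc_password = master-user-password

# --- shared-folder Dovecot ---
namespace inbox {
  separator = /
  list = yes
  inbox = yes
}
# every real user may log in as the mailbox owner "shareduser";
# per-mailbox ACLs then check the master (= real) username
passdb {
  driver = static
  args = user=shareduser pass=master-user-password
  master = yes
}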
From xamiw at arcor.de Tue Jan 10 00:30:21 2012
From: xamiw at arcor.de (xamiw at arcor.de)
Date: Mon, 9 Jan 2012 23:30:21 +0100 (CET)
Subject: [Dovecot] uid / gid and systemusers
In-Reply-To: 
References: <1809497881.1135529.1326037030206.JavaMail.ngmail@webmail10.arcor-online.net>
Message-ID: <778892216.47622.1326148221293.JavaMail.ngmail@webmail16.arcor-online.net>

That's it, thanks a lot.

----- Original Message ----
From: Timo Sirainen
To: xamiw at arcor.de
Date: 09.01.2012 19:44
Subject: Re: [Dovecot] uid / gid and systemusers

> On 8.1.2012, at 17.37, xamiw at arcor.de wrote:
>
>> Jan 8 16:18:28 test dovecot: User q is missing UID (see mail_uid setting)
>> Jan 8 16:18:28 test dovecot: imap-login: Internal login failure (auth failed, 1 attempts): user=, method=PLAIN, rip=AAA.BBB.CCC.DDD, lip=EEE.FFF.GGG.HHH TLS <--- edited by me
> ..
>> auth default {
>>   mechanisms = plain
>>   passdb shadow {
>>   }
>> }
>
> You have a passdb, but no userdb.
>
>> /etc/passwd:
>> ...
>> g:x:1000:1000:test1,,,:/home/g:/bin/bash
>> q:x:1001:1001:test2,,,:/home/q:/bin/bash
>
> To use /etc/passwd as userdb, you need to add userdb passwd {}
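(A minimal sketch of the resulting config, in the v1.x style quoted above; untested, and only the userdb block is the actual fix:)

auth default {
  mechanisms = plain
  passdb shadow {
  }
  # look up UID/GID/home for system users from /etc/passwd
  userdb passwd {
  }
}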
From tss at iki.fi Tue Jan 10 00:39:00 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 10 Jan 2012 00:39:00 +0200
Subject: [Dovecot] Proxy login failures
In-Reply-To: <4F0B4CB6.2080703@enas.net>
References: <4F0ABF0A.1080404@enas.net> <69796D8B-5CFE-48A2-A092-B1A32331BC1F@iki.fi> <4F0B4CB6.2080703@enas.net>
Message-ID: <27646CE2-F912-4D61-9016-F6BBE0DA9C56@iki.fi>

On 9.1.2012, at 22.23, Urban Loesch wrote:

>>> I'm using two dovecot pop3/imap proxies in front of our dovecot servers.
>>> For some days now I have seen many of the following errors in the logs of the two proxy servers:
>>>
>>> dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0): user=, method=PLAIN, rip=remote-ip, lip=localip
>>>
>>> When this happens the client gets the following error from the proxy:
>>> -ERR [IN-USE] Account is temporarily unavailable.
>> The connection to the remote server dies before authentication finishes. The reason why that happens should be logged by the backend server. Sounds like it crashes. Check for ANY error messages in the backend servers.
>
> I did check that, but I found nothing in the logs.

It's difficult to guess then. At the very least there should be an "Info" message about a new connection at the time when this failure happened. If there's not even that, then maybe the problem is network related.

> The only thing I can think of is that all 7 backend servers are virtual servers (using technology from http://linux-vserver.org) and they are all running on the same physical machine (DELL PER610 with 32GB RAM, RAID 10 SAS - load between 0.5 and 2.0, iowait about 1-5%). So they are sharing the same kernel.

For testing, or what's the point in doing that? :) But the load is low enough that I doubt it has anything to do with it.

> Also, all servers are connected to a mysql server running on a different machine in the same subnet. Could it be that either the kernel needs some TCP tuning, or perhaps that the answers from the remote mysql server could be too slow in some cases?

A MySQL server problem would show up with a different error message. TCP tuning is also unlikely to help, since the connection probably dies within a second. Actually it would be a good idea to log the duration. This patch adds it:
http://hg.dovecot.org/dovecot-2.0/raw-rev/8438f66433a6

These are the only explanations that I can think of for the error:

* Remote Dovecot crashes / kills the connection (it would log an error message)
* Remote Dovecot server is fully occupied handling existing connections (it would log a warning)
* Network trouble, something in the middle disconnecting the connection
* Source/destination OS trouble, disconnecting the connection
* Some hang that results in eventual disconnection. The duration patch would show if this is the case.

From dmiller at amfes.com Tue Jan 10 02:21:32 2012
From: dmiller at amfes.com (Daniel L. Miller)
Date: Mon, 09 Jan 2012 16:21:32 -0800
Subject: [Dovecot] Solr plugin
In-Reply-To: 
References: <4F04EDC8.6060809@amfes.com> <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com> <4F0A2980.7050003@amfes.com> <4F0A2B4D.2040106@amfes.com>
Message-ID: 

On 1/9/2012 7:00 AM, Timo Sirainen wrote:
> On 9.1.2012, at 1.48, Daniel L. Miller wrote:
>
>> On 1/8/2012 3:40 PM, Daniel L. Miller wrote:
>>> On 1/6/2012 12:32 PM, Daniel L. Miller wrote:
>>>> On 1/6/2012 9:36 AM, Timo Sirainen wrote:
>>>>> On 6.1.2012, at 19.30, Daniel L. Miller wrote:
>>>>>
>>> Jan 8 15:40:09 bubba dovecot: imap(user1 at domain.com): Error: fts_solr: Lookup failed: 400 undefined field CC
>>> Jan 8 15:40:09 bubba dovecot: imap: Error:
>>>
>>>
>> Looking at the Solr output - looks like the CC parameter is being capitalized while all the other fieldnames are lowercase.

> Did you look at the input? Looking at the code, it should be lowercased. Maybe Solr just uppercases it for some reason. Are you using a Solr schema that has a "cc" field?

I see the following in a running Solr instance. This is generated from a Windoze Thunderbird 8.0 client:

Jan 9, 2012 4:20:13 PM org.apache.solr.core.SolrCore execute
INFO: [] webapp=/solr path=/select params={fl=uid,score&sort=uid+asc&fq=%2Bbox:c1af150abfc9df4d7f7a00003bc41c5f+%2Buser:"dmiller at amfes.com"&q=from:"test"+OR+to:"test"+OR+CC:"test"+OR+subject:"test"+OR+body:"test"&rows=9038} status=400 QTime=4

That's where I see the uppercased CC.
-- 
Daniel

From tss at iki.fi Tue Jan 10 02:28:46 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 10 Jan 2012 02:28:46 +0200
Subject: [Dovecot] Solr plugin
In-Reply-To: 
References: <4F04EDC8.6060809@amfes.com> <9D72A466-6ECD-4A10-A52A-20BF56B32916@iki.fi> <4F072FA9.2020009@amfes.com> <75CA67D3-81FF-4FD5-8D61-6992E808DB09@iki.fi> <4F075A76.1040807@amfes.com> <4F0A2980.7050003@amfes.com> <4F0A2B4D.2040106@amfes.com>
Message-ID: <78E1EDA2-62A8-4CD3-BA82-6239FDC975EB@iki.fi>

On 10.1.2012, at 2.21, Daniel L. Miller wrote:

>> Did you look at the input? Looking at the code, it should be lowercased. Maybe Solr just uppercases it for some reason. Are you using a Solr schema that has a "cc" field?
>
> I see the following in a running Solr instance. This is generated from a Windoze Thunderbird 8.0 client:
>
> Jan 9, 2012 4:20:13 PM org.apache.solr.core.SolrCore execute
> INFO: [] webapp=/solr path=/select params={fl=uid,score&sort=uid+asc&fq=%2Bbox:c1af150abfc9df4d7f7a00003bc41c5f+%2Buser:"dmiller at amfes.com"&q=from:"test"+OR+to:"test"+OR+CC:"test"+OR+subject:"test"+OR+body:"test"&rows=9038} status=400 QTime=4

Oh, you were talking about the searching part, not indexing. Yeah, there it wasn't necessarily lowercased.
Fixed: http://hg.dovecot.org/dovecot-2.1/rev/075591a4b6a8

From stan at hardwarefreak.com Tue Jan 10 04:19:22 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Mon, 09 Jan 2012 20:19:22 -0600
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <08fhdkhkv5v8@mids.svenhartge.de>
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0ADD87.5080103@hardwarefreak.com> <08fhdkhkv5v8@mids.svenhartge.de>
Message-ID: <4F0BA02A.3050405@hardwarefreak.com>

On 1/9/2012 7:48 AM, Sven Hartge wrote:

> It seems my initial idea was not so bad after all ;)

Yeah, but you didn't know how "not so bad" it really was until you had me analyze it, flesh it out, and confirm it. ;)

> Now I "just" need to build a little test setup, put some dummy users on
> it and see if anything bad happens while accessing the shared folders
> and how the system reacts should the shared folder server be down.

It won't be down. Because instead of using NFS you're going to use GFS2 for the shared folder LUN, so each user accesses the shared folders locally just as they do their mailbox. Pat yourself on the back Sven, you just eliminated a SPOF. ;)

>> How many total cores per VMware node (all sockets)?
>
> 8

Fairly beefy. Dual socket quad core Xeons I'd guess.

> Here are the memory statistics at 14:30 o'clock:
>
>              total       used       free     shared    buffers     cached
> Mem:         12046      11199        847          0         88       7926
> -/+ buffers/cache:       3185       8861
> Swap:         5718         10       5707

That doesn't look too bad. How many IMAP user connections at that time? Is that a high average or low for that day? The RAM numbers in isolation only paint a partial picture...

> The SAN has plenty of space. Over 70TiB at this time, with another 70TiB
> having just arrived and waiting to be connected.

140TB of 15k storage. Wow, you're so underprivileged. ;)

> The iSCSI storage nodes (HP P4500) use 600GB SAS6 at 15k rpm with 12
> disks per node, configured in 2 RAID5 sets with 6 disks each.
>
> But this is internal to each storage node, which is kind of a black box
> and has to be treated as such.

I cringe every time I hear 'black box'...

> The HP P4500 is a bit unique, since it does not consist of a head node
> with storage arrays connected to it, but of individual storage nodes
> forming a self-balancing iSCSI cluster. (The nodes consist of DL320s G2.)

The 'black box' is the Lefthand Networks SAN/iQ software stack. I wasn't that impressed with it when I read about it 8 or so years ago. IIRC, load balancing across cluster nodes is accomplished by resending host packets from a receiving node to another node after performing special sauce calculations regarding cluster load. Hence the need, apparently, for a full power, hot running, multi-core x86 CPU instead of an embedded low power/wattage type CPU such as MIPS, PPC, i960-descended IOP3xx, or even the Atom if they must stick with x86 binaries. If this choice was merely due to economy of scale of their server boards, they could have gone with a single socket board instead of the dual, which would have saved money. So this choice of a dual socket Xeon board wasn't strictly based on cost or ease of manufacture.

Many/most purpose-built SAN arrays on the market don't use full power x86 chips, but embedded RISC chips, to cut cost, power draw, and heat generation.
These RISC chips are typically in-order designs, don't have branch prediction or register renaming logic circuits, and have tiny caches. This is because block moving code handles streams of data and doesn't typically branch nor have many conditionals. For streaming apps, data caches simply get in the way, although an instruction cache is beneficial. HP's choice of full power CPUs that have such features suggests branching conditional code is used. Which makes sense when running algorithms that attempt to calculate the least busy node.

Thus, this 'least busy node' calculation and packet shipping adds non-trivial latency to host SCSI IO command completion, compared to traditional FC/iSCSI SAN arrays, or DAS, and thus has implications for high IOPS workloads and especially those making heavy use of FSYNC, such as SMTP and IMAP servers. FSYNC performance may not be an issue if the controller instantly acks FSYNC before data hits platter, but then you may run into bigger problems as you have no guarantee data hit the disk. Or, you may not run into perceptible performance issues at all given the number of P4500s you have and the proportionally light IO load of your 10K mail users. Sheer horsepower alone may prove sufficient. Just in case, it may prove beneficial to fire up ImapTest or some other synthetic mail workload generator to see if array response times are acceptable under heavy mail loads.

> So far, I have had no performance or other problems with this setup and
> it scales quite nicely, as you buy as you grow.

I'm glad the Lefthand units are working well for you so far. Are you hitting the arrays with any high random IOPS workloads as of yet?

> And again, price was also a factor; deploying an FC-SAN would have cost
> us more than three times what the deployment of an iSCSI solution did,
> because the latter is "just" ethernet, while the former would have
> needed a lot more totally new components.

I guess that depends on the features you need, such as PIT backups, remote replication, etc. I expanded a small FC SAN about 5 years ago for the same cost as an iSCSI array, simply due to the fact that the least expensive _quality_ unit with a good reputation happened to have both iSCSI and FC ports included. It was a 1U 8x500GB Nexsan Satablade, their smallest unit (since discontinued). Ran about $8K USD IIRC. Nexsan continues to offer excellent products. For anyone interested in high density high performance FC+iSCSI SAN arrays at a midrange price, add Nexsan to your vendor research list: http://www.nexsan.com

> No, at that time (2005/2006) nobody thought of a SAN. That is a fairly
> "new" idea here, first implemented for the VMware cluster in 2008.

You must have slower adoption on that side of the pond. As I just mentioned, I was expanding an already existing small FC SAN in 2006 that had been in place since 2004 IIRC. And this was at a small private 6-12 school with an enrollment of about 500. iSCSI SANs took off like a rocket in the States around 06/07, in tandem with VMware ESX going viral here.

> More space. The IMAP usage became more prominent, which caused a steep
> rise in the space needed on the mail storage server. But 74GiB SCA
> drives were expensive and 130GiB SCA drives were not available at that
> time.

With 144TB of HP Lefthand 15K SAS drives it appears you're no longer having trouble funding storage purchases. ;)

>>> And this is why I kind of hold this upgrade back until dovecot 2.1 is
>>> released, as it has some optimizations here.
>> Sounds like it's going to be a bit more than an 'upgrade'. ;)
>
> Well, yes. It is more a re-implementation than an upgrade.

It actually sounds like fun. To me anyway. ;) I love this stuff.

> Central IT here these days only uses x86-based systems. There were some
> Sun SPARC systems, but both have been decommissioned. New SPARC hardware
> is just too expensive for our scale. And if you want to use
> virtualization, you can either use only SPARC systems and partition
> them or use x86 based systems. And then there is the need to virtualize
> Windows, so x86 is the only option.

Definitely a trend for a while now.

> Most bigger universities in Germany make nearly exclusive use of SPARC
> systems, but they have had a central IT with big irons (IBM, HP, etc.)
> since back in the 1960's, so naturally they continue on that path.

Siemens/Fujitsu machines or SUN machines? I've been under the impression that Fujitsu sold more SPARC boxen in Europe, or at least Germany, than SUN did, due to the Siemens partnership. I could be wrong here.

-- 
Stan

From robert at schetterer.org Tue Jan 10 08:06:38 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Tue, 10 Jan 2012 07:06:38 +0100
Subject: [Dovecot] Postfix user map
In-Reply-To: <70A95B54-98CE-4B4D-8B76-CDA279353202@iki.fi>
References: <4F0AFCF4.1050506@iki.fi> <4F0B0871.6040500@gmail.com> <4F0B09BC.3010300@schetterer.org> <4F0B0BE4.8010907@gmail.com> <4F0B0CD9.3090402@schetterer.org> <4F0B1388.209@schetterer.org> <70A95B54-98CE-4B4D-8B76-CDA279353202@iki.fi>
Message-ID: <4F0BD56E.9090808@schetterer.org>

On 09.01.2012 19:12, Timo Sirainen wrote:
> On 9.1.2012, at 18.19, Robert Schetterer wrote:
>
>> I am afraid I wasn't totally correct here;
>> in fact I haven't seen backscatter on overquota on my servers
>> since using Dovecot LMTP with Postfix
>
> LMTP shouldn't matter here. In most configs mails are put to the queue first, and only from there they are sent to LMTP, and if LMTP rejects a mail then backscatter is sent. Maybe the difference you're seeing is that it's now Postfix sending the bounce (or perhaps skipping it?) instead of dovecot-lda (unless you gave the -e parameter).

Hi Timo, thanks for clearing that up. Anyway, backscatter with overquota was always rare, so no big problem.

-- 
Best Regards
MfG Robert Schetterer

Germany/Munich/Bavaria

From yubao.liu at gmail.com Tue Jan 10 08:58:37 2012
From: yubao.liu at gmail.com (Liu Yubao)
Date: Tue, 10 Jan 2012 14:58:37 +0800
Subject: [Dovecot] Strange error: DIGEST-MD5 mechanism can't be supported with given passdbs
In-Reply-To: <491E7C43-2C87-4FD6-8AC0-E79F22E9749F@iki.fi>
References: <4F05EABC.7070309@gmail.com> <4F06D272.5010200@bunbun.be> <4F071E3B.2060405@gmail.com> <1325868288.17774.30.camel@hurina> <4F07332B.70708@gmail.com> <4F07BDBB.3060204@gmail.com> <491E7C43-2C87-4FD6-8AC0-E79F22E9749F@iki.fi>
Message-ID: 

On Tue, Jan 10, 2012 at 2:59 AM, Timo Sirainen wrote:
> On 7.1.2012, at 5.36, Yubao Liu wrote:
>
>> In the old version, "auth->passdbs" contains all passdbs; this revision
>> changes "auth->passdbs" to only contain non-master passdbs.
>>
>> I'm not sure which fix is better, or even whether my proposal is correct
>> or complete:
>>  a) in src/auth/auth.c:auth_passdb_preinit(), insert master passdb to
>>     auth->passdbs too, and remove duplicate code for masterdbs
>>     in auth_init() and auth_deinit().
>
> Not a good idea. The master passdb needs to be treated specially, otherwise you might accidentally allow regular users logging in as other users.

Sorry, I don't fully understand this.
This scheme adds all master passdbs to auth->passdbs; auth->masterdbs is not changed and still contains only the master passdbs. I guess Dovecot looks up auth->masterdbs for master users and auth->passdbs for regular users; regular users don't know the master users' passwords, so they can't log in as other users.

http://wiki2.dovecot.org/Authentication/MasterUsers
The "Example configuration" there already shows that a master user account can be added to auth->passdbs too.

This scheme does bring an unexpected issue: the master users can't have separate passwords for regular logins as themselves (because the masterdbs are also added to the passdbs), so the risk of a password leak increases much, but I don't think it's good practice to do regular logins with a master user account anyway.

Quoted from the same wiki page (I really enjoy the wonderful Dovecot wiki, it's the most well organized and documented wiki in open source projects, thank you very much!):

"If you want master users to be able to log in as themselves, you'll need to either add the user to the normal passdb or add the passdb to dovecot.conf twice, with and without master=yes. Note that if the passdbs point to different locations, the user can have a different password when logging in as other users than when logging in as himself. This is a good idea since it can avoid accidentally logging in as someone else."

Anyway, scheme b) is much less risky and much simpler, just a little annoying code duplication :-)

>>  b) add similar code for masterdbs in auth_passdb_list_have_verify_plain(),
>>     auth_passdb_list_have_lookup_credentials(), auth_passdb_list_have_set_credentials().
>
> Kind of annoying code duplication, but .. I guess it can't really be helped. Added:
> http://hg.dovecot.org/dovecot-2.0/rev/bed15faedfd4

Thank you very much, I don't have to maintain my private package :-)

>> Another related question is the "pass" option in the master passdb; if I set it to "yes",
>> the authentication fails:
> ..
>> My normal passdb is a PAM passdb, it doesn't support credential lookups, that's
>> reasonable,
>
> Right.
>
>> but I feel the comment for the "pass" option is confusing:
>>
>>  # Unless you're using PAM, you probably still want the destination user to
>>  # be looked up from passdb that it really exists. pass=yes does that.
>>  pass = yes
>> }
>>
>> According to the comment, it's to check whether the real user exists, so why
>> does it check another passdb and not the userdb?
>
> Well.. It is going to check userdb eventually anyway, so it would still fail, just a bit later and maybe with a different error message.

If Dovecot doesn't check the real user's password against a passdb (and actually it doesn't have the password of the real user, because it's doing master user proxy authorization), it won't fail on the userdb lookup, because the userdb does contain the real user; in my case the real user is a system user and absolutely exists.

>>  Even if it must check against a passdb,
>> in this case it's obviously not necessary to look up credentials; it's
>> enough to look up the user name only.
>
> There's currently no passdb that supports a "does user exist?" lookup but doesn't support credentials lookup, so this is more of a theoretical issue. (I guess maybe PAM could be abused in some configurations to do the check, but that's rather ugly..)

I don't understand why master user proxy authorization in Dovecot has to check the real user against his credentials; does that mean "user*master" has to authenticate twice, once for the master and once for the user? Often the client can't provide two passwords in a single login, and a regular passdb such as a PAM passdb doesn't support credentials lookups.

So I feel it's better if Dovecot checks only the destination user name in the passdbs or userdbs, after the master user authentication part succeeds, to decide whether the destination user exists, just as the comment for "pass=yes" describes. This may not be a bug; IMHO it is just a confusing feature.

Regards,
Yubao Liu
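(For reference, the wiki's "add the passdb twice" advice quoted above translates to something like this sketch; v2.x syntax, with the passwd-file path as a placeholder:)

# master logins as other users ("user*master")
passdb {
  driver = passwd-file
  args = /etc/dovecot/masterusers
  master = yes
}
# regular logins of the master users as themselves
passdb {
  driver = passwd-file
  args = /etc/dovecot/masterusers
}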
From l.chelchowski at slupsk.eurocar.pl Tue Jan 10 11:34:37 2012
From: l.chelchowski at slupsk.eurocar.pl (l.chelchowski)
Date: Tue, 10 Jan 2012 10:34:37 +0100
Subject: [Dovecot] Quota-warning and setresgid
Message-ID: 

Hi!

Please help me with this. The problem exists when quota-warning is executing:

LOG:
Jan 10 10:15:06 lmtp(85973): Debug: none: root=, index=, control=, inbox=, alt=
Jan 10 10:15:06 lmtp(85973): Info: Connect from local
Jan 10 10:15:06 lmtp(85973): Debug: Loading modules from directory: /usr/local/lib/dovecot
Jan 10 10:15:06 lmtp(85973): Debug: Module loaded: /usr/local/lib/dovecot/lib10_quota_plugin.so
Jan 10 10:15:06 lmtp(85973): Debug: Module loaded: /usr/local/lib/dovecot/lib90_sieve_plugin.so
Jan 10 10:15:06 lmtp(85973): Debug: auth input: tester at domain.eu home=/home/vmail/domain.eu/tester/ mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public uid=101 gid=12 quota_rule=*:storage=2097 acl_groups=
Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public
Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: plugin/quota_rule=*:storage=2097
Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: plugin/acl_groups=
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Effective uid=101, gid=12, home=/home/vmail/domain.eu/tester/
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota root: name=user backend=dict args=:proxy::quotadict
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: root=user mailbox=* bytes=2147328 messages=0
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: root=user mailbox=Trash bytes=+429465 (20%) messages=0
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: root=user mailbox=SPAM bytes=+429465 (20%) messages=0
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: bytes=1717862 (80%) messages=0 reverse=no command=quota-warning 80 tester at domain.eu
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: bytes=1932595 (90%) messages=0 reverse=no command=quota-warning 90 tester at domain.eu
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: bytes=2039961 (95%) messages=0 reverse=no command=quota-warning 95 tester at domain.eu
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: dict quota: user=tester at domain.eu, uri=proxy::quotadict, noenforcing=0
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : type=private, prefix=, sep=/, inbox=yes, hidden=no, list=yes, subscriptions=yes location=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: maildir++: root=/home/vmail/domain.eu/tester, index=/var/mail/vmail/domain.eu/tester at domain.eu/index/public, control=, inbox=/home/vmail/domain.eu/tester, alt=
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : type=public, prefix=Public/, sep=/, inbox=no, hidden=no, list=children, subscriptions=yes location=maildir:/home/vmail/public/:CONTROL=/var/mail/vmail/domain.eu/tester/control/public:INDEX=/var/mail/vmail/domain.eu/tester/index/public:LAYOUT=fs
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: fs: root=/home/vmail/public, index=/var/mail/vmail/domain.eu/tester/index/public, control=/var/mail/vmail/domain.eu/tester/control/public, inbox=, alt=
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : type=shared, prefix=Shared/%u/, sep=/, inbox=no, hidden=no, list=children, subscriptions=no location=maildir:%h/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/shared/%u
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: shared: root=/var/run/dovecot, index=, control=, inbox=, alt=
...
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: quota: Executing warning: quota-warning 95 tester at domain.eu
Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Info: bLUfAJoBDE/VTwEA9hAjDg: sieve: msgid=<4F0C0180.3040704 at domain.eu>: stored mail into mailbox 'INBOX'
Jan 10 10:15:06 lmtp(85973): Info: Disconnect from local: Client quit (in reset)
Jan 10 10:15:06 lda: Debug: Loading modules from directory: /usr/local/lib/dovecot
Jan 10 10:15:06 lda: Debug: Module loaded: /usr/local/lib/dovecot/lib01_acl_plugin.so
Jan 10 10:15:06 lda: Debug: Module loaded: /usr/local/lib/dovecot/lib10_quota_plugin.so
Jan 10 10:15:06 lda: Debug: Module loaded: /usr/local/lib/dovecot/lib90_sieve_plugin.so
Jan 10 10:15:06 lda: Debug: auth input: tester at domain.eu home=/home/vmail/domain.eu/tester/ mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public uid=101 gid=12 quota_rule=*:storage=2097 acl_groups=
Jan 10 10:15:06 lda: Debug: Added userdb setting: mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public
Jan 10 10:15:06 lda: Debug: Added userdb setting: plugin/quota_rule=*:storage=2097
Jan 10 10:15:06 lda: Debug: Added userdb setting: plugin/acl_groups=
Jan 10 10:15:06 lda(tester at domain.eu): Fatal: setresgid(12(mail),12(mail),101(vmail)) failed with euid=101(vmail): Operation not permitted
Jan 10 10:15:06 master: Error: service(quota-warning): child 85974 returned error 75

dovecot -n
# 2.0.16: /usr/local/etc/dovecot/dovecot.conf
# OS: FreeBSD 8.2-RELEASE-p3 amd64
auth_master_user_separator = *
auth_mechanisms = plain login cram-md5
auth_username_format = %Lu
dict {
  quotadict = mysql:/usr/local/etc/dovecot/dovecot-dict-sql.conf
}
disable_plaintext_auth = no
first_valid_gid = 12
first_valid_uid = 101
log_path = /var/log/dovecot.log
mail_debug = yes
mail_gid = vmail
mail_plugins = " quota acl"
mail_privileged_group = vmail
mail_uid = vmail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date
namespace {
  inbox = yes
  location = 
  prefix = 
  separator = /
  type = private
}
namespace {
  list = children
  location = maildir:/home/vmail/public/:CONTROL=/var/mail/vmail/%d/%n/control/public:INDEX=/var/mail/vmail/%d/%n/index/public:LAYOUT=fs
  prefix = Public/
  separator = /
  subscriptions = yes
  type = public
}
namespace {
  list = children
  location = maildir:%%h/:INDEX=/var/mail/vmail/%d/%u/index/shared/%%u
  prefix = Shared/%%u/
  separator = /
  subscriptions = no
  type = shared
}
passdb {
  args = /usr/local/etc/dovecot/dovecot-sql.conf
  driver = sql
}
passdb {
  args = /usr/local/etc/dovecot/passwd.masterusers
  driver = passwd-file
  master = yes
  pass = yes
}
plugin {
  acl = vfile:/usr/local/etc/dovecot/acls
  acl_shared_dict = file:/usr/local/etc/dovecot/shared/shared-mailboxes.db
  autocreate = Trash
  autocreate2 = Junk
  autocreate3 = Sent
  autocreate4 = Drafts
  autocreate5 = Archives
  autosubscribe = Trash
  autosubscribe2 = Junk
  autosubscribe3 = Sent
  autosubscribe4 = Drafts
  autosubscribe5 = Public/Poczta
  autosubscribe6 = Archives
  fts = squat
  fts_squat = partial=4 full=10
  quota = dict:user::proxy::quotadict
  quota_rule2 = Trash:storage=+20%%
  quota_rule3 = SPAM:storage=+20%%
  quota_warning = storage=80%% quota-warning 80 %u
  quota_warning2 = storage=90%% quota-warning 90 %u
  quota_warning3 = storage=95%% quota-warning 95 %u
  sieve = ~/.dovecot.sieve
  sieve_before = /usr/local/etc/dovecot/sieve/default.sieve
  sieve_dir = ~/sieve
  sieve_global_dir = /usr/local/etc/dovecot/sieve
  sieve_global_path = /usr/local/etc/dovecot/sieve/default.sieve
}
protocols = imap pop3 sieve lmtp
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = mail
    mode = 0660
    user = postfix
  }
  unix_listener auth-userdb {
    group = mail
    mode = 0660
    user = vmail
  }
}
service dict {
  unix_listener dict {
    mode = 0600
    user = vmail
  }
}
service imap {
  executable = imap postlogin
}
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0660
    user = postfix
  }
}
service managesieve {
  drop_priv_before_exec = yes
}
service pop3 {
  drop_priv_before_exec = yes
}
service postlogin {
  executable = script-login rawlog
}
service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  unix_listener quota-warning {
    user = vmail
  }
  user = vmail
}
ssl = no
userdb {
  args = /usr/local/etc/dovecot/dovecot-sql.conf
  driver = sql
}
verbose_proctitle = yes
protocol imap {
  imap_client_workarounds = delay-newmail tb-extra-mailbox-sep
  mail_plugins = " acl imap_acl autocreate fts fts_squat quota imap_quota"
}
protocol lmtp {
  mail_plugins = quota sieve
}
protocol pop3 {
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
  pop3_uidl_format = %08Xu%08Xv
}
protocol lda {
  deliver_log_format = msgid=%m: %$
  mail_plugins = sieve acl quota
  postmaster_address = postmaster at domain.eu
  sendmail_path = /usr/sbin/sendmail
}

-- 
Łukasz
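(The quota-warning.sh script itself is not shown in the post. For reference, such scripts usually follow the shape of the Dovecot wiki example below; the dovecot-lda path and the sender address are assumptions, and the quota override shown is the wiki's maildir example, so with the dict backend used above its value would need to match that backend:)

#!/bin/sh
PERCENT=$1
USER=$2
# Deliver the warning via dovecot-lda; the "noenforcing" quota override
# keeps the warning mail itself from being rejected for being over quota.
cat << EOF | /usr/local/libexec/dovecot/dovecot-lda -d "$USER" -o "plugin/quota=maildir:User quota:noenforcing"
From: postmaster at domain.eu
Subject: Quota warning

Your mailbox is now $PERCENT% full.
EOF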
Check for ANY error messages in backend servers. >>> >> >> I still did that, but I found nothing in the logs. > > It's difficult to guess then. At the very least there should be an "Info" message about a new connection at the time when this failure happened. If there's not even that, then maybe the problem is network related. No, there is nothing. > >> The only thing I could think about is that all 7 backend servers are virtual servers (using technology from http://linux-vserver.org) and they all are running >> on the same physical machine (DELL PER610 with 32GB RAM, RAID 10 SAS - load between 0.5 and 2.0, iowait about 1-5%). So they are sharing the same kernel. > > For testing, or what's the point in doing that? :) But the load is low enough that I doubt it has anything to do with it. This because the hardware is fast enough for handling about 40.000 Mailaccounts (both IMAP and POP). That tells me that dovecot is a really good piece of software. Very performant in my eyes. > >> Also all servers are connected to a mysql server, running on a different machine in the same subnet. Could it be that either the kernel needs some tcp tuning ore perhaps the answers from the remote mysql server >> could be to slow in some cases? > > MySQL server problem would show up with a different error message. TCP tuning is also unlikely to help, since the connection probably dies within a second. Actually it would be a good idea to log the duration. This patch adds it: > http://hg.dovecot.org/dovecot-2.0/raw-rev/8438f66433a6 > I installed the patch on my proxies and I got this: ... Jan 10 09:30:45 imap2 dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0, duration=0s): user=, method=PLAIN, rip=remote-ip, lip=local-ip Jan 10 09:45:21 imap2 dovecot: pop3-login: Error: proxy: Remote "IPV6-IP":110 disconnected: Connection closed: Connection reset by peer (state=0, duration=1s): user=, method=PLAIN, rip=remote-ip, lip=local-ip ... As you can see the duration is between 0 and 1 seconds. During this errors there was a tcpdump running on proxy #2 (imap2 in the above logs). In the time range of "09:30:45:00 - 09:30:46:00" I got an error that the backend server has resetted the connection (RST Flag set). The fact that dovecot on the backend server writes nothing in the log I think that the connection will be resetted on a lower level. Here is what whireshark tells me about that: No. 
No.    Source              Time                        Destination         Protocol Info
101235 IPv6-Proxy-Server   2012-01-10 09:29:38.015073  IPv6-Backend-Server TCP 35341 > pop3 [SYN] Seq=0 Win=14400 Len=0 MSS=1440 SACK_PERM=1 TSV=1925901864 TSER=0 WS=7
101236 IPv6-Backend-Server 2012-01-10 09:29:38.015157  IPv6-Proxy-Server   TCP pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309225565 TSER=1925901864 WS=7
101248 IPv6-Proxy-Server   2012-01-10 09:29:38.233046  IPv6-Backend-Server POP [TCP ACKed lost segment] [TCP Previous segment lost] C: UIDL
101249 IPv6-Backend-Server 2012-01-10 09:29:38.233312  IPv6-Proxy-Server   POP S: +OK
101250 IPv6-Proxy-Server   2012-01-10 09:29:38.233328  IPv6-Backend-Server TCP 35341 > pop3 [ACK] Seq=57 Ack=50 Win=14464 Len=0 TSV=1925901886 TSER=309225587
101263 IPv6-Proxy-Server   2012-01-10 09:29:38.452210  IPv6-Backend-Server POP C: LIST
101264 IPv6-Backend-Server 2012-01-10 09:29:38.452403  IPv6-Proxy-Server   POP S: +OK 0 messages:
101265 IPv6-Proxy-Server   2012-01-10 09:29:38.452426  IPv6-Backend-Server TCP 35341 > pop3 [ACK] Seq=63 Ack=70 Win=14464 Len=0 TSV=1925901908 TSER=309225609
101324 IPv6-Proxy-Server   2012-01-10 09:29:38.671209  IPv6-Backend-Server POP C: QUIT
101325 IPv6-Backend-Server 2012-01-10 09:29:38.671566  IPv6-Proxy-Server   POP S: +OK Logging out.
101326 IPv6-Proxy-Server   2012-01-10 09:29:38.671678  IPv6-Backend-Server TCP 35341 > pop3 [FIN, ACK] Seq=69 Ack=89 Win=14464 Len=0 TSV=1925901930 TSER=309225631
101327 IPv6-Backend-Server 2012-01-10 09:29:38.671759  IPv6-Proxy-Server   TCP pop3 > 35341 [ACK] Seq=89 Ack=70 Win=14336 Len=0 TSV=309225631 TSER=1925901930
134205 IPv6-Proxy-Server   2012-01-10 09:30:45.477314  IPv6-Backend-Server TCP [TCP Port numbers reused] 35341 > pop3 [SYN] Seq=0 Win=14400 Len=0 MSS=1440 SACK_PERM=1 TSV=1925908610 TSER=0 WS=7
134206 IPv6-Backend-Server 2012-01-10 09:30:45.477458  IPv6-Proxy-Server   TCP pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309232311 TSER=1925908610 WS=7
134207 IPv6-Proxy-Server   2012-01-10 09:30:45.477499  IPv6-Backend-Server TCP 35341 > pop3 [ACK] Seq=1 Ack=1 Win=14464 Len=0 TSV=1925908610 TSER=309232311
134208 IPv6-Backend-Server 2012-01-10 09:30:45.477589  IPv6-Proxy-Server   TCP pop3 > 35341 [RST] Seq=1 Win=0 Len=0
136052 IPv6-Backend-Server 2012-01-10 09:30:49.477950  IPv6-Proxy-Server   TCP pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309232712 TSER=1925908610 WS=7
136053 IPv6-Proxy-Server   2012-01-10 09:30:49.477978  IPv6-Backend-Server TCP 35341 > pop3 [RST] Seq=1 Win=0 Len=0
138363 IPv6-Backend-Server 2012-01-10 09:30:55.877899  IPv6-Proxy-Server   TCP pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309233352 TSER=1925908610 WS=7
138364 IPv6-Proxy-Server   2012-01-10 09:30:55.877925  IPv6-Backend-Server TCP 35341 > pop3 [RST] Seq=1 Win=0 Len=0
143154 IPv6-Backend-Server 2012-01-10 09:31:08.678005  IPv6-Proxy-Server   TCP pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309234632 TSER=1925908610 WS=7
152353 IPv6-Backend-Server 2012-01-10 09:31:32.678103  IPv6-Proxy-Server   TCP pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309237032 TSER=1925908610 WS=7
165891 IPv6-Backend-Server 2012-01-10 09:32:20.688324  IPv6-Proxy-Server   TCP pop3 > 35341 [SYN, ACK] Seq=0 Ack=1 Win=14280 Len=0 MSS=1440 SACK_PERM=1 TSV=309241833 TSER=1925908610 WS=7

From Seq-No. 101235 - 101327 the session looks ok to me.
But on Seq-No. 134205 Wireshark tells me that the TCP source port "35341" is being reused, and on Seq-No. 134208 (after the TCP session has been established correctly - see Seq-Nos. 134205 to 134207) the backend server sends an RST packet for the session, and the proxy logs the error message that the connection has been reset by the peer. I have no idea whether dovecot is sending the TCP reset or the kernel by itself.

About 1.5 hours ago I changed the kernel flag "/proc/sys/net/ipv4/tcp_tw_recycle" to "1" on the physical backend machine. Since then I have seen no more error messages on the proxies. Changing the default values of "tcp_fin_timeout" or "tcp_tw_reuse" had no effect. Only "tcp_tw_recycle" seems to help.

Thanks
Urban

> These are the only explanations that I can think of for the error:
>
> * Remote Dovecot crashes / kills the connection (it would log an error message)
> * Remote Dovecot server is fully busy handling existing connections (it would log a warning)
> * Network trouble, something in the middle disconnecting the connection
> * Source/destination OS trouble, disconnecting the connection
> * Some hang that results in eventual disconnection. The duration patch would show if this is the case.
>
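In command form, the change Urban describes is the following (a minimal sketch; note that net.ipv4.tcp_tw_recycle is known to misbehave with clients behind NAT, so it is better treated as a diagnostic workaround than a permanent fix):

  # apply at runtime
  sysctl -w net.ipv4.tcp_tw_recycle=1
  # equivalently:
  echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
  # to persist across reboots, e.g. in /etc/sysctl.conf:
  # net.ipv4.tcp_tw_recycle = 1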
From Ralf.Hildebrandt at charite.de Tue Jan 10 15:06:48 2012
From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt)
Date: Tue, 10 Jan 2012 14:06:48 +0100
Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well?
In-Reply-To: <20120109150249.GH22506@charite.de>
References: <20120109074057.GC22506@charite.de> <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi> <20120109150249.GH22506@charite.de>
Message-ID: <20120110130648.GD6686@charite.de>

* Ralf Hildebrandt :
> * Timo Sirainen :
> > On 9.1.2012, at 9.40, Ralf Hildebrandt wrote:
> > > Today I encountered these errors:
> > > Jan 9 08:30:06 mail dovecot: lmtp(31174, backup at backup.invalid): Error: Log synchronization error at seq=858,offset=44672 for /home/mailboxname/mdbox/storage/dovecot.map.index: Append with UID 282388, but next_uid = 282389
> > Any idea why this happened?
>
> I was running those commands:
>
> # new style (dovecot)
> vorgestern=`date -d "-2 day" +"%Y-%m-%d"`
> doveadm expunge -u backup at backup.invalid mailbox INBOX SAVEDBEFORE $vorgestern
> doveadm purge -u backup at backup.invalid

So today:

# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-08 | wc -l
0
# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-09 | wc -l
0
# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-10 | wc -l
45724
# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-11 | wc -l
0

Then:

doveadm expunge -u backup at backup.invalid mailbox INBOX SAVEDBEFORE 2012-01-08 && \
doveadm purge -u backup at backup.invalid

resulted in:

doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3f/4d/3f4d8043d87e248a2e97f87be1f604301573be49-72e4a90683d70a4fc47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3f/4d/3f4d8043d87e248a2e97f87be1f604301573be49-afef6f1bf1d40a4f6773000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/32/7f/327f6d3cccc7aceb42da69ee7f3baea3267d631f-f4f5b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/21/f4/21f48fad649f1b7249f9aab98b7c079b6ac19b5b-9a4fcb1e83d70a4fcd7e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/21/f4) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-fe508a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-a04fcb1e83d70a4fcd7e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-beba543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-52c15a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9a/fd/9afd968e9524449a151f64bd2fb1610dcf81da95-c4ba543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/00/04/00048d4ec98f654ad681a97b07d2e806a09c1641-22a9531683d70a4fc97e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/00/04) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c4/ae/c4aebf70927db7997eb8755c61a490581aff94a6-27bb543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bb/91/bb913960266ce20c2fea64ceaed1fb29eab868ce-4ba9531683d70a4fc97e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/34/0b/340b8ae1e2c6ccbfba161475440b172caaff92b3-1d518a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/4c/1e/4c1e264df5d168ed4e676267a4dcf38cd82e9797-1e518a2195d30a4fb86f000063bdf393) failed: No such file or directory
doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/4c/1e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/ca/a7/caa75263442d125e08493b237c332351604b651a-1f518a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/ca/a7) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c7/75/c775a5736e1800e3c654291b42f942ebebc6e343-c2327907cad70a4fd47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c7/75) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/1b/da/1bdaede5f6b4175e577fa4148a1d2c75b6291047-c3327907cad70a4fd47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/1b/da) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/e4/83/e4838792800058921c4dce395f5c038e3072f053-c4327907cad70a4fd47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/e4/83) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f6/bc/f6bc6d4a0127e275a61e0f8c3c56407240547bd6-4850cb1e83d70a4fcd7e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f6/bc) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7f/9d/7f9dcd43a8a04aa0a0e438d1568129baf6d66105-c104472383d70a4fd17e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f7/4d/f74df38ff421889090e995383b5c81912c15879b-db04472383d70a4fd17e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f7/4d) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bf/ef/bfef000b86fd483daefce6472bec6e1694aaac94-9a416f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/bf/ef) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bf/ef/bfef000b86fd483daefce6472bec6e1694aaac94-55bb543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/bf/ef) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-5bbb543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-61bb543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-62f6b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-ec327907cad70a4fd47e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/95/fa/95fa16f52171e9cc30ca288eacf22ce8f5aa2fff-b9a9531683d70a4fc97e000063bdf393) failed: 
No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/24/c4/24c4692ab968bfd94cf1ca62fb46a88b7dcd78f1-df71181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/24/c4) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9f/86/9f862a2ed2f9c8f9cffbfea60883da513abf390d-67c35a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/9f/86) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/31/bf/31bf583bd7db531f5b634f6f2220eb8c803f720d-50bd543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/31/bf) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/41/97/4197a3de49f40e5f6c641be08b4e710c02a8a9f4-28e78c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/41/97) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/fc/f7/fcf791a4e521548aceae0a62b3924b075f1c7b31-63ab531683d70a4fc97e000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bd/18/bd18eecdcc9e9f17b851a1742c7ca6f8f7badfe7-2872181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bb/1d/bb1da62d3688f09d4223188d0e16986a57458b91-2e72181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/bb/1d) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/eb/6e/eb6ebe01f3e1feaa1f5635cef5b8286e375dfdb1-3de78c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/eb/6e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f1/94/f1943d0e581f54fe68f3ae052e3d2eba75ff3822-76c45a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f1/94) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/38/27/3827505dc4412178b87c758a4f5d697644260e9e-0ee88c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/38/27) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/ca/4e/ca4eec1630af1e986c66593bce522c32db4060cb-2b2f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/ca/4e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/1b/46/1b4691175805ffb373f5c8406f33f79b41dceed2-c772181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/1b/46) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/78/94/789426ef58857e12e448e590201cf81acda1d3f0-af528a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at 
backup.invalid): Error: unlink(/path/to/attachments/b2/01/b201c8727f5d286fd3f99f61710e900aaae42bcf-0f446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7a/58/7a58d847b53f9980365be256607df90bd4885152-10446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/83/21/8321d504845fb7fa518ffbbe9e65ba79357dc40d-11446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2b/90/2b90a014dadfb625859720a930743d76ff1dc960-1ad49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/2b/90) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/26/bb/26bb6ef66a1c9374cde9dd4ee40c03d52a37a078-1bd49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/26/bb) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7e/b6/7eb6cf6ecf3375708e922879cb3695c45c943650-6e73181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/7e/b6) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3e/56/3e56af1afce990d2633c722e5c0b241064be0908-6f73181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/3e/56) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9e/bd/9ebdb50383e3f1166b2aa753e78b855fae505528-21d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/9e/bd) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/24/c4/24c4692ab968bfd94cf1ca62fb46a88b7dcd78f1-88e88c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/24/c4) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/24/22/2422e76785185795d11ff19cd23c10af2df4aee3-9373181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/09/3b/093bbae088c17039975e55fe49f683ab5ac79f89-0112a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/09/3b) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/5a/30/5a30cbae4a3900fdb2bb20e405db5d00ab93ffe3-0c12a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/5a/30) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/1b/7f/1b7f03005f41026e42e354cb3a8dd805d793720e-5ed49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/1b/7f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/fd/51/fd51d22ba92f2e018f842851149fffb81f1f1264-64d49606d5d90a4f9f06000063bdf393) failed: No such file or 
directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/fd/51) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/70/5c/705c1daf8f35ca7bf9f7bbf2fdf1b29de33766f8-65d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/70/5c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/e4/e0/e4e08863ae910339a3809ea51ddefb0a4db9c646-66d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/e4/e0) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6c/cc/6ccc5e659c6de92852315bfe977cab24b6238dc9-67d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/6c/cc) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7f/9c/7f9c841c810561bde8a5e3b3e51c55de53620f47-68d49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/7f/9c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/d3/7e/d37eb19d8379eb292971bb931068e34ece403f1f-aac55a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/d3/7e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/70/6b/706b010991ced768476e9921efd1d8baef6af103-abc55a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/70/6b) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/98/4e/984e36f32eb16f85349bafe5ef5b7c2367a30d45-bc73181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/98/4e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/60/8d/608da5e9d2b4eb43705b62a7605068c886bc486b-8cd49606d5d90a4f9f06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/60/8d) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f9/e7/f9e75a76c6aacb9259e3055f5dffee9b7b37179d-eb73181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f9/e7) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c2/68/c268bd0abe0e334e64cf40e3b4e571eae6415c40-bbc55a0f03d10a4f9b68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c2/68) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c5/aa/c5aac7e0a301a798a4eedc0140bd3f71329046df-ffbd543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/26/f82660e313a8a22e0152bf457231cb5a535eebcb-cbe88c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f8/26) failed: No such file or directory 
doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9d/8f/9d8f0e6d58d86e5672876a4a5ac0d626f01b2653-6812a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/9d/8f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f7/b5/f7b51d6500a594ea870151e1f6845ae1ca4dfa88-73538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/43/73/43737f46f935ea8a2077ebb3c4bc356572bb07ff-7e538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/43/73) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/4e/29/4e29fdbf309d66faba520a4a3e3f87ead728c7af-3774181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/4e/29) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/12/34/123406129b4eb0c148093a81dc07d63f01d6d409-8712a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/12/34) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/78/8c/788c84e6f0aa744ba996e3adad4912547b85860d-90446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/78/8c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9f/68/9f688dfa87bd86e0302544a6f4051be4c0ebe9f3-2cf9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/19/86/198610c4bc9753908fcfe6c1bd6b330d8df7f7af-2df9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3b/58/3b58ae30de03e06cba4520328dcaf8461321361f-4274181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/3b/58) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2f/ec/2fecc3dfa406921e622a638d382f123826008e68-c12f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/08/e6/08e61656d670261693ada0e71514a17c523dc239-c22f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/d4/9d/d49dccea098551240dcae8a6454738a103d050eb-cd2f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/d4/9d) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/26/a8/26a8bd4bfa69aa7e699d9c9749443ed2b72bbbd9-96446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/26/a8) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6d/f5/6df5a5c2dee317fa2c9d2aa2de9730ef0e086912-47f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: 
unlink(/path/to/attachments/ba/f0/baf0274f01c12252224fc0d67a7018a6323127ff-48f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/26/f82660e313a8a22e0152bf457231cb5a535eebcb-4ef9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/f8/26) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/5f/91/5f91067608300c30a80b6441b4bfe5d2e7ac3ab5-9c446f0c03d10a4f9868000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/5f/91) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/8d/d4/8dd4d6fd05df137fe72ae6bea359c4c740b41bdb-9712a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c8/97/c8979042d406fa3db0f0d5ab9d8e40fc5087c116-4d74181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6b/24/6b2434e0486a4e033a5057e323c7fa76baced4aa-a7538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/6b/24) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/05/16/051673f94b6dbc17a265e6cb8a0f189f5a47518d-a8538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/9c/c1/9cc1bebd4a576aa28bf4a54f54206bf447c1e31c-87be543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/9c/c1) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/04/00/04008a41d5d6d43a67ab5b823f9d80852b6f828e-f32f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/04/00) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/d0/b1/d0b1d7351d2a174541124b35f5471e57dd480795-fe2f152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/d0/b1) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/3e/76/3e7645ee60d67e21c96d79ad0d961c7d9f5ca074-68f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/3e/76) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/5e/5e/5e5e0db722cbdf59d7b23a50ab9613cd41585861-04e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/5e/5e) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/1c/32/1c32564e82ebe7acc876ea5a1fd7e9f00a695c97-52ce1105c0db0a4f3c0c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/1c/32) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7a/bf/7abfe0943de9fa8e6cc468a3017705d1e44c9af4-15e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup 
at backup.invalid): Error: rmdir(/path/to/attachments/7a/bf) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/ca/22/ca22fb0b943c243bae14f43c08454765679805a5-25e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/ca/22) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/44/3a/443ad09efc00aec5fd5579d1bfa741efcc54625c-9974181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/24/c4/24c4692ab968bfd94cf1ca62fb46a88b7dcd78f1-d4538a2195d30a4fb86f000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/24/c4) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/00/b3/00b3c0397b0f24dcc44410e60984147f1f6dbd4c-76ce1105c0db0a4f3c0c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/00/b3) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2d/e5/2de509ec0507087ceec4d16dc39260f4b369886f-86ce1105c0db0a4f3c0c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/2d/e5) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/66/63/666325747133a0e84525bc402bb2984339037e31-c912a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/66/63) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/22/f9/22f9241435febe0df6556f7d681792f1cc5b1637-ca12a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/fa/1c/fa1c7138221dc54ab7b6d833970b6d230304fff3-91f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/b5/44/b54476a0510cb83fd0144397169d603a72b3d8db-54e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/b5/44) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/61/41/6141377ddbeaaa2d2f641392993723ac09dab7af-201cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2e/a3/2ea3fffc796512c7bf235885f3b37b0ec9c4c620-211cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/85/cd/85cd1f6f08d3becd97d64653129bb71e513aa265-b0f9b90703d10a4f8d68000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/85/cd) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/bf/3c/bf3c51578bfee97ad56b9f0b1f6f74bbc8b30316-311cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/bf/3c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c3/15/c315fdf651d9bc65d468634598ea1a8a5ef2f0dc-321cb50cb6db0a4f370c000063bdf393) failed: No such file or 
directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c3/15) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c9/6f/c96ff3f27a51bb19282196cf416a51a079c5f75e-2d30152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c9/6f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/61/f86119c665d89911b256e68c77354502389180eb-2e30152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/a3/7c/a37c6f0966bc866dcb44b5304610604675bbb81b-2f30152613da0a4fab06000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/a3/7c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/72/20/7220923da82b479c5381bdc7efe8a392c890b09d-02bf543b03cf0a4fe062000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/72/20) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c9/6f/c96ff3f27a51bb19282196cf416a51a079c5f75e-5975181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c9/6f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/61/f86119c665d89911b256e68c77354502389180eb-5a75181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/a3/7c/a37c6f0966bc866dcb44b5304610604675bbb81b-5b75181354d80a4f4e02000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/a3/7c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c1/47/c14767dfeaadbd2a8205767e9a274e2854c1b97c-451cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c1/47) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6b/33/6b33d1b0fb794a97b1185158797bf360b96d3e62-461cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/6b/33) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/c9/6f/c96ff3f27a51bb19282196cf416a51a079c5f75e-91e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/c9/6f) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/f8/61/f86119c665d89911b256e68c77354502389180eb-92e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/a3/7c/a37c6f0966bc866dcb44b5304610604675bbb81b-93e98c26add90a4f1706000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/a3/7c) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/21/57/2157e48b0f4d145909a05dc88dd9f4ab5eacba92-7215b60eb6db0a4f380c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): 
Error: rmdir(/path/to/attachments/21/57) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/2a/3a/2a3aade76b96474ff4d625ed2ecd9261dde5098e-e412a307d5d90a4fa006000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/6e/26/6e26032f9b2e9bb3eb6d042710b3a593a0ef4a6d-5b1cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/6e/26) failed: No such file or directory doveadm(backup at backup.invalid): Error: unlink(/path/to/attachments/7a/7f/7a7fce2fad3b3046d2744f271e676f84b7bc931e-611cb50cb6db0a4f370c000063bdf393) failed: No such file or directory doveadm(backup at backup.invalid): Error: rmdir(/path/to/attachments/7a/7f) failed: No such file or directory doveadm(backup at backup.invalid): Error: Corrupted dbox file /path/to/mdbox/storage/m.434 (around offset=61672172): purging found mismatched offsets (61672142 vs 61665615, 7661/10801) doveadm(backup at backup.invalid): Warning: mdbox /path/to/mdbox/storage: rebuilding indexes doveadm(backup at backup.invalid): Error: Purging namespace '' failed: Internal error occurred. Refer to server log for more information. [2012-01-10 13:59:52]

After that:

# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-08 | wc -l
0
# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-09 | wc -l
0
# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-10 | wc -l
189
# doveadm search -u backup at backup.invalid mailbox INBOX SAVEDON 2012-01-11 | wc -l
0

# fgrep dovecot: /var/log/mail.log | grep -v "dovecot: lmtp"
# Nothing!

-- 
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
Charité - Universitätsmedizin Berlin
Campus Benjamin Franklin
Hindenburgdamm 30 | D-12203 Berlin
Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
ralf.hildebrandt at charite.de | http://www.charite.de

From ath at b-one.net Tue Jan 10 16:05:14 2012
From: ath at b-one.net (Anders)
Date: Tue, 10 Jan 2012 15:05:14 +0100
Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH
Message-ID: <20120110140514.57EB0E51BC96F@bmail-n01.one.com>

Hi,

I have been looking at search and sorting with dovecot and have run into some things. The first one I think may be a minor bug, because this set of commands results in the socket connection being closed without warning:

UID SEARCH RETURN (SAVE COUNT) CHARSET UTF-8 (UNDELETED TEXT "foo")
UID SEARCH RETURN (COUNT MIN) CHARSET UTF-8 () $

The empty parenthesis before the reference to the previous search result ($) is not legal IMAP, but it should not cause the socket to be closed, I think.

Then I have a question about RFC 5267 and the announcement of CONTEXT=SEARCH in the capabilities. I think this RFC is supported by dovecot, or maybe just part of the RFC is supported? At least when I include the CONTEXT ADDTO or REMOVEFROM keywords I get an error, but UPDATE and CANCELUPDATE seem to be supported. The RFC has been updated by the RFC describing the NOTIFY extension to IMAP, so maybe it has been decided not to add these keywords until a later time?

I am using dovecot version 2.0.15 (with patches from Apple).

Best Regards
Anders
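For comparison, a legal form of the second command simply omits the empty parenthesized list; a sketch of the corrected exchange (tags added here and untagged responses elided; per RFC 5182 the "$" marker refers to the result stored by the previous RETURN (SAVE)):

  a1 UID SEARCH RETURN (SAVE COUNT) CHARSET UTF-8 (UNDELETED TEXT "foo")
  a1 OK SEARCH completed
  a2 UID SEARCH RETURN (COUNT MIN) $
  a2 OK SEARCH completed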
From bschmidt at cms.hu-berlin.de Tue Jan 10 16:16:14 2012
From: bschmidt at cms.hu-berlin.de (Burckhard Schmidt)
Date: Tue, 10 Jan 2012 15:16:14 +0100
Subject: [Dovecot] rewriting mail_location
Message-ID: <4F0C482E.7000900@cms.hu-berlin.de>

Hello,

I have LDAP as userdb. Entries contain the attributes mail=alias.user1 at some.domain.de and uid=user1. Mail to alias.user1 at some.domain.de gets delivered into /datatest/user/alias.user1 instead of /datatest/user/user1 by lda.

I have

userdb {
  args = /usr/dovecot/etc/ldapuser.conf
  driver = ldap
}

with a ldapuser.conf:

hosts ...
base ...
user_filter = (&(|(mail=%u)(mail=%n at some.domain) (uid=%u))(objectClass=posixAccount))
user_attrs = uid=mail_location=maildir:/datatest/user/%$, uidNumber=29,gidNumber=133

I hoped the local part of the attribute mail could be replaced by the uid for local delivery with dovecot's lda? Any hints how to do that? (With postfix I could rewrite the address to uid at host and use local_transport = dovecot.) postfix has virtual_transport = dovecot.

LDAP entry:
mail: alias.user1 at some.domain.de
uid: user1
homeDirectory: /dev/null
uidNumber: 464
gidNumber: 100

mail to alias.user1 at some.domain.de

Jan 10 14:03:24 ubu1004 postfix/qmgr[25221]: C434D1EE: from=, size=239, nrcpt=1 (queue active)
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Loading modules from directory: /usr/dovecot/lib/dovecot
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Module loaded: /usr/dovecot/lib/dovecot/lib20_autocreate_plugin.so
Jan 10 14:03:24 ubu1004 dovecot: auth: Debug: master in: USER#0111#011alias.user1#011service=lda
Jan 10 14:03:24 ubu1004 dovecot: auth: Debug: ldap(alias.user1): user search: base=ou=users,ou=...,c=de scope=subtree filter=(&(|(mail=alias.user1)(mail=alias.user1 at some.domain.de)(uid=alias.user1))(objectClass=posixAccount)) fields=uid,uidNumber,gidNumber

Some substitutions are visible:

Jan 10 14:03:24 ubu1004 dovecot: auth: Debug: ldap(alias.user1): result: uid(location=maildir:/datatest/user/%$/maildir)=user1 gidNumber(133)=100 uidNumber(29)=464
Jan 10 14:03:24 ubu1004 dovecot: auth: Debug: master out: USER#0111#011alias.user1#011location=maildir:/datatest/user/user1/maildir#011133=100#01129=464
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: auth input: alias.user1 location=maildir:/datatest/user/user1/maildir 133=100 29=464
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Added userdb setting: plugin/location=maildir:/datatest/user/user1/maildir

but the alias "alias.user1" is still used for delivery:

Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Added userdb setting: plugin/133=100
Jan 10 14:03:24 ubu1004 dovecot: lda: Debug: Added userdb setting: plugin/29=464
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: Effective uid=29, gid=133, home=
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: Namespace : type=private, prefix=, sep=/, inbox=yes, hidden=no, list=yes, subscriptions=yes location=maildir:/datatest/user/alias.user1/maildir:INDEX=/datatest/addons/index/alias.user1:CONTROL=/datatest/user/alias.user1/control:LAYOUT=fs
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: fs: root=/datatest/user/alias.user1/maildir, index=/datatest/addons/index/alias.user1, control=/datatest/user/alias.user1/control, inbox=/datatest/user/alias.user1/maildir, alt=
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: Namespace : Using permissions from /datatest/user/alias.user1/maildir: mode=0700 gid=-1
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: none: root=, index=, control=, inbox=, alt=
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): Debug: Destination address: alias.user1 at ubu1004 (source: user at hostname)
Jan 10 14:03:24 ubu1004 dovecot: lda(alias.user1): msgid=unspecified: saved mail to INBOX
Jan 10 14:03:24 ubu1004 postfix/pipe[25226]: C434D1EE: to=, relay=dovecot, delay=14, delays=14/0.01/0/0.02, dsn=2.0.0, status=sent (delivered via dovecot service)
Jan 10 14:03:24 ubu1004 postfix/qmgr[25221]: C434D1EE: removed

dovecot -n
# 2.0.17 (684381041dc4+): /usr/dovecot/etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-34-generic-pae i686 Ubuntu 10.04.3 LTS ext4
mail_gid = sysdov
mail_location = maildir:/datatest/user/%n/maildir:INDEX=/datatest/addons/index/%n:CONTROL=/datatest/user/%n/control:LAYOUT=fs
mail_plugins = autocreate
mail_uid = sysdov
passdb {
  args = failure_show_msg=yes imap
  driver = pam
}
service auth {
  client_limit = 30000
  unix_listener auth-userdb {
    group = sysdov #effective 133
    mode = 01204
    user = sysdov #effective 29
  }
}
userdb {
  args = /usr/dovecot/etc/ldapuser.conf
  driver = ldap
}
protocol lda {
  mail_plugins = autocreate
}

and ldapuser.conf:

hosts ...
base ...
user_filter = (&(|(mail=%u)(mail=%n at some.domain) (uid=%u))(objectClass=posixAccount))
user_attrs = uid=mail_location=maildir:/datatest/user/%$, uidNumber=29,gidNumber=133

The local part of mail should be replaced by uid for local delivery.

-- 
Regards --- Burckhard Schmidt
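One approach to the rewrite being asked for here is to have the userdb itself return the rewritten username, since a "user" field returned by a userdb lookup overrides the name used for the rest of delivery. A minimal sketch of ldapuser.conf along those lines (whether one LDAP attribute can be mapped twice like this should be verified against the running version):

  # ldapuser.conf (sketch)
  hosts = ...
  base = ...
  user_filter = (&(|(mail=%u)(mail=%n at some.domain)(uid=%u))(objectClass=posixAccount))
  # return uid as the canonical username, and build the location from it too
  user_attrs = uid=user, uid=mail_location=maildir:/datatest/user/%$/maildir, uidNumber=29, gidNumber=133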
From tss at iki.fi Tue Jan 10 16:16:51 2012
From: tss at iki.fi (Timo Sirainen)
Date: Tue, 10 Jan 2012 16:16:51 +0200
Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH
In-Reply-To: <20120110140514.57EB0E51BC96F@bmail-n01.one.com>
References: <20120110140514.57EB0E51BC96F@bmail-n01.one.com>
Message-ID: <1326205011.6987.90.camel@innu>

On Tue, 2012-01-10 at 15:05 +0100, Anders wrote:
> the socket connection being closed without warning:
>
> UID SEARCH RETURN (SAVE COUNT) CHARSET UTF-8 (UNDELETED TEXT "foo")

You mean it closes with the above also? It works fine for me.

> UID SEARCH RETURN (COUNT MIN) CHARSET UTF-8 () $

This was fixed in v2.0.17.

> Then I have a question about RFC 5267 and the announcement of CONTEXT=SEARCH in the capabilities. I think this RFC is supported by dovecot, or maybe just part of the RFC is supported?

All of it is supported, as far as I know.

> At least when I include the CONTEXT ADDTO or REMOVEFROM keywords I get an error,

These are server notifications. Clients aren't supposed to send them.

From divizio at exentrica.it Tue Jan 10 17:16:17 2012
From: divizio at exentrica.it (Luca Di Vizio)
Date: Tue, 10 Jan 2012 16:16:17 +0100
Subject: [Dovecot] little bug with Director in 2.1?
Message-ID: 

Hi,

in 2.1rc3 the "director_servers" setting does not accept hostnames as documented (with an IP there are no problems). It works correctly in 2.0.17.

Greetings,
Luca

From Juergen.Obermann at hrz.uni-giessen.de Tue Jan 10 17:32:07 2012
From: Juergen.Obermann at hrz.uni-giessen.de (Jürgen Obermann)
Date: Tue, 10 Jan 2012 16:32:07 +0100
Subject: [Dovecot] Panic: file mbox-sync.c: line 1348: assertion failed
Message-ID: <20120110163207.182538xtgzoxjg8w@webmail.hrz.uni-giessen.de>

Hello,

I have the following problem with doveadm:

# gdb --args /opt/local/bin/doveadm -v mailbox status -u userxy/g029 'messages' "Software-alle/AK-Software-Tagung"
GNU gdb 5.3
Copyright 2002 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "sparc-sun-solaris2.8"...
(gdb) run
Starting program: /opt/local/bin/doveadm -v mailbox status -u g029 messages Software-alle/AK-Software-Tagung
warning: Lowest section in /lib/libthread.so.1 is .dynamic at 00000074
warning: Lowest section in /lib/libdl.so.1 is .hash at 000000b4
doveadm(g029): Panic: file mbox-sync.c: line 1348: assertion failed: (file_size >= sync_ctx->expunged_space + trailer_size)
doveadm(g029): Error: Raw backtrace: 0xff1cbc30 -> 0xff319544 -> 0xff319fa8 -> 0xff31add8 -> 0xff31b278 -> 0xff2a69b0 -> 0xff2a6bac -> 0x16808 -> 0x1b8fc -> 0x16ba0 -> 0x177cc -> 0x17944 -> 0x17a50 -> 0x204e8 -> 0x165c8

Program received signal SIGABRT, Aborted.
0xfe94dcdc in _lwp_kill () from /lib/libc.so.1
(gdb) bt full
#0  0xfe94dcdc in _lwp_kill () from /lib/libc.so.1
No symbol table info available.
#1  0xfe8e6fb4 in raise () from /lib/libc.so.1
No symbol table info available.
#2  0xfe8c2078 in abort () from /lib/libc.so.1
No symbol table info available.
#3  0xff1cb984 in default_fatal_finish () from /opt/local/lib/dovecot/libdovecot.so.0
No symbol table info available.
#4  0xff1cbc38 in i_panic () from /opt/local/lib/dovecot/libdovecot.so.0
No symbol table info available.
#5  0xff31954c in mbox_sync_handle_eof_updates () from /opt/local/lib/dovecot/libdovecot-storage.so.0
No symbol table info available.
#6  0xff319fb0 in mbox_sync_do () from /opt/local/lib/dovecot/libdovecot-storage.so.0
No symbol table info available.
#7  0xff31ade0 in mbox_sync_int () from /opt/local/lib/dovecot/libdovecot-storage.so.0
No symbol table info available.
#8  0xff31b280 in mbox_storage_sync_init () from /opt/local/lib/dovecot/libdovecot-storage.so.0
No symbol table info available.
#9  0xff2a69b8 in mailbox_sync_init () from /opt/local/lib/dovecot/libdovecot-storage.so.0
No symbol table info available.
#10 0xff2a6bb4 in mailbox_sync () from /opt/local/lib/dovecot/libdovecot-storage.so.0
No symbol table info available.
#11 0x00016810 in doveadm_mailbox_find_and_sync ()
No symbol table info available.
#12 0x0001b904 in cmd_mailbox_status_run ()
No symbol table info available.
#13 0x00016ba8 in doveadm_mail_next_user ()
No symbol table info available.
#14 0x000177d4 in doveadm_mail_cmd ()
No symbol table info available.
#15 0x0001794c in doveadm_mail_try_run_multi_word ()
No symbol table info available.
#16 0x00017a58 in doveadm_mail_try_run ()
No symbol table info available.
#17 0x000204f0 in main ()
No symbol table info available.
(gdb) quit
The program is running. Exit anyway?
(y or n) y

My configuration is as follows:

# /opt/local/bin/doveconf -n
# 2.0.16: /opt/local/etc/dovecot/dovecot.conf
# OS: SunOS 5.10 sun4v
auth_verbose = yes
disable_plaintext_auth = no
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes
listen = imap.hrz.uni-giessen.de localhost
mail_location = mbox:~/Mail:INBOX=/var/mail/%u
mail_plugins = mail_log notify zlib
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
mdbox_rotate_interval = 1 days
mdbox_rotate_size = 16 M
namespace {
  inbox = yes
  location = 
  prefix = 
  separator = /
  type = private
}
namespace {
  hidden = yes
  list = no
  location = 
  prefix = Mail/
  separator = /
  subscriptions = yes
  type = private
}
passdb {
  driver = pam
}
passdb {
  args = /opt/local/etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
plugin {
  autocreate = Trash
  autocreate2 = caughtspam
  autocreate3 = Sent
  autocreate4 = Drafts
  autosubscribe = Trash
  autosubscribe2 = caughtspam
  autosubscribe3 = Sent
  autosubscribe4 = Drafts
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  zlib_save = gz
  zlib_save_level = 3
}
postmaster_address = postmaster at hrz.uni-giessen.de
quota_full_tempfail = yes
sendmail_path = /usr/lib/sendmail
service auth {
  client_limit = 11120
}
service imap-login {
  process_min_avail = 16
  service_count = 0
  vsz_limit = 640 M
}
service imap {
  process_limit = 4096
  vsz_limit = 1 G
}
ssl_cert = References: <3a8f9df5e523c0391c41964ae3d09d1b@imapproxy.hrz> <677F82FE-850B-43EC-86C1-6B99ED74642A@iki.fi>
Message-ID: 

Am 20.12.2011 06:45, schrieb Timo Sirainen:
> On 16.12.2011, at 0.00, Jürgen Obermann wrote:
>
>> Hello,
>> when I try to convert from mbox to mdbox with dsync with one user it always panics:
>>
>> # /opt/local/bin/dsync -v -u userxy backup ssh root at minerva1 /opt/local/bin/dsync -v -u userxy
>> dsync-remote(userxy): Panic: Trying to allocate 2147483648 bytes
>
> Well, this is clearly the problem.. But it's difficult to guess where it's allocating that. I'd need a gdb backtrace. Does it write a core file to userxy's home dir? If not, try replacing dsync with a script that runs "ulimit -c unlimited" first and then execs dsync.
> http://dovecot.org/bugreport.html tells what to do with the core once you have it.
>
> Alternative idea: Does it crash also when dsyncing locally?
> gdb --args dsync -u userxy backup mdbox:/tmp/foobar
> run
> bt full

Sorry, this problem is gone, I cannot reproduce it any more, neither locally nor with remote dsync. I found out that the user has one huge mail in his drafts folder with a 1GB video object attachment, but he surely could never have sent this mail because the mail size is limited to 50MB.

Greetings,
Jürgen

-- 
Jürgen Obermann
Hochschulrechenzentrum der Justus-Liebig-Universität Gießen
Heinrich-Buff-Ring 44
Tel. 0641-9913054
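For reference, the wrapper Timo suggests in the quoted reply (replace dsync with a script that raises the core limit and then execs the real binary) could look like this minimal sketch, with illustrative paths - rename the real binary, e.g. to dsync.real, and install the script in its place:

  #!/bin/sh
  # allow core dumps, then hand off to the real dsync
  ulimit -c unlimited
  exec /opt/local/bin/dsync.real "$@"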
From mark at msapiro.net Tue Jan 10 18:34:20 2012
From: mark at msapiro.net (Mark Sapiro)
Date: Tue, 10 Jan 2012 08:34:20 -0800
Subject: [Dovecot] Clients show .subscriptions folder
Message-ID: 

Since upgrading from dovecot-2.1.rc1 to dovecot-2.1.rc3, some clients are showing a .subscriptions file in the user's mbox path as a folder. Some clients, such as T'bird on Mac OS X, create this file listing subscribed mbox files. Other clients, such as T'bird on Windows XP, show this file as a folder in the folder list even though it cannot be accessed as a folder (dovecot returns CANNOT Mailbox is not a valid mbox file).

I think this may be a result of uncommenting the inbox namespace in conf.d/10-mail.conf. Is there a way to suppress exposing this file to clients that don't use it?

# dovecot -n
# 2.1.rc3: /usr/local/etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-8.1.14.el5 i686 CentOS release 5 (Final)
auth_mechanisms = plain apop login
auth_worker_max_count = 5
mail_location = mbox:~/Mail:INBOX=/var/spool/mail/%u
mail_privileged_group = mail
mbox_write_locks = fcntl dotlock
namespace inbox {
  inbox = yes
  location = 
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    special_use = \Trash
  }
  prefix = 
}
passdb {
  args = /usr/local/etc/dovecot.passwd
  driver = passwd-file
}
passdb {
  driver = pam
}
protocols = imap pop3
service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
  }
}
ssl_cert = The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan

From mark.davenport at yahoo.co.uk Tue Jan 10 21:12:04 2012
From: mark.davenport at yahoo.co.uk (sparkietm)
Date: Tue, 10 Jan 2012 11:12:04 -0800 (PST)
Subject: [Dovecot] Dovecot under Virtual Host environment??
Message-ID: <33114381.post@talk.nabble.com>

Hi all,
Could anyone point me to a walk-through of setting up Dovecot for multiple domains under a Virtual Host environment? I'd like to offer clients their own email domain for each virtual host, e.g. john at client_1.com, sue at client_2.com. I'm guessing this is a fairly common set-up, but a search on "multiple domains" didn't bring much back.

Also, would it be possible to offer each client webmail?

Many thanks in advance
Mark
-- 
View this message in context: http://old.nabble.com/Dovecot-under-Virtual-Host-environment---tp33114381p33114381.html
Sent from the Dovecot mailing list archive at Nabble.com.
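The usual SQL-backed pattern for this splits the login into %n (local part) and %d (domain) and keys everything on both; a compact, illustrative sketch (table and column names are placeholders - the HowTo pages linked in the reply below walk through complete setups):

  # dovecot.conf (sketch)
  mail_location = maildir:/var/vmail/%d/%n/Maildir
  passdb {
    driver = sql
    args = /etc/dovecot/dovecot-sql.conf.ext
  }
  userdb {
    driver = static
    args = uid=vmail gid=vmail home=/var/vmail/%d/%n
  }

  # dovecot-sql.conf.ext (illustrative schema)
  driver = mysql
  connect = host=localhost dbname=mail user=mail
  password_query = SELECT password FROM mailbox \
    WHERE username = '%n' AND domain = '%d'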
From robert at schetterer.org Tue Jan 10 23:00:11 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Tue, 10 Jan 2012 22:00:11 +0100
Subject: [Dovecot] Dovecot under Virtual Host environment??
In-Reply-To: <33114381.post@talk.nabble.com>
References: <33114381.post@talk.nabble.com>
Message-ID: <4F0CA6DB.4010809@schetterer.org>

Am 10.01.2012 20:12, schrieb sparkietm:
> Hi all,
> Could anyone point me to a walk-through of setting up Dovecot for multiple domains under a Virtual Host environment? I'd like to offer clients their own email domain for each virtual host, e.g. john at client_1.com, sue at client_2.com. I'm guessing this is a fairly common set-up, but a search on "multiple domains" didn't bring much back.
>
> Also, would it be possible to offer each client webmail?
>
> Many thanks in advance
> Mark

Look here for examples; for webmail, e.g. Roundcube, Horde and SquirrelMail are widely used:
http://wiki.dovecot.org/HowTo

From joseba.torre at ehu.es Wed Jan 11 13:12:16 2012
From: joseba.torre at ehu.es (Joseba Torre)
Date: Wed, 11 Jan 2012 12:12:16 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
In-Reply-To: <4F0AF0B9.7030406@turmel.org>
References: <68fd4hi9kbv8@mids.svenhartge.de> <4F08E4D9.1090203@hardwarefreak.com> <78fdevu9kbv8@mids.svenhartge.de> <4F09956C.1030109@hardwarefreak.com> <88fev069kbv8@mids.svenhartge.de> <4F0AEDCC.10109@hardwarefreak.com> <4F0AF0B9.7030406@turmel.org>
Message-ID: <4F0D6E90.5010603@ehu.es>

El 09/01/12 14:50, Phil Turmel escribió:
> I've been following this thread with great interest, but no advice to offer. The content is entirely appropriate, and appreciated. Don't be embarrassed by your enthusiasm, Stan.

+1

From sven at svenhartge.de Wed Jan 11 14:50:54 2012
From: sven at svenhartge.de (Sven Hartge)
Date: Wed, 11 Jan 2012 13:50:54 +0100
Subject: [Dovecot] Providing shared folders with multiple backend servers
References: <68fd4hi9kbv8@mids.svenhartge.de> 
Message-ID: 

Sven Hartge wrote:
> I am currently in the planning stage for a "new and improved" mail system at my university.

OK, executive summary of the design ideas so far:

- deployment of X (starting with 4, but easily scalable) virtual servers on VMware ESX
- storage will be backed by an RDM on our iSCSI SAN:
  + main mailbox storage will be on 15k SAS6 600GB disks
  + backup rsnapshot storage will be on 7.2k SAS6 2TB disks
- XFS filesystem on LVM, allowing easy local snapshots for rsnapshot
- sharing folders from one user to another is not needed
- central public shared folders reside on their own storage server and are accessed through the imapc backend configured for the "#shared." namespace (needs dovecot 2.1~rc3 or higher)
- mdbox with compression (23h lifetime, 50MB max size)
- quota in MySQL, allowing my MXes to check the quota for a user _before_ accepting any mail for him. This is a much-needed feature, currently not possible and thus leading to backscatter right now.
- backup:
  + backup with bacula for file-level backup every 24 hours (120 days retention)
  + rsnapshot to node-local backup space for easier access (14 days retention)
  + possibly SAN-based remote snapshots to a different storage tier

Because sharing an RDM (or VMDK) with multiple VMs pins the VM to an ESX server and prohibits HA and DRS in the ESX cluster, and because of my bad experience with cluster FSes, I want to avoid one and use only local storage for the personal mailboxes of the users. Each user is fixed to one server; routing/redirecting of IMAP/POP3 connections happens via perdition (popmap feature via LDAP lookup) in a frontend server (this component has already been working for some 3-ish years).

So each node is isolated from the other nodes, knows only its own users and does not care about users on other nodes. This prevents usage of the dovecot director, which only works if all nodes are able to access all mailboxes (correct?)

I am aware this creates a SPoF for a 1/X portion of my users in the case of a VM failure, but this is deemed acceptable, since the use of VMs will allow me to quickly deploy a new one and reattach the RDM. (And if my whole iSCSI storage or ESX cluster fails, I have other, bigger problems than a non-functional mail system.)

Comments?

Greetings,
Sven.

-- 
Sigmentation fault. Core dumped.
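On the "#shared." point: a rough sketch of what an imapc-backed public namespace could look like in 2.1 (this is my reading of the then-new imapc backend - the host, credentials, prefix and the imapc: location syntax are placeholders/assumptions to verify against the imapc documentation):

  # on each backend node: public folders proxied from the central store
  namespace {
    type = public
    prefix = #shared.
    separator = .
    location = imapc:~/shared-imapc
  }
  imapc_host = sharedstore.example.edu
  imapc_user = shared-reader
  imapc_password = secret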
From forumer at smartmobili.com Wed Jan 11 16:11:12 2012 From: forumer at smartmobili.com (forumer at smartmobili.com) Date: Wed, 11 Jan 2012 15:11:12 +0100 Subject: [Dovecot] Log imap commands Message-ID: Hi, I am trying to optimize an imap library and I am comparing with some existing webmail, for instance from roundcube I can log the imap commands with the following format : [11-Jan-2012 14:22:55 +0100]: [DBD1] S: * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN AUTH=DIGEST-MD5 AUTH=CRAM-MD5] Dovecot ready. [11-Jan-2012 14:22:55 +0100]: [DBD1] C: A0001 ID ("name" "Roundcube Webmail" "version" "0.6" "php" "5.3.5-1ubuntu7.4" "os" "Linux" "command" "/") [11-Jan-2012 14:22:55 +0100]: [DBD1] S: * ID NIL [11-Jan-2012 14:22:55 +0100]: [DBD1] S: A0001 OK ID completed. [11-Jan-2012 14:22:55 +0100]: [DBD1] C: A0002 AUTHENTICATE CRAM-MD5 [11-Jan-2012 14:22:55 +0100]: [DBD1] S: + RDM1MTE1NjkxOTQzODE4NDEuMTMyNjI4ODE3NUBzZC0zMDYzNT4= [11-Jan-2012 14:22:55 +0100]: [DBD1] C: d2ViZ3Vlc3RAc21hcnRtb2JpbGkuY29tIDczODMxNjUzZmVlYzdjNDVlNzRkYTg1YjIwMjk2NWM0 [11-Jan-2012 14:22:55 +0100]: [DBD1] S: A0002 OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS FUZZY] Logged in [11-Jan-2012 14:22:55 +0100]: [DBD1] C: A0003 NAMESPACE [11-Jan-2012 14:22:55 +0100]: [DBD1] S: * NAMESPACE (("" ".")) NIL NIL [11-Jan-2012 14:22:55 +0100]: [DBD1] S: A0003 OK Namespace completed. [11-Jan-2012 14:22:55 +0100]: [DBD1] C: A0004 LOGOUT [11-Jan-2012 14:22:55 +0100]: [DBD1] S: * BYE Logging out ... And now I would like to do the same from my imap library so I have started wireshark but it's a bit messy and difficult to compare. I was wondering if dovecot allows to log imap communications ? Thanks From wgillespie+dovecot at es2eng.com Wed Jan 11 16:19:44 2012 From: wgillespie+dovecot at es2eng.com (Willie Gillespie) Date: Wed, 11 Jan 2012 07:19:44 -0700 Subject: [Dovecot] Log imap commands In-Reply-To: References: Message-ID: <4F0D9A80.2020707@es2eng.com> On 1/11/2012 7:11 AM, forumer at smartmobili.com wrote: > I was wondering if dovecot allows to log imap communications ? You could look at Rawlog http://wiki.dovecot.org/Debugging/Rawlog http://wiki2.dovecot.org/Debugging/Rawlog From gerv at esrf.fr Wed Jan 11 17:04:06 2012 From: gerv at esrf.fr (Didier Gervaise) Date: Wed, 11 Jan 2012 16:04:06 +0100 Subject: [Dovecot] How to solve a "Connection queue full problem" - dovecot version 2.0.16 Message-ID: <4F0DA4E6.8040205@esrf.fr> Hello, I put in production dovecot yesterday. After an hour, nobody could log in ("Max number of imap connection" error message on Thunderbird) Afterward, I found these messages in the logs: Jan 10 09:21:20 mailsrv dovecot: [ID 583609 mail.info] imap-login: Disconnected: Connection queue full (no auth attempts): rip=xxx.xxx.xxx.xxx, lip=xxx.xxx.xxx.xxx In the panic, I changed these values in /usr/local/etc/dovecot/conf.d/10-master.conf default_process_limit = 20000 default_client_limit = 20000 This apparently solved the problem but now I have these messages when I start dovecot: Jan 11 14:41:08 mailsrvspare dovecot: [ID 583609 mail.info] master: Dovecot v2.0.15 starting up Jan 11 14:41:08 mailsrvspare dovecot: [ID 583609 mail.warning] config: Warning: service auth { client_limit=4096 } is lower than required under max. 
load (103024)
Jan 11 14:41:08 mailsrvspare dovecot: [ID 583609 mail.warning] config: Warning: service anvil { client_limit=20000 } is lower than required under max. load (60003)

What should I do?
- adding "service_count = 0" in service imap-login { ... } and removing the modifications I did in 10-master.conf?
or
- should I configure default_process_limit and default_client_limit differently?

It is a small site (about 1000 users). Currently I have 666 imap processes and 136 imap-login processes.

Additional info: the server is a Solaris 10 Sun X4540 with 32GB RAM.

mailsrv:~ % /usr/local/sbin/dovecot -n
# 2.0.16: /usr/local/etc/dovecot/dovecot.conf
doveconf: Warning: service auth { client_limit=4096 } is lower than required under max. load (103024)
doveconf: Warning: service anvil { client_limit=20000 } is lower than required under max. load (60003)
# OS: SunOS 5.10 i86pc
default_client_limit = 20000
default_process_limit = 20000
disable_plaintext_auth = no
first_valid_uid = 100
mail_debug = yes
mail_plugins = " quota"
mail_privileged_group = mail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
passdb {
  driver = pam
}
plugin {
  quota = maildir:User quota
  quota_rule = *:storage=4G
  quota_rule2 = Trash:storage=+100M
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=90%% quota-warning 90 %u
  quota_warning3 = storage=80%% quota-warning 80 %u
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/
}
postmaster_address = postmaster at esrf.fr
protocols = imap pop3 lmtp sieve
service imap-login {
  inet_listener imap {
    port = 143
  }
  inet_listener imaps {
    port = 993
  }
}
service imap {
  process_limit = 2000
}
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
}
service pop3-login {
  inet_listener pop3 {
    port = 110
  }
  inet_listener pop3s {
    port = 995
  }
}
service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  unix_listener quota-warning {
    user = dovecot
  }
  user = dovecot
}
ssl_cert =
References: <4F0D9A80.2020707@es2eng.com>
Message-ID: <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com>

Le 11.01.2012 15:19, Willie Gillespie a écrit :
> On 1/11/2012 7:11 AM, forumer at smartmobili.com wrote:
>> I was wondering if dovecot allows to log imap communications ?
>
> You could look at Rawlog
> http://wiki.dovecot.org/Debugging/Rawlog
> http://wiki2.dovecot.org/Debugging/Rawlog

OK, so I suppose I need to rebuild dovecot with the --with-rawlog option, but I am on Ubuntu and I was using a dovecot-2.x source package hosted here:
http://xi.rename-it.nl/debian/
But now it seems to be dead; any idea where I could find a deb-src for dovecot 2.x?

From CMarcus at Media-Brokers.com Wed Jan 11 17:23:55 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Wed, 11 Jan 2012 10:23:55 -0500
Subject: [Dovecot] Log imap commands
In-Reply-To: <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com>
References: <4F0D9A80.2020707@es2eng.com> <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com>
Message-ID: <4F0DA98B.4080705@Media-Brokers.com>

On 2012-01-11 10:09 AM, forumer at smartmobili.com wrote:
> Le 11.01.2012 15:19, Willie Gillespie a écrit :
>> On 1/11/2012 7:11 AM, forumer at smartmobili.com wrote:
>>> I was wondering if dovecot allows to log imap communications ?
>> >> You could look at Rawlog >> http://wiki.dovecot.org/Debugging/Rawlog >> http://wiki2.dovecot.org/Debugging/Rawlog > > Ok so I suppose I need to rebuild dovecot with the --with-rawlog option > but I am under ubuntu > and I was using some dovecot-2.x source package hosted here : > http://xi.rename-it.nl/debian/ > But now it seems to be dead, any idea where I could find a deb-src for > dovecot 2.x ? Another option that shouldn't require recompiling might be the MailLog plugin: http://wiki2.dovecot.org/Plugins/MailLog -- Best regards, Charles From Frank.Post at pallas.com Wed Jan 11 17:35:42 2012 From: Frank.Post at pallas.com (Frank Post) Date: Wed, 11 Jan 2012 16:35:42 +0100 Subject: [Dovecot] sieve under lmtp using wrong homedir ? Message-ID: Hi, i have a problem with dovecot-2.0.15. All is working well except lmtp. Sieve scripts are correctly saved under /var/vmail/test.com/test/sieve, but under lmtp sieve will use /var/vmail//testuser/ Uid testuser has mail=test at test.com configured in ldap. As i could see in the debug logs, there is a difference between the auth "master out" lines, but why ? working if managesieve stores scripts: Jan 11 15:02:42 auth: Debug: master in: REQUEST 3533701121 23001 1 7ec31d3c65cb934785e8eb0f33a182ae Jan 11 15:02:42 auth: Debug: ldap(test at test.com,10.234.201.4): result: mail(user)=test at test.com Jan 11 15:02:42 auth: Debug: master out: USER 3533701121 test at test.com home=/var/vmail/test.com/test uid=5000 gid=5000 Jan 11 15:02:42 managesieve(test at test.com): Debug: Effective uid=5000, gid=5000, home=/var/vmail/test.com/test but under lmtp not: Jan 11 14:39:53 auth: Debug: master in: USER 1 testuser service=lmtp lip=10.234.201.9 rip=10.234.201.4 Jan 11 14:39:53 auth: Debug: auth(testuser,10.234.201.4): username changed testuser -> test at test.com Jan 11 14:39:53 auth: Debug: ldap(test at test.com,10.234.201.4): result: mail(user)=test at test.com Jan 11 14:39:53 auth: Debug: master out: USER 1 test at test.com home=/var/vmail//testuser uid=5000 gid=5000 Jan 11 14:39:53 lmtp(8499): Debug: auth input: test at test.com home=/var/vmail//testuser uid=5000 gid=5000 Jan 11 14:39:53 lmtp(8499): Debug: changed username to test at test.com Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Effective uid=5000, gid=5000, home=/var/vmail//testuser Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Quota root: name=User quota backend=maildir args= Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Quota rule: root=User quota mailbox=* bytes=2147483648 messages=0 Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Quota warning: bytes=1932735283 (90%) messages=0 reverse=no command=quota-warning 90 test at test.com Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: maildir++: root=/var/vmail/test.com/test/Maildir, index=/var/dovecot/indexes/test.com/test, control=, inbox=/var/vmail/test.com/test/Maildir, alt= Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: trash: No trash setting - plugin disabled Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: sieve: include: sieve_global_dir is not set; it is currently not possible to include `:global' scripts. 
Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: pla8CymRDU8zIQAAFrfQGQ: sieve: user's script path /var/vmail//testuser/.dovecot.sieve doesn't exist (using global script path in stead)
Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: pla8CymRDU8zIQAAFrfQGQ: sieve: user has no valid personal script
Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: pla8CymRDU8zIQAAFrfQGQ: sieve: no scripts to execute: reverting to default delivery.
Jan 11 14:39:53 lmtp(8499, test at test.com): Debug: Namespace : Using permissions from /var/vmail/test.com/test/Maildir: mode=0700 gid=-1

Thanks for your help.
Frank
-------------- next part --------------
A non-text attachment was scrubbed...
Name: dovecot-front.conf
Type: application/octet-stream
Size: 4126 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: dovecot-back.conf
Type: application/octet-stream
Size: 3394 bytes
Desc: not available
URL:

From ath at b-one.net Wed Jan 11 17:57:46 2012
From: ath at b-one.net (Anders)
Date: Wed, 11 Jan 2012 16:57:46 +0100
Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH
Message-ID: <20120111155746.BD7BDDA030B2B@bmail06.one.com>

On Tue, 2012-01-10 at 15:05 +0100, Anders wrote:
> > the socket connection being closed without warning:
> > UID SEARCH RETURN (SAVE COUNT) CHARSET UTF-8 (UNDELETED TEXT "foo")
>
> You mean it closes with above also? It works fine with me.

No, that also works fine here :-)

> > UID SEARCH RETURN (COUNT MIN) CHARSET UTF-8 () $
>
> This was fixed in v2.0.17.

Great, thanks!

> > Then I have a question about RFC5267 and the announcement of CONTEXT=SEARCH
> > in the capabilities. I think this RFC is supported by dovecot, or maybe
> > just part of the RFC is supported?
>
> All of it is supported, as far as I know.

> > At least when I include the CONTEXT ADDTO or REMOVEFROM keywords I get
> > an error,
> These are server notifications. Clients aren't supposed to send them.

Sorry, apparently I was a bit too fast there. ADDTO and REMOVEFROM should not be sent by a client, but I think that a client can send CONTEXT as a hint to the server, see http://tools.ietf.org/html/rfc5267#section-4.2

Thanks!

Regards
Anders

From forumer at smartmobili.com Wed Jan 11 19:19:28 2012
From: forumer at smartmobili.com (forumer at smartmobili.com)
Date: Wed, 11 Jan 2012 18:19:28 +0100
Subject: [Dovecot] Log imap commands
In-Reply-To: <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com>
References: <4F0D9A80.2020707@es2eng.com> <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com>
Message-ID: <1f53a15cea08527fb79bd71037fa161f@smartmobili.com>

Le 11.01.2012 16:09, forumer at smartmobili.com a écrit :
> Le 11.01.2012 15:19, Willie Gillespie a écrit :
>> On 1/11/2012 7:11 AM, forumer at smartmobili.com wrote:
>>> I was wondering if dovecot allows to log imap communications ?
>>
>> You could look at Rawlog
>> http://wiki.dovecot.org/Debugging/Rawlog
>> http://wiki2.dovecot.org/Debugging/Rawlog
>
> OK, so I suppose I need to rebuild dovecot with the --with-rawlog
> option, but I am on Ubuntu
> and I was using a dovecot-2.x source package hosted here:
> http://xi.rename-it.nl/debian/
> But now it seems to be dead; any idea where I could find a deb-src
> for dovecot 2.x?

Actually, I finally found that the repository is still working.

From adrian.minta at gmail.com Wed Jan 11 19:30:47 2012
From: adrian.minta at gmail.com (Adrian Minta)
Date: Wed, 11 Jan 2012 19:30:47 +0200
Subject: [Dovecot] howto disable indexing on dovecot-lda ?
In-Reply-To: <4F06F0E7.904@gmail.com>
References: <4F06D5D9.20001@gmail.com> <4F06DFF5.40707@hardwarefreak.com> <4F06F0E7.904@gmail.com>
Message-ID: <4F0DC747.4070505@gmail.com>

Hello,
I tested with "mail_location = whatever-you-have-now:INDEX=MEMORY" and it seems to help, but in the meantime I found another, completely undocumented option that seems to do exactly what I wanted:

protocol lda {
  mailbox_list_index_disable = yes
}

Does anyone know exactly what "mailbox_list_index_disable" does and if it is still available in the 2.0 and 2.1 branches?

From kadafax at gmail.com Wed Jan 11 20:00:37 2012
From: kadafax at gmail.com (huret deffgok)
Date: Wed, 11 Jan 2012 19:00:37 +0100
Subject: [Dovecot] Dovecot LDA and address extensions - folders flood
Message-ID:

Hi list,

This post is slightly OT, I hope no one will take offense.
I was following the wiki on using dovecot LDA with postfix and implemented, for our future mail server, the address extensions mechanism: an email sent to "validUser+foldername at mydomain.com" will have dovecot-lda automagically create and subscribe the "foldername" folder. With some basic scripting I was able to create hundreds of folders in a few seconds. So my question is how do you implement this great feature in a secure way so that funny random people out there can't flood your mailbox with gigatons of folders.

Thanks,
kfx

From CMarcus at Media-Brokers.com Wed Jan 11 20:04:49 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Wed, 11 Jan 2012 13:04:49 -0500
Subject: [Dovecot] Dovecot LDA and address extensions - folders flood
In-Reply-To:
References:
Message-ID: <4F0DCF41.7040204@Media-Brokers.com>

On 2012-01-11 1:00 PM, huret deffgok wrote:
> Hi list,
>
> This post is slightly OT, I hope no one will take offense.
> I was following the wiki on using dovecot LDA with postfix and implemented,
> for our future mail server, the address extensions mechanism: an email sent
> to "validUser+foldername at mydomain.com" will have dovecot-lda automagically
> create and subscribe the "foldername" folder. With some basic scripting I
> was able to create hundreds of folders in a few seconds. So my question is
> how do you implement this great feature in a secure way so that funny
> random people out there can't flood your mailbox with gigatons of folders.

Don't have it autocreate the folder...

Seriously, there is no way to provide that functionality and have the system determine when it is *you* doing it or someone else...

But I think it is a non-problem... how often do you receive plus-addressed spam??

--

Best regards,

Charles

From forumer at smartmobili.com Wed Jan 11 20:29:26 2012
From: forumer at smartmobili.com (forumer at smartmobili.com)
Date: Wed, 11 Jan 2012 19:29:26 +0100
Subject: [Dovecot] Log imap commands
In-Reply-To: <1f53a15cea08527fb79bd71037fa161f@smartmobili.com>
References: <4F0D9A80.2020707@es2eng.com> <53bff2b57fe462c70eb9795cda5b8f06@smartmobili.com> <1f53a15cea08527fb79bd71037fa161f@smartmobili.com>
Message-ID:

I have added the following lines to the dovecot configuration (/etc/dovecot/conf.d/10-master.conf):

...
service pop3 {
  # Max. number of POP3 processes (connections)
  #process_limit = 1024
}

service postlogin {
  executable = script-login -d rawlog
  unix_listener postlogin {
  }
}
...

and I have created a folder dovecot.rawlog, as shown below:

root at vf-12345:/home/vmail/smartmobili.com/webguest# ls -la
...
drwxrwxrwx 2 vmail vmail 4096 2012-01-11 19:11 dovecot.rawlog/ -rw------- 1 vmail vmail 19002 2011-12-27 13:01 dovecot-uidlist -rw------- 1 vmail vmail 8 2012-01-11 12:52 dovecot-uidvalidity ... And after that I have restarted dovecot and logged in with the webguest account but cannot see any logs. What am I doing wrong ? From geoffb at corp.sonic.net Wed Jan 11 20:53:30 2012 From: geoffb at corp.sonic.net (Geoffrey Broadwell) Date: Wed, 11 Jan 2012 10:53:30 -0800 Subject: [Dovecot] (no subject) Message-ID: <1326308010.2329.47.camel@rover> I'm working on a Dovecot plugin, but I'm pretty new to Dovecot, so there's a LOT to learn about the code base and it's pretty slow going. I've got a few things coded so far, but I want to make sure I'm headed down the right path and get some advice before I go too much further. A couple years ago, I wrote some code for our Courier implementation that sent a magic UDP packet to a small server each time a user modified their voicemail IMAP folder. That UDP server would then connect back to Courier via IMAP again and check whether the folder had any unread messages left in it. Finally, it would contact our phone switches to modify the state of the message waiting indicator (MWI) on that user's phone line appropriately. Fast forward to now, and we want to migrate wholesale to Dovecot 2.x. The servers are all in place, they've been well tested and burned in (with Dovecot 2.0.15 I believe), and the final migration is pretty much waiting on a port to Dovecot of the MWI update functionality. The good news is that I originally spent some effort to isolate the UDP packet generation and delivery, and I used purely standard portable code as per APUE2, so I think that chunk of code should be reusable with only minor modifications. I'm aware that internally Dovecot has its own memory, buffer, and string management functions, but it doesn't feel like a win to try to convert the existing code. It's small, completely isolated, and well reviewed -- I'd be more afraid of using the new (to me) Dovecot API incorrectly than I am that the existing code has bugs in buffer handling. By cribbing from other plugins and editing appropriately, I've also created the skeleton for my plugin: Makefile, docs, conf snippet, .spec (I'll be deploying the plugin as an RPM), and so on. I've got the beginnings of the .h and .c written, just enough to init and deinit the plugin by calling mail_storage_hooks_{add,remove}() with some stub hook functions. This all seems good so far; test builds are error-free and seem sane. So now the hard part is writing the piece that I can't just crib from elsewhere -- making sure that I hook every place in Dovecot that the user's voicemail folder can be changed in a way that would change it between having one or more unread messages, and not having any unread messages at all (or vice-versa, of course). At the same time, I want to minimize the performance impact to Dovecot (and the load on the UDP server) by only hooking the places I need to, filtering out as many false positives as I can without introducing massive complexity, and only pinging the UDP server when it's most likely to notice a change in the state of that user's voicemail server. It seems to me that I need to at least capture mailbox_allocated from the mail_storage hooks, for a couple reasons: 1. The state of the voicemail folder could be changed because the entire folder is created, destroyed, or renamed. 2. I want to only do further checks when I'm sure I'm looking at the voicemail folder. 
There's no reason to do work when the user is working with any other folder. So now the questions: Does all of the above seem sane so far? Do I need to hook mail_allocated as well, or will I be able to see any change I need to monitor just from the mailbox? Finally, I'm lost about what operations on the mailbox and the mails within it I need to check. Can anyone offer some advice (or doc pointers) on this? Thank you! -'f From geoffb at corp.sonic.net Wed Jan 11 20:57:11 2012 From: geoffb at corp.sonic.net (Geoffrey Broadwell) Date: Wed, 11 Jan 2012 10:57:11 -0800 Subject: [Dovecot] Need help with details for new Dovecot plugin In-Reply-To: <1326308010.2329.47.camel@rover> References: <1326308010.2329.47.camel@rover> Message-ID: <1326308231.2329.50.camel@rover> My sincere apologies for the subjectless email (my MUA should have caught that!); the above is the corrected subject line. -'f On Wed, 2012-01-11 at 10:53 -0800, Geoffrey Broadwell wrote: > I'm working on a Dovecot plugin, but I'm pretty new to Dovecot, so > there's a LOT to learn about the code base and it's pretty slow going. > I've got a few things coded so far, but I want to make sure I'm headed > down the right path and get some advice before I go too much further. > > A couple years ago, I wrote some code for our Courier implementation > that sent a magic UDP packet to a small server each time a user modified > their voicemail IMAP folder. That UDP server would then connect back to > Courier via IMAP again and check whether the folder had any unread > messages left in it. Finally, it would contact our phone switches to > modify the state of the message waiting indicator (MWI) on that user's > phone line appropriately. > > Fast forward to now, and we want to migrate wholesale to Dovecot 2.x. > The servers are all in place, they've been well tested and burned in > (with Dovecot 2.0.15 I believe), and the final migration is pretty much > waiting on a port to Dovecot of the MWI update functionality. > > The good news is that I originally spent some effort to isolate the UDP > packet generation and delivery, and I used purely standard portable code > as per APUE2, so I think that chunk of code should be reusable with only > minor modifications. I'm aware that internally Dovecot has its own > memory, buffer, and string management functions, but it doesn't feel > like a win to try to convert the existing code. It's small, completely > isolated, and well reviewed -- I'd be more afraid of using the new (to > me) Dovecot API incorrectly than I am that the existing code has bugs in > buffer handling. > > By cribbing from other plugins and editing appropriately, I've also > created the skeleton for my plugin: Makefile, docs, conf snippet, .spec > (I'll be deploying the plugin as an RPM), and so on. I've got the > beginnings of the .h and .c written, just enough to init and deinit the > plugin by calling mail_storage_hooks_{add,remove}() with some stub hook > functions. This all seems good so far; test builds are error-free and > seem sane. > > So now the hard part is writing the piece that I can't just crib from > elsewhere -- making sure that I hook every place in Dovecot that the > user's voicemail folder can be changed in a way that would change it > between having one or more unread messages, and not having any unread > messages at all (or vice-versa, of course). 
At the same time, I want to > minimize the performance impact to Dovecot (and the load on the UDP > server) by only hooking the places I need to, filtering out as many > false positives as I can without introducing massive complexity, and > only pinging the UDP server when it's most likely to notice a change in > the state of that user's voicemail server. > > It seems to me that I need to at least capture mailbox_allocated from > the mail_storage hooks, for a couple reasons: > > 1. The state of the voicemail folder could be changed because > the entire folder is created, destroyed, or renamed. > > 2. I want to only do further checks when I'm sure I'm looking at > the voicemail folder. There's no reason to do work when the > user is working with any other folder. > > So now the questions: > > Does all of the above seem sane so far? > > Do I need to hook mail_allocated as well, or will I be able to see any > change I need to monitor just from the mailbox? > > Finally, I'm lost about what operations on the mailbox and the mails > within it I need to check. Can anyone offer some advice (or doc > pointers) on this? > > Thank you! > > > -'f > > From nicolas.kowalski at gmail.com Wed Jan 11 21:01:18 2012 From: nicolas.kowalski at gmail.com (Nicolas KOWALSKI) Date: Wed, 11 Jan 2012 20:01:18 +0100 Subject: [Dovecot] proxy, managesieve and ssl? Message-ID: <20120111190118.GD14492@petole.demisel.net> Hello, On a dovecot 2.0.14 proxy, I found that proxying managesieve works well when using 'starttls' option in pass_attrs, but does not work when using 'ssl' option. The backend server is also dovecot 2.0.14; when using the ssl option, it reports "no auth attempts" in the logs about managesieve-login, and meanwhile the MUA, Thunderbird with sieve plugin, reports [TRYLATER] account is temporary disabled; no problem when using starttls option on the proxy, all works well. I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to backend, and have Managesieve still working. Is this supported? Thanks, -- Nicolas From kadafax at gmail.com Wed Jan 11 21:05:43 2012 From: kadafax at gmail.com (huret deffgok) Date: Wed, 11 Jan 2012 20:05:43 +0100 Subject: [Dovecot] Dovecot LDA and address extensions - folders flood In-Reply-To: <4F0DCF41.7040204@Media-Brokers.com> References: <4F0DCF41.7040204@Media-Brokers.com> Message-ID: On Wed, Jan 11, 2012 at 7:04 PM, Charles Marcus wrote: > On 2012-01-11 1:00 PM, huret deffgok wrote: > >> Hi list, >> >> This post is slightly OT, I hope no one will take offense. >> I was following the wiki on using dovecot LDA with postfix and >> implemented, >> for our future mail server, the address extensions mechanism: an email >> sent >> to "validUser+foldername@**mydomain.com" >> will have dovecot-lda automagically >> create and subscribe the "foldername" folder. With some basic scripting I >> was able to create hundreds of folders in a few seconds. So my question is >> how do you implement this great feature in a secure way so that funny >> random people out there cant flood your mailbox with gigatons of folder. >> > > Don't have it autocreate the folder... > > Seriously, there is no way to provide that functionality and have the > system determine when it is *you* doing it or someone else... > > But I think it is a non problem... how often do you receive plus-addressed > spam?? None from now. But I was thinking about something like malice rather than spamming. For me it's an open door to DOS the service. 
What about a functionality that would throttle the rate of creation of folders from one IP address, with a ban in case of abuse ? Or maybe should I look at the file system level. From CMarcus at Media-Brokers.com Wed Jan 11 21:25:24 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Wed, 11 Jan 2012 14:25:24 -0500 Subject: [Dovecot] Dovecot LDA and address extensions - folders flood In-Reply-To: References: <4F0DCF41.7040204@Media-Brokers.com> Message-ID: <4F0DE224.8000900@Media-Brokers.com> On 2012-01-11 2:05 PM, huret deffgok wrote: > On Wed, Jan 11, 2012 at 7:04 PM, Charles Marcus wrote: >> On 2012-01-11 1:00 PM, huret deffgok wrote: >>> This post is slightly OT, I hope no one will take offense. I was >>> following the wiki on using dovecot LDA with postfix and >>> implemented, for our future mail server, the address extensions >>> mechanism: an email sent to >>> "validUser+foldername@**mydomain.com" >>> will have dovecot-lda automagically create and subscribe the >>> "foldername" folder. With some basic scripting I was able to >>> create hundreds of folders in a few seconds. So my question is >>> how do you implement this great feature in a secure way so that >>> funny random people out there cant flood your mailbox with >>> gigatons of folder. >> Don't have it autocreate the folder... >> >> Seriously, there is no way to provide that functionality and have the >> system determine when it is *you* doing it or someone else... >> >> But I think it is a non problem... how often do you receive plus-addressed >> spam?? > None from now. But I was thinking about something like malice rather than > spamming. For me it's an open door to DOS the service. > What about a functionality that would throttle the rate of creation of > folders from one IP address, with a ban in case of abuse ? Or maybe should > I look at the file system level. Again - and no offense - but I think you are tilting at windmills... If you get hit by this, you will not only have thousands or millions of folders, you'll have one email for each folder. So, the question is, how do you prevent being flooded with spam... and the answer is, decent anti-spam measures. I prefer ASSP, but I just wish you could use it as an after queue content filter (for its most excellent content filtering and more importantly quarantine management/block reporting features/functionality). That said, postfix, with sane anti-spam measures, along with the most excellent new postscreen (available in 2.8+ I believe) is good enough to stop most anything like this that you may be worried about. Like I said, set up postfix (or your smtp server) right, and this is a non-issue. -- Best regards, Charles From tss at iki.fi Wed Jan 11 22:34:33 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 11 Jan 2012 22:34:33 +0200 Subject: [Dovecot] proxy, managesieve and ssl? In-Reply-To: <20120111190118.GD14492@petole.demisel.net> References: <20120111190118.GD14492@petole.demisel.net> Message-ID: <95F23E50-BD64-4844-8838-04E5BB9033A7@iki.fi> On 11.1.2012, at 21.01, Nicolas KOWALSKI wrote: > On a dovecot 2.0.14 proxy, I found that proxying managesieve works well > when using 'starttls' option in pass_attrs, but does not work when using > 'ssl' option. 
The backend server is also dovecot 2.0.14; when using the > ssl option, it reports "no auth attempts" in the logs about > managesieve-login, and meanwhile the MUA, Thunderbird with sieve plugin, > reports [TRYLATER] account is temporary disabled; no problem when using > starttls option on the proxy, all works well. > > I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to > backend, and have Managesieve still working. Is this supported? You'll need to kludge it a little bit. I guess you're using LDAP, since you mentioned pass_attrs? protocol sieve { passdb { args = ldap-with-starttls.conf } } protocol !sieve { passdb { args = ldap-with-ssl.conf } } From stephan at rename-it.nl Wed Jan 11 23:06:51 2012 From: stephan at rename-it.nl (Stephan Bosch) Date: Wed, 11 Jan 2012 22:06:51 +0100 Subject: [Dovecot] proxy, managesieve and ssl? In-Reply-To: <20120111190118.GD14492@petole.demisel.net> References: <20120111190118.GD14492@petole.demisel.net> Message-ID: <4F0DF9EB.50605@rename-it.nl> On 1/11/2012 8:01 PM, Nicolas KOWALSKI wrote: > Hello, > > On a dovecot 2.0.14 proxy, I found that proxying managesieve works well > when using 'starttls' option in pass_attrs, but does not work when using > 'ssl' option. The backend server is also dovecot 2.0.14; when using the > ssl option, it reports "no auth attempts" in the logs about > managesieve-login, and meanwhile the MUA, Thunderbird with sieve plugin, > reports [TRYLATER] account is temporary disabled; no problem when using > starttls option on the proxy, all works well. > > I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to > backend, and have Managesieve still working. Is this supported? Although there is no such thing as a standard sieveS protocol, you can make Dovecot v2.x talk SSL from the start at a ManageSieve socket. Since normally people will not use something like this, it is not available by default. In conf.d/20-managesieve.conf you can adjust the service definition of ManageSieve as follows: service managesieve-login { inet_listener sieve { port = 4190 } inet_listener sieves { port = 5190 ssl = yes } } This starts the normal protocol on port 4190 and the direct-SSL version on an alternative port. You can also put the ssl=yes directly in the port 4190 listener, as long as no client will have to connect to this server directly (no client will support it). Regards, Stephan. From michael.abbott at apple.com Thu Jan 12 01:09:17 2012 From: michael.abbott at apple.com (Mike Abbott) Date: Wed, 11 Jan 2012 17:09:17 -0600 Subject: [Dovecot] MASTER_AUTH_MAX_DATA_SIZE Message-ID: <1BCAD28D-8120-45C9-BAA2-B6597C34545A@apple.com> In 2.0.17 you increased LOGIN_MAX_INBUF_SIZE from 1024 to 4096. Should you also have increased MASTER_AUTH_MAX_DATA_SIZE from (1024*2) to (4096*2)? /* This should be kept in sync with LOGIN_MAX_INBUF_SIZE. Multiply it by two to make sure there's space to transfer the command tag */ From dlie76 at yahoo.com.au Thu Jan 12 04:30:49 2012 From: dlie76 at yahoo.com.au (Daminto Lie) Date: Wed, 11 Jan 2012 18:30:49 -0800 (PST) Subject: [Dovecot] could not start dovecot - unknown section type Message-ID: <1326335449.87714.YahooMailNeo@web113411.mail.gq1.yahoo.com> Hi, I was wondering if I could get some help with the following error when trying to start dovecot service on Ubuntu Server 10.04. The error message is as follows ?* Starting IMAP/POP3 mail server dovecot?????????????????????????????????????? 
Error: Error in configuration file /usr/local/etc/dovecot/dovecot.conf line 15: Unknown section type Fatal: Invalid configuration in /usr/local/etc/dovecot/dovecot.conf [fail] I have just managed to upgrade it from 1.2.19 to 2.0.17. Then, I tried to start the dovecot by running the command $ sudo /etc/init.d/dovecot start And I received the above message. Below is the configuration for dovecot.conf # 2.0.17 (684381041dc4+): /usr/local/etc/dovecot/dovecot.conf # OS: Linux 2.6.32-37-generic-pae i686 Ubuntu 10.04.3 LTS ext4 auth_debug = yes auth_debug_passwords = yes auth_mechanisms = plain login auth_username_format = %Lu auth_verbose = yes base_dir = /var/run/dovecot disable_plaintext_auth = no first_valid_uid = 1001 last_valid_uid = 2000 log_timestamp = "%Y-%m-%d %H:%M:%S " mail_location = maildir:/home/vmail/%u/Maildir mail_privileged_group = mail passdb { ? driver = pam } passdb { ? args = /usr/local/etc/dovecot/dovecot-ldap.conf ? driver = ldap } plugin { ? quota = maildir ? quota_rule = *:storage=3GB ? quota_rule2 = Trash:storage=20%% ? quota_rule3 = Spam:storage=10%% ? quota_warning = storage=95%% /usr/local/bin/quota-warning.sh 95 ? quota_warning2 = storage=80%% /usr/local/bin/quota-warning.sh 80 } protocols = imap service auth { ? unix_listener /var/run/dovecot-auth-master { ??? group = vmail ??? mode = 0660 ??? user = vmail ? } ? unix_listener /var/spool/postfix/private/auth { ??? group = mail ??? mode = 0660 ??? user = postfix ? } ? user = root } service imap-login { ? chroot = login ? executable = /usr/lib/dovecot/imap-login ? inet_listener imap { ??? address = * ??? port = 143 ? } ? user = dovecot } service imap { ? executable = /usr/lib/dovecot/imap } service pop3-login { ? chroot = login ? user = dovecot } ssl = no userdb { ? driver = passwd } userdb { ? args = uid=1001 gid=1001 home=/home/vmail/%u allow_all_users=yes ? driver = static } verbose_proctitle = yes protocol imap { ? imap_client_workarounds = delay-newmail ? mail_plugins = quota imap_quota } protocol pop3 { ? pop3_uidl_format = %08Xu%08Xv } protocol lda { ? auth_socket_path = /var/run/dovecot-auth-master ? mail_plugins = quota ? postmaster_address = postmaster at example.com ? rejection_reason = Your message to <%t> was automatically rejected:%n%r ? sendmail_path = /usr/lib/sendmail } Any help would be greatly appreciated. Thank you From rob0 at gmx.co.uk Thu Jan 12 05:12:38 2012 From: rob0 at gmx.co.uk (/dev/rob0) Date: Wed, 11 Jan 2012 21:12:38 -0600 Subject: [Dovecot] could not start dovecot - unknown section type In-Reply-To: <1326335449.87714.YahooMailNeo@web113411.mail.gq1.yahoo.com> References: <1326335449.87714.YahooMailNeo@web113411.mail.gq1.yahoo.com> Message-ID: <201201112112.39736@harrier.slackbuilds.org> On Wednesday 11 January 2012 20:30:49 Daminto Lie wrote: > I was wondering if I could get some help with the following error > when trying to start dovecot service on Ubuntu Server 10.04. > > The error message is as follows > > * Starting IMAP/POP3 mail server > dovecot > > Error: Error in configuration file > /usr/local/etc/dovecot/dovecot.conf line 15: Unknown section type > Fatal: Invalid configuration in > /usr/local/etc/dovecot/dovecot.conf [fail] > > > I have just managed to upgrade it from 1.2.19 to 2.0.17. Then, I > tried to start the dovecot by running the command > > > $ sudo /etc/init.d/dovecot start > > And I received the above message. It would seem that you did not upgrade the init script, and the old one is reading the config file and expecting a different format. 
You used source to upgrade, which means you did not "upgrade" in the conventional sense -- you installed new software. Either fix the script or run without it: dovecot start See: http://wiki2.dovecot.org/CompilingSource http://wiki2.dovecot.org/RunningDovecot > Below is the configuration for dovecot.conf snip -- http://rob0.nodns4.us/ -- system administration and consulting Offlist GMX mail is seen only if "/dev/rob0" is in the Subject: From dlie76 at yahoo.com.au Thu Jan 12 07:19:42 2012 From: dlie76 at yahoo.com.au (Daminto Lie) Date: Wed, 11 Jan 2012 21:19:42 -0800 (PST) Subject: [Dovecot] could not start dovecot - unknown section type In-Reply-To: <201201112112.39736@harrier.slackbuilds.org> References: <1326335449.87714.YahooMailNeo@web113411.mail.gq1.yahoo.com> <201201112112.39736@harrier.slackbuilds.org> Message-ID: <1326345582.91512.YahooMailNeo@web113409.mail.gq1.yahoo.com> Thank you for your reply. Yes, you're right. I should not have called it an upgrade since I actually removed dovecot 1.2.9 completely and installed the dovecot 2.0.17 from the source. Later, I mucked up the init file because I still used the old version one. I'm sorry about this. I remember I tried to upgrade by running doveconf -n -c dovecot.conf > dovecot-2.conf, I got an error message saying doveconf: command not found. Then, I tried to google it to find solutions but to no avail. This is why I decided to install it from scratch. Thank you for your help ________________________________ From: /dev/rob0 To: dovecot at dovecot.org Sent: Thursday, 12 January 2012 2:12 PM Subject: Re: [Dovecot] could not start dovecot - unknown section type On Wednesday 11 January 2012 20:30:49 Daminto Lie wrote: > I was wondering if I could get some help with the following error > when trying to start dovecot service on Ubuntu Server 10.04. > > The error message is as follows > >? * Starting IMAP/POP3 mail server > dovecot? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? > > Error: Error in configuration file > /usr/local/etc/dovecot/dovecot.conf line 15: Unknown section type > Fatal: Invalid configuration in > /usr/local/etc/dovecot/dovecot.conf [fail] > > > I have just managed to upgrade it from 1.2.19 to 2.0.17. Then, I > tried to start the dovecot by running the command > > > $ sudo /etc/init.d/dovecot start > > And I received the above message. It would seem that you did not upgrade the init script, and the old one is reading the config file and expecting a different format. You used source to upgrade, which means you did not "upgrade" in the conventional sense -- you installed new software. Either fix the script or run without it: ??? dovecot start See: ??? http://wiki2.dovecot.org/CompilingSource ??? http://wiki2.dovecot.org/RunningDovecot > Below is the configuration for dovecot.conf snip -- ? http://rob0.nodns4.us/ -- system administration and consulting ? Offlist GMX mail is seen only if "/dev/rob0" is in the Subject: From nicolas.kowalski at gmail.com Thu Jan 12 10:47:07 2012 From: nicolas.kowalski at gmail.com (Nicolas KOWALSKI) Date: Thu, 12 Jan 2012 09:47:07 +0100 Subject: [Dovecot] proxy, managesieve and ssl? 
In-Reply-To: <95F23E50-BD64-4844-8838-04E5BB9033A7@iki.fi> References: <20120111190118.GD14492@petole.demisel.net> <95F23E50-BD64-4844-8838-04E5BB9033A7@iki.fi> Message-ID: <20120112084707.GE14492@petole.demisel.net> On Wed, Jan 11, 2012 at 10:34:33PM +0200, Timo Sirainen wrote: > On 11.1.2012, at 21.01, Nicolas KOWALSKI wrote: > > > I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to > > backend, and have Managesieve still working. Is this supported? > > You'll need to kludge it a little bit. I guess you're using LDAP, since you mentioned pass_attrs? Yes, I am using LDAP. > protocol sieve { > passdb { > args = ldap-with-starttls.conf > } > } When just adding the above, it works perfectly, Thanks! > protocol !sieve { > passdb { > args = ldap-with-ssl.conf > } > } Is this really needed? It looks like it works without it. When using it, I get this error: Jan 12 09:40:59 imap1 dovecot: auth: Fatal: No passdbs specified in configuration file. PLAIN mechanism needs one Jan 12 09:40:59 imap1 dovecot: master: Error: service(auth): command startup failed, throttling -- Nicolas From nicolas.kowalski at gmail.com Thu Jan 12 10:58:13 2012 From: nicolas.kowalski at gmail.com (Nicolas KOWALSKI) Date: Thu, 12 Jan 2012 09:58:13 +0100 Subject: [Dovecot] proxy, managesieve and ssl? In-Reply-To: <4F0DF9EB.50605@rename-it.nl> References: <20120111190118.GD14492@petole.demisel.net> <4F0DF9EB.50605@rename-it.nl> Message-ID: <20120112085813.GF14492@petole.demisel.net> On Wed, Jan 11, 2012 at 10:06:51PM +0100, Stephan Bosch wrote: > On 1/11/2012 8:01 PM, Nicolas KOWALSKI wrote: > > > >I would like to use IMAPs, instead of IMAP+STARTTLS, from proxy to > >backend, and have Managesieve still working. Is this supported? > > Although there is no such thing as a standard sieveS protocol, you > can make Dovecot v2.x talk SSL from the start at a ManageSieve > socket. Since normally people will not use something like this, it > is not available by default. > > In conf.d/20-managesieve.conf you can adjust the service definition > of ManageSieve as follows: > > service managesieve-login { > inet_listener sieve { > port = 4190 > } > > inet_listener sieves { > port = 5190 > ssl = yes > } > } This works well, when using (as Timo wrote) a different ldap pass_attrs for sieve, specifying this specific 5190 port. Thanks for your suggestion. > This starts the normal protocol on port 4190 and the direct-SSL > version on an alternative port. You can also put the ssl=yes > directly in the port 4190 listener, as long as no client will have > to connect to this server directly (no client will support it). Well, as this is non-standard, I guess I will not use it. I much prefer to stick with what has been RFCed. -- Nicolas From kjonca at o2.pl Thu Jan 12 12:39:06 2012 From: kjonca at o2.pl (Kamil =?iso-8859-2?Q?Jo=F1ca?=) Date: Thu, 12 Jan 2012 11:39:06 +0100 Subject: [Dovecot] compressed mboxes very slow References: <87iptnoans.fsf@alfa.kjonca> Message-ID: <8739blw6gl.fsf@alfa.kjonca> kjonca at o2.pl (Kamil Jo?ca) writes: > I have some archive mails in gzipped mboxes. I could use them with > dovecot 1.x without problems. > But recently I have installed dovecot 2.0.12, and they are slow. very > slow. 
Recently I have to read some compressed mboxes again, and no progress :( I took 2.0.17 sources and put some i_debug ("#kjonca["__FILE__",%d,%s] %d", __LINE__,__func__,...some parameters ...); lines into istream-bzlib.c, istream-raw-mbox.c and istream-limit.c and found that: in istream-limit.c in function around lines 40-45: --8<---------------cut here---------------start------------->8--- i_stream_seek(stream->parent, lstream->istream.parent_start_offset + stream->istream.v_offset); stream->pos -= stream->skip; stream->skip = 0; --8<---------------cut here---------------end--------------->8--- seeks stream, (calling i_stream_raw_mbox_seek in file istream-raw-mbox.c ) and then (line 50 ) --8<---------------cut here---------------start------------->8--- if ((ret = i_stream_read(stream->parent)) == -2) return -2; --8<---------------cut here---------------end--------------->8--- tries to read some data earlier in stream, and with compressed mboxes it cause reread file from the beginning. Then I commented out (just for testing) lines 40-45 from istream-limit.c and bzipped mbox can be opened in reasonable time. (MOreover I can read some randomly picked mails without problems) Unfortunately, meanig of fields in istream* structures is very unclear for me (especially skip,pos and offset) to write proper code by myself. KJ -- http://sporothrix.wordpress.com/2011/01/16/usa-sie-krztusza-kto-nastepny/ Jak kto? ma pecha, to z?amie z?b podczas seksu oralnego (S.Sok??) From info_postfix at gmx.ch Thu Jan 12 12:00:52 2012 From: info_postfix at gmx.ch (maximus12) Date: Thu, 12 Jan 2012 02:00:52 -0800 (PST) Subject: [Dovecot] Server Time 45min ahead Message-ID: <33126760.post@talk.nabble.com> Hi, I have the issue that my server clock is 45min fast. Therefore I would like to install ntp. I read a lot on the internet about dovecot and ntp. My issue is that 45 min are a lot an I would like to minimize mail server downtimes as much as possible. I don't care if the time corrections with ntp takes more than a few month. Does anyone know how I should proceed (e.g. how I have to setup ntp -> no time jump during installation and afterwards). Thanks a lot for your help! -- View this message in context: http://old.nabble.com/Server-Time-45min-ahead-tp33126760p33126760.html Sent from the Dovecot mailing list archive at Nabble.com. From Ralf.Hildebrandt at charite.de Thu Jan 12 13:43:50 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Thu, 12 Jan 2012 12:43:50 +0100 Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <33126760.post@talk.nabble.com> References: <33126760.post@talk.nabble.com> Message-ID: <20120112114350.GQ1341@charite.de> * maximus12 : > > Hi, > > I have the issue that my server clock is 45min fast. Therefore I would like > to install ntp. > I read a lot on the internet about dovecot and ntp. > My issue is that 45 min are a lot an I would like to minimize mail server > downtimes as much as possible. > I don't care if the time corrections with ntp takes more than a few month. > > Does anyone know how I should proceed (e.g. how I have to setup ntp -> no > time jump during installation and afterwards). stop dovecot & postfix ntpdate timeserver start dovecot & postfix start ntpd the time jump is only really critical when the programs are running. -- Ralf Hildebrandt Gesch?ftsbereich IT | Abteilung Netzwerk Charit? - Universit?tsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. 
+49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From info_postfix at gmx.ch Thu Jan 12 13:49:15 2012 From: info_postfix at gmx.ch (maximus12) Date: Thu, 12 Jan 2012 03:49:15 -0800 (PST) Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <20120112114350.GQ1341@charite.de> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> Message-ID: <33127262.post@talk.nabble.com> Thanks a lot for your quick response. I thought that dovecot won't start until the server time reaches the time before the "time jump". >From your point of view dovecot will start normally if I adjust the time when dovecot is stopped? Thanks a lot for clarification. Ralf Hildebrandt wrote: > > * maximus12 : >> >> Hi, >> >> I have the issue that my server clock is 45min fast. Therefore I would >> like >> to install ntp. >> I read a lot on the internet about dovecot and ntp. >> My issue is that 45 min are a lot an I would like to minimize mail server >> downtimes as much as possible. >> I don't care if the time corrections with ntp takes more than a few >> month. >> >> Does anyone know how I should proceed (e.g. how I have to setup ntp -> no >> time jump during installation and afterwards). > > stop dovecot & postfix > ntpdate timeserver > start dovecot & postfix > start ntpd > > the time jump is only really critical when the programs are running. > > -- > Ralf Hildebrandt > Gesch?ftsbereich IT | Abteilung Netzwerk > Charit? - Universit?tsmedizin Berlin > Campus Benjamin Franklin > Hindenburgdamm 30 | D-12203 Berlin > Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 > ralf.hildebrandt at charite.de | http://www.charite.de > > > -- View this message in context: http://old.nabble.com/Server-Time-45min-ahead-tp33126760p33127262.html Sent from the Dovecot mailing list archive at Nabble.com. From Ralf.Hildebrandt at charite.de Thu Jan 12 13:57:05 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Thu, 12 Jan 2012 12:57:05 +0100 Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <33127262.post@talk.nabble.com> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> <33127262.post@talk.nabble.com> Message-ID: <20120112115705.GS1341@charite.de> * maximus12 : > > Thanks a lot for your quick response. > > I thought that dovecot won't start until the server time reaches the time > before the "time jump". > > From your point of view dovecot will start normally if I adjust the time > when dovecot is stopped? Don't take my word for it, but I think the behaviour is this: * dovecot is running, time jumps backwards -> dovecot exits * dovecot is not running, time jumps backwards -> dovecot can be started It also depends on your dovecot version, see http://wiki.dovecot.org/TimeMovedBackwards -- Ralf Hildebrandt Gesch?ftsbereich IT | Abteilung Netzwerk Charit? - Universit?tsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. 
+49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de From Harlan.Stenn at pfcs.com Thu Jan 12 14:47:31 2012 From: Harlan.Stenn at pfcs.com (Harlan Stenn) Date: Thu, 12 Jan 2012 07:47:31 -0500 Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <20120112114350.GQ1341@charite.de> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> Message-ID: <20120112124731.42C752842A@gwc.pfcs.com> Ralf wrote: > stop dovecot & postfix > ntpdate timeserver > start dovecot & postfix > start ntpd Speaking as stenn at ntp.org, I recommend: - run 'ntpd -gN' as early as possible in the startup sequence (no need for ntpdate) then as late as possible in the startup sequence, run: - ntp-wait -v -s 1 ; start dovecot and postfix (and database servers) H From moseleymark at gmail.com Thu Jan 12 20:32:16 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Thu, 12 Jan 2012 10:32:16 -0800 Subject: [Dovecot] moving mail out of alt storage In-Reply-To: <87obylafsw.fsf_-_@algae.riseup.net> References: <87sjnya3z5.fsf@algae.riseup.net> <1316077133.12936.18.camel@hurina> <87obylafsw.fsf_-_@algae.riseup.net> Message-ID: On Thu, Sep 15, 2011 at 10:14 AM, Micah Anderson wrote: > Timo Sirainen writes: > >> On Wed, 2011-09-14 at 23:17 -0400, Micah Anderson wrote: >>> I moved some mail into the alt storage: >>> >>> doveadm altmove -u johnd at example.com seen savedbefore 1w >>> >>> and now I want to move it back to the regular INBOX, but I can't see how >>> I can do that with either 'altmove' or 'mailbox move'. >> >> Is this sdbox or mdbox? With sdbox you could simply "mv" the files. Or >> apply patch: http://hg.dovecot.org/dovecot-2.0/rev/1910c76a6cc9 > > This is mdbox, which is why I am not sure how to operate because I am > used to individual files as is with maildir. > > micah > I'm curious about this too. Is moving the m.# file out of the ALT path's storage/ directory into the non-ALT storage/ directory sufficient? Or will that cause odd issues? From tss at iki.fi Thu Jan 12 22:20:06 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 12 Jan 2012 22:20:06 +0200 Subject: [Dovecot] MASTER_AUTH_MAX_DATA_SIZE In-Reply-To: <1BCAD28D-8120-45C9-BAA2-B6597C34545A@apple.com> References: <1BCAD28D-8120-45C9-BAA2-B6597C34545A@apple.com> Message-ID: <09EF3E7A-15A2-45EE-91BD-6EEFD1FD8049@iki.fi> On 12.1.2012, at 1.09, Mike Abbott wrote: > In 2.0.17 you increased LOGIN_MAX_INBUF_SIZE from 1024 to 4096. > Should you also have increased MASTER_AUTH_MAX_DATA_SIZE from (1024*2) to (4096*2)? > /* This should be kept in sync with LOGIN_MAX_INBUF_SIZE. Multiply it by two > to make sure there's space to transfer the command tag */ Well, yes.. Although I'd rather not do that. 1. Command tag length needs to be restricted to something reasonable, maybe 100 chars, so it won't have to be multiplied by 2 but just added the 100 (+1 for NUL). 2. Maybe I can change the LOGIN_MAX_INBUF_SIZE back to its original size and change the AUTHENTICATE command handling to read the SASL initial response to a separate buffer. I'll try doing those next week. 
From mcbdovecot at robuust.nl Fri Jan 13 01:10:59 2012 From: mcbdovecot at robuust.nl (Maarten Bezemer) Date: Fri, 13 Jan 2012 00:10:59 +0100 (CET) Subject: [Dovecot] Need help with details for new Dovecot plugin In-Reply-To: <1326308231.2329.50.camel@rover> References: <1326308010.2329.47.camel@rover> <1326308231.2329.50.camel@rover> Message-ID: >> A couple years ago, I wrote some code for our Courier implementation >> that sent a magic UDP packet to a small server each time a user modified >> their voicemail IMAP folder. That UDP server would then connect back to >> Courier via IMAP again and check whether the folder had any unread >> messages left in it. Finally, it would contact our phone switches to >> modify the state of the message waiting indicator (MWI) on that user's >> phone line appropriately. Using a Dovecot plugin for this would require mail delivery to go through Dovecot as well as all mail access. So, no postfix or exim or whatever doing mail delivery by itself (mbox/maildir), and no MUAs accessing mail locally. With courier, you probably had everything going through courier, but with Dovecot, that need not always be the case. So, using a dovecot-plugin for this may not even catch everything. Of course I don't know anything about the details of the project (number of users, requirements for speed of MWI updates, mail storage type, etc.) but if it's not a very large setup and mail storage is mbox or maildir, I'd probably go for cron-based external monitoring using find and stuff like that. Maybe even with login scripting for extra triggering. HTH... -- Maarten From tss at iki.fi Fri Jan 13 01:17:47 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 13 Jan 2012 01:17:47 +0200 Subject: [Dovecot] (no subject) In-Reply-To: <1326308010.2329.47.camel@rover> References: <1326308010.2329.47.camel@rover> Message-ID: On 11.1.2012, at 20.53, Geoffrey Broadwell wrote: > So now the hard part is writing the piece that I can't just crib from > elsewhere -- making sure that I hook every place in Dovecot that the > user's voicemail folder can be changed in a way that would change it > between having one or more unread messages, and not having any unread > messages at all (or vice-versa, of course). At the same time, I want to > minimize the performance impact to Dovecot (and the load on the UDP > server) by only hooking the places I need to, filtering out as many > false positives as I can without introducing massive complexity, and > only pinging the UDP server when it's most likely to notice a change in > the state of that user's voicemail server. I think notify plugin would help you do this the easiest way. See mail_log plugin for an example of how to use it. From noel.butler at ausics.net Fri Jan 13 03:15:13 2012 From: noel.butler at ausics.net (Noel Butler) Date: Fri, 13 Jan 2012 11:15:13 +1000 Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <20120112124731.42C752842A@gwc.pfcs.com> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> <20120112124731.42C752842A@gwc.pfcs.com> Message-ID: <1326417313.5785.3.camel@tardis> On Thu, 2012-01-12 at 07:47 -0500, Harlan Stenn wrote: > > then as late as possible in the startup sequence, run: > > - ntp-wait -v -s 1 ; start dovecot and postfix (and database servers) I'll +1 that advice, I introduced ntp-wait sometime ago when dovecot kept bitching, not a single glitch since. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: This is a digitally signed message part URL: From user+dovecot at localhost.localdomain.org Fri Jan 13 03:25:41 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Fri, 13 Jan 2012 02:25:41 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): doveadm mailbox list withholds child mailboxes Message-ID: <4F0F8815.8070609@localhost.localdomain.org> Probably I've overlooked something. But a quick search in `hg log -k doveadm` didn't show appropriate information. doveadm mailbox list -u user at example.com doesn't show child mailboxes. mailbox = N/A || \*: Sent Trash INBOX Drafts Junk-E-Mail Supplier mailbox = Supplier*: Supplier mailbox = Supplier/*: Supplier/Dell Supplier/VMware Supplier/? The same problem exists in `doveadm mailbox status` Regards, Pascal -- The trapper recommends today: defaced.1201301 at localdomain.org From moseleymark at gmail.com Fri Jan 13 04:00:08 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Thu, 12 Jan 2012 18:00:08 -0800 Subject: [Dovecot] MySQL server has gone away Message-ID: I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL server has gone away" errors, despite having multiple hosts defined in my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing the same thing with 2.0.16 on Debian Squeeze 64-bit. E.g.: Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying: MySQL server has gone away Our mail mysql servers are busy enough that wait_timeout is set to a whopping 30 seconds. On my regular boxes, I see a good deal of these in the logs. I've been doing a lot of mucking with doveadm/dsync (working on maildir->mdbox migration finally, yay!) on test boxes (same dovecot package & version) and when I get this error, despite the log saying it's retrying, it doesn't seem to be. Instead I get: dsync(root): Error: user ...: Auth USER lookup failed dsync(root): Fatal: User lookup failed: Internal error occurred. Refer to server log for more information. Watching tcpdump at the same time, it looks like it's going through some of the mysql servers, but all of them have by now disconnected and are in CLOSE_WAIT. Here's an (edited) example after doing a dsync that completes without errors, with tcpdump running in the background: # sleep 30; netstat -ant | grep 3306; dsync -C^ -u mailbox at test.com backup mdbox:~/mdbox tcp 1 0 10.1.15.129:57436 10.1.52.48:3306 CLOSE_WAIT tcp 1 0 10.1.15.129:49917 10.1.52.49:3306 CLOSE_WAIT tcp 1 0 10.1.15.129:35904 10.1.52.47:3306 CLOSE_WAIT 20:49:59.725005 IP 10.1.15.129.35904 > 10.1.52.47.3306: F 1126:1126(0) ack 807 win 1004 20:49:59.725459 IP 10.1.52.47.3306 > 10.1.15.129.35904: . ack 1127 win 123 20:49:59.725568 IP 10.1.15.129.57436 > 10.1.52.48.3306: F 1126:1126(0) ack 807 win 1004 20:49:59.725779 IP 10.1.52.48.3306 > 10.1.15.129.57436: . ack 1127 win 123 dsync(root): Error: user mailbox at test.com: Auth USER lookup failed dsync(root): Fatal: User lookup failed: Internal error occurred. Refer to server log for more information. 10.1.15.129 in this case is the dovecot server, and the 10.1.52.0/24 boxes are mysql servers. That's the same pattern I've seen almost every time. Just a FIN packet to two of the servers (ack'd by the mysql server) and then it fails. Is the retry mechanism supposed to transparently start a new connection, or is this how it works? 
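(The stale connections are easy to reproduce, by the way. With a wait_timeout as short as ours, something like the following against an idle test box - server and user names invented - should show the same FIN-then-fail sequence:

    mysql> SET GLOBAL wait_timeout = 30;

    $ doveadm user mailbox at test.com              # prime the auth worker's connections
    $ sleep 31 && doveadm user mailbox at test.com  # now hits the half-closed connections

)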
In connecting remotely to these same servers (which aren't getting production traffic, so I'm the only person connecting to them), I get seemingly random disconnects via IMAP, always coinciding with a "MySQL server has gone away" error in the logs. This is non-production, so I'm happy to turn on whatever debugging would be useful. Here's doveconf -n from the box the tcpdump was on. This box is just configured for lmtp (but have seen the same thing on one configured for IMAP/POP as well), so it's pretty small, config-wise: # 2.0.17: /etc/dovecot/dovecot/dovecot.conf # OS: Linux 3.0.9-nx i686 Debian 5.0.9 auth_cache_negative_ttl = 0 auth_cache_ttl = 0 auth_debug = yes auth_failure_delay = 0 base_dir = /var/run/dovecot/ debug_log_path = /var/log/dovecot/debug.log default_client_limit = 3005 default_internal_user = doveauth default_process_limit = 1500 deliver_log_format = M=%m, F=%f, S="%s" => %$ disable_plaintext_auth = no first_valid_uid = 199 last_valid_uid = 201 lda_mailbox_autocreate = yes listen = * log_path = /var/log/dovecot/mail.log mail_debug = yes mail_fsync = always mail_location = maildir:~/Maildir:INDEX=/var/cache/dovecot/%2Mu/%2.2Mu/%u mail_nfs_index = yes mail_nfs_storage = yes mail_plugins = zlib quota mail_privileged_group = mail mail_uid = 200 managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave mdbox_rotate_interval = 1 days mmap_disable = yes namespace { hidden = no inbox = yes list = yes location = prefix = INBOX. separator = . subscriptions = yes type = private } passdb { args = /opt/dovecot/etc/lmtp/sql.conf driver = sql } plugin { info_log_path = /var/log/dovecot/dovecot-deliver.log log_path = /var/log/dovecot/dovecot-deliver.log quota = maildir:User quota quota_rule = *:bytes=25M quota_rule2 = INBOX.Trash:bytes=+10%% quota_rule3 = *:messages=3000 sieve = ~/sieve/dovecot.sieve sieve_before = /etc/dovecot/scripts/spam.sieve sieve_dir = ~/sieve/ zlib_save = gz zlib_save_level = 3 } protocols = lmtp sieve service auth-worker { unix_listener auth-worker { mode = 0666 } user = doveauth } service auth { client_limit = 8000 unix_listener login/auth { mode = 0666 } user = doveauth } service lmtp { executable = lmtp -L process_min_avail = 10 unix_listener lmtp { mode = 0666 } } ssl = no userdb { driver = prefetch } userdb { args = /opt/dovecot/etc/lmtp/sql.conf driver = sql } verbose_proctitle = yes protocol lmtp { mail_plugins = zlib quota sieve } Thanks! From henson at acm.org Fri Jan 13 04:51:29 2012 From: henson at acm.org (Paul B. Henson) Date: Thu, 12 Jan 2012 18:51:29 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <4F0F9C31.8070009@acm.org> On 1/12/2012 6:00 PM, Mark Moseley wrote: > Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying: > MySQL server has gone away I've actually been meaning to send a similar message for the last couple of months :). We run dovecot solely as a sasl authentication provider to postfix for smtp authentication. We're currently running 2.0.15 with a handful of patches from a few months ago when Timo fixed mysql failover. 
We also see sporadic messages like that in the logs: Jan 11 01:00:57 sparky dovecot: auth-worker: Error: mysql: Query failed, retrying: MySQL server has gone away We do have a timeout on the mysql servers, so I don't necessarily mind this message, except we also see some number of these: Jan 11 01:00:57 sparky dovecot: auth-worker: Error: sql(clgeurts,108.38.64.98): Password query failed: MySQL server has gone away The mysql servers have never been down or unresponsive, if it retries, it should succeed. I'm not sure what's happening here, perhaps it tries the query on one mysql server connection (we have two configured) which has timed out, and then tries the other one, and if the other one has also timed out just fails? I also see some auth timeouts: Jan 11 22:06:02 sparky dovecot: auth: CRAM-MD5(?,200.37.175.14): Request 10232.28 timeouted after 150 secs, state=2 I'm not sure if they're related to the mysql timeouts. There are also some postfix auth errors: Jan 11 23:55:41 sparky postfix/smtpd[20994]: warning: unknown[200.37.175.14]: SASL CRAM-MD5 authentication failed: Connection lost to authentication server Which I think happen when dovecot takes too long to respond. I haven't had time to dig into it or get any debugging info, but just thought I'd pipe up when I saw your similar question :). -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From user+dovecot at localhost.localdomain.org Fri Jan 13 05:10:31 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Fri, 13 Jan 2012 04:10:31 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb):dsync umlaut problems Message-ID: <4F0FA0A7.10909@localhost.localdomain.org> All umlauts in mailbox names are lost after converting mbox/Maildir mailboxes to mdbox. [location2 scp-ed from the old server] # ls -d /srv/import/Maildir/.Gel\&APY-schte\ Elemente/ /srv/import/Maildir/.Gel&APY-schte Elemente/ # dsync -u jane at example.com -v mirror maildir:/srv/import/Maildir/ ? dsync(jane at example.com): Info: Gel?schte Elemente: only in dest ? ? # doveadm mailbox list -u jane at example.com Gel* Gel__schte_Elemente # ls -d mdbox/mailboxes/Gel* mdbox/mailboxes/Gel__schte_Elemente Regards, Pascal -- The trapper recommends today: cafefeed.1201303 at localdomain.org From mark at msapiro.net Fri Jan 13 06:37:24 2012 From: mark at msapiro.net (Mark Sapiro) Date: Thu, 12 Jan 2012 20:37:24 -0800 Subject: [Dovecot] Clients show .subscriptions folder In-Reply-To: References: Message-ID: <4F0FB504.5070802@msapiro.net> Mark Sapiro wrote: > Since upgrading from dovecot-2.1.rc1 to dovecot-2.1.rc3, some clients > are showing a .subscriptions file in the user's mbox path as a folder. > > Some clients such as T'bird on Mac OS X create this file listing > subscribed mbox files. Other clients such as T'bird on Windows XP show > this file as a folder in the folder list even though it cannot be > accessed as a folder (dovecot returns CANNOT Mailbox is not a valid > mbox file). > > I think this may be a result of uncommenting the inbox namespace in > conf.d/10-mail.conf > . > > Is there a way to supress exposing this file to clients that don't use > it? I worked around this by setting the client to show only subscribed folders. -- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. 
Dylan

From kjonca at o2.pl Fri Jan 13 08:20:13 2012
From: kjonca at o2.pl (Kamil Jońca)
Date: Fri, 13 Jan 2012 07:20:13 +0100
Subject: [Dovecot] dovecot 2.0.15 - purge errors
Message-ID: <87hb00run6.fsf@alfa.kjonca>

Dovecot 2.0.15, Debian package. Have I lost some mails? How can I check what is in the *.broken file?

--8<---------------cut here---------------start------------->8---
$doveadm -v purge
doveadm(kjonca): Error: Corrupted dbox file /home/kjonca/Mail/0/storage/m.6469 (around offset=291530): purging found mismatched offsets (291500 vs 299692, 60/215)
doveadm(kjonca): Warning: mdbox /home/kjonca/Mail/0/storage: rebuilding indexes
doveadm(kjonca): Error: Corrupted dbox file /home/kjonca/Mail/0/storage/m.6469 (around offset=599914): metadata header has bad magic value
doveadm(kjonca): Warning: dbox: Copy of the broken file saved to /home/kjonca/Mail/0/storage/m.6469.broken
doveadm(kjonca): Warning: Transaction log file /home/kjonca/Mail/0/storage/dovecot.map.index.log was locked for 211 seconds
doveadm(kjonca): Error: Purging namespace '' failed: Internal error occurred. Refer to server log for more information. [2012-01-13 06:45:07]
--8<---------------cut here---------------end--------------->8---

doveconf -n
--8<---------------cut here---------------start------------->8---
# 2.0.15: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.38+3-64 x86_64 Debian wheezy/sid
auth_debug = yes
auth_mechanisms = digest-md5 cram-md5 login plain
auth_verbose = yes
listen = alfa
log_path = /var/log/dovecot
log_timestamp = "%Y-%m-%d %H:%M:%S "
mail_debug = yes
mail_location = mdbox:~/Mail/0
mail_log_prefix = "%Us(%u): "
mail_plugins = zlib notify acl
mail_privileged_group = mail
namespace { hidden = no inbox = yes list = yes location = prefix = separator = / subscriptions = yes type = private }
namespace { hidden = no inbox = no list = yes location = mbox:~/Mail/Old:CONTROL=~/Mail/.dovecot/control/Old:INDEX=~/Mail/.dovecot/index/Old prefix = "#Old/" separator = / subscriptions = yes type = private }
passdb { args = scheme=PLAIN /etc/security/dovecot.pwd driver = passwd-file }
plugin { acl = vfile mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename mail_log_fields = uid box msgid size zlib_save = bz2 zlib_save_level = 9 }
protocols = imap
service auth { user = root }
service imap-login { process_limit = 2 process_min_avail = 1 }
service imap { vsz_limit = 512 M }
service pop3-login { process_limit = 2 process_min_avail = 1 }
service pop3 { vsz_limit = 512 M }
ssl = no
userdb { driver = passwd }
verbose_proctitle = yes
protocol imap { mail_max_userip_connections = 20 mail_plugins = zlib imap_zlib mail_log notify acl }
protocol pop3 { pop3_uidl_format = %08Xu%08Xv }
protocol lda { deliver_log_format = msgid=%m: %$ log_path = ~/log/deliver.log postmaster_address = root at localhost }
--8<---------------cut here---------------end--------------->8---
--
If anyone has a spare Toshiba G450 - I'd gladly take it ;)
----------------
Biology teaches that if something has bitten you, it is almost certain it was a female.

From goetz.reinicke at filmakademie.de Fri Jan 13 11:01:05 2012
From: goetz.reinicke at filmakademie.de (Götz Reinicke)
Date: Fri, 13 Jan 2012 10:01:05 +0100
Subject: [Dovecot] more than 200 imap processes for one user
Message-ID: <4F0FF2D1.4040909@filmakademie.de>

Hi,

recently I noticed that our dovecot server (RH EL 5.7 dovecot-1.0.7-7.el5_7.1) 'fires' up a lot of imap processes only for one user.
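(If verbose_proctitle = yes is set, the processes of a single user can be counted with something along these lines - USERNAME being a placeholder:

    $ ps ax | grep '[i]map' | grep -c USERNAME

)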
I counted 214 :-) most of them in the 'S' state, and they started nearly at the same time, within 5 minutes. Usually users have about 4 to 10...

Does anyone have an idea what the cause could be?

Thanks for any suggestion, and best regards. Götz

--
Götz Reinicke
IT-Koordinator

Tel. +49 7141 969 420
Fax +49 7141 969 55 420
E-Mail goetz.reinicke at filmakademie.de

Filmakademie Baden-Württemberg GmbH
Akademiehof 10
71638 Ludwigsburg
www.filmakademie.de

Eintragung Amtsgericht Stuttgart HRB 205016
Vorsitzender des Aufsichtsrats: Jürgen Walter MdL Staatssekretär im Ministerium für Wissenschaft, Forschung und Kunst Baden-Württemberg
Geschäftsführer: Prof. Thomas Schadt

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5161 bytes
Desc: S/MIME Kryptografische Unterschrift
URL:

From tss at iki.fi Fri Jan 13 11:36:38 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 13 Jan 2012 11:36:38 +0200
Subject: [Dovecot] MySQL server has gone away
In-Reply-To:
References:
Message-ID:

On 13.1.2012, at 4.00, Mark Moseley wrote:

> I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL
> server has gone away" errors, despite having multiple hosts defined in
> my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing
> the same thing with 2.0.16 on Debian Squeeze 64-bit.
>
> E.g.:
>
> Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying:
> MySQL server has gone away
>
> Our mail mysql servers are busy enough that wait_timeout is set to a
> whopping 30 seconds. On my regular boxes, I see a good deal of these
> in the logs. I've been doing a lot of mucking with doveadm/dsync
> (working on maildir->mdbox migration finally, yay!) on test boxes
> (same dovecot package & version) and when I get this error, despite
> the log saying it's retrying, it doesn't seem to be. Instead I get:
>
> dsync(root): Error: user ...: Auth USER lookup failed

Try with only one host in the "connect" string? My guess: Both the connections have timed out, and the retrying fails as well (there is only one retry). Although if the retrying lookup fails, there should be an error logged about it also (you don't see one?)

Also another idea to avoid them in the first place:

service auth-worker {
  idle_kill = 20
}

From tss at iki.fi Fri Jan 13 11:40:02 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 13 Jan 2012 11:40:02 +0200
Subject: [Dovecot] more than 200 imap processes for one user
In-Reply-To: <4F0FF2D1.4040909@filmakademie.de>
References: <4F0FF2D1.4040909@filmakademie.de>
Message-ID:

On 13.1.2012, at 11.01, Götz Reinicke wrote:

> recently I noticed that our dovecot server (RH EL 5.7
> dovecot-1.0.7-7.el5_7.1) 'fires' up a lot of imap processes only for one
> user.

v1.1+ limits this to 10 processes by default.

> Does anyone have an idea what the cause could be?

Some client gone crazy.
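For reference, in v1.1+ and v2.x that per-user/IP cap is the mail_max_userip_connections setting (default 10), e.g.:

    protocol imap {
      mail_max_userip_connections = 10
    }

Once a client hits it, further logins fail with a "Maximum number of connections from user+IP exceeded" error instead of spawning ever more imap processes.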
From janfrode at tanso.net Fri Jan 13 12:26:56 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Fri, 13 Jan 2012 11:26:56 +0100 Subject: [Dovecot] dsync conversion and ldap attributes Message-ID: <20120113102656.GA12031@dibs.tanso.net> I have: mail_home = /srv/mailstore/%256RHu/%d/%n mail_location = maildir:~/:INDEX=/indexes/%1u/%1.1u/%u userdb { args = /etc/dovecot/dovecot-ldap.conf.ext driver = ldap } and the dovecot-ldap.conf.ext specifies: user_attrs = mailMessageStore=home, mailLocation=mail, mailQuota=quota_rule=*:storage=%$ Now I want to convert individual users to mdbox using dsync, but how to I tell location2 to not fetch "home" and "mail" from ldap and use different mail_location (mdbox:~/mdbox) ? I.e. I want converted accounts stored in mail_location mdbox:/srv/mailstore/%256RHu/%d/%n/mdbox. -jf From CMarcus at Media-Brokers.com Fri Jan 13 13:38:01 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Fri, 13 Jan 2012 06:38:01 -0500 Subject: [Dovecot] Need help with details for new Dovecot plugin In-Reply-To: References: <1326308010.2329.47.camel@rover> <1326308231.2329.50.camel@rover> Message-ID: <4F101799.6040002@Media-Brokers.com> On 2012-01-12 6:10 PM, Maarten Bezemer wrote: > Of course I don't know anything about the details of the project (number > of users, requirements for speed of MWI updates, mail storage type, > etc.) but if it's not a very large setup and mail storage is mbox or > maildir, I'd probably go for cron-based external monitoring using find > and stuff like that. Maybe even with login scripting for extra triggering. I know that dovecot supports inotify (not sure how or in what way, and ianap, so may be totally off base), so maybe that could be leveraged? -- Best regards, Charles From CMarcus at Media-Brokers.com Fri Jan 13 13:41:51 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Fri, 13 Jan 2012 06:41:51 -0500 Subject: [Dovecot] Need help with details for new Dovecot plugin - was: Re: (no subject) In-Reply-To: References: <1326308010.2329.47.camel@rover> Message-ID: <4F10187F.3010507@Media-Brokers.com> On 2012-01-12 6:17 PM, Timo Sirainen wrote: > On 11.1.2012, at 20.53, Geoffrey Broadwell wrote: >> So now the hard part is writing the piece that I can't just crib from >> elsewhere -- making sure that I hook every place in Dovecot that the >> user's voicemail folder can be changed in a way that would change it >> between having one or more unread messages, and not having any unread >> messages at all (or vice-versa, of course). At the same time, I want to >> minimize the performance impact to Dovecot (and the load on the UDP >> server) by only hooking the places I need to, filtering out as many >> false positives as I can without introducing massive complexity, and >> only pinging the UDP server when it's most likely to notice a change in >> the state of that user's voicemail server. > I think notify plugin would help you do this the easiest way. See > mail_log plugin for an example of how to use it. Oops, should have read all messages before replying (I usually skip messages with (no subject), but I try to read everything on some lists (dovecot is one of them)... Timo - searching on 'inotify' or 'notify' on both wiki1 and wiki2 has 'no results'... maybe the search indexes need to be updated? Or, is it just that there really is no documentation of inotify on either of the wikis? 
--
Best regards,
Charles

From joseba.torre at ehu.es Fri Jan 13 15:59:25 2012
From: joseba.torre at ehu.es (Joseba Torre)
Date: Fri, 13 Jan 2012 14:59:25 +0100
Subject: [Dovecot] Dsync and compressed mailboxes
Message-ID: <4F1038BD.1010605@ehu.es>

Hi,

I will begin two migrations next week, and in both cases I plan to use compressed mailboxes with mdbox format. But at the last minute one doubt has appeared: is dsync aware of compressed mailboxes? I'm not sure if

dsync -u $USER mirror mdbox:compressed_mdbox_path

works, or if I have to use something else (I guess that with a running dovecot dsync backup should work).

Thanks.

From ivo at crm.walltopia.com Fri Jan 13 19:11:30 2012
From: ivo at crm.walltopia.com (IVO GELOV (CRM))
Date: Fri, 13 Jan 2012 19:11:30 +0200
Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation
Message-ID:

Hello to all members. I have been using Dovecot for 5 years, but this is my first post here.

I am aware of the various autoresponder scripts for vacation autoreplies (I am using Virtual Vacation 3.1 by Mischa Peters). I have an issue with auto-replies - they are vulnerable to spamming with forged email addresses. Forging can be prevented with several Postfix settings, which I used in the past - but was forced to remove, because our company occasionally has clients with improper configurations, and those settings prevented us from receiving their legitimate mail (and this of course is not good for the business).

So I have thought of another idea. Since I use Dovecot-auth to verify mailbox existence - I wonder whether it is possible to somehow indicate a specific error code (and hopefully descriptive text also) to Postfix (e.g. 450 or some other temporary failure) when the owner of the mailbox is currently on vacation?

Best wishes,
IVO GELOV

From CMarcus at Media-Brokers.com Fri Jan 13 20:03:36 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Fri, 13 Jan 2012 13:03:36 -0500
Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation
In-Reply-To:
References:
Message-ID: <4F1071F8.4080202@Media-Brokers.com>

On 2012-01-13 12:11 PM, IVO GELOV (CRM) wrote:
> I am aware of the various autoresponder scripts for vacation autoreplies
> (I am using Virtual Vacation 3.1 by Mischa Peters).
> I have an issue with auto-replies - they are vulnerable to spamming with
> forged email addresses.

I think you are using an extremely old/outdated version...

The latest version would not suffer this problem, because it has a lot of message types that it will *not* respond to, including messages appearing to be from yourself...

Get the latest version from the postfixadmin package.

However, I don't know how to use it without also using postfixadmin (it creates databases for storing the vacation message, etc)...

--
Best regards,
Charles

From moseleymark at gmail.com Fri Jan 13 20:29:45 2012
From: moseleymark at gmail.com (Mark Moseley)
Date: Fri, 13 Jan 2012 10:29:45 -0800
Subject: [Dovecot] MySQL server has gone away
In-Reply-To:
References:
Message-ID:

On Fri, Jan 13, 2012 at 1:36 AM, Timo Sirainen wrote:
> On 13.1.2012, at 4.00, Mark Moseley wrote:
>
>> I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL
>> server has gone away" errors, despite having multiple hosts defined in
>> my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing
>> the same thing with 2.0.16 on Debian Squeeze 64-bit.
>> >> E.g.: >> >> Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying: >> MySQL server has gone away >> >> Our mail mysql servers are busy enough that wait_timeout is set to a >> whopping 30 seconds. On my regular boxes, I see a good deal of these >> in the logs. I've been doing a lot of mucking with doveadm/dsync >> (working on maildir->mdbox migration finally, yay!) on test boxes >> (same dovecot package & version) and when I get this error, despite >> the log saying it's retrying, it doesn't seem to be. Instead I get: >> >> dsync(root): Error: user ...: Auth USER lookup failed > > Try with only one host in the "connect" string? My guess: Both the connections have timed out, and the retrying fails as well (there is only one retry). Although if the retrying lookup fails, there should be an error logged about it also (you don't see one?) > > Also another idea to avoid them in the first place: > > service auth-worker { > ?idle_kill = 20 > } > With just one 'connect' host, it seems to reconnect just fine (using the same tests as above) and I'm not seeing the same error. It worked every time that I tried, with no complaints of "MySQL server has gone away". If there are multiple hosts, it seems like the most robust thing to do would be to exhaust the existing connections and if none of those succeed, then start a new connection to one of them. It will probably result in much more convoluted logic but it'd probably match better what people expect from a retry. Alternatively, since in all my tests, the mysql server has closed the connection prior to this, is the auth worker not recognizing its connection is already half-closed (in which case, it probably shouldn't even consider it a legitimate connection and just automatically reconnect, i.e. try #1, not the retry, which would happen after another failure). I'll give the idle_kill a try too. I kind of like the idea of idle_kill for auth processes anyway, just to free up some connections on the mysql server. From ghandidrivesahumvee at rocketfish.com Fri Jan 13 20:59:02 2012 From: ghandidrivesahumvee at rocketfish.com (Dovecot-GDH) Date: Fri, 13 Jan 2012 10:59:02 -0800 Subject: [Dovecot] Dsync and compressed mailboxes In-Reply-To: <4F1038BD.1010605@ehu.es> References: <4F1038BD.1010605@ehu.es> Message-ID: <01D2B152-D1C3-4A89-8CE7-608357ADCBC2@rocketfish.com> The dsync process will be aware of whatever configuration file it refers to. The best thing to do is to set up a separate instance of Dovecot with compression enabled (really not that hard to do) and point dsync to that separate instances's configuration. Mailboxes written by dsync will be compressed. On Jan 13, 2012, at 5:59 AM, Joseba Torre wrote: > Hi, > > I will begin two migrations next week, and in both cases I plan to use compressed mailboxes with mdbox format. But in the last minute one doubt has appeared: is dsync aware of compressed mailboxes? I'm not sure if > > dsync -u $USER mirror mdbox:compressed_mdbox_path > > works, or if I have to use something else (I guess that with a running dovecot dsync backup should work). > > Thanks. 
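Concretely, the separate instance can be just an alternative config file that includes the live one and enables the zlib plugin - the file name here is invented, and the zlib settings mirror the configs quoted earlier in this thread:

    # /etc/dovecot/dovecot-compress.conf
    !include /etc/dovecot/dovecot.conf
    mail_plugins = $mail_plugins zlib
    plugin {
      zlib_save = gz
      zlib_save_level = 6
    }

and then, assuming your dsync build accepts a -c option (check its usage output):

    dsync -c /etc/dovecot/dovecot-compress.conf -u $USER mirror mdbox:compressed_mdbox_path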
From robert at schetterer.org Fri Jan 13 21:38:28 2012 From: robert at schetterer.org (Robert Schetterer) Date: Fri, 13 Jan 2012 20:38:28 +0100 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <4F108834.60709@schetterer.org> Am 13.01.2012 19:29, schrieb Mark Moseley: > On Fri, Jan 13, 2012 at 1:36 AM, Timo Sirainen wrote: >> On 13.1.2012, at 4.00, Mark Moseley wrote: >> >>> I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL >>> server has gone away" errors, despite having multiple hosts defined in >>> my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing >>> the same thing with 2.0.16 on Debian Squeeze 64-bit. >>> >>> E.g.: >>> >>> Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying: >>> MySQL server has gone away >>> >>> Our mail mysql servers are busy enough that wait_timeout is set to a >>> whopping 30 seconds. On my regular boxes, I see a good deal of these >>> in the logs. I've been doing a lot of mucking with doveadm/dsync >>> (working on maildir->mdbox migration finally, yay!) on test boxes >>> (same dovecot package & version) and when I get this error, despite >>> the log saying it's retrying, it doesn't seem to be. Instead I get: >>> >>> dsync(root): Error: user ...: Auth USER lookup failed >> >> Try with only one host in the "connect" string? My guess: Both the connections have timed out, and the retrying fails as well (there is only one retry). Although if the retrying lookup fails, there should be an error logged about it also (you don't see one?) >> >> Also another idea to avoid them in the first place: >> >> service auth-worker { >> idle_kill = 20 >> } >> > > With just one 'connect' host, it seems to reconnect just fine (using > the same tests as above) and I'm not seeing the same error. It worked > every time that I tried, with no complaints of "MySQL server has gone > away". > > If there are multiple hosts, it seems like the most robust thing to do > would be to exhaust the existing connections and if none of those > succeed, then start a new connection to one of them. It will probably > result in much more convoluted logic but it'd probably match better > what people expect from a retry. > > Alternatively, since in all my tests, the mysql server has closed the > connection prior to this, is the auth worker not recognizing its > connection is already half-closed (in which case, it probably > shouldn't even consider it a legitimate connection and just > automatically reconnect, i.e. try #1, not the retry, which would > happen after another failure). > > I'll give the idle_kill a try too. I kind of like the idea of > idle_kill for auth processes anyway, just to free up some connections > on the mysql server. by the way , if you use sql for auth have you tried auth caching ? http://wiki.dovecot.org/Authentication/Caching i.e. # Authentication cache size (e.g. 10M). 0 means it's disabled. Note that # bsdauth, PAM and vpopmail require cache_key to be set for caching to be used. auth_cache_size = 10M # Time to live for cached data. After TTL expires the cached record is no # longer used, *except* if the main database lookup returns internal failure. # We also try to handle password changes automatically: If user's previous # authentication was successful, but this one wasn't, the cache isn't used. # For now this works only with plaintext authentication. auth_cache_ttl = 1 hour # TTL for negative hits (user not found, password mismatch). # 0 disables caching them completely. 
auth_cache_negative_ttl = 0 -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From moseleymark at gmail.com Fri Jan 13 22:45:03 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Fri, 13 Jan 2012 12:45:03 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <4F108834.60709@schetterer.org> References: <4F108834.60709@schetterer.org> Message-ID: On Fri, Jan 13, 2012 at 11:38 AM, Robert Schetterer wrote: > Am 13.01.2012 19:29, schrieb Mark Moseley: >> On Fri, Jan 13, 2012 at 1:36 AM, Timo Sirainen wrote: >>> On 13.1.2012, at 4.00, Mark Moseley wrote: >>> >>>> I'm running 2.0.17 and I'm still seeing a decent amount of "MySQL >>>> server has gone away" errors, despite having multiple hosts defined in >>>> my auth userdb 'connect'. This is Debian Lenny 32-bit and I'm seeing >>>> the same thing with 2.0.16 on Debian Squeeze 64-bit. >>>> >>>> E.g.: >>>> >>>> Jan 12 20:30:33 auth-worker: Error: mysql: Query failed, retrying: >>>> MySQL server has gone away >>>> >>>> Our mail mysql servers are busy enough that wait_timeout is set to a >>>> whopping 30 seconds. On my regular boxes, I see a good deal of these >>>> in the logs. I've been doing a lot of mucking with doveadm/dsync >>>> (working on maildir->mdbox migration finally, yay!) on test boxes >>>> (same dovecot package & version) and when I get this error, despite >>>> the log saying it's retrying, it doesn't seem to be. Instead I get: >>>> >>>> dsync(root): Error: user ...: Auth USER lookup failed >>> >>> Try with only one host in the "connect" string? My guess: Both the connections have timed out, and the retrying fails as well (there is only one retry). Although if the retrying lookup fails, there should be an error logged about it also (you don't see one?) >>> >>> Also another idea to avoid them in the first place: >>> >>> service auth-worker { >>> ?idle_kill = 20 >>> } >>> >> >> With just one 'connect' host, it seems to reconnect just fine (using >> the same tests as above) and I'm not seeing the same error. It worked >> every time that I tried, with no complaints of "MySQL server has gone >> away". >> >> If there are multiple hosts, it seems like the most robust thing to do >> would be to exhaust the existing connections and if none of those >> succeed, then start a new connection to one of them. It will probably >> result in much more convoluted logic but it'd probably match better >> what people expect from a retry. >> >> Alternatively, since in all my tests, the mysql server has closed the >> connection prior to this, is the auth worker not recognizing its >> connection is already half-closed (in which case, it probably >> shouldn't even consider it a legitimate connection and just >> automatically reconnect, i.e. try #1, not the retry, which would >> happen after another failure). >> >> I'll give the idle_kill a try too. I kind of like the idea of >> idle_kill for auth processes anyway, just to free up some connections >> on the mysql server. > > by the way , if you use sql for auth have you tried auth caching ? > > http://wiki.dovecot.org/Authentication/Caching > > i.e. > > # Authentication cache size (e.g. 10M). 0 means it's disabled. Note that > # bsdauth, PAM and vpopmail require cache_key to be set for caching to > be used. > > auth_cache_size = 10M > > # Time to live for cached data. After TTL expires the cached record is no > # longer used, *except* if the main database lookup returns internal > failure. 
> # We also try to handle password changes automatically: If user's previous > # authentication was successful, but this one wasn't, the cache isn't used. > # For now this works only with plaintext authentication. > > auth_cache_ttl = 1 hour > > # TTL for negative hits (user not found, password mismatch). > # 0 disables caching them completely. > > auth_cache_negative_ttl = 0 Yup, we have caching turned on for our production boxes. On this particular box, I'd just shut off caching so that I could work on a script for converting from maildir->mdbox and run it repeatedly on the same mailbox. I got tired of restarting dovecot between each test :) From user+dovecot at localhost.localdomain.org Fri Jan 13 23:04:12 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Fri, 13 Jan 2012 22:04:12 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb):dsync umlaut problems In-Reply-To: <4F0FA0A7.10909@localhost.localdomain.org> References: <4F0FA0A7.10909@localhost.localdomain.org> Message-ID: <4F109C4C.5050402@localhost.localdomain.org> On 01/13/2012 04:10 AM Pascal Volk wrote: > All umlauts in mailbox names are lost after converting mbox/Maildir > mailboxes to mdbox. > > # ls -d /srv/import/Maildir/.Gel\&APY-schte\ Elemente/ > /srv/import/Maildir/.Gel&APY-schte Elemente/ > ? > # doveadm mailbox list -u jane at example.com Gel* > Gel__schte_Elemente Oh, and child mailboxes with umlauts becomes top level mailboxes: # ls -d /srv/import/Maildir/.INBOX.Projekte.K\&APY-ln /srv/import/Maildir/.INBOX.Projekte.K&APY-ln #ls -d mdbox/mailboxes/INBOX_Projekte_K__ln mdbox/mailboxes/INBOX_Projekte_K__ln Regards, Pascal -- The trapper recommends today: f007ba11.1201305 at localdomain.org From user+dovecot at localhost.localdomain.org Sat Jan 14 00:04:33 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Fri, 13 Jan 2012 23:04:33 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): Panic: file ostream.c: line 173 (o_stream_sendv): assertion failed: (stream->stream_errno != 0) Message-ID: <4F10AA71.6030901@localhost.localdomain.org> Hi Timo, today some imap processes are crashed. Regards, Pascal -- The trapper recommends today: f007ba11.1201322 at localdomain.org -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: core.imap.1326475521-24777_bt.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: doveconf-n.txt URL: From info_postfix at gmx.ch Sat Jan 14 00:15:02 2012 From: info_postfix at gmx.ch (maximus12) Date: Fri, 13 Jan 2012 14:15:02 -0800 (PST) Subject: [Dovecot] Server Time 45min ahead In-Reply-To: <20120112115705.GS1341@charite.de> References: <33126760.post@talk.nabble.com> <20120112114350.GQ1341@charite.de> <33127262.post@talk.nabble.com> <20120112115705.GS1341@charite.de> Message-ID: <33137241.post@talk.nabble.com> Hi Ralf, Thanks for your help. Dovecot stop Change the server time Dovecot start Got a warning but it worked! Thanks a lot for your help. (With dovecot 1.x) -- View this message in context: http://old.nabble.com/Server-Time-45min-ahead-tp33126760p33137241.html Sent from the Dovecot mailing list archive at Nabble.com. From henson at acm.org Sat Jan 14 00:46:08 2012 From: henson at acm.org (Paul B. 
Henson) Date: Fri, 13 Jan 2012 14:46:08 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <20120113224607.GS4844@bender.csupomona.edu> On Fri, Jan 13, 2012 at 01:36:38AM -0800, Timo Sirainen wrote: > Also another idea to avoid them in the first place: > > service auth-worker { > idle_kill = 20 > } Ah, set the auth-worker timeout to less than the mysql timeout to prevent a stale mysql connection from ever being used. I'll try that, thanks. -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From moseleymark at gmail.com Sat Jan 14 01:19:28 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Fri, 13 Jan 2012 15:19:28 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <20120113224607.GS4844@bender.csupomona.edu> References: <20120113224607.GS4844@bender.csupomona.edu> Message-ID: On Fri, Jan 13, 2012 at 2:46 PM, Paul B. Henson wrote: > On Fri, Jan 13, 2012 at 01:36:38AM -0800, Timo Sirainen wrote: > >> Also another idea to avoid them in the first place: >> >> service auth-worker { >> ? idle_kill = 20 >> } > > Ah, set the auth-worker timeout to less than the mysql timeout to > prevent a stale mysql connection from ever being used. I'll try that, > thanks. I gave that a try. Sometimes it seems to kill off the auth-worker but not till after a minute or so (with idle_kill = 20). Other times, the worker stays around for more like 5 minutes (I gave up watching), despite being idle -- and I'm the only person connecting to it, so it's definitely idle. Does auth-worker perhaps only wake up every so often to check its idle status? To test, I kicked off a dsync, then grabbed a netstat: tcp 0 0 10.1.15.129:40070 10.1.52.47:3306 ESTABLISHED 29146/auth worker [ tcp 0 0 10.1.15.129:33369 10.1.52.48:3306 ESTABLISHED 29146/auth worker [ tcp 0 0 10.1.15.129:54083 10.1.52.49:3306 ESTABLISHED 29146/auth worker [ then kicked off this loop: # while true; do date; ps p 29146 |tail -n1; sleep 1; done Fri Jan 13 18:05:14 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb] Fri Jan 13 18:05:15 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb] .... More lines of the loop ... Fri Jan 13 18:05:35 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb] 18:05:36.252976 IP 10.1.52.48.3306 > 10.1.15.129.33369: F 77:77(0) ack 92 win 91 18:05:36.288549 IP 10.1.15.129.33369 > 10.1.52.48.3306: . ack 78 win 913 Fri Jan 13 18:05:36 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb] 18:05:37.196204 IP 10.1.52.49.3306 > 10.1.15.129.54083: F 806:806(0) ack 1126 win 123 18:05:37.228594 IP 10.1.15.129.54083 > 10.1.52.49.3306: . ack 807 win 1004 18:05:37.411955 IP 10.1.52.47.3306 > 10.1.15.129.40070: F 806:806(0) ack 1126 win 123 18:05:37.448573 IP 10.1.15.129.40070 > 10.1.52.47.3306: . ack 807 win 1004 Fri Jan 13 18:05:37 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb] ... more lines of the loop ... Fri Jan 13 18:10:13 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb] Fri Jan 13 18:10:14 EST 2012 29146 ? S 0:00 dovecot/auth worker [0 wait, 0 passdb, 0 userdb] ^C at which point I bailed out. Looking again a couple of minutes later, it was gone. Nothing else was going on and the logs don't show any activity between 18:05:07 and 18:10:44. 
From henson at acm.org Sat Jan 14 02:19:12 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 13 Jan 2012 16:19:12 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <4F108834.60709@schetterer.org> References: <4F108834.60709@schetterer.org> Message-ID: <20120114001912.GZ4844@bender.csupomona.edu> On Fri, Jan 13, 2012 at 11:38:28AM -0800, Robert Schetterer wrote: > by the way , if you use sql for auth have you tried auth caching ? > > http://wiki.dovecot.org/Authentication/Caching Hmm, hadn't tried that, but flipped it on to see how it might work out. The only tradeoff is a potential delay between when an account is disabled and when it can stop authenticating. I set the timeout to 10 minutes for now, with an hour timeout for negative caching. That page says you can send a USR2 signal to the auth process for cache stats? That doesn't seem to work. OTOH, that page is for version 1, not 2; is there some other way to generate cache stats in version 2? Thanks... -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From henson at acm.org Sat Jan 14 03:54:29 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 13 Jan 2012 17:54:29 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: References: Message-ID: <4F10E055.4030303@acm.org> On 1/13/2012 10:29 AM, Mark Moseley wrote: > connection prior to this, is the auth worker not recognizing its > connection is already half-closed (in which case, it probably > shouldn't even consider it a legitimate connection and just > automatically reconnect, i.e. try #1, not the retry, which would > happen after another failure). I don't think there's any way to tell from the mysql api that the server has closed the connection short of trying to use it and getting that specific error. I suppose that specific error could be special cased as an immediate "try again with no penalty" rather than considered a failure. -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768 From robert at schetterer.org Sat Jan 14 10:01:12 2012 From: robert at schetterer.org (Robert Schetterer) Date: Sat, 14 Jan 2012 09:01:12 +0100 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <20120114001912.GZ4844@bender.csupomona.edu> References: <4F108834.60709@schetterer.org> <20120114001912.GZ4844@bender.csupomona.edu> Message-ID: <4F113648.2000902@schetterer.org> Am 14.01.2012 01:19, schrieb Paul B. Henson: > On Fri, Jan 13, 2012 at 11:38:28AM -0800, Robert Schetterer wrote: > >> by the way , if you use sql for auth have you tried auth caching ? >> >> http://wiki.dovecot.org/Authentication/Caching > > Hmm, hadn't tried that, but flipped it on to see how it might work out. > The only tradeoff is a potential delay between when an account is > disabled and when it can stop authenticating. I set the timeout to 10 > minutes for now, with an hour timeout for negative caching. dont know if i unserstand you right perhaps this is what you mean, i use this with/cause fail2ban # TTL for negative hits (user not found, password mismatch). # 0 disables caching them completely. auth_cache_negative_ttl = 0 > > That page says you can send a USR2 signal to the auth process for cache > stats? That doesn't seem to work. 
OTOH, that page is for version 1, not > 2; is there some other way to generate cache stats in version 2? auth cache works with dove 2, no idea about dove 1 ,didnt test, but i guess it does > > Thanks... > -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From yubao.liu at gmail.com Sat Jan 14 15:49:31 2012 From: yubao.liu at gmail.com (Yubao Liu) Date: Sat, 14 Jan 2012 21:49:31 +0800 Subject: [Dovecot] [PATCH] support master user to login as other users by DIGEST-MD5 SASL proxy authorization Message-ID: <4F1187EB.5070002@gmail.com> Hi Timo, As http://wiki2.dovecot.org/Authentication/MasterUsers states, currently the first way for master users to log in as other users only supports PLAIN SASL mechanism, and because DIGEST-MD5 uses user name to calculate MD5 digest, the second way can't support DIGEST-MD5. I enhance the code to support DIGEST-MD5 too for the first way, please review the attached patch against dovecot-2.0 HG tip. The patch also contains a little fix to "nonce-count" string, RFC 2831 shows it should be "nc". I tested it on Debian Wheezy, it seems OK. Below are my verification steps. (Debian packaged 2.0.15 + http://hg.dovecot.org/dovecot-2.0/rev/bed15faedfd4 + attached patch) $ doveconf -n # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 3.1.0-1-686-pae i686 Debian wheezy/sid auth_default_realm = corp.example.com auth_krb5_keytab = /etc/dovecot.keytab auth_master_user_separator = * auth_mechanisms = gssapi digest-md5 cram-md5 auth_realms = corp.example.com auth_username_format = %n first_valid_gid = 1000 first_valid_uid = 1000 mail_location = mdbox:/srv/mail/%u/Mail managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave passdb { args = /etc/dovecot/master-users driver = passwd-file master = yes } passdb { driver = pam } plugin { sieve = /srv/mail/%u/.dovecot.sieve sieve_dir = /srv/mail/%u/sieve } protocols = " imap lmtp sieve" service auth { unix_listener auth-client { group = Debian-exim mode = 0660 } } ssl_cert = , method=DIGEST-MD5, rip=127.0.0.1, lip=127.0.1.1, mpid=15974, TLS Jan 14 20:35:32 gold dovecot: imap: Debug: Added userdb setting: plugin/master_user=webmail2 Jan 14 20:35:32 gold dovecot: imap(dieken): Debug: Effective uid=1000, gid=1000, home=/srv/mail/dieken Jan 14 20:35:32 gold dovecot: imap(dieken): Debug: fs: root=/srv/mail/dieken/Mail, index=, control=, inbox=, alt= Jan 14 20:35:32 gold dovecot: imap(dieken): Debug: Namespace : Using permissions from /srv/mail/dieken/Mail: mode=0700 gid=-1 Jan 14 20:35:34 gold dovecot: imap(dieken): Disconnected: Logged out bytes=8/329 Jan 14 20:35:34 gold dovecot: imap-login: Warning: SSL alert: where=0x4008, ret=256: warning close notify [127.0.0.1] Jan 14 21:04:50 gold dovecot: imap(dieken): Disconnected: Logged out bytes=131/533 Jan 14 21:33:59 gold dovecot: imap-login: Login: user=, method=DIGEST-MD5, rip=127.0.0.1, lip=127.0.1.1, mpid=16114, TLS Jan 14 21:34:03 gold dovecot: imap(dieken): Disconnected: Logged out bytes=8/329 Jan 14 21:36:56 gold dovecot: imap-login: Disconnected (no auth attempts): rip=127.0.0.1, lip=127.0.1.1 Jan 14 21:36:56 gold dovecot: imap-login: Disconnected (no auth attempts): rip=127.0.0.1, lip=127.0.1.1 Jan 14 21:36:58 gold dovecot: imap-login: Login: user=, method=DIGEST-MD5, rip=127.0.0.1, lip=127.0.1.1, mpid=16135, TLS Jan 14 21:37:00 gold dovecot: imap(dieken): Disconnected: Logged 
out bytes=10/377

Regards,
Yubao Liu

-------------- next part --------------
A non-text attachment was scrubbed...
Name: digest-md5-sasl-proxy-authorization.patch
Type: text/x-patch
Size: 2322 bytes
Desc: not available
URL:

From AxelLuttgens at swing.be Sat Jan 14 19:03:22 2012
From: AxelLuttgens at swing.be (Axel Luttgens)
Date: Sat, 14 Jan 2012 18:03:22 +0100
Subject: [Dovecot] v2.x services documentation
In-Reply-To: <04D662E7-2A0A-448B-BA21-1E337A400CA6@iki.fi>
References: <04D662E7-2A0A-448B-BA21-1E337A400CA6@iki.fi>
Message-ID: <92A86804-CEEE-4EB6-9EE7-FC8B7905AA2C@swing.be>

Le 7 déc. 2011 à 15:22, Timo Sirainen a écrit :

> If you've ever wanted to know everything about the service {} blocks, this should be quite helpful: http://wiki2.dovecot.org/Services

Hello Timo,

I know, I'm quite late reading the messages, and this is really a nice and useful one; thanks!

Up to now I have only had the opportunity to quickly read the wiki page, and I have a small question; one may read:

process_min_avail Minimum number of processes that always should be available to accept more client connections. For service_limit=1 processes this decreases the latency for handling new connections. For service_limit!=1 processes it could be set to the number of CPU cores on the system to balance the load among them.

What's that service_limit setting?

TIA,
Axel

From ivo at crm.walltopia.com Sat Jan 14 19:23:58 2012
From: ivo at crm.walltopia.com (IVO GELOV (CRM))
Date: Sat, 14 Jan 2012 19:23:58 +0200
Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation
In-Reply-To: <4F1071F8.4080202@Media-Brokers.com>
References: <4F1071F8.4080202@Media-Brokers.com>
Message-ID:

On Fri, 13 Jan 2012 20:03:36 +0200, Charles Marcus wrote:

> On 2012-01-13 12:11 PM, IVO GELOV (CRM) wrote:
>> I am aware of the various autoresponder scripts for vacation autoreplies
>> (I am using Virtual Vacation 3.1 by Mischa Peters).
>> I have an issue with auto-replies - they are vulnerable to spamming with
>> forged email addresses.
>
> I think you are using an extremely old/outdated version...
>
> The latest version would not suffer this problem, because it has a lot
> of message types that it will *not* respond to, including messages
> appearing to be from yourself...
>
> Get the latest version from the postfixadmin package.
>
> However, I don't know how to use it without also using postfixadmin (it
> creates databases for storing the vacation message, etc)...
>
From robert at schetterer.org Sat Jan 14 21:24:39 2012 From: robert at schetterer.org (Robert Schetterer) Date: Sat, 14 Jan 2012 20:24:39 +0100 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: References: <4F1071F8.4080202@Media-Brokers.com> Message-ID: <4F11D677.2040706@schetterer.org> Am 14.01.2012 18:23, schrieb IVO GELOV (CRM): > On Fri, 13 Jan 2012 20:03:36 +0200, Charles Marcus > wrote: > >> On 2012-01-13 12:11 PM, IVO GELOV (CRM) wrote: >>> I am aware of the various autoresponder scripts for vacation autoreplies >>> (I am using Virtual Vacation 3.1 by Mischa Peters). >>> I have an issue with auto-replies - it is vulnerable to spamming with >>> forged email address. >> >> I think you are using an extremely old/outdated version... >> >> The latest version would not suffer this problem, because it has a lot >> of message types that it will *not* respond to, including messages >> appearing to be from yourself... >> >> Get the latest version fro the postfixadmin package. >> >> However, I don't know how to use it without also using postfixadmin (it >> creates databases for storing the vacation message, etc)... >> > > I have downloaded the latest version 4.0 - but it seems there is no way > to prevent > spammers to use forged email addresses. I decided to remove the vacation > feature > from our corporate mail server, because it actually opens a backdoor > (even though > only when someone decides to activate his vacation auto-reply) for > spammers and > puts a risk on the company (our server can be blacklisted). > > I still think that my idea with custom error codes is more useful - if > the user is > on vacation, the message is rejected immediately (no auto-reply is sent) > and sender > can see (hopefully, because most users just ignore error messages) the > reason why > the messages was rejected. > > Probably Dovecot-auth does not offer such flexibility right now - but it > worths > considering. your right there is no way make perfekt sure that someone not uses your emailaddress "from and to" for spamming ( dkim and spf may help little ) now i hope i understand your problem right a good way is to use dove lmtp with sieve also good antispam in postfix, perhaps a before global antispam sieve filter rule, that catched spam is sorted in some special junk folder , and so its not handled by incomming in mailbox inbox with what userdefined sieve rule ( i.e Vacation ) ever look here http://wiki.dovecot.org/LDA/Sieve for ideas anyway if you use other vacation tecs, make sure allready flagged spam by i.e clamav, amavis, spamassassin etc in postfix stage is not handled by your vacation service , script etc. 
as far i remember i gave some patch to the postfixadmin vacation script doing exact this there is no ultimate way not to answer spammers by vacation or other auto script etc but if you do right , the problem goes nearly null the risk of beeing blacklisted by third party exist ever when i.e forwarding ( redirect ) mail to outside ( so antispam filter is a "must have" here ), a simple vacation message only, is no high or none risk, as long it does not include any part of the real spam message also vacation should only answer once in some time period, which should protect against loops and flooding others the corect answer to your subject would be if you want postfix simple to reject mails for some mailaddress with error code you like if the mailaddressowner is away, use a postfix reject table, if you want with i.e in/with mysql and some gui ( i.e. php ) so the mailaddressowner can edit the table himself anyway, i personally dont use vacation anymore for many reasons , but others find it hardly needed -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From mail at kinesis.me Sat Jan 14 22:17:58 2012 From: mail at kinesis.me (Charles Thompson) Date: Sat, 14 Jan 2012 12:17:58 -0800 Subject: [Dovecot] IMAP maillog error: file lib.c: line 37 (nearest_power): assertion failed: (num <= ((size_t)1 << (BITS_IN_SIZE_T-1))) Message-ID: Dear Mailing List, What does this error mean and how do I fix it? I am on a Centos 4.9 >From /var/log/maillog : Jan 14 11:54:51 hostname imap(username): file lib.c: line 37 (nearest_power): assertion failed: (num <= ((size_t)1 << (BITS_IN_SIZE_T-1))) Version information : root at hostname[/etc/rc.d/rc3.d]# dovecot --version ; dovecot -n ; cat /etc/*release* 0.99.11 Usage: dovecot [-F] [-c ] Fatal: Unknown argument: -n CentOS release 4.9 (Final) root at hostname[/etc/rc.d/rc3.d]# Thank you. -- Sincerely, Charles Thompson *UNIX & Linux Administrator* Tel* : *(650) 906-9156 Web : www.kinesis.me Mail: mail at kinesis.me From user+dovecot at localhost.localdomain.org Sat Jan 14 22:45:29 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Sat, 14 Jan 2012 21:45:29 +0100 Subject: [Dovecot] IMAP maillog error: file lib.c: line 37 (nearest_power): assertion failed: (num <= ((size_t)1 << (BITS_IN_SIZE_T-1))) In-Reply-To: References: Message-ID: <4F11E969.2000909@localhost.localdomain.org> On 01/14/2012 09:17 PM Charles Thompson wrote: > Dear Mailing List, > > What does this error mean and how do I fix it? I am on a Centos 4.9 > > From /var/log/maillog : > Jan 14 11:54:51 hostname imap(username): file lib.c: line 37 > (nearest_power): assertion failed: (num <= ((size_t)1 << > (BITS_IN_SIZE_T-1))) > > > Version information : > root at hostname[/etc/rc.d/rc3.d]# dovecot --version ; dovecot -n ; cat > /etc/*release* > 0.99.11 > Usage: dovecot [-F] [-c ] > Fatal: Unknown argument: -n > CentOS release 4.9 (Final) > root at hostname[/etc/rc.d/rc3.d]# > > Thank you. 
To make it sort: Upgrade Regards, Pascal -- The trapper recommends today: cafefeed.1201421 at localdomain.org From CMarcus at Media-Brokers.com Sun Jan 15 14:33:24 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 15 Jan 2012 07:33:24 -0500 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: References: <4F1071F8.4080202@Media-Brokers.com> Message-ID: <4F12C794.6070609@Media-Brokers.com> On 2012-01-14 12:23 PM, IVO GELOV (CRM) wrote: > I have downloaded the latest version 4.0 - but it seems there is no > way to prevent spammers to use forged email addresses. I decided to > remove the vacation feature from our corporate mail server, because > it actually opens a backdoor (even though only when someone decides > to activate his vacation auto-reply) for spammers and puts a risk on > the company (our server can be blacklisted). Sorry, I misread your message... However, (I *think*) there *is* a simple solution to your problem, if I now understand it correctly... Simply disallow anyone sending from an email address in your domain from sending without SASL_AUTHing... The way I do this is: in main.cf (I put all of my restrictions in smtpd_recipient_restrictions) add: check_sender_access ${hash}/nospoof, somewhere after reject_unauth_destination *but before any RBL checks) where nospoof contains: # Prevent spoofing from domains that we own allowed_address1 at example.com OK allowed_address2 at example.com OK example.com REJECT You must use sasl_auth to send from one of our example.com email addresses... and of course be sure to postmap the nospoof database after making any changes... -- Best regards, Charles From CMarcus at Media-Brokers.com Sun Jan 15 14:40:05 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 15 Jan 2012 07:40:05 -0500 Subject: [Dovecot] IMAP maillog error: file lib.c: line 37 (nearest_power): assertion failed: (num <= ((size_t)1 << (BITS_IN_SIZE_T-1))) In-Reply-To: References: Message-ID: <4F12C925.4030008@Media-Brokers.com> On 2012-01-14 3:17 PM, Charles Thompson wrote: > Version information : > root at hostname[/etc/rc.d/rc3.d]# dovecot --version ; dovecot -n ; cat > /etc/*release* > 0.99.11 0.99 is simply way, way, *way* too old to waste any time helping you. The short answer is - *upgrade* to a more recent version (at *least* the latest 1.2.x series, but preferably 2.0.16)... Be sure to read all of the docs on upgrading, because you *will* have some reconfiguring to do... *Then*, if you have any questions/issues, by all means come back and ask... 
-- Best regards, Charles From CMarcus at Media-Brokers.com Sun Jan 15 14:50:00 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 15 Jan 2012 07:50:00 -0500 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F12C794.6070609@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> Message-ID: <4F12CB78.6020602@Media-Brokers.com> On 2012-01-15 7:33 AM, Charles Marcus wrote: > check_sender_access ${hash}/nospoof, Oh - if you aren't using variables for the maps paths, just use: check_sender_access hash:/path/to/map/nospoof, -- Best regards, Charles From user+dovecot at localhost.localdomain.org Sun Jan 15 15:11:05 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Sun, 15 Jan 2012 14:11:05 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): doveadm mailbox list -> Segmentation fault Message-ID: <4F12D069.9060102@localhost.localdomain.org> Oops, I did it again. -- The trapper recommends today: c01dcofe.1201514 at localdomain.org -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: core.doveadm.1326628435-21046_bt.txt URL: From CMarcus at Media-Brokers.com Sun Jan 15 19:03:42 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 15 Jan 2012 12:03:42 -0500 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F12CB78.6020602@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F12CB78.6020602@Media-Brokers.com> Message-ID: <4F1306EE.3050907@Media-Brokers.com> On 2012-01-15 7:50 AM, Charles Marcus wrote: > On 2012-01-15 7:33 AM, Charles Marcus wrote: >> check_sender_access ${hash}/nospoof, > > Oh - if you aren't using variables for the maps paths, just use: > > check_sender_access hash:/path/to/map/nospoof, One last thing - this obviously requires one or both of: permit_sasl_authenticated permit_mynetworks *before* the check_sender_access check... -- Best regards, Charles From CMarcus at Media-Brokers.com Sun Jan 15 19:10:31 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 15 Jan 2012 12:10:31 -0500 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F1306EE.3050907@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F12CB78.6020602@Media-Brokers.com> <4F1306EE.3050907@Media-Brokers.com> Message-ID: <4F130887.1020304@Media-Brokers.com> On 2012-01-15 12:03 PM, Charles Marcus wrote: > On 2012-01-15 7:50 AM, Charles Marcus wrote: >> On 2012-01-15 7:33 AM, Charles Marcus wrote: >>> check_sender_access ${hash}/nospoof, >> Oh - if you aren't using variables for the maps paths, just use: >> >> check_sender_access hash:/path/to/map/nospoof, > One last thing - this obviously requires one or both of: > > permit_sasl_authenticated > permit_mynetworks > > *before* the check_sender_access check... spoke too soon... one more 'last thing'... This also obviously requires you to enforce a policy that all users must either sasl_auth or be on a system whose IP is included in my_networks... -- Best regards, Charles From henson at acm.org Sun Jan 15 23:20:29 2012 From: henson at acm.org (Paul B. 
Henson) Date: Sun, 15 Jan 2012 13:20:29 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <4F113648.2000902@schetterer.org> References: <4F108834.60709@schetterer.org> <20120114001912.GZ4844@bender.csupomona.edu> <4F113648.2000902@schetterer.org> Message-ID: <20120115212029.GC21623@bender.csupomona.edu> On Sat, Jan 14, 2012 at 12:01:12AM -0800, Robert Schetterer wrote: > > Hmm, hadn't tried that, but flipped it on to see how it might work out. > > The only tradeoff is a potential delay between when an account is > > disabled and when it can stop authenticating. I set the timeout to 10 > > minutes for now, with an hour timeout for negative caching. > > dont know if i unserstand you right Before I turned on auth caching, every attempted authentication hit our mysql database, which in addition to the password itself contains a flag indicating whether or not the account is enabled. So if somebody was abusing smtp authentication, our helpdesk could disable their account, and it would *immediately* stop working. Whereas with authentication caching enabled, there is a window the size of the ttl where an account that has been disabled can continue to successfully authenticate. > > That page says you can send a USR2 signal to the auth process for cache > > stats? That doesn't seem to work. OTOH, that page is for version 1, not > > 2; is there some other way to generate cache stats in version 2? > > auth cache works with dove 2, no idea about dove 1 ,didnt test, but i > guess it does I'm using dovecot 2; my question was that the documentation for dovecot 1 described a way to make dovecot dump the authentication cache statistics that doesn't seem to work for dovecot 2, and if there was some other way to get the cache statistics in dovecot 2. Thanks... From mark at msapiro.net Sun Jan 15 23:36:48 2012 From: mark at msapiro.net (Mark Sapiro) Date: Sun, 15 Jan 2012 13:36:48 -0800 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: References: Message-ID: <4F1346F0.6020908@msapiro.net> IVO GELOV (CRM) wrote: > I still think that my idea with custom error codes is more useful - if the user is > on vacation, the message is rejected immediately (no auto-reply is sent) and sender > can see (hopefully, because most users just ignore error messages) the reason why > the messages was rejected. A 4xx status will not do this. It should just cause the sending MTA to keep the message queued and keep retrying. Depending on the sending MTA's retry and notification policies, the sender may see no error or delay notification for several days. If you really want the sender to immediately see a rejection, you have to use a 5xx status. -- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan From mark at msapiro.net Sun Jan 15 23:50:02 2012 From: mark at msapiro.net (Mark Sapiro) Date: Sun, 15 Jan 2012 13:50:02 -0800 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F12C794.6070609@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> Message-ID: <4F134A0A.70804@msapiro.net> On 11:59 AM, Charles Marcus wrote: > On 2012-01-14 12:23 PM, IVO GELOV (CRM) wrote: >> I have downloaded the latest version 4.0 - but it seems there is no >> way to prevent spammers to use forged email addresses. 
I decided to >> remove the vacation feature from our corporate mail server, because >> it actually opens a backdoor (even though only when someone decides >> to activate his vacation auto-reply) for spammers and puts a risk on >> the company (our server can be blacklisted). > > Sorry, I misread your message... > > However, (I *think*) there *is* a simple solution to your problem, if I > now understand it correctly... > > Simply disallow anyone sending from an email address in your domain from > sending without SASL_AUTHing... I don't see how this will help. The scenario the OP is concerned about is spammer at foreign.domain sends a message with forged From: and maybe envelope sender victim at other.foreign.domain to his user on vacation. The vacation program sends an autoresponse to the victim. However, why worry about this minimal backscatter? A good vacation program will not send more that one autoresponse per long time (a week?) for a given sender/recipient and won't include the original spam payload. So, even though a spammer might use this backdoor to cause your server to send messages to multiple recipients, the messages should not have spam payloads and shouldn't be sent more that once to a given end recipient. -- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan From phessler at theapt.org Mon Jan 16 11:15:21 2012 From: phessler at theapt.org (Peter Hessler) Date: Mon, 16 Jan 2012 10:15:21 +0100 Subject: [Dovecot] per-user limit? Message-ID: <20120116091521.GA10944@gir.theapt.org> I am seeing a problem where users are limited to 6 imap logins total. One of my users has a bunch of phones and computers, and wants them all on at the same time. I'm looking through my configuration, and I cannot see a limit on how many times a single user can connect. He is connecting from different IPs. Any ideas? My logs show the following error when they attempt to auth for a 7th time: dovecot: imap-login: Disconnected (no auth attempts): rip=111.yy.zz.xx, lip=81.209.183.113, TLS $ dovecot -n # 2.0.16: /etc/dovecot/dovecot.conf # OS: OpenBSD 5.1 amd64 ffs auth_mechanisms = plain login base_dir = /var/dovecot/ listen = *, [::] mail_location = maildir:/usr/home/%u/Maildir managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave mbox_write_locks = fcntl passdb { driver = bsdauth } service auth { unix_listener /var/run/dovecot/auth-master { mode = 0600 } unix_listener /var/spool/postfix/private/auth { group = wheel mode = 0660 user = _postfix } user = root } service imap-login { process_limit = 128 process_min_avail = 6 service_count = 1 user = _dovecot } service pop3-login { process_limit = 64 process_min_avail = 6 service_count = 1 user = _dovecot } ssl_cert = References: <4F1346F0.6020908@msapiro.net> Message-ID: On Sun, 15 Jan 2012 23:36:48 +0200, Mark Sapiro wrote: > IVO GELOV (CRM) wrote: > >> I still think that my idea with custom error codes is more useful - if the user is >> on vacation, the message is rejected immediately (no auto-reply is sent) and sender >> can see (hopefully, because most users just ignore error messages) the reason why >> the messages was rejected. > > > A 4xx status will not do this. It should just cause the sending MTA to > keep the message queued and keep retrying. 
Depending on the sending > MTA's retry and notification policies, the sender may see no error or > delay notification for several days. > > If you really want the sender to immediately see a rejection, you have > to use a 5xx status. > Yes, you are right. The error code is the smallest difficulty :) From ivo at crm.walltopia.com Mon Jan 16 11:38:01 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Mon, 16 Jan 2012 11:38:01 +0200 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F134A0A.70804@msapiro.net> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F134A0A.70804@msapiro.net> Message-ID: On Sun, 15 Jan 2012 23:50:02 +0200, Mark Sapiro wrote: > On 11:59 AM, Charles Marcus wrote: >> On 2012-01-14 12:23 PM, IVO GELOV (CRM) wrote: >>> I have downloaded the latest version 4.0 - but it seems there is no >>> way to prevent spammers to use forged email addresses. I decided to >>> remove the vacation feature from our corporate mail server, because >>> it actually opens a backdoor (even though only when someone decides >>> to activate his vacation auto-reply) for spammers and puts a risk on >>> the company (our server can be blacklisted). >> >> Sorry, I misread your message... >> >> However, (I *think*) there *is* a simple solution to your problem, if I >> now understand it correctly... >> >> Simply disallow anyone sending from an email address in your domain from >> sending without SASL_AUTHing... > > > I don't see how this will help. The scenario the OP is concerned about > is spammer at foreign.domain sends a message with forged From: and maybe > envelope sender victim at other.foreign.domain to his user on vacation. The > vacation program sends an autoresponse to the victim. > > However, why worry about this minimal backscatter? A good vacation > program will not send more that one autoresponse per long time (a week?) > for a given sender/recipient and won't include the original spam > payload. So, even though a spammer might use this backdoor to cause your > server to send messages to multiple recipients, the messages should not > have spam payloads and shouldn't be sent more that once to a given end > recipient. > The limitation of 1 message per week for any unique combination of sender/recipient does not stop backscatter - because each message can come with a new forged FROM address, and from different compromised mail servers. The spammer does not have control over the body of the auto-replies (which is something like "I am not at the office, please write to my colleagues"), but it still may cause the victims to take some measures. From ivo at crm.walltopia.com Mon Jan 16 11:48:11 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Mon, 16 Jan 2012 11:48:11 +0200 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F12C794.6070609@Media-Brokers.com> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> Message-ID: On Sun, 15 Jan 2012 14:33:24 +0200, Charles Marcus wrote: > On 2012-01-14 12:23 PM, IVO GELOV (CRM) wrote: >> I have downloaded the latest version 4.0 - but it seems there is no >> way to prevent spammers to use forged email addresses. 
I decided to >> remove the vacation feature from our corporate mail server, because >> it actually opens a backdoor (even though only when someone decides >> to activate his vacation auto-reply) for spammers and puts a risk on >> the company (our server can be blacklisted). > > Sorry, I misread your message... > > However, (I *think*) there *is* a simple solution to your problem, if I > now understand it correctly... > > Simply disallow anyone sending from an email address in your domain from > sending without SASL_AUTHing... > > The way I do this is: > > in main.cf (I put all of my restrictions in > smtpd_recipient_restrictions) add: > > check_sender_access ${hash}/nospoof, > > somewhere after reject_unauth_destination *but before any RBL checks) > > where nospoof contains: > > # Prevent spoofing from domains that we own > allowed_address1 at example.com OK > allowed_address2 at example.com OK > example.com REJECT You must use sasl_auth to send from one of our > example.com email addresses... > > and of course be sure to postmap the nospoof database after making any > changes... > These are the restrictions I apply (or had been applying for some time). Anyway, for now I simply disabled the vacation plugin. smtpd_client_restrictions = permit_mynetworks, check_client_access mysql:/etc/postfix/sender_ip, permit_sasl_authenticated, reject_unknown_client #reject_rhsbl_client blackhole.securitysage.com, reject_rbl_client opm.blitzed.org, #smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, check_client_access mysql:/etc/postfix/client_sql, reject_rbl_client sbl.spamhaus.org, reject_rbl_client list.dsbl.org,reject_rbl_client cbl.abuseat.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client dnsbl.ahbl.org, permit #smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, check_client_access mysql:/etc/postfix/client_ok, reject_rbl_client sbl.spamhaus.org, reject_rbl_client list.dsbl.org,reject_rbl_client cbl.abuseat.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client dnsbl.ahbl.org, reject_unknown_client ###, check_policy_service inet:127.0.0.1:10040, reject_rbl_client sbl.spamhaus.org, reject_rbl_client cbl.abuseat.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client dnsbl.ahbl.org #,reject_rbl_client opm.blitzed.org, reject_rbl_client relays.ordb.org, reject_rbl_client dun.dnsrbl.net #REJECT_NON_FQDN_HOSTNAME - proverka dali HELO e pylno Domain ime (sus suffix) #smtpd_helo_restrictions = check_helo_access hash:/etc/postfix/helo_access, reject_invalid_hostname, reject_non_fqdn_hostname smtpd_helo_restrictions = reject_invalid_hostname smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_rhsbl_sender rhsbl.ahbl.org, reject_rhsbl_sender rhsbl.sorbs.net, reject_rhsbl_sender multi.surbl.org #reject_rhsbl_sender blackhole.securitysage.com, reject_rhsbl_sender opm.blitzed.org, #smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, check_sender_access mysql:/etc/postfix/sender_sql, reject_non_fqdn_sender, reject_unknown_sender_domain, reject_rhsbl_sender rhsbl.ahbl.org, reject_rhsbl_sender block.rhs.mailpolice.com, reject_rhsbl_sender rhsbl.sorbs.net, reject_rhsbl_sender multi.surbl.org, reject_rhsbl_sender dsn.rfc-ignorant.org, permit #, reject_rhsbl_sender dsn.rfc-ignorant.org, reject_rhsbl_sender relays.ordb.org, reject_rhsbl_sender dun.dnsrbl.net #smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination, reject_unauth_pipelining, 
check_recipient_access regexp:/etc/postfix/dspam_incoming smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination, reject_unauth_pipelining smtpd_data_restrictions = reject_unauth_pipelining From joseba.torre at ehu.es Mon Jan 16 11:50:49 2012 From: joseba.torre at ehu.es (Joseba Torre) Date: Mon, 16 Jan 2012 10:50:49 +0100 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F11D677.2040706@schetterer.org> References: <4F1071F8.4080202@Media-Brokers.com> <4F11D677.2040706@schetterer.org> Message-ID: <4F13F2F9.2070008@ehu.es> > anyway if you use other vacation tecs, make sure allready flagged spam > by i.e clamav, amavis, spamassassin etc in postfix stage is not handled > by your vacation service , script etc. > as far i remember i gave some patch to the postfixadmin vacation script > doing exact this If you're using any antispam soft that gives every mail a spam score (like spamassassin does), you can use a strong rule for vacation replies (like "only messages with a spam score under 5 are allowed, but only those under 3 may have a vacation reply"). From rasca at miamammausalinux.org Mon Jan 16 12:42:08 2012 From: rasca at miamammausalinux.org (RaSca) Date: Mon, 16 Jan 2012 11:42:08 +0100 Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2) Message-ID: <4F13FF00.1050108@miamammausalinux.org> Hi all, I'm trying to make quota work in Squeeze (Dovecot 1.2.15-7). The quota module is correctly loaded and, when receiving a message, from the log I see these messages: Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Loading modules from directory: /usr/lib/dovecot/modules/lda Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Module loaded: /usr/lib/dovecot/modules/lda/lib10_quota_plugin.so Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Module loaded: /usr/lib/dovecot/modules/lda/lib90_sieve_plugin.so Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): auth input: uid=5000 Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): auth input: gid=5000 Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): auth input: home=/mail/mailboxes//testquota Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Quota root: name=/mail/mailboxes//testquota backend=maildir args= Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): maildir: data=/mail/mailboxes//testquota@ Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): maildir++: root=/mail/mailboxes//testquota@, index=, control=, inbox=/mail/mailboxes//testquota@ Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): sieve: user's script path /mail/mailboxes//testquota/.dovecot.sieve doesn't exist (using global script path in stead) Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): sieve: using sieve path for user's script: /mail/sieve/globalsieverc Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): sieve: opening script /mail/sieve/globalsieverc Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): sieve: executing compiled script /mail/sieve/globalsieverc Jan 16 11:20:04 mail-1 dovecot: deliver(testquota@): Namespace : Using permissions from /mail/mailboxes//testquota@: mode=0700 gid=-1 Jan 16 11:20:05 mail-1 dovecot: deliver(testquota@): sieve: msgid=<4F13F996.4000501 at seat.it>: stored mail into mailbox 'INBOX' Now, since I've got a message like this: Quota root: name=/mail/mailboxes//testquota@ backend=maildir args= it seems that something is checked, but even if this directory is over quota, nothing 
happens. This is my dovecot conf: protocols = imap pop3 disable_plaintext_auth = no log_timestamp = "%Y-%m-%d %H:%M:%S " mail_location = maildir:/mail/mailboxes/%d/%n@%d mail_privileged_group = mail mail_debug = yes mail_nfs_storage = yes mmap_disable=yes fsync_disable=no mail_nfs_index = yes protocol imap { mail_plugins = quota imap_quota } protocol pop3 { pop3_uidl_format = %08Xu%08Xv mail_plugins = quota } protocol managesieve { } protocol lda { auth_socket_path = /var/run/dovecot/auth-master postmaster_address = postmaster@ mail_plugins = sieve quota quota_full_tempfail = no log_path = } auth default { mechanisms = plain passdb sql { args = /etc/dovecot/dovecot-sql.conf } userdb passwd { } userdb static { args = uid=5000 gid=5000 home=/mail/mailboxes/%d/%n@%d allow_all_users=yes } user = root socket listen { master { path = /var/run/dovecot/auth-master mode = 0600 user = vmail } client { path = /var/spool/postfix/private/auth mode = 0660 user = postfix group = postfix } } } plugin { quota = maildir:/mail/mailboxes/%d/%n@%d sieve_global_path = /mail/sieve/globalsieverc } The db connection works, this is /etc/dovecot/dovecot-sql.conf: driver = mysql connect = host= dbname=mail user= password= default_pass_scheme = CRYPT password_query = SELECT username, password FROM mailbox WHERE username='%u' user_query = SELECT username AS user, maildir AS home, CONCAT('*:storage=', quota , 'B') AS quota_rule FROM mailbox WHERE username = '%u' AND active = '1' and for the user testquota the user_query results in this: +-------------------+----------------------------+--------------------+ | user | home | quota_rule | +-------------------+----------------------------+--------------------+ | testquota@ | /testquota@/ | *:storage=1024000B | +-------------------+----------------------------+--------------------+ everything else is ok, for example I'm using sieve for the spam filter, and the SPAM is correctly put in the .SPAM dir. I turned on debug on dovecot, but I can't see if the query in some way fails. Can you please help me to understand what am I doing wrong? -- RaSca Mia Mamma Usa Linux: Niente ? impossibile da capire, se lo spieghi bene! rasca at miamammausalinux.org http://www.miamammausalinux.org From jsxmoney at gmail.com Mon Jan 16 14:38:44 2012 From: jsxmoney at gmail.com (Jason X, Maney) Date: Mon, 16 Jan 2012 14:38:44 +0200 Subject: [Dovecot] Dovecot unable to locate mailbox Message-ID: Dear all, I hope someone can point me in the right direction. here. I have setup my Dovecot v2.0.13 on Ubuntu 11.10. The logs tells me that the mail location has failed as follows: ========= Jan 16 14:18:16 myservername dovecot: pop3-login: Login: user=, method=PLAIN, rip=aaa.bbb.ccc.ddd, lip=www.xxx.yyy.zzz, mpid=1360, TLS Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: user molla: Initialization failed: mail_location not set and autodetection failed: Mail storage autodetection failed with home=/home/userA Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: Invalid user settings. Refer to server log for more information. ========= Yet my config also come out strangely as below: ========= root at guyana:~# dovecot -n # 2.0.13: /etc/dovecot/dovecot.conf # OS: Linux 3.0.0-12-server x86_64 Ubuntu 11.10 passdb { driver = pam } protocols = " imap pop3" ssl_cert = References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F134A0A.70804@msapiro.net> Message-ID: <4F141D93.30406@Media-Brokers.com> On 2012-01-15 4:50 PM, Mark Sapiro wrote: > I don't see how this will help. 
The scenario the OP is concerned about > isspammer at foreign.domain sends a message with forged From: and maybe > envelope sendervictim at other.foreign.domain to his user on vacation. Guess I should read more carefully... for some reason I thought I remembered him being worried about forged senders in his own domain(s)... Sorry for the noise... -- Best regards, Charles From kirill at shutemov.name Mon Jan 16 17:05:05 2012 From: kirill at shutemov.name (Kirill A. Shutemov) Date: Mon, 16 Jan 2012 17:05:05 +0200 Subject: [Dovecot] v2.1.rc3 released In-Reply-To: <1325878845.17774.38.camel@hurina> References: <1325878845.17774.38.camel@hurina> Message-ID: <20120116150504.GA28883@shutemov.name> On Fri, Jan 06, 2012 at 09:40:44PM +0200, Timo Sirainen wrote: > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc3.tar.gz > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc3.tar.gz.sig > > Whops, rc2 was missing a file. I always run "make distcheck", which > should catch these, but recently it has always failed due to clang > static checking giving one "error" that I didn't really want to fix. > Because of that the distcheck didn't finish and didn't check for the > missing file. > > So, anyway, I've made clang happy again, and now that I see how bad idea > it is to just ignore the failed distcheck, I won't do that again in > future. :) > > ./autogen failed: $ ./autogen.sh libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.in and libtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree. libtoolize: Consider adding `-I m4' to ACLOCAL_AMFLAGS in Makefile.am. src/plugins/fts/Makefile.am:52: `pkglibexecdir' is not a legitimate directory for `SCRIPTS' Makefile.am:24: `pkglibdir' is not a legitimate directory for `DATA' autoreconf: automake failed with exit status: 1 $ automake --version | head -1 automake (GNU automake) 1.11.2 -- Kirill A. Shutemov From info at simonecaruso.com Mon Jan 16 17:40:59 2012 From: info at simonecaruso.com (Simone Caruso) Date: Mon, 16 Jan 2012 16:40:59 +0100 Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2) In-Reply-To: <4F13FF00.1050108@miamammausalinux.org> References: <4F13FF00.1050108@miamammausalinux.org> Message-ID: <4F14450B.8000903@simonecaruso.com> On 16/01/2012 11:42, RaSca wrote: > Hi all, > I'm trying to make quota work in Squeeze (Dovecot 1.2.15-7). try "auth_debug = yes" -- Simone Caruso IT Consultant +39 349 65 90 805 From thomas at koch.ro Mon Jan 16 17:51:45 2012 From: thomas at koch.ro (Thomas Koch) Date: Mon, 16 Jan 2012 16:51:45 +0100 Subject: [Dovecot] Trying to get metadata plugin working Message-ID: <201201161651.46232.thomas@koch.ro> Hi, I'm working on a Kolab related project and wanted to use dovecot on my dev machine. However I'm stuck with the metadata-plugin. I "solved" the permissions problems but now I get dict: Error: file dict commit: file_dotlock_open(~/Maildir/shared-metadata) failed: No such file or directory Before that, I had dict { metadata = file:/var/lib/dovecot/shared-metadata but got problems since my normal user had no permission to access /var/lib/dovecot. I compiled the plugin from the most recent commit. My dovecot runs in a chroot. I can login with KMail and can create Groupware (annotated) folders, but the metadata file dict won't get created and I also can't set/get metadata via telnet. 
doveconf -N # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 3.1.0-1-amd64 x86_64 Debian 6.0.3 auth_mechanisms = plain dict { metadata = file:~/Maildir/shared-metadata } mail_access_groups = dovecot mail_location = maildir:~/Maildir mail_plugins = " metadata" passdb { driver = pam } plugin { metadata_dict = proxy::metadata } protocols = " imap" service dict { unix_listener dict { group = dovecot mode = 0666 } } ssl_cert = References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F134A0A.70804@msapiro.net> Message-ID: <4F144E57.9060802@msapiro.net> On 11:59 AM, IVO GELOV (CRM) wrote: > > The limitation of 1 message per week for any unique combination of > sender/recipient > does not stop backscatter - because each message can come with a new > forged FROM address, > and from different compromised mail servers. > The spammer does not have control over the body of the auto-replies > (which is something > like "I am not at the office, please write to my colleagues"), but it still > may cause the victims to take some measures. All true, but the sender in the sender/recipient combination is the forged From: that ultimately receives the backscatter and the recipient is your local user who set the vacation autoresponse. If you only have one or two local users on vacation at a time, any given backscatter recipient could receive at most one or two backscatter messages per week regardless of how many compromised servers the spammer sends from. And this assumes the spam is initially sent to multiple local users on vacation and gets past your local spam filtering. I don't know about you, but I have more significant potential backscatter sources to worry about. -- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan From rasca at miamammausalinux.org Mon Jan 16 18:28:58 2012 From: rasca at miamammausalinux.org (RaSca) Date: Mon, 16 Jan 2012 17:28:58 +0100 Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2) In-Reply-To: <4F14450B.8000903@simonecaruso.com> References: <4F13FF00.1050108@miamammausalinux.org> <4F14450B.8000903@simonecaruso.com> Message-ID: <4F14504A.9010302@miamammausalinux.org> Il giorno Lun 16 Gen 2012 16:40:59 CET, Simone Caruso ha scritto: > On 16/01/2012 11:42, RaSca wrote: >> Hi all, >> I'm trying to make quota work in Squeeze (Dovecot 1.2.15-7). > try "auth_debug = yes" > In fact, enabling auth_debug gives me this: Jan 16 17:21:06 mail-2 dovecot: auth(default): master in: USER#0111#011testquota@#011service=deliver Jan 16 17:21:06 mail-2 dovecot: auth(default): passwd(testquota@): lookup Jan 16 17:21:06 mail-2 dovecot: auth(default): passwd(testquota@): unknown user Jan 16 17:21:06 mail-2 dovecot: auth(default): master out: USER#0111#011testquota@#011uid=5000#011gid=5000#011home=/mail/mailboxes//testquota@ But what I don't understand is that manually doing password_query and user_query works. So why I receive unknown user? Is there something else to set? -- RaSca Mia Mamma Usa Linux: Niente ? impossibile da capire, se lo spieghi bene! rasca at miamammausalinux.org http://www.miamammausalinux.org From greve at kolabsys.com Mon Jan 16 18:13:14 2012 From: greve at kolabsys.com (Georg C. F. 
Greve) Date: Mon, 16 Jan 2012 17:13:14 +0100 Subject: [Dovecot] [Kolab-devel] Trying to get metadata plugin working In-Reply-To: <201201161651.46232.thomas@koch.ro> References: <201201161651.46232.thomas@koch.ro> Message-ID: <2001652.RYW7Y0I4zo@katana.lair>

On Monday 16 January 2012 16.51:45 Thomas Koch wrote:
> I'm working on a Kolab related project and wanted to use dovecot on my dev
> machine.

Very interesting. Please document your findings in wiki.kolab.org once you're done.

> dict: Error: file dict commit: file_dotlock_open(~/Maildir/shared-metadata)
> failed: No such file or directory

Can't really help with that one, I'm afraid.

Best regards, Georg -- Georg C. F. Greve Chief Executive Officer Kolab Systems AG Zürich, Switzerland e: greve at kolabsys.com t: +41 78 904 43 33 w: http://kolabsys.com pgp: 86574ACA Georg C. F. Greve -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 308 bytes Desc: This is a digitally signed message part. URL:

From tss at iki.fi Mon Jan 16 19:16:57 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 16 Jan 2012 19:16:57 +0200 Subject: [Dovecot] Trying to get metadata plugin working In-Reply-To: <201201161651.46232.thomas@koch.ro> References: <201201161651.46232.thomas@koch.ro> Message-ID: <23312B5E-14CF-42D9-8A18-F995EDA874C4@iki.fi>

On 16.1.2012, at 17.51, Thomas Koch wrote:
> dict: Error: file dict commit: file_dotlock_open(~/Maildir/shared-metadata)
> failed: No such file or directory

It's not expanding ~/

> dict {
> metadata = file:~/Maildir/shared-metadata

Use %h/ instead of ~/

From thomas at koch.ro Mon Jan 16 20:26:12 2012 From: thomas at koch.ro (Thomas Koch) Date: Mon, 16 Jan 2012 19:26:12 +0100 Subject: [Dovecot] Trying to get metadata plugin working In-Reply-To: <23312B5E-14CF-42D9-8A18-F995EDA874C4@iki.fi> References: <201201161651.46232.thomas@koch.ro> <23312B5E-14CF-42D9-8A18-F995EDA874C4@iki.fi> Message-ID: <201201161926.12309.thomas@koch.ro>

Timo Sirainen:
> Use %h/ instead of ~/

Hi Timo,

it doesn't expand either %h nor %%h. When I hardcode the path to my dev user's homedir I get a permission error. After hardcoding it to /tmp/shared-metadata the file gets at least written, but the content looks strange:

shared/mailbox/7c2ae515102e144f172d0000d1887b74/shared//vendor/kolab/folder- test true

Best regards, Thomas Koch, http://www.koch.ro

From tss at iki.fi Mon Jan 16 20:33:44 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 16 Jan 2012 20:33:44 +0200 Subject: [Dovecot] Trying to get metadata plugin working In-Reply-To: <201201161926.12309.thomas@koch.ro> References: <201201161651.46232.thomas@koch.ro> <23312B5E-14CF-42D9-8A18-F995EDA874C4@iki.fi> <201201161926.12309.thomas@koch.ro> Message-ID:

On 16.1.2012, at 20.26, Thomas Koch wrote:
> Timo Sirainen:
>> Use %h/ instead of ~/
>
> Hi Timo,
>
> it doesn't expand either %h nor %%h.

Oh, right, wrong place. If you make it go through proxy, it doesn't do any expansion. It's then accessed by the "dict" process (which probably runs as "dovecot" user). You could instead use something like:

metadata_dict = file:%h/Maildir/shared-metadata

> When I hardcode the path to my dev user's
> homedir I get a permission error. After hardcoding it to /tmp/shared-metadata
> the file gets at least written, but the content looks strange:
>
> shared/mailbox/7c2ae515102e144f172d0000d1887b74/shared//vendor/kolab/folder-
> test
> true

I haven't really looked at what the metadata plugin actually does..
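For the record, the non-proxy variant spelled out as config (an untested sketch, given the caveat above):

plugin {
  # accessed directly by the mail process as the logged-in user, so %h expands
  metadata_dict = file:%h/Maildir/shared-metadata
}

With this, the dict {} section and the service dict listener can be dropped, since nothing goes through the shared dict server process anymore. The tradeoff is that the metadata file is then per-user rather than managed centrally.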
From buchholz at easystreet.net Tue Jan 17 00:41:46 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Mon, 16 Jan 2012 14:41:46 -0800 Subject: [Dovecot] imap-login process_limit reached Message-ID: <4F14A7AA.8010507@easystreet.net> I've been having some problems with IMAP user connections to the Dovecot (v2.0.8) server. The following message is being logged. Jan 16 10:51:36 postal dovecot: master: Warning: service(imap-login): process_limit reached, client connections are being dropped The server is running Red Hat Enterprise Linux release 4 (update 6). Dovecot is v2.0.8. We have only 29 user accounts in /etc/dovecot/users. There were 196 "dovecot/imap" processes and 6 other dovecot processes, for a total of 202 "dovecot" processes, listed in the 'ps aux' output when problems were being experienced. Stopping and restarting the Dovecot system fixes the problem -- for a while. The 'doveconf -n' output is attached. I have not set any "process_limit" values, and I don't think I'm getting anywhere close to the 1024 default, so I'm pretty confused as to what might be wrong. Any suggestions on what to do next are appreciated. Thanks, - Don ------------------------------------------------------------------------ -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: doveconf-n.txt URL: From lists at wildgooses.com Tue Jan 17 02:22:35 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 17 Jan 2012 00:22:35 +0000 Subject: [Dovecot] Storing passwords encrypted... bcrypt? In-Reply-To: <4F04FAA9.3020908@localhost.localdomain.org> References: <4F0367A2.1000807@Media-Brokers.com> <4F04FAA9.3020908@localhost.localdomain.org> Message-ID: <4F14BF4B.5060804@wildgooses.com> On 05/01/2012 01:19, Pascal Volk wrote: > On 01/03/2012 09:40 PM Charles Marcus wrote: >> Hi everyone, >> >> Was just perusing this article about how trivial it is to decrypt >> passwords that are stored using most (standard) encryption methods (like >> MD5), and was wondering - is it possible to use bcrypt with >> dovecot+postfix+mysql (or posgres)? > Yes it is possible to use bcrypt with dovecot. Currently you have only > to write your password scheme plugin. The bcrypt algorithm is described > at http://en.wikipedia.org/wiki/Bcrypt. > > If you are using Dovecot>= 2.0 'doveadm pw' supports the schemes: > *BSD: Blowfish-Crypt > *Linux (since glibc 2.7): SHA-256-Crypt and SHA-512-Crypt > Some distributions have also added support for Blowfish-Crypt > See also: doveadm-pw(1) > > If you are using Dovecot< 2.0 you can also use any of the algorithms > supported by your system's libc. But then you have to prefix the hashes > with {CRYPT} - not {{BLF,SHA256,SHA512}-CRYPT}. > I'm a bit late, but the above is absolutely correct Basically the simplest solution is to pick a glibc which natively supports bcrypt (and the equivalent algorithm, but using SHA-256/512). Then you can effectively use any of these hashes in your /etc/{passwd,shadow} file. With the hash testing native in your glibc then a bunch of applications automatically acquire the ability to test passwords stored in these hash formats, dovecot being one of them To generate the hashes in that format, choose an appropriate library for your web interface or whatever generates the hashes for you. There are even command line utilities (mkpasswd) to do this for you. 
I forget the config knobs (/etc/logins.def ?), but it's entirely possible to also have all your normal /etc/shadow hashes generated in this format going forward if you wish I posted some patches for uclibc recently for bcrypt and I think sha-256/512 already made it in. I believe several of the big names have similar patches for glibc. Just to attack some of the myths here: - Salting passwords basically means adding some random garbage at the front of the password before hashing. - Salting passwords prevents you using a big lookup table to cheat and instantly reverse the password - Salting has very little ability to stop you bruteforcing the password, ie it takes around the same time to figure out the SHA or blowfish hash of every word in some dictionary, regardless of whether you use the raw word or the word with some garbage in front of it - Using an iterated hash algorithm gives you a linear increase in difficulty in bruteforcing passwords. So if you do a million iterations on each password, then it takes a million times longer to bruteforce (probably there are shortcuts to be discovered, assume that this is best case, but it's still a good improvement). - Bear in mind that off the shelf GPU crackers will do of the order 100-300 million hashes per second!! http://www.cryptohaze.com/multiforcer.php The last statistic should be scary to someone who has some small knowledge of the number of unique words in the [english] language, even multiplying up for trivial permutations with numbers or punctuation... So in conclusion: everyone who stores passwords in hash form should make their way in an orderly fashion towards the door if they don't currently use an iterated hash function. No need to run, but it definitely should be on the todo list to apply where feasible. BCrypt is very common and widely implemented, but it would seem logical to consider SHA-256/512 (iterated) options where there is application support. Note I personally believe there are valid reasons to store plaintext passwords - this seems to cause huge criticism due to the ensuing disaster which can happen if the database is pinched, but it does allow for enhanced security in the password exchange, so ultimately it depends on where your biggest risk lies... Good luck Ed W From lists at wildgooses.com Tue Jan 17 02:28:32 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 17 Jan 2012 00:28:32 +0000 Subject: [Dovecot] compressed mboxes very slow In-Reply-To: <8739blw6gl.fsf@alfa.kjonca> References: <87iptnoans.fsf@alfa.kjonca> <8739blw6gl.fsf@alfa.kjonca> Message-ID: <4F14C0B0.9020709@wildgooses.com> On 12/01/2012 10:39, Kamil Jo?ca wrote: > kjonca at o2.pl (Kamil Jo?ca) writes: > >> I have some archive mails in gzipped mboxes. I could use them with >> dovecot 1.x without problems. >> But recently I have installed dovecot 2.0.12, and they are slow. very >> slow. 
> > Recently I have to read some compressed mboxes again, and no progress :( > I took 2.0.17 sources and put some > i_debug ("#kjonca["__FILE__",%d,%s] %d", __LINE__,__func__,...some parameters ...); > > lines into istream-bzlib.c, istream-raw-mbox.c and istream-limit.c > and found that: > > in istream-limit.c in function around lines 40-45: > --8<---------------cut here---------------start------------->8--- > i_stream_seek(stream->parent, lstream->istream.parent_start_offset + > stream->istream.v_offset); > stream->pos -= stream->skip; > stream->skip = 0; > --8<---------------cut here---------------end--------------->8--- > seeks stream, (calling i_stream_raw_mbox_seek in file istream-raw-mbox.c ) > > and then (line 50 ) > --8<---------------cut here---------------start------------->8--- > if ((ret = i_stream_read(stream->parent)) == -2) > return -2; > --8<---------------cut here---------------end--------------->8--- > > tries to read some data earlier in stream, and with compressed mboxes it > cause reread file from the beginning. > Just wanted to bump this since it seems interesting. Timo do you have a comment? I definitely see your point that skipping backwards in a compressed stream is going to be very CPU intensive. Ed W From moseleymark at gmail.com Tue Jan 17 03:17:26 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 16 Jan 2012 17:17:26 -0800 Subject: [Dovecot] LMTP Logging Message-ID: Just had a minor suggestion, with no clue how hard/easy it would be to implement: The %f flag in deliver_log_format seems to pick up the From: header, instead of the "MAIL FROM:<...>" arg. It'd be handy to have a %F that shows the "MAIL FROM" arg instead. I'm looking at tracking emails through logs from Exim to Dovecot easily. I know Message-ID can be used for correlation but it adds some complexity to searching, i.e. I can't just grep for the sender (as logged by Exim), unless I assume "MAIL FROM" always == From: From janfrode at tanso.net Tue Jan 17 10:36:19 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Tue, 17 Jan 2012 09:36:19 +0100 Subject: [Dovecot] resolve mail_home ? Message-ID: <20120117083619.GA21186@dibs.tanso.net> I now have "mail_home = /srv/mailstore/%256RHu/%d/%n". Is there any way of asking dovecot where a user's home directory is? It's not in "doveadm user": $ doveadm user -f home janfrode at lyse.net $ doveadm user janfrode at tanso.net userdb: janfrode at tanso.net mail : mdbox:~/mdbox quota_rule: *:storage=1048576 Alternatively, is there an easy way to calculate the %256RHu hash ? -jf From ivo at crm.walltopia.com Tue Jan 17 10:52:40 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Tue, 17 Jan 2012 10:52:40 +0200 Subject: [Dovecot] Using Dovecot-auth to return error code 450 (or other 4xx) to Postfix when user is on vacation In-Reply-To: <4F144E57.9060802@msapiro.net> References: <4F1071F8.4080202@Media-Brokers.com> <4F12C794.6070609@Media-Brokers.com> <4F134A0A.70804@msapiro.net> <4F144E57.9060802@msapiro.net> Message-ID: On Mon, 16 Jan 2012 18:20:39 +0200, Mark Sapiro wrote: > On 11:59 AM, IVO GELOV (CRM) wrote: >> >> The limitation of 1 message per week for any unique combination of >> sender/recipient >> does not stop backscatter - because each message can come with a new >> forged FROM address, >> and from different compromised mail servers. 
>> The spammer does not have control over the body of the auto-replies >> (which is something >> like "I am not at the office, please write to my colleagues"), but it still >> may cause the victims to take some measures. > > > All true, but the sender in the sender/recipient combination is the > forged From: that ultimately receives the backscatter and the recipient > is your local user who set the vacation autoresponse. If you only have > one or two local users on vacation at a time, any given backscatter > recipient could receive at most one or two backscatter messages per week > regardless of how many compromised servers the spammer sends from. And > this assumes the spam is initially sent to multiple local users on > vacation and gets past your local spam filtering. > > I don't know about you, but I have more significant potential > backscatter sources to worry about. > I see your point and I agree with you this is a minor problem. Thanks for your time, Mark. Best wishes, Ivo Gelov From ivo at crm.walltopia.com Tue Jan 17 11:59:14 2012 From: ivo at crm.walltopia.com (IVO GELOV (CRM)) Date: Tue, 17 Jan 2012 11:59:14 +0200 Subject: [Dovecot] Dovecot unable to locate mailbox In-Reply-To: References: Message-ID: On Mon, 16 Jan 2012 14:38:44 +0200, Jason X, Maney wrote: > Dear all, > > I hope someone can point me in the right direction. here. I have setup my > Dovecot v2.0.13 on Ubuntu 11.10. The logs tells me that the mail location > has failed as follows: > > ========= > Jan 16 14:18:16 myservername dovecot: pop3-login: Login: user=, > method=PLAIN, rip=aaa.bbb.ccc.ddd, lip=www.xxx.yyy.zzz, mpid=1360, TLS > Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: user molla: > Initialization failed: mail_location not set and autodetection failed: Mail > storage autodetection failed with home=/home/userA > Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: Invalid user > settings. Refer to server log for more information. > ========= > > Yet my config also come out strangely as below: > > # path given in the mail_location setting. > # mail_location = maildir:~/Maildir > # mail_location = mbox:~/mail:INBOX=/var/mail/%u > # mail_location = mbox:/var/mail/%d/%1n/%n:INDEX=/var/indexes/%d/%1n/%n > mail_location = maildir:~/Maildir > # explicitly, ie. mail_location does nothing unless you have a namespace > # mail_location, which is also the default for it. Hi, Jason. I will describe my configuration and probably you will find some usefull information. I am using Postfix as MTA and have configured Dovecot to be LDA. I have several domains, so I am using the following folder schema: /var/mail/vhosts = the root of the mail storage /var/mail/vhosts/domain_1 = first domain /var/mail/vhosts/domain_1/user_1 = first mailbox in this domain .... /var/mail/vhosts/domain_2 = another domain /var/mail/vhosts/domain_2/user_1 = first mailbox in the other domain This is achieved with the following setting in mail.conf: mail_location = maildir:/var/mail/vhosts/%d/%n But since I do not want to manually go and create the corresponding folders each time I add new user (I manage accounts through a MySQL table), I also use the following setting in lda.conf: lda_mailbox_autocreate = yes lda_mailbox_autosubscribe = yes Perhaps you only need to add the latter settings in lda.conf and everything should run fine. 
Best wishes, IVO GELOV From interfasys at gmail.com Tue Jan 17 13:07:28 2012 From: interfasys at gmail.com (=?UTF-8?B?aW50ZXJmYVN5cyBzw6BybA==?=) Date: Tue, 17 Jan 2012 11:07:28 +0000 Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1 Message-ID: <4F155670.6010905@gmail.com> Here is what I get when I try to compile the antispam plugin agaisnt Dovecot 2.1 ************** mailbox.c: In function 'antispam_save_begin': mailbox.c:138:12: error: 'struct mail_save_context' has no member named 'copying' mailbox.c: In function 'antispam_save_finish': mailbox.c:174:12: error: 'struct mail_save_context' has no member named 'copying' Failed to compile mailbox.c (plugin)! gmake[3]: *** [mailbox.plugin.o] Error 1 ************** The other objects compile fine. Cheers, Olivier From CMarcus at Media-Brokers.com Tue Jan 17 13:26:39 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 17 Jan 2012 06:26:39 -0500 Subject: [Dovecot] per-user limit? In-Reply-To: <20120116091521.GA10944@gir.theapt.org> References: <20120116091521.GA10944@gir.theapt.org> Message-ID: <4F155AEF.3080105@Media-Brokers.com> On 2012-01-16 4:15 AM, Peter Hessler wrote: > I'm looking through my configuration, and I cannot see a limit on how > many times a single user can connect. He is connecting from different > IPs. I think you're needing: http://wiki2.dovecot.org/Services#Service_limits -- Best regards, Charles From tss at iki.fi Tue Jan 17 16:20:13 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 17 Jan 2012 16:20:13 +0200 Subject: [Dovecot] little bug with Director in 2.1? In-Reply-To: References: Message-ID: <1326810013.11500.1.camel@innu> Hi, On Tue, 2012-01-10 at 16:16 +0100, Luca Di Vizio wrote: > in 2.1rc3 the "director_servers" setting does not accept hostnames as > documented (with ip no problems). > It works correctly in 2.0.17. The problem most likely was that v2.1 chroots the director process by default, but it did it a bit too early so hostname lookups failed. http://hg.dovecot.org/dovecot-2.1/rev/1d54d2963392 should fix it. From michael at orlitzky.com Tue Jan 17 16:23:47 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Tue, 17 Jan 2012 09:23:47 -0500 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F14A7AA.8010507@easystreet.net> References: <4F14A7AA.8010507@easystreet.net> Message-ID: <4F158473.1000901@orlitzky.com> First of all, feature request: doveconf -d show the default value of all settings On 01/16/12 17:41, Don Buchholz wrote: > > The 'doveconf -n' output is attached. I have not set any > "process_limit" values, and I don't think I'm getting anywhere close to > the 1024 default, so I'm pretty confused as to what might be wrong. > > Any suggestions on what to do next are appreciated. What makes you think 1024 is the default? We had to increase it. It shows up in doveconf -n output, so I don't think that's the default. # doveconf -n | grep limit default_process_limit = 1024 From phessler at theapt.org Tue Jan 17 16:27:31 2012 From: phessler at theapt.org (Peter Hessler) Date: Tue, 17 Jan 2012 15:27:31 +0100 Subject: [Dovecot] per-user limit? In-Reply-To: <4F155AEF.3080105@Media-Brokers.com> References: <20120116091521.GA10944@gir.theapt.org> <4F155AEF.3080105@Media-Brokers.com> Message-ID: <20120117142731.GF24394@gir.theapt.org> On 2012 Jan 17 (Tue) at 06:26:39 -0500 (-0500), Charles Marcus wrote: :On 2012-01-16 4:15 AM, Peter Hessler wrote: :>I'm looking through my configuration, and I cannot see a limit on how :>many times a single user can connect. 
He is connecting from different :>IPs. : :I think you're needing: : :http://wiki2.dovecot.org/Services#Service_limits : Thanks for the pointer. Hoever, this doesn't seem to help me. When I do "doveconf | grep [foo]" I find that the limits are either '0' or '1'. Except in "service imap-login { process-limit = 128 }". I had bumped that up from 64, and now it is at 1024. I don't have many users (about 6 that use imap), and nobody can use more than 6. I also double checked my process limits, and they are either unlimited, or measured in the ten-thousands. -- Osborn's Law: Variables won't; constants aren't. From duihi77 at gmail.com Tue Jan 17 16:31:01 2012 From: duihi77 at gmail.com (Duane Hill) Date: Tue, 17 Jan 2012 14:31:01 +0000 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F158473.1000901@orlitzky.com> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> Message-ID: <716809841.20120117143101@gmail.com> On Tuesday, January 17, 2012 at 14:23:47 UTC, michael at orlitzky.com confabulated: > First of all, feature request: > doveconf -d > show the default value of all settings You mean like doveconf(1) ? OPTIONS -a Show all settings with their currently configured values. -- If at first you don't succeed... ...so much for skydiving. From michael at orlitzky.com Tue Jan 17 16:58:04 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Tue, 17 Jan 2012 09:58:04 -0500 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <716809841.20120117143101@gmail.com> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <716809841.20120117143101@gmail.com> Message-ID: <4F158C7C.4070209@orlitzky.com> On 01/17/12 09:31, Duane Hill wrote: > On Tuesday, January 17, 2012 at 14:23:47 UTC, michael at orlitzky.com confabulated: > >> First of all, feature request: > >> doveconf -d >> show the default value of all settings > > You mean like doveconf(1) ? > > OPTIONS > -a Show all settings with their currently configured values. > Using -a shows you all settings, as they're running in your installation. That's the defaults, except where they're overwritten by your config. I was asking for the defaults regardless of what's in my config file, so that I don't have to deduce them from the combined doveconf output & my config file. From michael at orlitzky.com Tue Jan 17 17:01:45 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Tue, 17 Jan 2012 10:01:45 -0500 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F158C7C.4070209@orlitzky.com> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <716809841.20120117143101@gmail.com> <4F158C7C.4070209@orlitzky.com> Message-ID: <4F158D59.2070703@orlitzky.com> On 01/17/12 09:58, Michael Orlitzky wrote: > > Using -a shows you all settings, as they're running in your > installation. That's the defaults, except where they're overwritten by > your config. > > I was asking for the defaults regardless of what's in my config file, so > that I don't have to deduce them from the combined doveconf output & my > config file. 
In other words, I don't want to have to do this: mail2 ~ # touch empty-config.conf mail2 ~ # doveconf -a -c empty-config.conf | grep limit | head doveconf: Error: ssl enabled, but ssl_cert not set doveconf: Error: ssl enabled, but ssl_cert not set doveconf: Fatal: Error in configuration file empty-config.conf: ssl enabled, but ssl_cert not set default_client_limit = 1000 default_process_limit = 100 default_vsz_limit = 256 M recipient_delimiter = + client_limit = 0 process_limit = 1 vsz_limit = 18446744073709551615 B client_limit = 1 process_limit = 0 vsz_limit = 18446744073709551615 B to find out that the default process limit isn't 1000. From tss at iki.fi Tue Jan 17 17:27:15 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 17 Jan 2012 17:27:15 +0200 Subject: [Dovecot] resolve mail_home ? In-Reply-To: <20120117083619.GA21186@dibs.tanso.net> References: <20120117083619.GA21186@dibs.tanso.net> Message-ID: <1326814035.11500.9.camel@innu> On Tue, 2012-01-17 at 09:36 +0100, Jan-Frode Myklebust wrote: > I now have "mail_home = /srv/mailstore/%256RHu/%d/%n". Is there any way > of asking dovecot where a user's home directory is? No.. > It's not in "doveadm user": > > $ doveadm user -f home janfrode at lyse.net > $ doveadm user janfrode at tanso.net > userdb: janfrode at tanso.net > mail : mdbox:~/mdbox > quota_rule: *:storage=1048576 Right, because it's a default setting, not something that comes from a userdb lookup. > Alternatively, is there an easy way to calculate the %256RHu hash ? Nope.. Maybe a new command, or maybe a parameter to doveadm user that would show mail_uid/gid/home. Or maybe something that dumps config output with %vars expanded to the given user. Hmm. From tss at iki.fi Tue Jan 17 17:30:11 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 17 Jan 2012 17:30:11 +0200 Subject: [Dovecot] per-user limit? In-Reply-To: <20120116091521.GA10944@gir.theapt.org> References: <20120116091521.GA10944@gir.theapt.org> Message-ID: <1326814211.11500.11.camel@innu> On Mon, 2012-01-16 at 10:15 +0100, Peter Hessler wrote: > I am seeing a problem where users are limited to 6 imap logins total. > One of my users has a bunch of phones and computers, and wants them all > on at the same time. > > I'm looking through my configuration, and I cannot see a limit on how > many times a single user can connect. He is connecting from different > IPs. > > Any ideas? My logs show the following error when they attempt to auth > for a 7th time: > > dovecot: imap-login: Disconnected (no auth attempts): rip=111.yy.zz.xx, lip=81.209.183.113, TLS This means that the client simply didn't try to log in. If Dovecot reaches some kind of a limit, it logs about that. If there isn't anything else logged, I don't think the problem is in Dovecot itself. Can you reproduce this yourself by logging in with e.g. telnet? From javierdemiguel at us.es Tue Jan 17 17:35:17 2012 From: javierdemiguel at us.es (=?UTF-8?B?SmF2aWVyIE1pZ3VlbCBSb2Ryw61ndWV6?=) Date: Tue, 17 Jan 2012 16:35:17 +0100 Subject: [Dovecot] resolve mail_home ? In-Reply-To: <1326814035.11500.9.camel@innu> References: <20120117083619.GA21186@dibs.tanso.net> <1326814035.11500.9.camel@innu> Message-ID: <4F159535.1070701@us.es> That comand/paramater should be great for our backup scripts in our hashed mdboxes tree, we are using now slocate... Regards Javier > Nope.. > > Maybe a new command, or maybe a parameter to doveadm user that would > show mail_uid/gid/home. Or maybe something that dumps config output with > %vars expanded to the given user. Hmm. 
From CMarcus at Media-Brokers.com Tue Jan 17 18:13:44 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Tue, 17 Jan 2012 11:13:44 -0500
Subject: [Dovecot] imap-login process_limit reached
In-Reply-To: <4F158C7C.4070209@orlitzky.com>
References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <716809841.20120117143101@gmail.com> <4F158C7C.4070209@orlitzky.com>
Message-ID: <4F159E38.5020802@Media-Brokers.com>

On 2012-01-17 9:58 AM, Michael Orlitzky wrote:
> Using -a shows you all settings, as they're running in your
> installation. That's the defaults, except where they're overridden by
> your config.
>
> I was asking for the defaults regardless of what's in my config file, so
> that I don't have to deduce them from the combined doveconf output & my
> config file.

Yeah, I had suggested this to Timo a long time ago when I suggested doveconf -n (the way postfix does it), but I don't think he ever did the -d option... maybe it got lost in the shuffle...

--
Best regards,

Charles

From buchholz at easystreet.net Tue Jan 17 20:15:28 2012
From: buchholz at easystreet.net (Don Buchholz)
Date: Tue, 17 Jan 2012 10:15:28 -0800
Subject: [Dovecot] imap-login process_limit reached
In-Reply-To: <4F158473.1000901@orlitzky.com>
References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com>
Message-ID: <4F15BAC0.3060003@easystreet.net>

Michael Orlitzky wrote:
> First of all, feature request:
>
> doveconf -d
> show the default value of all settings
>
> On 01/16/12 17:41, Don Buchholz wrote:
>
>> The 'doveconf -n' output is attached. I have not set any
>> "process_limit" values, and I don't think I'm getting anywhere close to
>> the 1024 default, so I'm pretty confused as to what might be wrong.
>>
>> Any suggestions on what to do next are appreciated.
>>
>
> What makes you think 1024 is the default? We had to increase it. It
> shows up in doveconf -n output, so I don't think that's the default.
>
> # doveconf -n | grep limit
> default_process_limit = 1024
>

What makes me think 1024 is the default? The documentation:
--> http://wiki2.dovecot.org/Services?highlight=%28process_limit%29#imap.2C_pop3.2C_managesieve

From michael at orlitzky.com Tue Jan 17 20:30:02 2012
From: michael at orlitzky.com (Michael Orlitzky)
Date: Tue, 17 Jan 2012 13:30:02 -0500
Subject: [Dovecot] imap-login process_limit reached
In-Reply-To: <4F15BAC0.3060003@easystreet.net>
References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net>
Message-ID: <4F15BE2A.6010605@orlitzky.com>

On 01/17/12 13:15, Don Buchholz wrote:
>
> What makes me think 1024 is the default?
> The documentation:
> -->
> http://wiki2.dovecot.org/Services?highlight=%28process_limit%29#imap.2C_pop3.2C_managesieve
>

That's only for those three services (imap, pop3, managesieve), not for imap-login unfortunately. Check here for more info,

http://wiki2.dovecot.org/LoginProcess

but the good part is,

Since one login process can handle only one connection, the service's process_limit setting limits the number of users that can be logging in at the same time (defaults to default_process_limit=100).
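For reference, the fix that follows from this is to give imap-login an explicit limit instead of letting it inherit default_process_limit. A sketch only; the value here is arbitrary and must cover peak concurrent logins:

    service imap-login {
      # with the default service_count=1, each login process handles
      # exactly one connection, so this caps simultaneous logins
      process_limit = 1000
      # alternatively, high-performance mode: one process serves
      # many connections
      #service_count = 0
    }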
From buchholz at easystreet.net Tue Jan 17 21:02:37 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Tue, 17 Jan 2012 11:02:37 -0800 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15BAC0.3060003@easystreet.net> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> Message-ID: <4F15C5CD.80904@easystreet.net> Don Buchholz wrote: > Michael Orlitzky wrote: >> First of all, feature request: >> >> doveconf -d >> show the default value of all settings >> >> >> On 01/16/12 17:41, Don Buchholz wrote: >> >>> The 'doveconf -n' output is attached. I have not set any >>> "process_limit" values, and I don't think I'm getting anywhere close to >>> the 1024 default, so I'm pretty confused as to what might be wrong. >>> >>> Any suggestions on what to do next are appreciated. >>> >> >> >> What makes you think 1024 is the default? We had to increase it. It >> shows up in doveconf -n output, so I don't think that's the default. >> >> # doveconf -n | grep limit >> default_process_limit = 1024 >> > What makes me think 1024 is the default? > The documentation: > --> > http://wiki2.dovecot.org/Services?highlight=%28process_limit%29#imap.2C_pop3.2C_managesieve > > But, Michael's right, documentation can be wrong. So, I dumped the entire configuration. Here are the values found on the running system. Both imap and pop3 services have "process_limit = 1024". | [root at postal ~]# doveconf -a | # 2.0.8: /etc/dovecot/dovecot.conf | # OS: Linux 2.6.9-67.0.1.ELsmp i686 Red Hat Enterprise Linux WS release 4 (Nahant Update 6) ext3 | ... | default_process_limit = 100 | ... | service anvil { | ... | process_limit = 1 | ... | } | service auth-worker { | ... | process_limit = 0 | ... | } | service auth { | ... | process_limit = 1 | ... | } | service config { | ... | process_limit = 0 | ... | } | service dict { | ... | process_limit = 0 | ... | } | service director { | ... | process_limit = 1 | ... | } | service dns_client { | ... | process_limit = 0 | ... | } | service doveadm { | ... | process_limit = 0 | ... | } | service imap-login { | ... | process_limit = 0 | ... | } | service imap { | ... | process_limit = 1024 | ... | } | service lmtp { | ... | process_limit = 0 | ... | } | service log { | ... | process_limit = 1 | ... | } | service pop3-login { | ... | process_limit = 0 | ... | } | service pop3 { | ... | process_limit = 1024 | ... | } | service ssl-params { | ... | process_limit = 0 | ... | } From michael at orlitzky.com Tue Jan 17 21:12:55 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Tue, 17 Jan 2012 14:12:55 -0500 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15C5CD.80904@easystreet.net> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> <4F15C5CD.80904@easystreet.net> Message-ID: <4F15C837.2020002@orlitzky.com> On 01/17/12 14:02, Don Buchholz wrote: >> > But, Michael's right, documentation can be wrong. So, I dumped the > entire configuration. Here are the values found on the running system. > Both imap and pop3 services have "process_limit = 1024". > You probably just posted this while my last message was in-flight, but just in case, 'imap' and 'imap-login' are different, and have different process limits. As the title of the thread suggests, you're out of imap-login processes, not imap ones. 
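To pull a single service's effective limits out of that wall of doveconf -a output, plain sed/grep is enough; a small sketch:

    doveconf -a | sed -n '/^service imap-login {/,/^}/p' | grep _limit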
From buchholz at easystreet.net Tue Jan 17 21:48:29 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Tue, 17 Jan 2012 11:48:29 -0800 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15BE2A.6010605@orlitzky.com> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> <4F15BE2A.6010605@orlitzky.com> Message-ID: <4F15D08D.4070209@easystreet.net> Michael Orlitzky wrote: > On 01/17/12 13:15, Don Buchholz wrote: > >>> >>> >> What makes me think 1024 is the default? >> The documentation: >> --> >> http://wiki2.dovecot.org/Services?highlight=%28process_limit%29#imap.2C_pop3.2C_managesieve >> >> > > That's only for those three services (imap, pop3, managesieve), not for > imap-login unfortunately. Check here for more info, > > http://wiki2.dovecot.org/LoginProcess > > but the good part is, > > Since one login process can handle only one connection, the > service's process_limit setting limits the number of users that can > be logging in at the same time (defaults to > default_process_limit=100). > Doh! Thanks, Michael. I wasn't looking at the original error message closely enough. I scanned too quickly and saw "service(imap)" and not "service(imap-login)". Now the failure when there are only ~200 (total) dovecot processes makes sense (because about half of the processes here are dovecot/imap-login). I've added the following to our configuration: service imap-login { process_limit = 500 process_min_avail = 2 } Thanks for your help ... and patience. - Don From buchholz at easystreet.net Tue Jan 17 21:52:19 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Tue, 17 Jan 2012 11:52:19 -0800 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15C837.2020002@orlitzky.com> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> <4F15C5CD.80904@easystreet.net> <4F15C837.2020002@orlitzky.com> Message-ID: <4F15D173.9090103@easystreet.net> Michael Orlitzky wrote: > On 01/17/12 14:02, Don Buchholz wrote: > >> But, Michael's right, documentation can be wrong. So, I dumped the >> entire configuration. Here are the values found on the running system. >> Both imap and pop3 services have "process_limit = 1024". >> >> > > You probably just posted this while my last message was in-flight, but > just in case, 'imap' and 'imap-login' are different, and have different > process limits. > > As the title of the thread suggests, you're out of imap-login processes, > not imap ones. > Yup! ... see reply on other branch in this thread. Thanks again! - Don From michael at orlitzky.com Tue Jan 17 22:13:22 2012 From: michael at orlitzky.com (Michael Orlitzky) Date: Tue, 17 Jan 2012 15:13:22 -0500 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <4F15D08D.4070209@easystreet.net> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <4F15BAC0.3060003@easystreet.net> <4F15BE2A.6010605@orlitzky.com> <4F15D08D.4070209@easystreet.net> Message-ID: <4F15D662.9030601@orlitzky.com> On 01/17/12 14:48, Don Buchholz wrote: >> > Doh! Thanks, Michael. I wasn't looking at the original error message > closely enough. I scanned too quickly and saw "service(imap)" and not > "service(imap-login)". Now the failure when there are only ~200 (total) > dovecot processes makes sense (because about half of the processes here > are dovecot/imap-login). > > ... > > Thanks for your help ... and patience. 
>

No problem, I went through the exact same process when we hit the limit.

From interfasys at gmail.com Wed Jan 18 03:03:46 2012
From: interfasys at gmail.com (=?UTF-8?B?aW50ZXJmYVN5cyBzw6BybA==?=)
Date: Wed, 18 Jan 2012 01:03:46 +0000
Subject: [Dovecot] [Dovecot 2.1] ACL plugin makes imap service crash when using some clients
Message-ID: <4F161A72.8030400@gmail.com>

Hello,

I've just noticed that when Horde is connecting to Dovecot 2.1, it crashes the imap service if Dovecot is configured to use the ACL plugin. I'm not sure what's so special about the command Horde sends, but it shouldn't make Dovecot crash. Everything is fine when using Thunderbird.

Here is the message in Dovecot's logs
"Fatal: master: service(imap): child 89974 killed with signal 11 (core not dumped)"

The message says that the core is not dumped, even though I did add drop_priv_before_exec=yes to my config file.

I've tried connecting to the pid using gdb, but the process just hangs as soon as I'm connected.

Cheers,
Olivier

From user+dovecot at localhost.localdomain.org Wed Jan 18 03:33:19 2012
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Date: Wed, 18 Jan 2012 02:33:19 +0100
Subject: [Dovecot] [Dovecot 2.1] ACL plugin makes imap service crash when using some clients
In-Reply-To: <4F161A72.8030400@gmail.com>
References: <4F161A72.8030400@gmail.com>
Message-ID: <4F16215F.5000909@localhost.localdomain.org>

On 01/18/2012 02:03 AM interfaSys sàrl wrote:
> Hello,
>
> I've just noticed that when Horde is connecting to Dovecot 2.1, it
> crashes the imap service if Dovecot is configured to use the ACL plugin.
> I'm not sure what's so special about the command Horde sends, but it
> shouldn't make Dovecot crash. Everything is fine when using Thunderbird.
>
> Here is the message in Dovecot's logs
> "Fatal: master: service(imap): child 89974 killed with signal 11 (core
> not dumped)"
>
> The message says that the core is not dumped, even though I did add
> drop_priv_before_exec=yes to my config file.

dovecot stop
ulimit -c unlimited
dovecot

Now connect with Horde and let it crash.

> I've tried connecting to the pid using gdb, but the process just hangs
> as soon as I'm connected.

continue
[wait for the crash]
bt full
detach
quit

Regards,
Pascal
--
The trapper recommends today: cafefeed.1201802 at localdomain.org

From gordon.grubert+lists at uni-greifswald.de Wed Jan 18 14:02:58 2012
From: gordon.grubert+lists at uni-greifswald.de (Gordon Grubert)
Date: Wed, 18 Jan 2012 13:02:58 +0100
Subject: [Dovecot] Dovecot crashes totally - SOLVED
In-Reply-To: <4EB6D845.7040208@uni-greifswald.de>
References: <4EA317B5.3090209@uni-greifswald.de> <1320435812.21919.150.camel@hurina> <4EB6D845.7040208@uni-greifswald.de>
Message-ID: <4F16B4F2.5050107@uni-greifswald.de>

On 11/06/2011 07:56 PM, Gordon Grubert wrote:
> On 11/04/2011 08:43 PM, Timo Sirainen wrote:
>> On Sat, 2011-10-22 at 21:21 +0200, Gordon Grubert wrote:
>>> Hello,
>>>
>>> our dovecot server crashes totally without any really useful
>>> log messages. The error log can be found in the attachment.
>>> The only way to get dovecot running again is a complete
>>> system restart.
>>
>> How often does it break? If really a "complete system restart" is needed
>> to fix it, it doesn't sound like a Dovecot problem. Check if it's enough
>> to stop dovecot and then make sure there aren't any dovecot processes
>> lying around afterwards.
> Currently, the problem occurred three times. The last time some days
> ago.
> The last "crash" was in the night and, therefore, we used the
> chance for a detailed debugging of the system.
>
> You could be right that it's not a dovecot problem. Next to dovecot,
> we found other processes hanging that could not be killed by "kill -9".
> Additionally, we found that all of these processes had one thing in
> common: they hung while trying to access the mailbox volume. Therefore,
> we repaired the filesystem. Now, we're watching the system ...
>
>>> Oct 11 09:55:23 mailserver2 dovecot: master: Error: service(imap):
>>> Initial status notification not received in 30 seconds, killing the
>>> process
>>> Oct 11 09:56:23 mailserver2 dovecot: imap-login: Error: master(imap):
>>> Auth request timed out (received 0/12 bytes)
>>
>> Kind of looks like auth process is hanging. You could see if stracing it
>> shows anything useful. Also are any errors logged about LDAP? Is LDAP
>> running on the same server?
> Dovecot authenticates against postfix and postfix has an LDAP
> connection. The LDAP is running on an external cluster. Here,
> no errors are reported.
>
> We hope that the filesystem error was the reason for the problem
> and that the problem is fixed by repairing it.

During the last two months, no error occurred. Therefore, the filesystem problem seems to have been the reason for the dovecot crashes.

Thx and best regards,
Gordon

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5396 bytes
Desc: S/MIME Cryptographic Signature
URL:

From tss at iki.fi Wed Jan 18 14:23:00 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 14:23:00 +0200
Subject: [Dovecot] imap-login process_limit reached
In-Reply-To: <4F14A7AA.8010507@easystreet.net>
References: <4F14A7AA.8010507@easystreet.net>
Message-ID: <1326889380.11500.16.camel@innu>

On Mon, 2012-01-16 at 14:41 -0800, Don Buchholz wrote:
> I've been having some problems with IMAP user connections to the Dovecot
> (v2.0.8) server. The following message is being logged.
>
> Jan 16 10:51:36 postal dovecot: master: Warning:
> service(imap-login): process_limit reached, client connections are
> being dropped

Maybe this will help some in future:
http://hg.dovecot.org/dovecot-2.1/rev/a4e61c99c7eb

The new error message is:

service(imap-login): process_limit (100) reached, client connections are being dropped

From lee at standen.id.au Wed Jan 18 14:44:35 2012
From: lee at standen.id.au (Lee Standen)
Date: Wed, 18 Jan 2012 20:44:35 +0800
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
Message-ID: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>

Hi Guys,

I've been desperately trying to find some comparative performance information about the different mailbox formats supported by Dovecot in order to make an assessment on which format is right for our environment.

This is a brand new build, with customer mailboxes to be migrated in over the course of 3-4 months.
Some details on our new environment:

* Approximately 1.6M+ mailboxes once all legacy systems are combined
* NetApp FAS6280 storage w/ 120TB usable for mail storage, 1TB of FlashCache in each controller
* All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames)
* Postfix will feed new email to Dovecot via LMTP
* Dovecot servers have been split based on their role
  - Dovecot LDA Servers (running LMTP protocol)
  - Dovecot POP/IMAP servers (running POP/IMAP protocols)
  - LDA & POP/IMAP servers are segmented into geographically split groups (so no server sees every single mailbox)
  - Nginx proxy used to terminate customer connections, connections are redirected to the appropriate geographic servers
* Apache Lucene indexes will be used to accelerate IMAP search for users

Our closest current live configuration (Qmail SMTP, Courier IMAP, Maildir) has 600K mailboxes and pushes ~ 35,000 NFS operations per second at peak.

Some of the things I would like to know:

* Are we likely to see a reduction in IOPS/User by using Maildir alone under Dovecot?
* What kind of IOPS/User reduction could we expect to see under mdbox?
* Can someone give some technical reasoning behind why mdbox does less IOPS than Maildir?

I understand some of the reasons for the mdbox IOPS question, but I need some more information so we can discuss internally and make a decision as to whether we're comfortable going with mdbox from day one. We're very familiar with Maildir, and there's just some uneasiness internally around going to a new mail storage format.

Thanks!

From tss at iki.fi Wed Jan 18 14:58:15 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 14:58:15 +0200
Subject: [Dovecot] Dovecot Solutions company update
Message-ID: <1326891495.11500.32.camel@innu>

Hi,

A small update: My Dovecot support company finally has web pages:
http://www.dovecot.fi/

We've also started providing 24/7 support.

From robert at schetterer.org Wed Jan 18 15:05:57 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Wed, 18 Jan 2012 14:05:57 +0100
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
Message-ID: <4F16C3B5.80404@schetterer.org>

Am 18.01.2012 13:44, schrieb Lee Standen:
> Hi Guys,
>
> I've been desperately trying to find some comparative performance
> information about the different mailbox formats supported by Dovecot in
> order to make an assessment on which format is right for our environment.
>
> This is a brand new build, with customer mailboxes to be migrated in over
> the course of 3-4 months.
> Some details on our new environment:
>
> * Approximately 1.6M+ mailboxes once all legacy systems are combined
>
> * NetApp FAS6280 storage w/ 120TB usable for mail storage, 1TB of FlashCache
> in each controller
>
> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames)

NFS may not be optimal here, a cluster filesystem might be better, but that is a heavy separate discussion.

> * Postfix will feed new email to Dovecot via LMTP

perfect

> * Dovecot servers have been split based on their role
>
> - Dovecot LDA Servers (running LMTP protocol)
>
> - Dovecot POP/IMAP servers (running POP/IMAP protocols)
>
> - LDA & POP/IMAP servers are segmented into geographically split groups
> (so no server sees every single mailbox)
>
> - Nginx proxy used to terminate customer connections, connections are
> redirected to the appropriate geographic servers
>
> * Apache Lucene indexes will be used to accelerate IMAP search for users

sounds ok

> Our closest current live configuration (Qmail SMTP, Courier IMAP, Maildir)
> has 600K mailboxes and pushes ~ 35,000 NFS operations per second at peak

wow, that's big

> Some of the things I would like to know:
>
> * Are we likely to see a reduction in IOPS/User by using Maildir alone under
> Dovecot?
>
> * What kind of IOPS/User reduction could we expect to see under mdbox?

There should be people on the list who know this from migrations they have done.

> * If someone can give some technical reasoning behind why mdbox does less
> IOPS than Maildir?

As far as I remember, mdbox takes 8 mails per file (I am not using it currently, so I didn't investigate it); better wait for a more qualified answer. Anyway, mdbox seems recommended in your case; in our last plans, for about 25k mailboxes, we decided on mdbox, as far as I remember...

> I understand some of the reasons for the mdbox IOPS question, but I need
> some more information so we can discuss internally and make a decision as to
> whether we're comfortable going with mdbox from day one. We're very
> familiar with Maildir, and there's just some uneasiness internally around
> going to a new mail storage format.
>
> Thanks!

From my personal knowledge, I/O on the storage has the most influence on performance, assuming all the other parts of the setup are already optimal. Wait a little bit; I guess more matching answers will come up. And after all, you can hire someone, perhaps Timo, if you get stuck on something.

--
Best Regards

MfG Robert Schetterer

Germany/Munich/Bavaria

From javierdemiguel at us.es Wed Jan 18 15:27:52 2012
From: javierdemiguel at us.es (=?ISO-8859-1?Q?Javier_Miguel_Rodr=EDguez?=)
Date: Wed, 18 Jan 2012 14:27:52 +0100
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
Message-ID: <4F16C8D8.1090804@us.es>

Spanish edu site here: 80k users, 4.5 TB of email, 6,000 iops (indexes) + 9,000 iops (mdboxes) in working hours. We evaluated mdbox against Maildir and found that with these settings dovecot 2 performs better than Maildir:

mdbox_rotate_interval = 1d
mdbox_rotate_size = 60m
zlib_save_level = 9 # 1..9
zlib_save = gz # or bz2

We detected 40% fewer iops with this setup *in working hours (more info below)*. Zlib saved some writes (15-30%). With mdbox, deletion of a message is written to the indexes (use SSD for this), and a nightly cronjob deletes the real message from the mdbox; this saves us some iops in working hours.
Also, backup software is MUCH happier handling hundreds of thousands of files (mdbox) versus tens of millions (maildir).

Mdbox also has drawbacks: you have to be VERY careful with your indexes, they contain data that can not be rebuilt from the mdboxes. The nightly cronjob "purging" the mdboxes hammers the SAN. Full backup time is reduced, but incremental backup space & time increase: if you delete a message, after "purging" it from the mdbox the mdbox file changes (size and date), so the incremental backup has to copy it again.

Regards

Javier

From email at randpoger.org Wed Jan 18 15:29:31 2012
From: email at randpoger.org (email at randpoger.org)
Date: Wed, 18 Jan 2012 14:29:31 +0100
Subject: [Dovecot] Dovecot did not accept Login from Host
Message-ID: <192f7dbb6b6c9e71bd44c41f08097a92-EhVcX1lATAFfWEQABwoYZ1dfaANWUkNeXEJbAVo1WEdQS1oIXkF3CEtXWV4wQEYAWVJQQ1tSWQ==-webmailer2@server06.webmailer.hosteurope.de>

Hi!

My Dovecot is running and I can connect + login through telnet:

--------------------------------------------
>> telnet localhost 143
Trying ::1... Trying 127.0.0.1... Connected to localhost. Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE STARTTLS AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
>> 1 login user passwort
1 OK [...] Logged in
--------------------------------------------

But through my domain I can only connect; the login then fails:

--------------------------------------------
>> telnet domain.de 143
Trying xx.xxx.xxx.xx... Connected to domain.de. Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE STARTTLS AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
>> 1 login user passwort
1 NO [AUTHENTICATIONFAILED] Authentication failed.
--------------------------------------------

My dovecot.conf:

--------------------------------------------
protocols = imap imaps
ssl_cert_file = /etc/ssl/certs/dovecot.pem
ssl_key_file = /etc/ssl/private/dovecot.pem
mail_location = /var/mail/%u
log_path = /var/log/dovecot.log
log_timestamp = "%Y-%m-%d %H:%M:%S "
auth_verbose = yes
auth_debug = yes
protocol imap {
}
auth default {
  mechanisms = plain login
  passdb pam {
  }
  userdb passwd {
  }
  user = root
}
--------------------------------------------

If I try to connect + login through the domain, dovecot writes NOTHING into the log. Any ideas about this?

From tss at iki.fi Wed Jan 18 15:54:31 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 15:54:31 +0200
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au>
Message-ID: <1326894871.11500.45.camel@innu>

On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote:
> I've been desperately trying to find some comparative performance
> information about the different mailbox formats supported by Dovecot in
> order to make an assessment on which format is right for our environment.

Unfortunately there aren't really any. Everyone who seems to switch to sdbox/mdbox usually also changes their hardware at the same time, so there aren't really any before/after metrics. I have, of course, some unrealistic synthetic benchmarks, but I don't think they are very useful.

So, I would also be very interested in seeing some before/after graphs of disk IO, CPU and memory usage of a Maildir -> dbox switch on the same hardware.

Maildir is anyway definitely worse for performance than sdbox or mdbox.
mdbox also uses less NFS operations, but I don't know how much faster (if any) it is with Netapps. > * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames) > > * Postfix will feed new email to Dovecot via LMTP > > * Dovecot servers have been split based on their role > > - Dovecot LDA Servers (running LMTP protocol) > > - Dovecot POP/IMAP servers (running POP/IMAP protocols) You're going to run into NFS caching troubles with the above split setup. I don't recommend it. You will see error messages about index corruption with it, and with dbox it can cause metadata loss. http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director > - LDA & POP/IMAP servers are segmented into geographically split groups > (so no server sees every single mailbox) > > - Nginx proxy used to terminate customer connections, connections are > redirected to the appropriate geographic servers Can the same mailbox still be accessed via multiple geographic servers? I've had some plans for doing this kind of access/replication using dsync.. > * Apache Lucene indexes will be used to accelerate IMAP search for users Dovecot's fts-solr or fts-lucene? > Our closest current live configuration (Qmail SMTP, Courier IMAP, Maildir) > has 600K mailboxes and pushes ~ 35,000 NFS operations per second at peak > > Some of the things I would like to know: > > * Are we likely to see a reduction in IOPS/User by using Maildir alone under > Dovecot? If you have webmail type of clients, definitely. For Outlook/Thunderbird you should still see improvement, but not necessarily as much. You didn't mention POP3. That isn't Dovecot's strong point. Its performance should be about the same as Courier-POP3, but could be less than QMail-POP3. Although if many of your POP3 users keep a lot of mails on server it > * If someone can give some technical reasoning behind why mdbox does less > IOPS than Maildir? Maildir renames files a lot. From new/ -> to cur/ and then every time message flag changes. That's why sdbox is faster. Why mdbox should be faster than sdbox is because mdbox puts (or should put) more mail data physically closer in disks to make reading it faster. > I understand some of the reasons for the mdbox IOPS question, but I need > some more information so we can discuss internally and make a decision as to > whether we're comfortable going with mdbox from day one. We're very > familiar with Maidlir, and there's just some uneasiness internally around > going to a new mail storage format. It's at least safer to first switch to Dovecot+Maildir to make sure that any problems you might find aren't related to the mailbox format.. From ebroch at whitehorsetc.com Wed Jan 18 16:20:31 2012 From: ebroch at whitehorsetc.com (Eric Broch) Date: Wed, 18 Jan 2012 07:20:31 -0700 Subject: [Dovecot] shared folder files not displaying in thunderbird Message-ID: <4F16D52F.2040907@whitehorsetc.com> Hello, I have dovecot installed with the configuration below. One of the subfolders created (using the email client) under the '/home/vpopmail/domains/mydomain.com/shared/projects' share no longer (it used to) displays the files located in it. There are about 150 folders under the '/home/vpopmail/domains/mydomain.com/shared/projects' share all of which display the files located in them, the one mentioned used to display the contents but no longer does. What would be the reason that one folder would no longer display existing files in the email client (Thunderbird) and the other folders would? And, how do I fix this? 
I've already tried unsubscribing and resubscribing the folder. This did not work. Would it now simply be a matter of unsubscribing the folder, deleting the dovecot files, and resubscribing to the folder?

Eric

# 2.0.11: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-238.19.1.el5 i686 CentOS release 5.7 (Final)
auth_cache_size = 32 M
auth_mechanisms = plain login digest-md5 cram-md5
auth_username_format = %Lu
disable_plaintext_auth = no
first_valid_uid = 89
log_path = /var/log/dovecot.log
login_greeting = Dovecot toaster ready.
namespace {
  inbox = yes
  location =
  prefix = INBOX.
  separator = .
  type = private
}
namespace {
  location = maildir:/home/vpopmail/domains/mydomain.com/shared/projects
  prefix = projects.
  separator = .
  type = public
}
passdb {
  args = cache_key=%u webmail=127.0.0.1
  driver = vpopmail
}
plugin/quota = maildir
protocols = imap
ssl_cert =

References: <1326891495.11500.32.camel@innu>
Message-ID: <4F16D607.5030800@schetterer.org>

Am 18.01.2012 13:58, schrieb Timo Sirainen:
> Hi,
>
> A small update: My Dovecot support company finally has web pages:
> http://www.dovecot.fi/
>
> We've also started providing 24/7 support.

Hi Timo, very cool !

--
Best Regards

MfG Robert Schetterer

Germany/Munich/Bavaria

From tss at iki.fi Wed Jan 18 16:32:36 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 16:32:36 +0200
Subject: [Dovecot] Dovecot unable to locate mailbox
In-Reply-To:
References:
Message-ID: <1326897156.11500.51.camel@innu>

On Mon, 2012-01-16 at 14:38 +0200, Jason X, Maney wrote:
> Jan 16 14:18:16 myservername dovecot: pop3(userA): Error: user molla:
> Initialization failed: mail_location not set and autodetection failed: Mail
> storage autodetection failed with home=/home/userA

As it says.

> Yet my config also comes out strangely as below:
>
> =========
> root at guyana:~# dovecot -n
> # 2.0.13: /etc/dovecot/dovecot.conf
> # OS: Linux 3.0.0-12-server x86_64 Ubuntu 11.10
> passdb {
>   driver = pam
> }
> protocols = " imap pop3"
> ssl_cert = ssl_key = userdb {
>   driver = passwd
> }
> root at guyana:~#
> =========

There is no mail_location above. This is the configuration Dovecot sees.

> My mailbox location setting is as follows:
>
> =========
> cat conf.d/10-mail.conf |grep mail_location

Look at the /etc/dovecot/dovecot.conf file. Do you see !include conf.d/*.conf in there? Probably not, so those files aren't being read.

From tss at iki.fi Wed Jan 18 16:34:18 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 16:34:18 +0200
Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1
In-Reply-To: <4F155670.6010905@gmail.com>
References: <4F155670.6010905@gmail.com>
Message-ID: <1326897258.11500.53.camel@innu>

On Tue, 2012-01-17 at 11:07 +0000, interfaSys sàrl wrote:
> Here is what I get when I try to compile the antispam plugin against
> Dovecot 2.1
>
> **************
> mailbox.c: In function 'antispam_save_begin':
> mailbox.c:138:12: error: 'struct mail_save_context' has no member named
> 'copying'

The "copying" should be changed to "copying_via_save".
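For anyone hitting the same mail_location symptom, the one-line fix Timo is pointing at looks like this (paths per a stock package layout):

    # at the top of /etc/dovecot/dovecot.conf
    !include conf.d/*.conf

    # then check that the setting is actually seen:
    # doveconf mail_location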
From lee at standen.id.au Wed Jan 18 16:36:45 2012 From: lee at standen.id.au (Lee Standen) Date: Wed, 18 Jan 2012 22:36:45 +0800 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox Message-ID: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> On 18.01.2012 21:54, Timo Sirainen wrote: > On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote: > >> I've been desperately trying to find some comparative performance >> information about the different mailbox formats supported by Dovecot >> in >> order to make an assessment on which format is right for our >> environment. > > Unfortunately there aren't really any. Everyone who seems to switch > to > sdbox/mdbox usually also change their hardware at the same time, so > there aren't really any before/after metrics. I've of course some > unrealistic synthetic benchmarks, but I don't think they are very > useful. > > So, I would also be very interested in seeing some before/after > graphs > of disk IO, CPU and memory usage of Maildir -> dbox switch in same > hardware. > > Maildir is anyway definitely worse performance then sdbox or mdbox. > mdbox also uses less NFS operations, but I don't know how much faster > (if any) it is with Netapps. We have bought new hardware for this project too, so we might not be able to help out massively on that front... we do have NFS operations monitored though so we should at least be able to compare that metric since the underlying storage operating system is the same. All NetApp hardware runs their Data ONTAP operating system, so the metrics are assured to be the same :) How about this... are there any tools available (that you know of) to capture real live customer POP3/IMAP traffic and replay it against a separate system? That might be a feasible option for doing a like-for-like comparison in our environment? We could probably get something in place to simulate the load if we can do something like that... >> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo >> Frames) >> >> * Postfix will feed new email to Dovecot via LMTP >> >> * Dovecot servers have been split based on their role >> >> - Dovecot LDA Servers (running LMTP protocol) >> >> - Dovecot POP/IMAP servers (running POP/IMAP protocols) > > You're going to run into NFS caching troubles with the above split > setup. I don't recommend it. You will see error messages about index > corruption with it, and with dbox it can cause metadata loss. > http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director That might be the one thing (unfortunately) which prevents us from going with the dbox format. I understand the same issue can actually occur on Dovecot Maildir as well, but because Maildir works without these index files, we were willing to just go with it. I will raise it again, but there has been a lot of push back about introducing a single point of failure, even though this is a perceived one. The biggest challenge I have at the moment if I try to sell the dbox format is providing some kind of data on the expected gains from this. If it's only a 10% reduction in NFS operations for the typical user, then it's probably not worth our while. > >> - LDA & POP/IMAP servers are segmented into geographically split >> groups >> (so no server sees every single mailbox) >> >> - Nginx proxy used to terminate customer connections, connections >> are >> redirected to the appropriate geographic servers > > Can the same mailbox still be accessed via multiple geographic > servers? 
> I've had some plans for doing this kind of access/replication using > dsync.. No, we're using the nginx proxy layer to ensure that if a user in Sydney (for example) tries to access a Perth mailbox, their connection is redirected (by nginx) to the Perth POP/IMAP servers. Postfix configuration is handling the same thing on the LMTP side. The requirement here is for all users to have the same settings regardless of location, but still be able to locate the email servers and data close to the customer. > >> * Apache Lucene indexes will be used to accelerate IMAP search for >> users > > Dovecot's fts-solr or fts-lucene? fts-solr. I've been using Lucene/Solr interchangeably when discussing this project with my peers :) > >> Our closest current live configuration (Qmail SMTP, Courier IMAP, >> Maildir) >> has 600K mailboxes and pushes ~ 35,000 NFS operations per second at >> peak >> >> Some of the things I would like to know: >> >> * Are we likely to see a reduction in IOPS/User by using Maildir >> alone under >> Dovecot? > > If you have webmail type of clients, definitely. For > Outlook/Thunderbird > you should still see improvement, but not necessarily as much. > > You didn't mention POP3. That isn't Dovecot's strong point. Its > performance should be about the same as Courier-POP3, but could be > less > than QMail-POP3. Although if many of your POP3 users keep a lot of > mails > on server it > Our existing systems run with about 21K concurrent IMAP connections at any one point in time, not counting Webmail POP3 runs at about 3600 concurrent connections, but since those are not long lived it's not particularly indicative of customer numbers. Vague recollection is something like 25% IMAP, 55-60% POP3, rest < 20% Webmail. I'd have to go back and check the breakdown again. >> * If someone can give some technical reasoning behind why mdbox does >> less >> IOPS than Maildir? > > Maildir renames files a lot. From new/ -> to cur/ and then every time > message flag changes. That's why sdbox is faster. Why mdbox should be > faster than sdbox is because mdbox puts (or should put) more mail > data > physically closer in disks to make reading it faster. > >> I understand some of the reasons for the mdbox IOPS question, but I >> need >> some more information so we can discuss internally and make a >> decision as to >> whether we're comfortable going with mdbox from day one. We're very >> familiar with Maidlir, and there's just some uneasiness internally >> around >> going to a new mail storage format. > > It's at least safer to first switch to Dovecot+Maildir to make sure > that > any problems you might find aren't related to the mailbox format.. Yep, I'm considering that. The flip side is that it's actually going to be difficult for us to change mail format once we've migrated into this system, but we have an opportunity for (literally) a month long testing phase beginning in Feb/March which will let us test as many possibilities as we can. From tss at iki.fi Wed Jan 18 16:52:58 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 16:52:58 +0200 Subject: [Dovecot] LMTP Logging In-Reply-To: References: Message-ID: <1326898378.11500.54.camel@innu> On Mon, 2012-01-16 at 17:17 -0800, Mark Moseley wrote: > Just had a minor suggestion, with no clue how hard/easy it would be to > implement: > > The %f flag in deliver_log_format seems to pick up the From: header, > instead of the "MAIL FROM:<...>" arg. It'd be handy to have a %F that > shows the "MAIL FROM" arg instead. 
I'm looking at tracking emails > through logs from Exim to Dovecot easily. I know Message-ID can be > used for correlation but it adds some complexity to searching, i.e. I > can't just grep for the sender (as logged by Exim), unless I assume > "MAIL FROM" always == From: Added to v2.1: http://hg.dovecot.org/dovecot-2.1/rev/7ee2cfbcae2e http://hg.dovecot.org/dovecot-2.1/rev/08cc9d2a79e6 From tss at iki.fi Wed Jan 18 16:56:41 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 16:56:41 +0200 Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2) In-Reply-To: <4F13FF00.1050108@miamammausalinux.org> References: <4F13FF00.1050108@miamammausalinux.org> Message-ID: <1326898601.11500.56.camel@innu> On Mon, 2012-01-16 at 11:42 +0100, RaSca wrote: > passdb sql { > args = /etc/dovecot/dovecot-sql.conf > } > userdb passwd { > } > userdb static { > args = uid=5000 gid=5000 home=/mail/mailboxes/%d/%n@%d > allow_all_users=yes > } You're using SQL only for passdb lookup. > plugin { > quota = maildir:/mail/mailboxes/%d/%n@%d The above path probably doesn't do what you intended. It's only the user-visible quota root name. It could just as well be "User quota" or something. > The db connection works, this is /etc/dovecot/dovecot-sql.conf: > > driver = mysql > connect = host= dbname=mail user= password= > default_pass_scheme = CRYPT > password_query = SELECT username, password FROM mailbox WHERE username='%u' > user_query = SELECT username AS user, maildir AS home, > CONCAT('*:storage=', quota , 'B') AS quota_rule FROM mailbox WHERE > username = '%u' AND active = '1' user_query isn't used, because you aren't using userdb sql. From tss at iki.fi Wed Jan 18 17:06:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 17:06:49 +0200 Subject: [Dovecot] v2.1.rc3 released In-Reply-To: <20120116150504.GA28883@shutemov.name> References: <1325878845.17774.38.camel@hurina> <20120116150504.GA28883@shutemov.name> Message-ID: <1326899209.11500.58.camel@innu> On Mon, 2012-01-16 at 17:05 +0200, Kirill A. Shutemov wrote: > ./autogen failed: > > $ ./autogen.sh > libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.in and > libtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree. > libtoolize: Consider adding `-I m4' to ACLOCAL_AMFLAGS in Makefile.am. > src/plugins/fts/Makefile.am:52: `pkglibexecdir' is not a legitimate directory for `SCRIPTS' > Makefile.am:24: `pkglibdir' is not a legitimate directory for `DATA' > autoreconf: automake failed with exit status: 1 > $ automake --version | head -1 > automake (GNU automake) 1.11.2 Looks like automake bug: http://old.nabble.com/Re%3A-Scripts-in-pkglibexecdir--p33070266.html From lee at standen.id.au Wed Jan 18 17:21:33 2012 From: lee at standen.id.au (Lee Standen) Date: Wed, 18 Jan 2012 23:21:33 +0800 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <1326894871.11500.45.camel@innu> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> Message-ID: Out of interest, has the NFS issue been tested on NFS4? My understanding is that NFS4 has a lot of fixes for the locking/caching problems that plague NFS3, and we were planning to use NFS4 from day one. If this hasn't been tested, is there some kind of load simulator that we could run to see if the issue does occur in our environment? 
On 18.01.2012 21:54, Timo Sirainen wrote: > On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote: > >> I've been desperately trying to find some comparative performance >> information about the different mailbox formats supported by Dovecot >> in >> order to make an assessment on which format is right for our >> environment. > > Unfortunately there aren't really any. Everyone who seems to switch > to > sdbox/mdbox usually also change their hardware at the same time, so > there aren't really any before/after metrics. I've of course some > unrealistic synthetic benchmarks, but I don't think they are very > useful. > > So, I would also be very interested in seeing some before/after > graphs > of disk IO, CPU and memory usage of Maildir -> dbox switch in same > hardware. > > Maildir is anyway definitely worse performance then sdbox or mdbox. > mdbox also uses less NFS operations, but I don't know how much faster > (if any) it is with Netapps. > >> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo >> Frames) >> >> * Postfix will feed new email to Dovecot via LMTP >> >> * Dovecot servers have been split based on their role >> >> - Dovecot LDA Servers (running LMTP protocol) >> >> - Dovecot POP/IMAP servers (running POP/IMAP protocols) > > You're going to run into NFS caching troubles with the above split > setup. I don't recommend it. You will see error messages about index > corruption with it, and with dbox it can cause metadata loss. > http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director > >> - LDA & POP/IMAP servers are segmented into geographically split >> groups >> (so no server sees every single mailbox) >> >> - Nginx proxy used to terminate customer connections, connections >> are >> redirected to the appropriate geographic servers > > Can the same mailbox still be accessed via multiple geographic > servers? > I've had some plans for doing this kind of access/replication using > dsync.. > >> * Apache Lucene indexes will be used to accelerate IMAP search for >> users > > Dovecot's fts-solr or fts-lucene? > >> Our closest current live configuration (Qmail SMTP, Courier IMAP, >> Maildir) >> has 600K mailboxes and pushes ~ 35,000 NFS operations per second at >> peak >> >> Some of the things I would like to know: >> >> * Are we likely to see a reduction in IOPS/User by using Maildir >> alone under >> Dovecot? > > If you have webmail type of clients, definitely. For > Outlook/Thunderbird > you should still see improvement, but not necessarily as much. > > You didn't mention POP3. That isn't Dovecot's strong point. Its > performance should be about the same as Courier-POP3, but could be > less > than QMail-POP3. Although if many of your POP3 users keep a lot of > mails > on server it > >> * If someone can give some technical reasoning behind why mdbox does >> less >> IOPS than Maildir? > > Maildir renames files a lot. From new/ -> to cur/ and then every time > message flag changes. That's why sdbox is faster. Why mdbox should be > faster than sdbox is because mdbox puts (or should put) more mail > data > physically closer in disks to make reading it faster. > >> I understand some of the reasons for the mdbox IOPS question, but I >> need >> some more information so we can discuss internally and make a >> decision as to >> whether we're comfortable going with mdbox from day one. We're very >> familiar with Maidlir, and there's just some uneasiness internally >> around >> going to a new mail storage format. 
> > It's at least safer to first switch to Dovecot+Maildir to make sure that
> > any problems you might find aren't related to the mailbox format..

From tss at iki.fi Wed Jan 18 17:28:36 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 17:28:36 +0200
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <7c413ccbddc8e25584311c55672a51e5@standen.id.au>
References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au>
Message-ID: <1326900516.11500.71.camel@innu>

On Wed, 2012-01-18 at 22:36 +0800, Lee Standen wrote:
> How about this... are there any tools available (that you know of) to
> capture real live customer POP3/IMAP traffic and replay it against a
> separate system? That might be a feasible option for doing a
> like-for-like comparison in our environment? We could probably get
> something in place to simulate the load if we can do something like
> that...

I've thought about that too before, but with IMAP traffic it doesn't work very well. Even if the storages were 100% synchronized at startup, the session states could easily become desynced. For example, if a client does a NOOP at the same time when two mails are being delivered to the mailbox, serverA might show only one of them while serverB would show two of them, because it was executed a tiny bit later. All of the client's future commands could then be affected by this desync.

(OK, I wrote the above thinking about a real-time system where you could redirect the client's traffic to two systems, but basically the same problems exist for offline replays too. Although it would be easier to fix the replays to handle this.)

> > You're going to run into NFS caching troubles with the above split
> > setup. I don't recommend it. You will see error messages about index
> > corruption with it, and with dbox it can cause metadata loss.
> > http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director
>
> That might be the one thing (unfortunately) which prevents us from
> going with the dbox format. I understand the same issue can actually
> occur on Dovecot Maildir as well, but because Maildir works without
> these index files, we were willing to just go with it.

Are you planning on also redirecting POP3/IMAP connections somewhat randomly to the different servers? I really don't recommend that, even with Maildir.. Some of the errors will be user visible, even if no actual data loss happens. Users may get disconnected, and sometimes might have to clean their client's cache.

> I will raise it
> again, but there has been a lot of push back about introducing a single
> point of failure, even though this is a perceived one.

What is a single point of failure there?

> > It's at least safer to first switch to Dovecot+Maildir to make sure
> > that any problems you might find aren't related to the mailbox format..
>
> Yep, I'm considering that. The flip side is that it's actually going
> to be difficult for us to change mail format once we've migrated into
> this system, but we have an opportunity for (literally) a month long
> testing phase beginning in Feb/March which will let us test as many
> possibilities as we can.

The mailbox format switching can be done one user at a time with zero downtime with dsync.
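As a sketch of that per-user switch (the mirror syntax is from the v2.x migration docs; the user name is just an example):

    # convert one user's Maildir to mdbox while the old mailbox stays live
    dsync -u jane@example.com mirror mdbox:~/mdbox

    # run it once more to pick up last-minute changes, then point that
    # user's mail_location / userdb mail field at mdbox:~/mdbox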
From tss at iki.fi Wed Jan 18 17:34:54 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 17:34:54 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> Message-ID: <1326900894.11500.74.camel@innu> On Wed, 2012-01-18 at 23:21 +0800, Lee Standen wrote: > Out of interest, has the NFS issue been tested on NFS4? My > understanding is that NFS4 has a lot of fixes for the locking/caching > problems that plague NFS3, and we were planning to use NFS4 from day > one. I've tried with Linux NFS4 server+client a few years ago. It seemed to have all the same caching problems as NFS3. > If this hasn't been tested, is there some kind of load simulator that > we could run to see if the issue does occur in our environment? http://imapwiki.org/ImapTest should easily trigger it. Just run it against two servers, both hammering the same mailbox. From tss at iki.fi Wed Jan 18 17:59:39 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 17:59:39 +0200 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): doveadm mailbox list -> Segmentation fault In-Reply-To: <4F12D069.9060102@localhost.localdomain.org> References: <4F12D069.9060102@localhost.localdomain.org> Message-ID: <1326902379.11500.81.camel@innu> On Sun, 2012-01-15 at 14:11 +0100, Pascal Volk wrote: > Core was generated by `doveadm mailbox list -u > jane.roe at example.com /*'. Finally fixed: http://hg.dovecot.org/dovecot-2.1/rev/99ea6da7dc99 From tss at iki.fi Wed Jan 18 18:04:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 18:04:49 +0200 Subject: [Dovecot] v2.x services documentation In-Reply-To: <92A86804-CEEE-4EB6-9EE7-FC8B7905AA2C@swing.be> References: <04D662E7-2A0A-448B-BA21-1E337A400CA6@iki.fi> <92A86804-CEEE-4EB6-9EE7-FC8B7905AA2C@swing.be> Message-ID: <1326902689.11500.82.camel@innu> On Sat, 2012-01-14 at 18:03 +0100, Axel Luttgens wrote: > Up to now, I only had the opportunity to quickly read the wiki page, and have a small question; one may read: > > process_min_avail > Minimum number of processes that always should be available to accept more client connections. For service_limit=1 processes this decreases the latency for handling new connections. For service_limit!=1 processes it could be set to the number of CPU cores on the system to balance the load among them. > > What's that service_limit setting? Thanks, fixed. Was supposed to be service_count. From eugene at raptor.kiev.ua Wed Jan 18 18:19:58 2012 From: eugene at raptor.kiev.ua (Eugene Paskevich) Date: Wed, 18 Jan 2012 18:19:58 +0200 Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1 In-Reply-To: <1326897258.11500.53.camel@innu> References: <4F155670.6010905@gmail.com> <1326897258.11500.53.camel@innu> Message-ID: On Wed, 18 Jan 2012 16:34:18 +0200, Timo Sirainen wrote: > On Tue, 2012-01-17 at 11:07 +0000, interfaSys s?rl wrote: >> Here is what I get when I try to compile the antispam plugin agaisnt >> Dovecot 2.1 >> >> ************** >> mailbox.c: In function 'antispam_save_begin': >> mailbox.c:138:12: error: 'struct mail_save_context' has no member named >> 'copying' > > The "copying" should be changed to "copying_via_save". Thank you, Timo. Would #if DOVECOT_IS_GE(2,1) suffice or do I need anything more specific? 
-- Eugene Paskevich | *==)----------- | Plug me into eugene at raptor.kiev.ua | -----------(==* | The Matrix From tss at iki.fi Wed Jan 18 18:31:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 18:31:49 +0200 Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1 In-Reply-To: References: <4F155670.6010905@gmail.com> <1326897258.11500.53.camel@innu> Message-ID: <1326904309.11500.83.camel@innu> On Wed, 2012-01-18 at 18:19 +0200, Eugene Paskevich wrote: > >> mailbox.c: In function 'antispam_save_begin': > >> mailbox.c:138:12: error: 'struct mail_save_context' has no member named > >> 'copying' > > > > The "copying" should be changed to "copying_via_save". > > Thank you, Timo. > Would #if DOVECOT_IS_GE(2,1) suffice or do I need anything more specific? Where do you expect to find such macro? ;) Hm. Perhaps I should try to add one. From eugene at raptor.kiev.ua Wed Jan 18 18:41:39 2012 From: eugene at raptor.kiev.ua (Eugene Paskevich) Date: Wed, 18 Jan 2012 18:41:39 +0200 Subject: [Dovecot] Antispam plugin not compatible with Dovecot 2.1 In-Reply-To: <1326904309.11500.83.camel@innu> References: <4F155670.6010905@gmail.com> <1326897258.11500.53.camel@innu> <1326904309.11500.83.camel@innu> Message-ID: On Wed, 18 Jan 2012 18:31:49 +0200, Timo Sirainen wrote: > On Wed, 2012-01-18 at 18:19 +0200, Eugene Paskevich wrote: >> >> mailbox.c: In function 'antispam_save_begin': >> >> mailbox.c:138:12: error: 'struct mail_save_context' has no member >> named >> >> 'copying' >> > >> > The "copying" should be changed to "copying_via_save". >> >> Thank you, Timo. >> Would #if DOVECOT_IS_GE(2,1) suffice or do I need anything more >> specific? > > Where do you expect to find such macro? ;) Hm. Perhaps I should try to > add one. Heh. That's Johannes' package private macro... :) -- Eugene Paskevich | *==)----------- | Plug me into eugene at raptor.kiev.ua | -----------(==* | The Matrix From moseleymark at gmail.com Wed Jan 18 19:17:40 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Wed, 18 Jan 2012 09:17:40 -0800 Subject: [Dovecot] LMTP Logging In-Reply-To: <1326898378.11500.54.camel@innu> References: <1326898378.11500.54.camel@innu> Message-ID: On Wed, Jan 18, 2012 at 6:52 AM, Timo Sirainen wrote: > On Mon, 2012-01-16 at 17:17 -0800, Mark Moseley wrote: >> Just had a minor suggestion, with no clue how hard/easy it would be to >> implement: >> >> The %f flag in deliver_log_format seems to pick up the From: header, >> instead of the "MAIL FROM:<...>" arg. It'd be handy to have a %F that >> shows the "MAIL FROM" arg instead. I'm looking at tracking emails >> through logs from Exim to Dovecot easily. I know Message-ID can be >> used for correlation but it adds some complexity to searching, i.e. I >> can't just grep for the sender (as logged by Exim), unless I assume >> "MAIL FROM" always == From: > > Added to v2.1: http://hg.dovecot.org/dovecot-2.1/rev/7ee2cfbcae2e > http://hg.dovecot.org/dovecot-2.1/rev/08cc9d2a79e6 > > You're awesome, thanks! 
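With the %F variable added above, the Exim-to-Dovecot log correlation could be done with a format along these lines (v2.1+ only; %m = Message-ID and %$ = delivery status are existing variables, the exact layout here is just a suggestion):

    deliver_log_format = msgid=%m, mail_from=<%F>: %$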
From moseleymark at gmail.com Wed Jan 18 19:54:15 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Wed, 18 Jan 2012 09:54:15 -0800 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> Message-ID: >>> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames) >>> >>> * Postfix will feed new email to Dovecot via LMTP >>> >>> * Dovecot servers have been split based on their role >>> >>> ?- Dovecot LDA Servers (running LMTP protocol) >>> >>> ?- Dovecot POP/IMAP servers (running POP/IMAP protocols) >> >> >> You're going to run into NFS caching troubles with the above split >> setup. I don't recommend it. You will see error messages about index >> corruption with it, and with dbox it can cause metadata loss. >> http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director > > > That might be the one thing (unfortunately) which prevents us from going > with the dbox format. ?I understand the same issue can actually occur on > Dovecot Maildir as well, but because Maildir works without these index > files, we were willing to just go with it. ?I will raise it again, but there > has been a lot of push back about introducing a single point of failure, > even though this is a perceived one. I'm in the middle of working on a Maildir->mdbox migration as well, and likewise, over NFS (all Netapps but moving to Sun), and likewise with split LDA and IMAP/POP servers (and both of those served out of pools). I was hoping doing things like setting "mail_nfs_index = yes" and "mmap_disable = yes" and "mail_fsync = always/optimized" would mitigate most of the risks of index corruption, as well as probably turning indexing off on the LDA side of things--i.e. all the suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not the case? Is there anything else (beyond moving to a director-based architecture) that can mitigate the risk of index corruption? In our case, incoming IMAP/POP are 'stuck' to servers based on IP persistence for a given amount of time, but incoming LDA is randomly distributed. From tss at iki.fi Wed Jan 18 19:58:31 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 19:58:31 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> Message-ID: <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> On 18.1.2012, at 19.54, Mark Moseley wrote: > I'm in the middle of working on a Maildir->mdbox migration as well, > and likewise, over NFS (all Netapps but moving to Sun), and likewise > with split LDA and IMAP/POP servers (and both of those served out of > pools). I was hoping doing things like setting "mail_nfs_index = yes" > and "mmap_disable = yes" and "mail_fsync = always/optimized" would > mitigate most of the risks of index corruption, They help, but aren't 100% effective and they also make the performance worse. > as well as probably > turning indexing off on the LDA side of things You can't turn off indexing with dbox. > --i.e. all the > suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not > the case? Is there anything else (beyond moving to a director-based > architecture) that can mitigate the risk of index corruption? In our > case, incoming IMAP/POP are 'stuck' to servers based on IP persistence > for a given amount of time, but incoming LDA is randomly distributed. What's the problem with director-based architecture? 
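For reference, the NFS-related knobs Mark mentions, collected in one place from the wiki — per the above they reduce the corruption risk on a split setup, but don't remove it, and they cost performance:

    mmap_disable = yes
    dotlock_use_excl = yes   # O_EXCL is fine on NFSv3+
    mail_fsync = always      # or "optimized"
    mail_nfs_storage = yes
    mail_nfs_index = yes     # only if index files also live on NFS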
From buchholz at easystreet.net Wed Jan 18 20:25:40 2012 From: buchholz at easystreet.net (Don Buchholz) Date: Wed, 18 Jan 2012 10:25:40 -0800 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <1326889380.11500.16.camel@innu> References: <4F14A7AA.8010507@easystreet.net> <1326889380.11500.16.camel@innu> Message-ID: <4F170EA4.20909@easystreet.net> Timo Sirainen wrote: > On Mon, 2012-01-16 at 14:41 -0800, Don Buchholz wrote: > >> I've been having some problems with IMAP user connections to the Dovecot >> (v2.0.8) server. The following message is being logged. >> >> Jan 16 10:51:36 postal dovecot: master: Warning: >> service(imap-login): process_limit reached, client connections are >> being dropped >> > > Maybe this will help some in future: > http://hg.dovecot.org/dovecot-2.1/rev/a4e61c99c7eb > > The new error message is: > > service(imap-login): process_limit (100) reached, client connections are being dropped > Great idea! Thanks, Timo. - Don From janfrode at tanso.net Wed Jan 18 20:51:38 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 18 Jan 2012 19:51:38 +0100 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> Message-ID: <20120118185137.GA21945@dibs.tanso.net> On Wed, Jan 18, 2012 at 07:58:31PM +0200, Timo Sirainen wrote: > > > --i.e. all the > > suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not > > the case? Is there anything else (beyond moving to a director-based > > architecture) that can mitigate the risk of index corruption? In our > > case, incoming IMAP/POP are 'stuck' to servers based on IP persistence > > for a given amount of time, but incoming LDA is randomly distributed. > > What's the problem with director-based architecture? It hasn't been working reliably for lmtp in v2.0. To quote yourself: ----8<----8<----8<-----8<-----8<-----8<----8<-----8<----8<----8<-- I think the way I originally planned LMTP proxying to work is simply too complex to work reliably, perhaps even if the code was bug-free. So instead of reading+writing DATA at the same time, this patch changes the DATA to be first read into memory or temp file, and then from there read and sent to the LMTP backends: http://hg.dovecot.org/dovecot-2.1/raw-rev/51d87deb5c26 ----8<----8<----8<-----8<-----8<-----8<----8<-----8<----8<----8<-- unfortunately I haven't tested that patch, so I have no idea if it fixed the issues or not... -jf From tss at iki.fi Wed Jan 18 21:03:18 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 18 Jan 2012 21:03:18 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <20120118185137.GA21945@dibs.tanso.net> References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> <20120118185137.GA21945@dibs.tanso.net> Message-ID: <23FFD99C-7D70-40BE-A4F3-FD259FFC62E9@iki.fi> On 18.1.2012, at 20.51, Jan-Frode Myklebust wrote: > On Wed, Jan 18, 2012 at 07:58:31PM +0200, Timo Sirainen wrote: >> >>> --i.e. all the >>> suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not >>> the case? Is there anything else (beyond moving to a director-based >>> architecture) that can mitigate the risk of index corruption? In our >>> case, incoming IMAP/POP are 'stuck' to servers based on IP persistence >>> for a given amount of time, but incoming LDA is randomly distributed. 
>> >> What's the problem with director-based architecture? > > It hasn't been working reliably for lmtp in v2.0. Yes, besides that :) > To quote yourself: > > ----8<----8<----8<-----8<-----8<-----8<----8<-----8<----8<----8<-- > > I think the way I originally planned LMTP proxying to work is simply too > complex to work reliably, perhaps even if the code was bug-free. So > instead of reading+writing DATA at the same time, this patch changes the > DATA to be first read into memory or temp file, and then from there read > and sent to the LMTP backends: > > http://hg.dovecot.org/dovecot-2.1/raw-rev/51d87deb5c26 > > ----8<----8<----8<-----8<-----8<-----8<----8<-----8<----8<----8<-- > > unfortunately I haven't tested that patch, so I have no idea if it > fixed the issues or not... I'm not sure if that patch is useful or not. The important patch to fix it is http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c From moseleymark at gmail.com Wed Jan 18 21:49:59 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Wed, 18 Jan 2012 11:49:59 -0800 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> Message-ID: On Wed, Jan 18, 2012 at 9:58 AM, Timo Sirainen wrote: > On 18.1.2012, at 19.54, Mark Moseley wrote: > >> I'm in the middle of working on a Maildir->mdbox migration as well, >> and likewise, over NFS (all Netapps but moving to Sun), and likewise >> with split LDA and IMAP/POP servers (and both of those served out of >> pools). I was hoping doing things like setting "mail_nfs_index = yes" >> and "mmap_disable = yes" and "mail_fsync = always/optimized" would >> mitigate most of the risks of index corruption, > > They help, but aren't 100% effective and they also make the performance worse. In testing, it seemed very much like the benefits of reducing IOPS by up to a couple orders of magnitude outweighed having to use those settings. Both in scripted testing and just using a mail UI, with the NFS-ish settings, I didn't notice any lag and doing things like checking a good-sized mailbox were at least as quick as Maildir. And I'm hoping that reducing IOPS across the entire set of NFS servers will compound the benefits quite a bit. >> as well as probably >> turning indexing off on the LDA side of things > > You can't turn off indexing with dbox. Ah, too bad. I was hoping I could get away with the LDA not updating the index but just dropping the message into storage/m.# but it'd still be seen on the IMAP/POP side--but hadn't tested that. Guess that's not the case. >> --i.e. all the >> suggestions at http://wiki2.dovecot.org/NFS. Is that definitely not >> the case? Is there anything else (beyond moving to a director-based >> architecture) that can mitigate the risk of index corruption? In our >> case, incoming IMAP/POP are 'stuck' to servers based on IP persistence >> for a given amount of time, but incoming LDA is randomly distributed. > > What's the problem with director-based architecture? Nothing, per se. It's just that migrating to mdbox *and* to a director architecture is quite a bit more added complexity than simply migrating to mdbox alone. Hopefully, I'm not hijacking this thread. This seems pretty pertinent as well to the OP. 
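Since a director tier keeps coming up as the supported answer here, a minimal sketch of what it involves on the proxy hosts, per http://wiki2.dovecot.org/Director (addresses and the ring port are placeholders; the wiki page is the real reference):

director_servers = 10.0.0.1 10.0.0.2        # the director ring itself
director_mail_servers = 10.0.1.1 10.0.1.2   # the real mail backends
service director {
  unix_listener login/director {
    mode = 0666
  }
  fifo_listener login/proxy-notify {
    mode = 0666
  }
  inet_listener {
    port = 9090   # director-to-director ring traffic
  }
}
service imap-login {
  executable = imap-login director
}
service pop3-login {
  executable = pop3-login director
}

The point of the ring is that every director maps a given user to the same backend for as long as that user has activity, which removes exactly the random-access NFS pattern that corrupts indexes.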
From janfrode at tanso.net Wed Jan 18 22:14:37 2012
From: janfrode at tanso.net (Jan-Frode Myklebust)
Date: Wed, 18 Jan 2012 21:14:37 +0100
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <23FFD99C-7D70-40BE-A4F3-FD259FFC62E9@iki.fi>
References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> <20120118185137.GA21945@dibs.tanso.net> <23FFD99C-7D70-40BE-A4F3-FD259FFC62E9@iki.fi>
Message-ID: <20120118201437.GA23070@dibs.tanso.net>

On Wed, Jan 18, 2012 at 09:03:18PM +0200, Timo Sirainen wrote:
> On 18.1.2012, at 20.51, Jan-Frode Myklebust wrote:
>
> >> What's the problem with director-based architecture?
> >
> > It hasn't been working reliably for lmtp in v2.0.
>
> Yes, besides that :)

Besides that it's great!

> > unfortunately I haven't tested that patch, so I have no idea if it
> > fixed the issues or not...
>
> I'm not sure if that patch is useful or not. The important patch to fix it is http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c

So with that oneliner on our directors, you expect lmtp proxying through
director to be better than lmtp to rr-dns towards backend servers? If so,
I guess we should give it another try.

-jf

From tss at iki.fi Wed Jan 18 22:26:31 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 22:26:31 +0200
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <20120118201437.GA23070@dibs.tanso.net>
References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi> <20120118185137.GA21945@dibs.tanso.net> <23FFD99C-7D70-40BE-A4F3-FD259FFC62E9@iki.fi> <20120118201437.GA23070@dibs.tanso.net>
Message-ID: <956410A8-290E-408A-B85A-5AD46F5CDB70@iki.fi>

On 18.1.2012, at 22.14, Jan-Frode Myklebust wrote:

>>> unfortunately I haven't tested that patch, so I have no idea if it
>>> fixed the issues or not...
>>
>> I'm not sure if that patch is useful or not. The important patch to fix it is http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c
>
> So with that oneliner on our directors, you expect lmtp proxying through
> director to be better than lmtp to rr-dns towards backend servers? If so,
> I guess we should give it another try.

It should fix the hangs that were common. I'm not sure if it fixes everything without the complexity reduction patch.

From admin at opsys.de Wed Jan 18 22:41:02 2012
From: admin at opsys.de (Markus Fritz)
Date: Wed, 18 Jan 2012 21:41:02 +0100
Subject: [Dovecot] Quota won't work
Message-ID: 

I tried to set up quota. I installed the newest version of dovecot, patched it and started it.
dovecot -n:

# 1.2.15: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-5-amd64 x86_64 Debian 6.0.3 ext4
log_timestamp: %Y-%m-%d %H:%M:%S
protocols: imap imaps pop3 pop3s
ssl_listen: 143
ssl_cipher_list: ALL:!LOW:!SSLv2
disable_plaintext_auth: no
login_dir: /var/run/dovecot/login
login_executable(default): /usr/lib/dovecot/imap-login
login_executable(imap): /usr/lib/dovecot/imap-login
login_executable(pop3): /usr/lib/dovecot/pop3-login
mail_privileged_group: mail
mail_location: maildir:/var/vmail/%d/%n/Maildir
mbox_write_locks: fcntl dotlock
mail_executable(default): /usr/lib/dovecot/imap
mail_executable(imap): /usr/lib/dovecot/imap
mail_executable(pop3): /usr/lib/dovecot/pop3
mail_plugins(default): quota imap_quota
mail_plugins(imap): quota imap_quota
mail_plugins(pop3): quota
mail_plugin_dir(default): /usr/lib/dovecot/modules/imap
mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap
mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3
namespace:
  type: private
  inbox: yes
  list: yes
  subscriptions: yes
lda:
  postmaster_address: postmaster at opsys.de
  mail_plugins: sieve quota
  log_path:
auth default:
  mechanisms: plain login
  verbose: yes
  passdb:
    driver: sql
    args: /etc/dovecot/dovecot-sql.conf
  userdb:
    driver: static
    args: uid=5000 gid=5000 home=/var/vmail/%d/%n/Maildir allow_all_users=yes
  socket:
    type: listen
    client:
      path: /var/spool/postfix/private/auth
      mode: 432
      user: postfix
      group: postfix
    master:
      path: /var/run/dovecot/auth-master
      mode: 384
      user: vmail

/etc/dovecot/dovecot-sql.conf:

driver = mysql
connect = host=127.0.0.1 dbname=mailserver user=mailuser password=******
default_pass_scheme = PLAIN-MD5
password_query = SELECT email as user, password FROM virtual_users WHERE email='%u';
user_query = SELECT CONCAT('/var/mail/', maildir) AS home, CONCAT('*:bytes=', quota) AS quota_rule \
  FROM virtual_users WHERE email='%u'

virtual_users has this:

CREATE TABLE IF NOT EXISTS `virtual_users` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `domain_id` int(11) NOT NULL,
  `password` varchar(32) NOT NULL,
  `email` varchar(100) NOT NULL,
  `quota` int(11) NOT NULL DEFAULT '629145600',
  PRIMARY KEY (`id`),
  UNIQUE KEY `email` (`email`),
  KEY `domain_id` (`domain_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Also postfix is installed with this (not the whole config):

virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
virtual_mailbox_limit_inbox = no
virtual_mailbox_limit_maps = mysql:/etc/postfix/mysql-quota.cf
virtual_mailbox_limit_override = yes
virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
virtual_maildir_extended = yes
virtual_maildir_limit_message = "The user you are trying to reach is over quota."
virtual_maildir_limit_message_maps = mail:/etc/postfix/mysql-quota.cf
virtual_overquota_bounce = yes

/etc/postfix/mysql-quota.cf:

user = mailuser
password = ******
hosts = 127.0.0.1
dbname = mailserver
query = SELECT quota FROM virtual_users WHERE email='%s'

I changed the quota of my mail account to 40, so 40 bytes should be the maximum. My account is at a size of 600KB now. I still receive mails, and they are saved without errors. /var/log/mail.log says nothing about quota, just normal receive and store entries. What should I fix?
-- 
Markus Fritz
Administration opsys.de

From tss at iki.fi Wed Jan 18 23:05:40 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 23:05:40 +0200
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: 
References: <7c413ccbddc8e25584311c55672a51e5@standen.id.au> <33C8A3B3-7DB5-4638-8C34-54E0C7E739A4@iki.fi>
Message-ID: <233EA3FE-D978-4A62-AEE7-4E908AE83935@iki.fi>

On 18.1.2012, at 21.49, Mark Moseley wrote:

>> What's the problem with director-based architecture?
>
> Nothing, per se. It's just that migrating to mdbox *and* to a director
> architecture is quite a bit more added complexity than simply
> migrating to mdbox alone.

Yes, I agree it's safer to do one thing at a time. That's why I'd do a switch to director first. :)

From tss at iki.fi Wed Jan 18 23:07:42 2012
From: tss at iki.fi (Timo Sirainen)
Date: Wed, 18 Jan 2012 23:07:42 +0200
Subject: [Dovecot] Quota won't work
In-Reply-To: 
References: 
Message-ID: <40CE1ECA-D884-4127-862E-A6733B685594@iki.fi>

On 18.1.2012, at 22.41, Markus Fritz wrote:

> passdb:
>   driver: sql
>   args: /etc/dovecot/dovecot-sql.conf
> userdb:
>   driver: static
>   args: uid=5000 gid=5000 home=/var/vmail/%d/%n/Maildir allow_all_users=yes

You use sql as passdb, static as userdb.

> password_query = SELECT email as user, password FROM virtual_users WHERE email='%u';

passdb sql executes password_query.

> user_query = SELECT CONCAT('/var/mail/', maildir) AS home, CONCAT('*:bytes=', quota) AS quota_rule \
>   FROM virtual_users WHERE email='%u'

userdb sql executes user_query. But you're not using userdb sql, you're using userdb static. This query never gets executed.

Also you don't have a plugin { quota } setting.

From Juergen.Obermann at hrz.uni-giessen.de Wed Jan 18 23:40:17 2012
From: Juergen.Obermann at hrz.uni-giessen.de (Jürgen Obermann)
Date: Wed, 18 Jan 2012 22:40:17 +0100
Subject: [Dovecot] Panic: file mbox-sync.c: line 1348: assertion failed
In-Reply-To: <20120110163207.182538xtgzoxjg8w@webmail.hrz.uni-giessen.de>
References: <20120110163207.182538xtgzoxjg8w@webmail.hrz.uni-giessen.de>
Message-ID: <1460d9f2fc09b7f8f0d607cb5a86e01b@imapproxy.hrz>

On 10.01.2012 16:32, Jürgen Obermann wrote:
>
> I have the following problem with doveadm:
>
> # gdb --args /opt/local/bin/doveadm -v mailbox status -u
> userxy/g029 'messages' "Software-alle/AK-Software-Tagung"
> GNU gdb 5.3
> Copyright 2002 Free Software Foundation, Inc.
> GDB is free software, covered by the GNU General Public License, and
> you are
> welcome to change it and/or distribute copies of it under certain
> conditions.
> Type "show copying" to see the conditions.
> There is absolutely no warranty for GDB. Type "show warranty" for
> details.
> This GDB was configured as "sparc-sun-solaris2.8"...
> (gdb) run
> Starting program: /opt/local/bin/doveadm -v mailbox status -u g029
> messages Software-alle/AK-Software-Tagung
> warning: Lowest section in /lib/libthread.so.1 is .dynamic at
> 00000074
> warning: Lowest section in /lib/libdl.so.1 is .hash at 000000b4
> doveadm(g029): Panic: file mbox-sync.c: line 1348: assertion failed:
> (file_size >= sync_ctx->expunged_space + trailer_size)
> doveadm(g029): Error: Raw backtrace: 0xff1cbc30 -> 0xff319544 ->
> 0xff319fa8 -> 0xff31add8 -> 0xff31b278 -> 0xff2a69b0 -> 0xff2a6bac ->
> 0x16808 -> 0x1b8fc -> 0x16ba0 -> 0x177cc -> 0x17944 -> 0x17a50 ->
> 0x204e8 -> 0x165c8
>
> Program received signal SIGABRT, Aborted.

Hello,
the problem went away after I deleted the dovecot index files for the
mailbox.
Greetings,
Jürgen Obermann

Hochschulrechenzentrum der
Justus-Liebig-Universität Gießen
Heinrich-Buff-Ring 44
Tel. 0641-9913054

From stan at hardwarefreak.com Thu Jan 19 06:39:04 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Wed, 18 Jan 2012 22:39:04 -0600
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <1326894871.11500.45.camel@innu>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu>
Message-ID: <4F179E68.5020408@hardwarefreak.com>

On 1/18/2012 7:54 AM, Timo Sirainen wrote:
> On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote:
>> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames)
>>
>> * Postfix will feed new email to Dovecot via LMTP
>>
>> * Dovecot servers have been split based on their role
>>
>>  - Dovecot LDA Servers (running LMTP protocol)
>>
>>  - Dovecot POP/IMAP servers (running POP/IMAP protocols)
>
> You're going to run into NFS caching troubles with the above split
> setup. I don't recommend it. You will see error messages about index
> corruption with it, and with dbox it can cause metadata loss.
> http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director

Would it be possible to fix this NFS mdbox index corruption issue in
this split scenario by using a dual namespace and disabling indexing on
the INBOX? The goal being no index file collisions between LDA and imap
processes. Maybe something like:

namespace {
  separator = /
  prefix = "#mbox/"
  location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY
  inbox = yes
  hidden = yes
  list = no
}
namespace {
  separator = /
  prefix =
  location = mdbox:~/mdbox
}

Client access to new mail might be a little slower, but if it eliminates
the index corruption issue and allows the split architecture, it may be
a viable option.

-- 
Stan

From ebroch at whitehorsetc.com Thu Jan 19 08:48:29 2012
From: ebroch at whitehorsetc.com (Eric Broch)
Date: Wed, 18 Jan 2012 23:48:29 -0700
Subject: [Dovecot] shared folder files not displaying in thunderbird
Message-ID: <4F17BCBD.3020802@whitehorsetc.com>

Can anyone help me figure out why email in a sub-folder (created using
Thunderbird) of a dovecot namespace will not display in Thunderbird?
...
Hello,

I have dovecot installed with the configuration below. One of the
subfolders created (using the email client) under the
'/home/vpopmail/domains/mydomain.com/shared/projects' share no longer
(it used to) displays the files located in it. There are about 150
folders under the '/home/vpopmail/domains/mydomain.com/shared/projects'
share, all of which display the files located in them; the one mentioned
used to display its contents but no longer does.

What would be the reason that one folder would no longer display
existing files in the email client (Thunderbird) and the other folders
would? And, how do I fix this?

I've already tried unsubscribing and resubscribing the folder. This did
not work. Would it now be simply a matter of unsubscribing the folder,
deleting the dovecot files, and resubscribing to the folder?

Eric

# 2.0.11: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-238.19.1.el5 i686 CentOS release 5.7 (Final)
auth_cache_size = 32 M
auth_mechanisms = plain login digest-md5 cram-md5
auth_username_format = %Lu
disable_plaintext_auth = no
first_valid_uid = 89
log_path = /var/log/dovecot.log
login_greeting = Dovecot toaster ready.
namespace {
  inbox = yes
  location =
  prefix = INBOX.
  separator = .
  type = private
}
namespace {
  location = maildir:/home/vpopmail/domains/mydomain.com/shared/projects
  prefix = projects.
  separator = .
  type = public
}
passdb {
  args = cache_key=%u webmail=127.0.0.1
  driver = vpopmail
}
plugin/quota = maildir
protocols = imap
ssl_cert = 

From: Bohlken, Henning
Date: Thu, 19 Jan 2012
Subject: [Dovecot] Problems sending email direct into publich folders

Hi,

I want to send mails directly into a public folder. If I send an email
via my local postfix, the mail is handled as a normal private mail.
Dovecot creates a mailbox in the private namespace and does not use the
mailbox in the public one. I hope you can help me with my little problem.

Here is some information about my setup:

[root at imap1 etc]# ls -la /var/dovecot/imap/public/
insgesamt 16
drwxr-x--- 3 vmail vmail 4096 19. Jan 10:12 .
drwxr-x--- 5 vmail vmail 4096 18. Jan 08:41 ..
-rw-r----- 1 vmail vmail 0 19. Jan 10:11 dovecot-acl-list
-rw-r----- 1 vmail vmail 8 19. Jan 10:12 dovecot-uidvalidity
-r--r--r-- 1 vmail vmail 0 19. Jan 10:12 dovecot-uidvalidity.4f17de84
drwx------ 5 vmail vmail 4096 19. Jan 10:12 .hrztest

and here is my configuration:

# 2.0.9: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-220.2.1.el6.i686 i686 Red Hat Enterprise Linux Server release 6.2 (Santiago)
auth_username_format = %Ln
disable_plaintext_auth = no
login_greeting = Dovecot IMAP der Jade Hochschule.
mail_access_groups = vmail
mail_debug = yes
mail_gid = vmail
mail_plugins = quota acl
mail_uid = vmail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date imapflags notify
mbox_write_locks = fcntl
namespace {
  inbox = yes
  location = maildir:/var/dovecot/imap/%1n/%n
  prefix =
  separator = /
  type = private
}
namespace {
  list = children
  location = maildir:/var/dovecot/imap/public/
  prefix = public/
  separator = /
  subscriptions = no
  type = public
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf
  driver = ldap
}
passdb {
  driver = pam
}
plugin {
  acl = vfile
  acl_shared_dict = file:/var/lib/dovecot/shared-mailboxes
  mail_log_fields = uid box msgid size
  quota = dict:user::file:/var/dovecot/imap/%1n/%n/dovecot-quota
  quota_rule = *:storage=50MB
  quota_rule2 = Trash:storage=+10%
  sieve = /var/dovecot/imap/%1n/%n/.dovecot.sieve
  sieve_dir = /var/dovecot/imap/%1n/%n/sieve
  sieve_extensions = +notify +imapflags
  sieve_quota_max_scripts = 2
}
postmaster_address = postmaster at jade-hs.de
protocols = imap pop3 lmtp sieve
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0660
    user = postfix
  }
}
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
}
ssl_cert = 

From tss at iki.fi Thu Jan 19 2012
From: tss at iki.fi (Timo Sirainen)
References: <1BCAD28D-8120-45C9-BAA2-B6597C34545A@apple.com> <09EF3E7A-15A2-45EE-91BD-6EEFD1FD8049@iki.fi>
Message-ID: <1326981545.11500.86.camel@innu>

On Thu, 2012-01-12 at 22:20 +0200, Timo Sirainen wrote:
> On 12.1.2012, at 1.09, Mike Abbott wrote:
>
> > In 2.0.17 you increased LOGIN_MAX_INBUF_SIZE from 1024 to 4096.
> > Should you also have increased MASTER_AUTH_MAX_DATA_SIZE from (1024*2) to (4096*2)?
> > /* This should be kept in sync with LOGIN_MAX_INBUF_SIZE. Multiply it by two
> > to make sure there's space to transfer the command tag */
>
> Well, yes.. Although I'd rather not do that.
>
> 1. Command tag length needs to be restricted to something reasonable, maybe 100 chars, so it won't have to be multiplied by 2 but just added the 100 (+1 for NUL).
>
> 2. Maybe I can change the LOGIN_MAX_INBUF_SIZE back to its original size and change the AUTHENTICATE command handling to read the SASL initial response to a separate buffer.
>
> I'll try doing those next week.
http://hg.dovecot.org/dovecot-2.1/rev/b86f7dd170c6 does this.

From moseleymark at gmail.com Thu Jan 19 19:08:06 2012
From: moseleymark at gmail.com (Mark Moseley)
Date: Thu, 19 Jan 2012 09:08:06 -0800
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <4F179E68.5020408@hardwarefreak.com>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com>
Message-ID: 

On Wed, Jan 18, 2012 at 8:39 PM, Stan Hoeppner wrote:
> On 1/18/2012 7:54 AM, Timo Sirainen wrote:
>> On Wed, 2012-01-18 at 20:44 +0800, Lee Standen wrote:
>
>>> * All mail storage presented via NFS over 10Gbps Ethernet (Jumbo Frames)
>>>
>>> * Postfix will feed new email to Dovecot via LMTP
>>>
>>> * Dovecot servers have been split based on their role
>>>
>>>  - Dovecot LDA Servers (running LMTP protocol)
>>>
>>>  - Dovecot POP/IMAP servers (running POP/IMAP protocols)
>>
>> You're going to run into NFS caching troubles with the above split
>> setup. I don't recommend it. You will see error messages about index
>> corruption with it, and with dbox it can cause metadata loss.
>> http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director
>
> Would it be possible to fix this NFS mdbox index corruption issue in
> this split scenario by using a dual namespace and disabling indexing on
> the INBOX? The goal being no index file collisions between LDA and imap
> processes. Maybe something like:
>
> namespace {
>   separator = /
>   prefix = "#mbox/"
>   location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY
>   inbox = yes
>   hidden = yes
>   list = no
> }
> namespace {
>   separator = /
>   prefix =
>   location = mdbox:~/mdbox
> }
>
> Client access to new mail might be a little slower, but if it eliminates
> the index corruption issue and allows the split architecture, it may be
> a viable option.
>
> --
> Stan

It could be that I botched my test up somehow, but when I tested
something similar yesterday (pointing the index at another location on
the LDA), it didn't work. I was sending from the LDA server and
confirmed that the messages made it to storage/m.# but without the
real indexes being updated. When I checked the mailbox via IMAP, it
never seemed to register that there was a message there, so I'm
guessing that dovecot never looks at the storage files but just relies
on the indexes to be correct. That sound right, Timo?

From rob0 at gmx.co.uk Thu Jan 19 19:37:15 2012
From: rob0 at gmx.co.uk (/dev/rob0)
Date: Thu, 19 Jan 2012 11:37:15 -0600
Subject: [Dovecot] Storing passwords encrypted... bcrypt?
In-Reply-To: <4F14BF4B.5060804@wildgooses.com>
References: <4F0367A2.1000807@Media-Brokers.com> <4F04FAA9.3020908@localhost.localdomain.org> <4F14BF4B.5060804@wildgooses.com>
Message-ID: <20120119173715.GD14195@harrier.slackbuilds.org>

On Tue, Jan 17, 2012 at 12:22:35AM +0000, Ed W wrote:
> Note I personally believe there are valid reasons to store
> plaintext passwords - this seems to cause huge criticism due to
> the ensuing disaster which can happen if the database is pinched,
> but it does allow for enhanced security in the password exchange,
> so ultimately it depends on where your biggest risk lies...

Exactly. In any security decision, consider the threat model first.
There are too many kneejerk "secure" ideas in circulation.
-- http://rob0.nodns4.us/ -- system administration and consulting Offlist GMX mail is seen only if "/dev/rob0" is in the Subject: From tss at iki.fi Thu Jan 19 21:18:00 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 19 Jan 2012 21:18:00 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <4F179E68.5020408@hardwarefreak.com> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> Message-ID: <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> On 19.1.2012, at 6.39, Stan Hoeppner wrote: >> You're going to run into NFS caching troubles with the above split >> setup. I don't recommend it. You will see error messages about index >> corruption with it, and with dbox it can cause metadata loss. >> http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director > > Would it be possible to fix this NFS mdbox index corruption issue in > this split scenario by using a dual namespace and disabling indexing on > the INBOX? The goal being no index file collisions between LDA and imap > processes. Maybe something like: > > namespace { > separator = / > prefix = "#mbox/" > location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY > inbox = yes > hidden = yes > list = no > } > namespace { > separator = / > prefix = > location = mdbox:~/mdbox > } > > Client access to new mail might be a little slower, but if it eliminates > the index corruption issue and allows the split architecture, it may be > a viable option. That assumes that mails are only being delivered to INBOX (i.e. no Sieve or +mailbox addressing). I suppose you could do that if you can live with that limitation. Slightly better for performance would be to not actually keep INBOX mails in mbox format but use snarf plugin to move them to mdbox. And of course the above still requires that for imap/pop3 access the user is redirected to the same server every time. I don't really see it helping much. From tss at iki.fi Thu Jan 19 21:21:20 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 19 Jan 2012 21:21:20 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> Message-ID: <2DF8A9C6-EE59-4557-A1AE-4E4D2BC91C93@iki.fi> On 19.1.2012, at 19.08, Mark Moseley wrote: >> namespace { >> separator = / >> prefix = "#mbox/" >> location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY >> inbox = yes >> hidden = yes >> list = no >> } >> >> Client access to new mail might be a little slower, but if it eliminates >> the index corruption issue and allows the split architecture, it may be >> a viable option. >> >> -- >> Stan > > It could be that I botched my test up somehow, but when I tested > something similar yesterday (pointing the index at another location on > the LDA), it didn't work. Note that Stan used mbox format for INBOX, not mdbox. > I was sending from the LDA server and > confirmed that the messages made it to storage/m.# but without the > real indexes being updated. When I checked the mailbox via IMAP, it > never seemed to register that there was a message there, so I'm > guessing that dovecot never looks at the storage files but just relies > on the indexes to be correct. That sound right, Timo? Correct. dbox absolutely relies on index files always being up to date. 
In some error situations it can figure out that it should do an index rebuild and then it finds any missing mails, but in normal situations it doesn't even try, because that would unnecessarily waste disk IO. (And there's of course doveadm force-resync to force it.) From tss at iki.fi Thu Jan 19 21:25:38 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 19 Jan 2012 21:25:38 +0200 Subject: [Dovecot] shared folder files not displaying in thunderbird In-Reply-To: <4F16D52F.2040907@whitehorsetc.com> References: <4F16D52F.2040907@whitehorsetc.com> Message-ID: <69E3CE17-A92B-48A4-8A56-F16EE6450898@iki.fi> On 18.1.2012, at 16.20, Eric Broch wrote: > I have dovecot installed with the configuration below. > One of the subfolders created (using the email client) under the > '/home/vpopmail/domains/mydomain.com/shared/projects' share no longer > (it used to) displays the files located in it. There are about 150 > folders under the '/home/vpopmail/domains/mydomain.com/shared/projects' > share all of which display the files located in them, the one mentioned > used to display the contents but no longer does. > > What would be the reason that one folder would no longer display > existing files in the email client (Thunderbird) and the other folders > would? And, how do I fix this? So the folder itself exists, but it just appears empty? Have you tried with another IMAP client? Have you checked if the files are actually still there in the maildir? You can check if this is a server problem or a client problem by running: doveadm fetch -u user at domain uid mailbox project.missing.sub.folder all If the output is empty, then Dovecot doesn't see any mails in there (check if there are any files in the maildir). If it outputs something, then the client's local cache is broken and you need to tell the client to do a resync. > Would it now be simply a matter of unsubscribing the folder, deleting > the dovecot files, and resubscribing to the folder? Subscriptions won't matter. Deleting Dovecot's files may emulate the client's cache flush because it changes IMAP UIDVALIDITY. From tss at iki.fi Thu Jan 19 21:31:57 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 19 Jan 2012 21:31:57 +0200 Subject: [Dovecot] Problems sending email direct into publich folders In-Reply-To: References: Message-ID: <0D641C8A-B7E5-464F-9BFC-3A256ED4C615@iki.fi> On 19.1.2012, at 14.02, Bohlken, Henning wrote: > i want to send mails direct into a public folder. If i send an email via my local postfix the mail will be handled as a normal private mail. Dovecot does create a mailbox in the private Namespace and do use not the mailbox in public one. Depends on how you want to do this.. For example all mails intended to be put to public namespace could be sent to a "publicuser" named user, which has write permissions to the public namespace. Then you'll simply create a sieve script for the publicuser which redirects the mails to the wanted folder (e.g. fileinto "public/hrztest"). From ebroch at whitehorsetc.com Thu Jan 19 23:03:58 2012 From: ebroch at whitehorsetc.com (Eric Broch) Date: Thu, 19 Jan 2012 14:03:58 -0700 Subject: [Dovecot] shared folder files not displaying in thunderbird In-Reply-To: <69E3CE17-A92B-48A4-8A56-F16EE6450898@iki.fi> References: <4F16D52F.2040907@whitehorsetc.com> <69E3CE17-A92B-48A4-8A56-F16EE6450898@iki.fi> Message-ID: <4F18853E.5020003@whitehorsetc.com> Timo, > So the folder itself exists, but it just appears empty? Yes. > Have you tried with another IMAP client? 
Yes, both Outlook and Thunderbird > Have you checked if the files are actually still there in the maildir? I've done a list (ls -la) of the directory where the files reside (path.to.share.sub.dir/cur). They exist. > You can check if this is a server problem or a client problem by running: doveadm fetch -u user at domain uid mailbox project.missing.sub.folder all I did this per your instructions and there is no output. So, email exists in the share, and it does not show up in Thunderbird, Outlook, or using doveadm. Eric From tss at iki.fi Thu Jan 19 23:29:34 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 19 Jan 2012 23:29:34 +0200 Subject: [Dovecot] shared folder files not displaying in thunderbird In-Reply-To: <4F18853E.5020003@whitehorsetc.com> References: <4F16D52F.2040907@whitehorsetc.com> <69E3CE17-A92B-48A4-8A56-F16EE6450898@iki.fi> <4F18853E.5020003@whitehorsetc.com> Message-ID: <489C9E80-1E22-4C18-BC08-2F869592CFD6@iki.fi> On 19.1.2012, at 23.03, Eric Broch wrote: >> Have you checked if the files are actually still there in the maildir? > I've done a list (ls -la) of the directory where the files reside > (path.to.share.sub.dir/cur). They exist. >> You can check if this is a server problem or a client problem by running: doveadm fetch -u user at domain uid mailbox project.missing.sub.folder all > I did this per your instructions and there is no output. Try "touch path.to/cur" and the doveadm fetch again. Does it help? If not, there's some kind of a mismatch between what you think is happening in Dovecot and what is happening in filesystem. I'd like to know the exact full path and the mailbox name then. (Or you could run doveadm through strace and see if it's accessing the intended directory.) From stan at hardwarefreak.com Fri Jan 20 01:51:06 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Thu, 19 Jan 2012 17:51:06 -0600 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> Message-ID: <4F18AC6A.4050508@hardwarefreak.com> On 1/19/2012 1:18 PM, Timo Sirainen wrote: > On 19.1.2012, at 6.39, Stan Hoeppner wrote: > >>> You're going to run into NFS caching troubles with the above split >>> setup. I don't recommend it. You will see error messages about index >>> corruption with it, and with dbox it can cause metadata loss. >>> http://wiki2.dovecot.org/NFS http://wiki2.dovecot.org/Director >> >> Would it be possible to fix this NFS mdbox index corruption issue in >> this split scenario by using a dual namespace and disabling indexing on >> the INBOX? The goal being no index file collisions between LDA and imap >> processes. Maybe something like: >> >> namespace { >> separator = / >> prefix = "#mbox/" >> location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY >> inbox = yes >> hidden = yes >> list = no >> } >> namespace { >> separator = / >> prefix = >> location = mdbox:~/mdbox >> } >> >> Client access to new mail might be a little slower, but if it eliminates >> the index corruption issue and allows the split architecture, it may be >> a viable option. > > That assumes that mails are only being delivered to INBOX (i.e. no Sieve or +mailbox addressing). I suppose you could do that if you can live with that limitation. 
> Slightly better for performance would be to not actually keep INBOX
> mails in mbox format but use snarf plugin to move them to mdbox.
>
> And of course the above still requires that for imap/pop3 access the
> user is redirected to the same server every time. I don't really see
> it helping much.

I spent a decent amount of time last night researching the NFS cache
issue. It seems there is no way to completely disable NFS client
caching (in lieu of rewriting the code oneself--a daunting task), which
would seem to be the real solution to the mdbox index corruption problem.

So I went looking for alternatives and came up with the idea above.
Obviously it's far from an optimal solution and introduces some
limitations, but I thought it was worth tossing out for discussion.

Timo, it seems that when you designed mdbox you didn't have NFS based
clusters in mind. Do you consider mdbox simply not suitable for such an
NFS cluster deployment? If one has no choice but an NFS cluster
architecture, what Dovecot mailbox format do you recommend? Stick with
maildir?

In this case the OP has Netapp storage. Netapp units support both NFS
exports as well as iSCSI LUNs. If the OP could utilize iSCSI instead of
NFS, switching to GFS2 or OCFS, do you see these cluster filesystems as
preferable for mdbox?

-- 
Stan

From tss at iki.fi Fri Jan 20 02:13:26 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 20 Jan 2012 02:13:26 +0200
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: <4F18AC6A.4050508@hardwarefreak.com>
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com>
Message-ID: 

On 20.1.2012, at 1.51, Stan Hoeppner wrote:

> I spent a decent amount of time last night researching the NFS cache
> issue. It seems there is no way to completely disable NFS client
> caching (in lieu of rewriting the code oneself--a daunting task), which
> would seem to be the real solution to the mdbox index corruption problem.
>
> So I went looking for alternatives and came up with the idea above.
> Obviously it's far from an optimal solution and introduces some
> limitations, but I thought it was worth tossing out for discussion.

I spent months looking into NFS related issues. I read through Linux and FreeBSD kernel source codes to figure out if there's something I could do to avoid the problems I see. I sent some patches to try to improve things, which of course didn't get accepted (some alternative ways might have been, but it would have required much more work on my part). The mail_nfs_* settings are the result of what I found out. They don't fully work, so I gave up.

> Timo, it seems that when you designed mdbox you didn't have NFS based
> clusters in mind. Do you consider mdbox simply not suitable for such an
> NFS cluster deployment? If one has no choice but an NFS cluster
> architecture, what Dovecot mailbox format do you recommend? Stick with
> maildir?

In the typical random-access NFS setup I don't consider any of Dovecot's formats suitable. Not maildir, not dbox. Perhaps in future I can redesign everything in a way that just happens to work well with all kinds of NFS setups, but I don't really hold a lot of hope for that. It seems that either you'll get bad performance (I'm not really interested in making Dovecot do that) or you'll use such a setup where you get good performance by avoiding the NFS problems.

There are several huge Dovecot+NFS setups.
They use director. It works well enough (and with the recent fixes, I'd hope perfectly).

> In this case the OP has Netapp storage. Netapp units support both NFS
> exports as well as iSCSI LUNs. If the OP could utilize iSCSI instead of
> NFS, switching to GFS2 or OCFS, do you see these cluster filesystems as
> preferable for mdbox?

I don't have personal experience with cluster filesystems in recent years (other than glusterfs, which had some problems, but most(/all?) were fixed already or are available from their commercial support..). Based on what I've heard, I'm guessing they work better than random-access-NFS, but even if there are no actual corruption problems, it sounds like their performance isn't very good.

From noel.butler at ausics.net Fri Jan 20 03:18:16 2012
From: noel.butler at ausics.net (Noel Butler)
Date: Fri, 20 Jan 2012 11:18:16 +1000
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: 
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com>
Message-ID: <1327022296.9133.3.camel@tardis>

On Fri, 2012-01-20 at 02:13 +0200, Timo Sirainen wrote:
> There are several huge Dovecot+NFS setups. They use director. It works well enough (and with the recent fixes, I'd hope perfectly).

Not to mention other huge NFS setups that don't use director, and also have no problems.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 490 bytes
Desc: This is a digitally signed message part
URL: 

From stan at hardwarefreak.com Fri Jan 20 04:27:59 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Thu, 19 Jan 2012 20:27:59 -0600
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: 
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com>
Message-ID: <4F18D12F.2050809@hardwarefreak.com>

On 1/19/2012 6:13 PM, Timo Sirainen wrote:
> On 20.1.2012, at 1.51, Stan Hoeppner wrote:
>
>> I spent a decent amount of time last night researching the NFS cache
>> issue. It seems there is no way to completely disable NFS client
>> caching (in lieu of rewriting the code oneself--a daunting task), which
>> would seem to be the real solution to the mdbox index corruption problem.
>>
>> So I went looking for alternatives and came up with the idea above.
>> Obviously it's far from an optimal solution and introduces some
>> limitations, but I thought it was worth tossing out for discussion.
>
> I spent months looking into NFS related issues. I read through Linux and FreeBSD kernel source codes to figure out if there's something I could do to avoid the problems I see. I sent some patches to try to improve things, which of course didn't get accepted (some alternative ways might have been, but it would have required much more work on my part). The mail_nfs_* settings are the result of what I found out. They don't fully work, so I gave up.

Yeah, I recall some of your posts from that time, and your frustration.
If an NFS config option existed to simply turn off the NFS client
caching, would that resolve most/all of the remaining issues? Or is the
problem more complex than just the file caching? I ask as it would seem
creating such a Boolean NFS config option should be simple to implement.
If the devs could be convinced of the need for it.

>> Timo, it seems that when you designed mdbox you didn't have NFS based
>> clusters in mind. Do you consider mdbox simply not suitable for such an
>> NFS cluster deployment? If one has no choice but an NFS cluster
>> architecture, what Dovecot mailbox format do you recommend? Stick with
>> maildir?
>
> In the typical random-access NFS setup I don't consider any of Dovecot's formats suitable. Not maildir, not dbox. Perhaps in future I can redesign everything in a way that just happens to work well with all kinds of NFS setups, but I don't really hold a lot of hope for that. It seems that either you'll get bad performance (I'm not really interested in making Dovecot do that) or you'll use such a setup where you get good performance by avoiding the NFS problems.
>
> There are several huge Dovecot+NFS setups. They use director. It works well enough (and with the recent fixes, I'd hope perfectly).

Are any of these huge setups using mdbox? Or does it make a difference?
I.e. Indexes are indexes whether they be maildir or mdbox. Would
Director alone allow the OP to avoid the cache corruption issues
discussed in this thread? Or would there still be problems due to the
split LDA setup?

>> In this case the OP has Netapp storage. Netapp units support both NFS
>> exports as well as iSCSI LUNs. If the OP could utilize iSCSI instead of
>> NFS, switching to GFS2 or OCFS, do you see these cluster filesystems as
>> preferable for mdbox?
>
> I don't have personal experience with cluster filesystems in recent years (other than glusterfs, which had some problems, but most(/all?) were fixed already or are available from their commercial support..). Based on what I've heard, I'm guessing they work better than random-access-NFS, but even if there are no actual corruption problems, it sounds like their performance isn't very good.

So would an ideal long term solution to indexes in a cluster (NFS or
clusterFS) environment be something like Dovecot's own index metadata
broker daemon/lock manager that controls access to the files/indexes?
Either a distributed token based architecture, or maybe something
'simple' such as a master node which all others send index updates to,
with the master performing the actual writes to the files, similar to a
database architecture? The former likely being more difficult to
implement, the latter having potential scalability and SPOF issues. Or
is the percentage of Dovecot cluster deployments so small that it's
difficult to justify the development investment for such a thing?

Thanks Timo.

-- 
Stan

From robert at schetterer.org Fri Jan 20 09:43:01 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Fri, 20 Jan 2012 08:43:01 +0100
Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox
In-Reply-To: 
References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com>
Message-ID: <4F191B05.9020409@schetterer.org>

On 20.01.2012 01:13, Timo Sirainen wrote:
> On 20.1.2012, at 1.51, Stan Hoeppner wrote:
>
>> I spent a decent amount of time last night researching the NFS cache
>> issue. It seems there is no way to completely disable NFS client
>> caching (in lieu of rewriting the code oneself--a daunting task), which
>> would seem to be the real solution to the mdbox index corruption problem.
>>
>> So I went looking for alternatives and came up with the idea above.
>> Obviously it's far from an optimal solution and introduces some
>> limitations, but I thought it was worth tossing out for discussion.
>
> I spent months looking into NFS related issues. I read through Linux and FreeBSD kernel source codes to figure out if there's something I could do to avoid the problems I see. I sent some patches to try to improve things, which of course didn't get accepted (some alternative ways might have been, but it would have required much more work on my part). The mail_nfs_* settings are the result of what I found out. They don't fully work, so I gave up.
>
>> Timo, it seems that when you designed mdbox you didn't have NFS based
>> clusters in mind. Do you consider mdbox simply not suitable for such an
>> NFS cluster deployment? If one has no choice but an NFS cluster
>> architecture, what Dovecot mailbox format do you recommend? Stick with
>> maildir?
>
> In the typical random-access NFS setup I don't consider any of Dovecot's formats suitable. Not maildir, not dbox. Perhaps in future I can redesign everything in a way that just happens to work well with all kinds of NFS setups, but I don't really hold a lot of hope for that. It seems that either you'll get bad performance (I'm not really interested in making Dovecot do that) or you'll use such a setup where you get good performance by avoiding the NFS problems.
>
> There are several huge Dovecot+NFS setups. They use director. It works well enough (and with the recent fixes, I'd hope perfectly).
>
>> In this case the OP has Netapp storage. Netapp units support both NFS
>> exports as well as iSCSI LUNs. If the OP could utilize iSCSI instead of
>> NFS, switching to GFS2 or OCFS, do you see these cluster filesystems as
>> preferable for mdbox?
>
> I don't have personal experience with cluster filesystems in recent years (other than glusterfs, which had some problems, but most(/all?) were fixed already or are available from their commercial support..). Based on what I've heard, I'm guessing they work better than random-access-NFS, but even if there are no actual corruption problems, it sounds like their performance isn't very good.

For info: I have 3500 users behind keepalived loadbalancers with DRBD
and OCFS2 on two Lucid servers. They are heavily penetrated by POP3,
with Maildir on Dovecot 2. In the beginning I had some performance
problems, but they were mostly related to the RAID controller's IO, so
IMAP was very slow. Fixing these RAID problems gave good IMAP
performance (beside some Dovecot and kernel tuneups). Anyway, I would
rethink this whole setup before going up to more users. I.e. I guess
mixing loadbalancers and directors is no problem; Maildir seems to be
slow by design in terms of IO, so mdbox might be better. And after all
I would investigate DRBD more and compare GFS, OCFS and other cluster
filesystems better, i.e. before switching to iSCSI. I think it should
be possible to design partitioning with LDAP or SQL, i.e. to split up
heavy and big mailboxes into separate storage partitions etc. -- am I
right here, Timo? Anyway, I would like to test some cross-site setup
with e.g. glusterfs, lustre etc. to get more knowledge as the base of a
multi-redundant mailsystem.

-- 
Best Regards

MfG Robert Schetterer

Germany/Munich/Bavaria

From ewald.lists at fun.de Fri Jan 20 14:35:39 2012
From: ewald.lists at fun.de (Ewald Dieterich)
Date: Fri, 20 Jan 2012 13:35:39 +0100
Subject: [Dovecot] Notify plugin: segmentation fault
Message-ID: <4F195F9B.3030202@fun.de>

I'm trying to develop a plugin that uses the hooks provided by the
notify plugin.
The notify plugin segfaults if you don't set the mailbox_rename hook. I
attached a patch to notify-plugin.c from Dovecot 2.0.16 that should fix
this.

Ewald

-------------- next part --------------
A non-text attachment was scrubbed...
Name: notify-plugin.c.patch
Type: text/x-diff
Size: 530 bytes
Desc: not available
URL: 

From harm at vevida.nl Fri Jan 20 00:30:12 2012
From: harm at vevida.nl (Harm Weites)
Date: Thu, 19 Jan 2012 23:30:12 +0100
Subject: [Dovecot] LMTP ignoring tcpwrappers
Message-ID: <1327012212.2003.32.camel@manbearpig.lan.kantoor.vevida.net>

Hello,

we want to use dovecot LMTP for efficient mail delivery from our MX
servers (running postfix 2.8) to our storage servers (dovecot 2.0.17).
However, the one problem we see is the lack of access control when using
LMTP. It appears that every client in our network who has access to the
storage machines can drop a message in a Maildir of any user on that
storage server. To prevent this behaviour it would be nice to use
libwrap, just as it can be used for the POP3/IMAP protocols. This,
however, seems to be impossible using the configuration as mentioned on
the dovecot wiki:

login_access_sockets = tcpwrap

service tcpwrap {
  unix_listener login/tcpwrap {
    group = $default_login_user
    mode = 0600
    user = $default_login_user
  }
}

This seems to imply it only works for a login, and LMTP does not use
that. The above works perfectly when trying to block access to IMAP or
POP3 in /etc/hosts.deny, though a setting for LMTP is simply ignored.

Is there a configuration setting needed for this to work for LMTP, or is
it simply not possible (yet), and does libwrap support for LMTP require
a patch?

Any help is appreciated.

Regards,
Harm

From simon.brereton at buongiorno.com Fri Jan 20 18:06:45 2012
From: simon.brereton at buongiorno.com (Simon Brereton)
Date: Fri, 20 Jan 2012 11:06:45 -0500
Subject: [Dovecot] mail_max_userip_connections exceeded.
Message-ID: 

Hi

I'm using Dovecot version 1:1.2.15-7 installed on Debian Squeeze via
apt-get. I have this error in the logs.

/var/log/mail.log.1:2490:Jan 19 12:02:55 mail dovecot: imap-login:
Maximum number of connections from user+IP exceeded
(mail_max_userip_connections): user=, method=PLAIN,
rip=127.0.0.1, secured

I never changed this from the default 10. When I googled this error
there was a thread on this list from May 2011 that indicated one would
need one connection per user per subscribed folder. However, I know
that user doesn't have 10 folders, let alone 10 subscribed folders! I
can increase it, but it's not going to scale well. And there are people
on this list with 1000x as many users as I have - so how do they deal
with that?

127.0.0.1 is obviously webmail (IMP5).

So, how/why am I seeing this, and should I be concerned?

Simon

From jesus.navarro at bvox.net Fri Jan 20 18:24:41 2012
From: jesus.navarro at bvox.net (Jesús M. Navarro)
Date: Fri, 20 Jan 2012 17:24:41 +0100
Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command
Message-ID: <201201201724.41631.jesus.navarro@bvox.net>

Hi:

This is my first message to this list, so pleased to meet you all.

Using dovecot 2.0.17 from packages at xi.rename-it.nl on a Debian
"Squeeze" i686. Mail storage is a local ext3 partition (I attached the
output of dovecot -n to this message).

I'm having problems on a maildir due to dovecot returning an UID 0 to an
UID THREAD REFS command:

in <== TAG5 UID THREAD REFS us-ascii SINCE 18-Jul-2011
out <== * THREAD (0)(51 52)(53)(54 55 56)(57)(58)(59 60)(61) TAG5 OK Thread completed.
The issuer is an atmail webmail client; after the previous output it will
try a UID FETCH 0, which fails with a "TAG6 BAD Error in IMAP command UID
FETCH: Invalid uidset" message.

I think that, as per a previous answer from Timo Sirainen*1, this should
be considered a Dovecot bug, am I right? Anyway, what should I try in
order to find out why exactly this is happening?

TIA

*1 http://www.dovecot.org/list/dovecot/2011-November/061992.html

-------------- next part --------------
# 2.0.17 (687949948a83): /etc/dovecot/dovecot.conf
# OS: Linux 2.6.29-xs5.5.0.15 i686 Debian 6.0.3 ext3
auth_cache_negative_ttl = 10 mins
auth_cache_size = 10 M
auth_debug = yes
auth_debug_passwords = yes
auth_mechanisms = plain login digest-md5 cram-md5
auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@:
auth_verbose = yes
disable_plaintext_auth = no
mail_gid = vmail
mail_location = maildir:/var/vmail/%d/%n
mail_plugins = " notify xmpp_pubsub fts fts_squat zlib"
mail_privileged_group = mail
mail_uid = vmail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave
namespace {
  inbox = yes
  location =
  prefix =
  separator = /
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf
  driver = sql
}
plugin {
  enotify_xmpp_jid = dovecot at openfire/%l
  enotify_xmpp_password = [EDITED]
  enotify_xmpp_server = [EDITED]
  enotify_xmpp_use_tls = no
  fts = squat
  fts_squat = partial=4 full=10
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size vsize flags
  mail_log_group_events = no
  sieve = ~/.dovecot.sieve
  sieve_after = /var/lib/dovecot.sieve/after.d/
  sieve_before = /var/lib/dovecot.sieve/before.d/
  sieve_dir = ~/sieve
  sieve_global_path = /var/lib/dovecot.sieve/default.sieve
  xmpp_pubsub_events = delete undelete expunge copy mailbox_delete mailbox_rename
  xmpp_pubsub_fields = uid box msgid size vsize flags
}
protocols = " imap lmtp sieve pop3"
service auth {
  unix_listener auth-userdb {
    group = vmail
    mode = 0600
    user = vmail
  }
}
service imap-login {
  service_count = 0
}
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
  inet_listener sieve_deprecated {
    port = 2000
  }
}
ssl_cert = 

From tss at iki.fi Fri Jan 20 2012
From: tss at iki.fi (Timo Sirainen)
Subject: [Dovecot] MySQL server has gone away
References: <20120113224607.GS4844@bender.csupomona.edu>
Message-ID: <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi>

On 14.1.2012, at 1.19, Mark Moseley wrote:

>>> Also another idea to avoid them in the first place:
>>>
>>> service auth-worker {
>>>   idle_kill = 20
>>> }
>>
>> Ah, set the auth-worker timeout to less than the mysql timeout to
>> prevent a stale mysql connection from ever being used. I'll try that,
>> thanks.
>
> I gave that a try. Sometimes it seems to kill off the auth-worker but
> not till after a minute or so (with idle_kill = 20). Other times, the
> worker stays around for more like 5 minutes (I gave up watching),
> despite being idle -- and I'm the only person connecting to it, so
> it's definitely idle. Does auth-worker perhaps only wake up every so
> often to check its idle status?

This is fixed in v2.1 hg. The default idle_kill of 60 seconds seems to have gotten rid of the "MySQL server has gone away" errors completely. So I guess the problem was that during some peak times a ton of auth worker processes were created, but afterwards they weren't used until the next peak happened, and then they failed.
http://hg.dovecot.org/dovecot-2.1/rev/3963862a4086 http://hg.dovecot.org/dovecot-2.1/rev/58556a90259f
From tss at iki.fi Fri Jan 20 19:17:24 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 19:17:24 +0200 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <20120114001912.GZ4844@bender.csupomona.edu> References: <4F108834.60709@schetterer.org> <20120114001912.GZ4844@bender.csupomona.edu> Message-ID: <2B3DAEEA-9281-4E5B-BB90-4FCE9C61C9E4@iki.fi> On 14.1.2012, at 2.19, Paul B. Henson wrote: > On Fri, Jan 13, 2012 at 11:38:28AM -0800, Robert Schetterer wrote: > >> by the way, if you use sql for auth have you tried auth caching ? >> >> http://wiki.dovecot.org/Authentication/Caching > > That page says you can send a USR2 signal to the auth process for cache > stats? That doesn't seem to work. OTOH, that page is for version 1, not > 2; is there some other way to generate cache stats in version 2? Works for me. Are you maybe sending it to the wrong auth process (auth worker instead of master)?
From tss at iki.fi Fri Jan 20 21:14:07 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:14:07 +0200 Subject: [Dovecot] Notify plugin: segmentation fault In-Reply-To: <4F195F9B.3030202@fun.de> References: <4F195F9B.3030202@fun.de> Message-ID: <4F19BCFF.904@iki.fi> On 01/20/2012 02:35 PM, Ewald Dieterich wrote: > I'm trying to develop a plugin that uses the hooks provided by the > notify plugin. The notify plugin segfaults if you don't set the > mailbox_rename hook. I attached a patch to notify-plugin.c from > Dovecot 2.0.16 that should fix this. Fixed, thanks.
From tss at iki.fi Fri Jan 20 21:16:01 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:16:01 +0200 Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command In-Reply-To: <201201201724.41631.jesus.navarro@bvox.net> References: <201201201724.41631.jesus.navarro@bvox.net> Message-ID: <4F19BD71.9000603@iki.fi> On 01/20/2012 06:24 PM, Jesús M. Navarro wrote: > I'm having problems on a maildir due to dovecot returning an UID 0 to an UID > THREAD REFS command: > > I think that, as per a previous answer from Timo Sirainen*1, this should be > considered a Dovecot bug, am I right? Anyway, what should I try in order to > find out why exactly this is happening? Yes, it's a bug. > *1 http://www.dovecot.org/list/dovecot/2011-November/061992.html Same question as in that mail: Could you instead send me such a mailbox where you can reproduce this problem? Probably sending dovecot.index, dovecot.index.log and dovecot.index.thread files would be enough. None of those contain any sensitive information.
From tss at iki.fi Fri Jan 20 21:19:57 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:19:57 +0200 Subject: [Dovecot] mail_max_userip_connections exceeded. In-Reply-To: References: Message-ID: <4F19BE5D.20603@iki.fi> On 01/20/2012 06:06 PM, Simon Brereton wrote: > I have this error in the logs. > /var/log/mail.log.1:2490:Jan 19 12:02:55 mail dovecot: imap-login: > Maximum number of connections from user+IP exceeded > (mail_max_userip_connections): user=, method=PLAIN, > rip=127.0.0.1, secured > > I never changed this from the default 10. When I googled this error > there was a thread on this list from May 2011 that indicated one would > need one connection per user per subscribed folder. However, I know > that user doesn't have 10 folders, let alone 10 subscribed folders! I > can increase it, but it's not going to scale well.
And there are > people on this list with 1000x as many users as I have - so how do they > deal with that? > > 127.0.0.1 is obviously webmail (IMP5). > > So, how/why am I seeing this, and should I be concerned? Well, it really does look like IMP is using more than 10 connections at the same time. Or perhaps some of the existing connections are just hanging for some reason after IMP already discarded them, such as maybe a very long running SEARCH command was started and IMP then gave up. You could look at the process list (with verbose_proctitle=yes) and check if the user has other processes hanging at the time when this error is logged.
From tss at iki.fi Fri Jan 20 21:34:07 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:34:07 +0200 Subject: [Dovecot] LMTP ignoring tcpwrappers In-Reply-To: <1327012212.2003.32.camel@manbearpig.lan.kantoor.vevida.net> References: <1327012212.2003.32.camel@manbearpig.lan.kantoor.vevida.net> Message-ID: On 20.1.2012, at 0.30, Harm Weites wrote: > we want to use dovecot LMTP for efficient mail delivery from our MX > servers (running postfix 2.8) to our storage servers (dovecot 2.0.17). > However, the one problem we see is the lack of access control when using > LMTP. It appears that every client in our network who has access to the > storage machines can drop a message in a Maildir of any user on that > storage server. Is it a real problem? Can't they just as easily drop messages to other users' maildirs simply by sending the mail via SMTP? > To prevent this behaviour it would be nice to use > libwrap, just as it can be used for POP3/IMAP protocols. > This, however, seems to be impossible using the configuration as > mentioned on the dovecot wiki: > > login_access_sockets = tcpwrap > > This seems to imply it only works for a login, and LMTP does not use > that. The above works perfectly when trying to block access to IMAP or > POP3 in /etc/hosts.deny, though a setting for LMTP is simply ignored. Right. I'm not sure if I'd even want to add such a feature to LMTP. It doesn't really feel like it belongs there. > Is there a configuration setting needed for this to work for LMTP, or is > it simply not possible (yet) and does libwrap support for LMTP require > a patch? Not possible in Dovecot currently. You could use firewall rules.
From tss at iki.fi Fri Jan 20 21:44:19 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:44:19 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <4F18D12F.2050809@hardwarefreak.com> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com> <4F18D12F.2050809@hardwarefreak.com> Message-ID: On 20.1.2012, at 4.27, Stan Hoeppner wrote: >> I spent months looking into NFS related issues. I read through Linux and FreeBSD kernel source codes to figure out if there's something I could do to avoid the problems I see. I sent some patches to try to improve things, which of course didn't get accepted (some alternative ways might have been, but it would have required much more work on my part). The mail_nfs_* settings are the result of what I found out. They don't fully work, so I gave up. > > Yeah, I recall some of your posts from that time, and your frustration. > If an NFS config option existed to simply turn off the NFS client > caching, would that resolve most/all of the remaining issues?
Or is the > problem more complex than just the file caching? I ask as it would seem > creating such a Boolean NFS config option should be simple to implement. > If the devs could be convinced of the need for it. It would work, but the performance would suck. >> There are several huge Dovecot+NFS setups. They use director. It works well enough (and with the recent fixes, I'd hope perfectly). > > Are any of these huge setups using mdbox? Or does it make a difference? I think they're all Maildirs currently, but it shouldn't make a difference. The index files are the ones most easily corrupted, so if they work then everything else should work just as well. In those director setups there have been no index corruption errors. > I.e. Indexes are indexes whether they be maildir or mdbox. Would > Director alone allow the OP to avoid the cache corruption issues > discussed in this thread? Or would there still be problems due to the > split LDA setup? By using LMTP proxying with director there wouldn't be any problems. Or using director for IMAP/POP3 and not using dovecot-lda for mail deliveries would work too. >>> In this case the OP has Netapp storage. Netapp units support both NFS >>> exports as well as iSCSI LUNs. If the OP could utilize iSCSI instead of >>> NFS, switching to GFS2 or OCFS, do you see these cluster filesystem as >>> preferable for mdbox? >> >> I don't have personal experience with cluster filesystems in recent years (other than glusterfs, which had some problems, but most(/all?) were fixed already or are available from their commercial support..). Based on what I've heard, I'm guessing they work better than random-access-NFS, but even if there are no actual corruption problems, it sounds like their performance isn't very good. > > So would an ideal long term solution to indexes in a cluster (NFS or > clusterFS) environment be something like Dovecot's own index metadata > broker daemon/lock manager that controls access to the files/indexes? > Either a distributed token based architecture, or maybe something > 'simple' such as a master node which all others send index updates to > with the master performing the actual writes to the files, similar to a > database architecture? The former likely being more difficult to > implement, the latter having potential scalability and SPOF issues. > > Or is the percentage of Dovecot cluster deployments so small that it's > difficult to justify the development investment for such a thing? I'm not sure if such daemons would be of much help. For best performance the user's mail access should be redirected to the same server in any case, and doing that solves all the other problems as well. I've a few other clustering plans besides a regular NFS based setup, but all of them rely on user normally being redirected to the same server (exception: split brain operation when mails are replicated to multiple data centers). 
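For reference, the director + LMTP-proxy arrangement discussed above can be sketched along these lines, based on the 2.0-era director documentation (host addresses are placeholders, and the passdb must additionally return proxy for the backends):

  director_servers = 192.168.10.1 192.168.10.2
  director_mail_servers = 192.168.10.10-192.168.10.20

  service director {
    unix_listener login/director {
      mode = 0666
    }
    fifo_listener login/proxy-notify {
      mode = 0666
    }
    unix_listener director-userdb {
      mode = 0600
    }
    inet_listener {
      port = 9090
    }
  }
  service imap-login {
    executable = imap-login director
  }
  service pop3-login {
    executable = pop3-login director
  }
  # LMTP deliveries get the same user -> backend mapping as IMAP/POP3:
  protocol lmtp {
    auth_socket_path = director-userdb
  }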
From tss at iki.fi Fri Jan 20 21:48:00 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:48:00 +0200 Subject: [Dovecot] Performance of Maildir vs sdbox/mdbox In-Reply-To: <4F191B05.9020409@schetterer.org> References: <02f401ccd5de$ef17c470$cd474d50$@standen.id.au> <1326894871.11500.45.camel@innu> <4F179E68.5020408@hardwarefreak.com> <44D201BF-3A31-4354-8B55-0BFE11721601@iki.fi> <4F18AC6A.4050508@hardwarefreak.com> <4F191B05.9020409@schetterer.org> Message-ID: <4623523A-742E-4C32-82A0-0918F8B2DFE4@iki.fi> On 20.1.2012, at 9.43, Robert Schetterer wrote: > i.e i think it should be possible to design partitioning with ldap or sql > to i.e split up heavy and big mailboxes in separate storage partitions etc > am i right here Timo ? You can use per-user home or mail_location that points to different storages. If you want only some folders in separate storages, you could use symlinks, but deleting such a folder probably wouldn't delete the mails (or at least not all files).
From tss at iki.fi Fri Jan 20 21:58:01 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 21:58:01 +0200 Subject: [Dovecot] Clients show .subscriptions folder In-Reply-To: References: Message-ID: <816AB6CB-989A-4D87-8FC0-80E8BE880539@iki.fi> On 10.1.2012, at 18.34, Mark Sapiro wrote: > Since upgrading from dovecot-2.1.rc1 to dovecot-2.1.rc3, some clients > are showing a .subscriptions file in the user's mbox path as a folder. Fixed: http://hg.dovecot.org/dovecot-2.1/rev/958ef86e7f5b
From tss at iki.fi Fri Jan 20 23:06:57 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 20 Jan 2012 23:06:57 +0200 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> References: <20120113224607.GS4844@bender.csupomona.edu> <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> Message-ID: <9E57D55C-5F19-4291-A2E7-BC06678B2F79@iki.fi> On 20.1.2012, at 19.16, Timo Sirainen wrote: > This is fixed in v2.1 hg. The default idle_kill of 60 seconds seems to have gotten rid of the "MySQL server has gone away" errors completely. So I guess the problem was that during some peak times a ton of auth worker processes were created, but afterwards they weren't used until the next peak happened, and then they failed. Hmh. Still doesn't work 100%: auth-worker(28788): Error: mysql: Query failed, retrying: MySQL server has gone away (idled for 181 secs) auth-worker(7413): Error: mysql: Query failed, retrying: MySQL server has gone away (idled for 298 secs) I'm not really sure why it's not killing itself after 60 seconds of idling. Probably related to how mysql code tracks idle time and how idle_kill tracks it.. Anyway, those errors are much rarer now.
From henson at acm.org Sat Jan 21 02:00:51 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 20 Jan 2012 16:00:51 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <9E57D55C-5F19-4291-A2E7-BC06678B2F79@iki.fi> References: <20120113224607.GS4844@bender.csupomona.edu> <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> <9E57D55C-5F19-4291-A2E7-BC06678B2F79@iki.fi> Message-ID: <4F1A0033.8060202@acm.org> On 1/20/2012 1:06 PM, Timo Sirainen wrote: > Hmh. Still doesn't work 100%: > > auth-worker(28788): Error: mysql: Query failed, retrying: MySQL > server has gone away (idled for 181 secs) auth-worker(7413): Error: > mysql: Query failed, retrying: MySQL server has gone away (idled for > 298 secs) > > I'm not really sure why it's not killing itself after 60 seconds of > idling.
Probably related to how mysql code tracks idle time and how > idle_kill tracks it.. Anyway, those errors are much rarer now. The mysql server starts counting idle time from the last network communication with the client. So presumably if the auth worker gets marked as not idle by anything not involving interaction with the mysql server, they could get out of sync. Before you posted a potential fix to the idle timeout, I was looking at other possible ways to resolve the issue. Currently, an authentication request is tried exactly twice -- one initial try, and one retry. Looking at driver-sqlpool.c: if (result->failed_try_retry && !request->retried) { Currently, retried is a boolean. What if retried were an integer instead, and a new configuration variable allowed you to specify how many times an authentication attempt should be retried? The default could be 2, which would result in exactly the same behavior. But then you could set it to 3 or 4 to prevent a request from hitting a timed-out connection twice and failing completely. Ideally, a better fix would be for the client not to consider a "MySQL server has gone away" return as a failure, but instead immediately reconnect and try again without marking it as a retry. However, from reviewing the code, that would be a much more difficult and invasive change. Changing the existing retried variable to an integer count rather than a boolean is pretty simple. -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768
From jtam.home at gmail.com Sat Jan 21 02:48:54 2012 From: jtam.home at gmail.com (Joseph Tam) Date: Fri, 20 Jan 2012 16:48:54 -0800 (PST) Subject: [Dovecot] mail_max_userip_connections exceeded. In-Reply-To: References: Message-ID: Simon Brereton writes: > /var/log/mail.log.1:2490:Jan 19 12:02:55 mail dovecot: imap-login: > Maximum number of connections from user+IP exceeded > (mail_max_userip_connections): user=, method=PLAIN, > rip=127.0.0.1, secured > > I never changed this from the default 10. When I googled this error > there was a thread on this list from May 2011 that indicated one would > need one connection per user per subscribed folder. However, I know > that user doesn't have 10 folders, let alone 10 subscribed folders! I > can increase it, but it's not going to scale well. And there are > people on this list with 1000x as many users as I have - so how do they > deal with that? > > 127.0.0.1 is obviously webmail (IMP5). IMAP proxy or lack of proxy? IMAP proxy could be a problem if the user had opened more than 10 (unique) mailboxes. The proxy would keep this connection open until a timeout, and after some time, could accumulate more connections than your limit. The lack of proxy could solve your problem if for some reason your webmail software is not closing the IMAP connection properly (I assume IMP does a connect/authenticate/IMAP command/logout for every webmail operation). Every operation (even on the same mailbox) would open up a new connection. The proxy software will recognize the reconnection and funnel it through its cached connection. You can lsof the user's IMAP processes (or troll through /proc/{imap-process} or what you have) to figure out which mailboxes it has opened. On my system, file descriptors 9 and 11 give you the names of the index files that indicate which mailboxes are being accessed.
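For instance, something along these lines (the PID is hypothetical):

  # find the user's imap processes
  ps aux | grep '[i]map'
  # list the index files a given process holds open
  lsof -p 12345 | grep dovecot.index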
Joseph Tam
From mark at msapiro.net Sat Jan 21 03:02:37 2012 From: mark at msapiro.net (Mark Sapiro) Date: Fri, 20 Jan 2012 17:02:37 -0800 Subject: [Dovecot] Clients show .subscriptions folder In-Reply-To: <816AB6CB-989A-4D87-8FC0-80E8BE880539@iki.fi> Message-ID: Timo Sirainen wrote: >On 10.1.2012, at 18.34, Mark Sapiro wrote: > >> Since upgrading from dovecot-2.1.rc1 to dovecot-2.1.rc3, some clients >> are showing a .subscriptions file in the user's mbox path as a folder. > >Fixed: http://hg.dovecot.org/dovecot-2.1/rev/958ef86e7f5b Thanks Timo. I've installed the above and it seems fine. -- Mark Sapiro The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan
From dovecot at knutejohnson.com Sat Jan 21 03:04:46 2012 From: dovecot at knutejohnson.com (Knute Johnson) Date: Fri, 20 Jan 2012 17:04:46 -0800 Subject: [Dovecot] mail_max_userip_connections exceeded. In-Reply-To: References: Message-ID: <4F1A0F2E.9020907@knutejohnson.com> On 1/20/2012 4:48 PM, Joseph Tam wrote: > Simon Brereton writes: > >> /var/log/mail.log.1:2490:Jan 19 12:02:55 mail dovecot: imap-login: >> Maximum number of connections from user+IP exceeded >> (mail_max_userip_connections): user=, method=PLAIN, >> rip=127.0.0.1, secured >> >> I never changed this from the default 10. When I googled this error >> there was a thread on this list from May 2011 that indicated one would >> need one connection per user per subscribed folder. However, I know >> that user doesn't have 10 folders, let alone 10 subscribed folders! I >> can increase it, but it's not going to scale well. And there are >> people on this list with 1000x as many users as I have - so how do they >> deal with that? >> >> 127.0.0.1 is obviously webmail (IMP5). > > IMAP proxy or lack of proxy? > > IMAP proxy could be a problem if the user had opened more than 10 (unique) > mailboxes. The proxy would keep this connection open until a timeout, and > after some time, could accumulate more connections than your limit. > > The lack of proxy could solve your problem if for some reason your webmail > software is not closing the IMAP connection properly (I assume IMP does a > connect/authenticate/IMAP command/logout for every webmail operation). > Every operation (even on the same mailbox) would open up a new connection. > The proxy software will recognize the reconnection and funnel it through > its cached connection. > > You can lsof the user's IMAP processes (or troll through > /proc/{imap-process} or what you have) to figure out which mailboxes it > has opened. On my system, file descriptors 9 and 11 give you the names > of the index files that indicate which mailboxes are being accessed. > > Joseph Tam I'm not sure that I saw the beginning of this thread but I got the same error. I traced it to the fact that my desktop and my phone email programs were both trying to access my imap from the same local network. I changed it to 20 and I haven't seen any more problems. I don't know if that would be a problem on a really heavily used server or not. -- Knute Johnson
From henson at acm.org Sat Jan 21 03:34:41 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 20 Jan 2012 17:34:41 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <2B3DAEEA-9281-4E5B-BB90-4FCE9C61C9E4@iki.fi> References: <4F108834.60709@schetterer.org> <20120114001912.GZ4844@bender.csupomona.edu> <2B3DAEEA-9281-4E5B-BB90-4FCE9C61C9E4@iki.fi> Message-ID: <4F1A1631.2000704@acm.org> On 1/20/2012 9:17 AM, Timo Sirainen wrote: > Works for me.
Are you maybe sending it to the wrong auth process (auth worker instead of master)? I had tried sending it to both; but the underlying problem turned out to be that the updated config hadn't actually been deployed yet 8-/ oops. Once I fixed that, sending the signal did generate the log output. Evidently nothing is printed out in the case where the authentication caching isn't enabled; maybe you should make it print out something like "Hey idiot, caching isn't turned on" ;). Thanks... -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768
From user+dovecot at localhost.localdomain.org Sat Jan 21 03:46:47 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Sat, 21 Jan 2012 02:46:47 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): doveadm mailbox list withholds child mailboxes In-Reply-To: <4F0F8815.8070609@localhost.localdomain.org> References: <4F0F8815.8070609@localhost.localdomain.org> Message-ID: <4F1A1907.4070906@localhost.localdomain.org> On 01/13/2012 02:25 AM Pascal Volk wrote: > doveadm mailbox list -u user at example.com doesn't show child mailboxes. Looks like http://hg.dovecot.org/dovecot-2.1/rev/54e74090fb42 fixed the problem. Thanks Regards, Pascal -- The trapper recommends today: defaced.1202102 at localdomain.org
From henson at acm.org Sat Jan 21 04:36:56 2012 From: henson at acm.org (Paul B. Henson) Date: Fri, 20 Jan 2012 18:36:56 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> References: <20120113224607.GS4844@bender.csupomona.edu> <10122B91-B309-413C-BA8F-43BF2DE0C1C7@iki.fi> Message-ID: <20120121023656.GO4207@bender.csupomona.edu> On Fri, Jan 20, 2012 at 09:16:57AM -0800, Timo Sirainen wrote: > This is fixed in v2.1 hg. The default idle_kill of 60 seconds seems to > have gotten rid of the "MySQL server has gone away" errors completely. > So I guess the problem was that during some peak times a ton of auth > worker processes were created, but afterwards they weren't used until > the next peak happened, and then they failed. > > http://hg.dovecot.org/dovecot-2.1/rev/3963862a4086 > http://hg.dovecot.org/dovecot-2.1/rev/58556a90259f Hmm, I tried to apply this to 2.0.17, and that didn't really work out. Before I spend too much time trying to hand port the changes, do you know offhand if they simply won't apply to 2.0.17 due to other changes made since then? It looks like 2.1 might be out soon, I guess maybe I should just wait for that. Thanks... -- Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/ Operating Systems and Network Analyst | henson at csupomona.edu California State Polytechnic University | Pomona CA 91768
From admin at opsys.de Sat Jan 21 20:39:00 2012 From: admin at opsys.de (Markus Fritz) Date: Sat, 21 Jan 2012 19:39:00 +0100 Subject: [Dovecot] Sieve temporary script folder Message-ID: Hello, I have the issue that sieve wants to write its tmp files to /etc/dovecot/. But I want sieve to write to a folder for which it has write permissions. I created a script to put spam in the 'Spam' folder and put it in /etc/dovecot/.dovecot.sieve. When receiving a mail, sieve wants to create a tmp file like /etc/dovecot/.dovecot.sieve.12033 How can I change the tmp folder sieve uses?
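What is most likely happening here, assuming a stock sieve setup: at delivery time the script is compiled and the resulting binary is saved next to the script via a temporary file like the one above, so the directory containing the script must be writable. The usual workaround is to precompile the script once, as a user that may write there, so nothing has to be written to /etc/dovecot at delivery time:

  sievec /etc/dovecot/.dovecot.sieve

sievec ships with the sieve plugin; the compiled binary is left next to the script and reused as long as it is newer than the script itself.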
From mikedvct at makuch.org Sun Jan 22 15:55:02 2012 From: mikedvct at makuch.org (Michael Makuch) Date: Sun, 22 Jan 2012 07:55:02 -0600 Subject: [Dovecot] where is subscribed list stored? Message-ID: <4F1C1536.1000407@makuch.org> I'm using $ /usr/sbin/dovecot --version 2.0.15 on $ cat /etc/fedora-release Fedora release 14 (Laughlin) and version 8 of Thunderbird. I use dovecot locally for internal-only access to my email archives, of which I have many gigs. Over time I end up subscribing to a couple dozen different IMAP email folders. Problem is that periodically my list of subscribed folders gets zapped to none, and I have to go and re-subscribe to a dozen or two folders again. Anyone seen this happen? It looks like the list of subscribed folders is here ~/Mail/.subscriptions and I can see in my daily backup that it reflects what appears in TBird. What might be zapping it? I use multiple email clients simultaneously on different hosts. (IOW I leave them open) Is this a problem? Does dovecot manage that in some way? Or is that my problem? I don't think this is the problem since this only occurs like a few times per year. If it were the problem I'd expect it to occur much more frequently. Thanks for any clues Mike
From me at junc.org Sun Jan 22 16:10:09 2012 From: me at junc.org (Benny Pedersen) Date: Sun, 22 Jan 2012 15:10:09 +0100 Subject: [Dovecot] =?utf-8?q?where_is_subscribed_list_stored=3F?= In-Reply-To: <4F1C1536.1000407@makuch.org> References: <4F1C1536.1000407@makuch.org> Message-ID: On Sun, 22 Jan 2012 07:55:02 -0600, Michael Makuch wrote: > $ cat /etc/fedora-release > Fedora release 14 (Laughlin) > > and version 8 of Thunderbird. can you use thunderbird 9 ? does the account work with eg roundcube webmail ? my own question is, is it a dovecot problem ? do you modify files outside of imap protocol ? if so you asked for it :-)
From jk at jkart.de Sun Jan 22 16:22:24 2012 From: jk at jkart.de (Jim Knuth) Date: Sun, 22 Jan 2012 15:22:24 +0100 Subject: [Dovecot] where is subscribed list stored? In-Reply-To: References: <4F1C1536.1000407@makuch.org> Message-ID: <4F1C1BA0.5060305@jkart.de> On 22.01.12 15:10, Benny Pedersen wrote: > can you use thunderbird 9 ? > > does the account work with eg roundcube webmail ? I've TB9 AND Roundcube. No problems with Dovecot 2.0.17 here > > my own question is, is it a dovecot problem ? > > do you modify files outside of imap protocol ? > > if so you asked for it :-) -- Mit freundlichen Grüßen, with kind regards, Jim Knuth --------- Truthfulness and politics seldom dwell under the same roof. (Marie Antoinette)
From stan at hardwarefreak.com Sun Jan 22 22:58:03 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Sun, 22 Jan 2012 14:58:03 -0600 Subject: [Dovecot] where is subscribed list stored? In-Reply-To: <4F1C1536.1000407@makuch.org> References: <4F1C1536.1000407@makuch.org> Message-ID: <4F1C785B.8020304@hardwarefreak.com> On 1/22/2012 7:55 AM, Michael Makuch wrote: > Anyone seen this happen? It looks like the list of subscribed folders is > here ~/Mail/.subscriptions and I can see in my daily backup that it > reflects what appears in TBird. What might be zapping it? I use multiple > email clients simultaneously on different hosts. (IOW I leave them open) > Is this a problem? Does dovecot manage that in some way? Or is that my > problem? I don't think this is the problem since this only occurs like a > few times per year. If it were the problem I'd expect it to occur much > more frequently.
What do your Dovecot logs and TB Activity Manager tell you, if anything? How about logging on the other MUAs? You are a human being, and are thus limited to physical interaction with a single host at any point in time. How are you "using" multiple MUAs on multiple hosts simultaneously? Can you describe this workflow? I'm guessing you're performing some kind of automated tasks with each MUA, and that is likely the root of the problem. Please describe these automated tasks. -- Stan
From jesus.navarro at bvox.net Mon Jan 23 14:55:13 2012 From: jesus.navarro at bvox.net (=?utf-8?q?Jes=C3=BAs_M=2E?= Navarro) Date: Mon, 23 Jan 2012 13:55:13 +0100 Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command In-Reply-To: <4F19BD71.9000603@iki.fi> References: <201201201724.41631.jesus.navarro@bvox.net> <4F19BD71.9000603@iki.fi> Message-ID: <201201231355.15051.jesus.navarro@bvox.net> Hi again, Timo: On Friday, 20 January 2012 20:16:01, Timo Sirainen wrote: > On 01/20/2012 06:24 PM, Jesús M. Navarro wrote: > > I'm having problems on a maildir due to dovecot returning an UID 0 to an > > UID THREAD REFS command: [...] > Could you instead send me such a mailbox where you can reproduce this > problem? Probably sending dovecot.index, dovecot.index.log and > dovecot.index.thread files would be enough. None of those contain any > sensitive information. Thank you very much. I'm sending to your personal address a whole maildir that reproduces the bug (it's very short) to avoid having it published in the mail archives.
From l.chelchowski at slupsk.eurocar.pl Mon Jan 23 15:58:22 2012 From: l.chelchowski at slupsk.eurocar.pl (l.chelchowski) Date: Mon, 23 Jan 2012 14:58:22 +0100 Subject: [Dovecot] Quota-warning and setresgid In-Reply-To: References: Message-ID: Anyone? On 2012-01-10 10:34, l.chelchowski wrote: > Hi! > > Please help me with this.
> The problem exists when quota-warning is executing: > > LOG: > Jan 10 10:15:06 lmtp(85973): Debug: none: root=, index=, control=, > inbox=, alt= > Jan 10 10:15:06 lmtp(85973): Info: Connect from local > Jan 10 10:15:06 lmtp(85973): Debug: Loading modules from directory: > /usr/local/lib/dovecot > Jan 10 10:15:06 lmtp(85973): Debug: Module loaded: > /usr/local/lib/dovecot/lib10_quota_plugin.so > Jan 10 10:15:06 lmtp(85973): Debug: Module loaded: > /usr/local/lib/dovecot/lib90_sieve_plugin.so > Jan 10 10:15:06 lmtp(85973): Debug: auth input: tester at domain.eu > home=/home/vmail/domain.eu/tester/ > mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > uid=101 gid=12 quota_rule=*:storage=2097 acl_groups= > Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: > mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: > plugin/quota_rule=*:storage=2097 > Jan 10 10:15:06 lmtp(85973): Debug: Added userdb setting: > plugin/acl_groups= > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Effective > uid=101, gid=12, home=/home/vmail/domain.eu/tester/ > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota root: > name=user backend=dict args=:proxy::quotadict > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: > root=user mailbox=* bytes=2147328 messages=0 > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: > root=user mailbox=Trash bytes=+429465 (20%) messages=0 > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota rule: > root=user mailbox=SPAM bytes=+429465 (20%) messages=0 > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: > bytes=1717862 (80%) messages=0 reverse=no command=quota-warning 80 > tester at domain.eu > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: > bytes=1932595 (90%) messages=0 reverse=no command=quota-warning 90 > tester at domain.eu > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Quota warning: > bytes=2039961 (95%) messages=0 reverse=no command=quota-warning 95 > tester at domain.eu > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: dict quota: > user=tester at domain.eu, uri=proxy::quotadict, noenforcing=0 > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : > type=private, prefix=, sep=/, inbox=yes, hidden=no, list=yes, > subscriptions=yes > location=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: maildir++: > root=/home/vmail/domain.eu/tester, > index=/var/mail/vmail/domain.eu/tester at domain.eu/index/public, control=, > inbox=/home/vmail/domain.eu/tester, alt= > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : > type=public, prefix=Public/, sep=/, inbox=no, hidden=no, list=children, > subscriptions=yes > location=maildir:/home/vmail/public/:CONTROL=/var/mail/vmail/domain.eu/tester/control/public:INDEX=/var/mail/vmail/domain.eu/tester/index/public:LAYOUT=fs > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: fs: > root=/home/vmail/public, > index=/var/mail/vmail/domain.eu/tester/index/public, > control=/var/mail/vmail/domain.eu/tester/control/public, inbox=, alt= > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: Namespace : > type=shared, prefix=Shared/%u/, sep=/, inbox=no, hidden=no, > list=children, 
subscriptions=no > location=maildir:%h/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/shared/%u > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: shared: > root=/var/run/dovecot, index=, control=, inbox=, alt= > ... > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Debug: quota: Executing > warning: quota-warning 95 tester at domain.eu > Jan 10 10:15:06 lmtp(85973, tester at domain.eu): Info: > bLUfAJoBDE/VTwEA9hAjDg: sieve: msgid=<4F0C0180.3040704 at domain.eu>: > stored mail into mailbox 'INBOX' > Jan 10 10:15:06 lmtp(85973): Info: Disconnect from local: Client quit > (in reset) > Jan 10 10:15:06 lda: Debug: Loading modules from directory: > /usr/local/lib/dovecot > Jan 10 10:15:06 lda: Debug: Module loaded: > /usr/local/lib/dovecot/lib01_acl_plugin.so > Jan 10 10:15:06 lda: Debug: Module loaded: > /usr/local/lib/dovecot/lib10_quota_plugin.so > Jan 10 10:15:06 lda: Debug: Module loaded: > /usr/local/lib/dovecot/lib90_sieve_plugin.so > Jan 10 10:15:06 lda: Debug: auth input: tester at domain.eu > home=/home/vmail/domain.eu/tester/ > mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > uid=101 gid=12 quota_rule=*:storage=2097 acl_groups= > Jan 10 10:15:06 lda: Debug: Added userdb setting: > mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public > Jan 10 10:15:06 lda: Debug: Added userdb setting: > plugin/quota_rule=*:storage=2097 > Jan 10 10:15:06 lda: Debug: Added userdb setting: plugin/acl_groups= > Jan 10 10:15:06 lda(tester at domain.eu): Fatal: > setresgid(12(mail),12(mail),101(vmail)) failed with euid=101(vmail): > Operation not permitted > Jan 10 10:15:06 master: Error: service(quota-warning): child 85974 > returned error 75 > > dovecot -n > # 2.0.16: /usr/local/etc/dovecot/dovecot.conf > # OS: FreeBSD 8.2-RELEASE-p3 amd64 > auth_master_user_separator = * > auth_mechanisms = plain login cram-md5 > auth_username_format = %Lu > dict { > quotadict = mysql:/usr/local/etc/dovecot/dovecot-dict-sql.conf > } > disable_plaintext_auth = no > first_valid_gid = 12 > first_valid_uid = 101 > log_path = /var/log/dovecot.log > mail_debug = yes > mail_gid = vmail > mail_plugins = " quota acl" > mail_privileged_group = vmail > mail_uid = vmail > managesieve_notify_capability = mailto > managesieve_sieve_capability = fileinto reject envelope > encoded-character vacation subaddress comparator-i;ascii-numeric > relational regex imap4flags copy include variables body enotify > environment mailbox date > namespace { > inbox = yes > location = > prefix = > separator = / > type = private > } > namespace { > list = children > location = > maildir:/home/vmail/public/:CONTROL=/var/mail/vmail/%d/%n/control/public:INDEX=/var/mail/vmail/%d/%n/index/public:LAYOUT=fs > prefix = Public/ > separator = / > subscriptions = yes > type = public > } > namespace { > list = children > location = maildir:%%h/:INDEX=/var/mail/vmail/%d/%u/index/shared/%%u > prefix = Shared/%%u/ > separator = / > subscriptions = no > type = shared > } > passdb { > args = /usr/local/etc/dovecot/dovecot-sql.conf > driver = sql > } > passdb { > args = /usr/local/etc/dovecot/passwd.masterusers > driver = passwd-file > master = yes > pass = yes > } > plugin { > acl = vfile:/usr/local/etc/dovecot/acls > acl_shared_dict = > file:/usr/local/etc/dovecot/shared/shared-mailboxes.db > autocreate = Trash > autocreate2 = Junk > autocreate3 = Sent > autocreate4 = Drafts > autocreate5 = Archives > autosubscribe = Trash > 
autosubscribe2 = Junk > autosubscribe3 = Sent > autosubscribe4 = Drafts > autosubscribe5 = Public/Poczta > autosubscribe6 = Archives > fts = squat > fts_squat = partial=4 full=10 > quota = dict:user::proxy::quotadict > quota_rule2 = Trash:storage=+20%% > quota_rule3 = SPAM:storage=+20%% > quota_warning = storage=80%% quota-warning 80 %u > quota_warning2 = storage=90%% quota-warning 90 %u > quota_warning3 = storage=95%% quota-warning 95 %u > sieve = ~/.dovecot.sieve > sieve_before = /usr/local/etc/dovecot/sieve/default.sieve > sieve_dir = ~/sieve > sieve_global_dir = /usr/local/etc/dovecot/sieve > sieve_global_path = /usr/local/etc/dovecot/sieve/default.sieve > } > protocols = imap pop3 sieve lmtp > service auth { > unix_listener /var/spool/postfix/private/auth { > group = mail > mode = 0660 > user = postfix > } > unix_listener auth-userdb { > group = mail > mode = 0660 > user = vmail > } > } > service dict { > unix_listener dict { > mode = 0600 > user = vmail > } > } > service imap { > executable = imap postlogin > } > service lmtp { > unix_listener /var/spool/postfix/private/dovecot-lmtp { > group = postfix > mode = 0660 > user = postfix > } > } > service managesieve { > drop_priv_before_exec = yes > } > service pop3 { > drop_priv_before_exec = yes > } > service postlogin { > executable = script-login rawlog > } > service quota-warning { > executable = script /usr/local/bin/quota-warning.sh > unix_listener quota-warning { > user = vmail > } > user = vmail > } > ssl = no > userdb { > args = /usr/local/etc/dovecot/dovecot-sql.conf > driver = sql > } > verbose_proctitle = yes > protocol imap { > imap_client_workarounds = delay-newmail tb-extra-mailbox-sep > mail_plugins = " acl imap_acl autocreate fts fts_squat quota > imap_quota" > } > protocol lmtp { > mail_plugins = quota sieve > } > protocol pop3 { > pop3_client_workarounds = outlook-no-nuls oe-ns-eoh > pop3_uidl_format = %08Xu%08Xv > } > protocol lda { > deliver_log_format = msgid=%m: %$ > mail_plugins = sieve acl quota > postmaster_address = postmaster at domain.eu > sendmail_path = /usr/sbin/sendmail > } -- Regards, Łukasz
From a.othman at cairosource.com Mon Jan 23 16:30:32 2012 From: a.othman at cairosource.com (Amira Othman) Date: Mon, 23 Jan 2012 16:30:32 +0200 Subject: [Dovecot] change smtp port Message-ID: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> Hi all I am using postfix 2.8 with dovecot-1.2.17-0_116.el5 on a CentOS 5.7 server. When I changed the smtp port from 25 to 587 in the postfix configuration my mail server stopped receiving emails. I think it sounds strange and I don't understand why this happens - can anyone help me? Regards
From giles at coochey.net Mon Jan 23 16:33:27 2012 From: giles at coochey.net (Giles Coochey) Date: Mon, 23 Jan 2012 14:33:27 +0000 Subject: [Dovecot] change smtp port In-Reply-To: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> Message-ID: On 2012-01-23 14:30, Amira Othman wrote: > Hi all > > I am using postfix 2.8 with dovecot-1.2.17-0_116.el5 on a CentOS 5.7 > server. > When I changed the smtp port from 25 to 587 in the postfix configuration my > mail > server stopped receiving emails. I think it sounds strange and I don't > understand why this happens - can anyone help me? > > > > Regards If this SMTP server is your MX record, then you need to use port 25. Only use the 587 port for authenticated submissions from your own users for outgoing email. -- Message sent via my webmail account.
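For illustration, the relevant stock master.cf lines look roughly like this (a sketch only; restriction details vary per site and postfix version):

  submission inet n       -       -       -       -       smtpd
    -o smtpd_tls_security_level=encrypt
    -o smtpd_sasl_auth_enable=yes
    -o smtpd_client_restrictions=permit_sasl_authenticated,reject

Port 25 stays open for MX traffic from other mail servers; your own users authenticate and submit on 587.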
From Ralf.Hildebrandt at charite.de Mon Jan 23 16:33:43 2012 From: Ralf.Hildebrandt at charite.de (Ralf Hildebrandt) Date: Mon, 23 Jan 2012 15:33:43 +0100 Subject: [Dovecot] change smtp port In-Reply-To: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> Message-ID: <20120123143343.GI29761@charite.de> * Amira Othman : > Hi all > > I am using postfix 2.8 with dovecot-1.2.17-0_116.el5 on a CentOS 5.7 server. > When I changed the smtp port from 25 to 587 in the postfix configuration my mail > server stopped receiving emails. That's normal. > I think it sounds strange and I don't understand why this happens - > can anyone help me? Mail from other systems comes in via port 25. Once you change the port, nobody can send mail to your server. Easy, no? -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de
From a.othman at cairosource.com Mon Jan 23 16:38:06 2012 From: a.othman at cairosource.com (Amira Othman) Date: Mon, 23 Jan 2012 16:38:06 +0200 Subject: [Dovecot] change smtp port In-Reply-To: <20120123143343.GI29761@charite.de> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> Message-ID: <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> And there is no way to receive incoming emails other than on port 25? > Hi all > > I am using postfix 2.8 with dovecot-1.2.17-0_116.el5 on a CentOS 5.7 server. > When I changed the smtp port from 25 to 587 in the postfix configuration my mail > server stopped receiving emails. That's normal. > I think it sounds strange and I don't understand why this happens - > can anyone help me? Mail from other systems comes in via port 25. Once you change the port, nobody can send mail to your server. Easy, no? -- Ralf Hildebrandt Geschäftsbereich IT | Abteilung Netzwerk Charité - Universitätsmedizin Berlin Campus Benjamin Franklin Hindenburgdamm 30 | D-12203 Berlin Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962 ralf.hildebrandt at charite.de | http://www.charite.de
From giles at coochey.net Mon Jan 23 16:41:52 2012 From: giles at coochey.net (Giles Coochey) Date: Mon, 23 Jan 2012 14:41:52 +0000 Subject: [Dovecot] change smtp port In-Reply-To: <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> Message-ID: On 2012-01-23 14:38, Amira Othman wrote: > And there is no way to receive incoming emails other than on port 25? > > No. http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol
From CMarcus at Media-Brokers.com Mon Jan 23 16:50:09 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Mon, 23 Jan 2012 09:50:09 -0500 Subject: [Dovecot] change smtp port In-Reply-To: References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> Message-ID: <4F1D73A1.2010504@Media-Brokers.com> On 2012-01-23 9:41 AM, Giles Coochey wrote: > On 2012-01-23 14:38, Amira Othman wrote: >> And there is no way to receive incoming emails other than on port 25? > No. > http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol Well, not precisely correct...
You *could* use a router that does port translation (translates incoming port 25 connections to port 587), but that would be extremely ugly and kludgy and I certainly don't recommend it. Amira - what you need to do is re-enable port 25, and then enable the submission service (port 587) at the same time (just uncomment the relevant lines in master.cf), and require your users to use the submission port for relaying their mail. -- Best regards, Charles
From giles at coochey.net Mon Jan 23 17:01:57 2012 From: giles at coochey.net (Giles Coochey) Date: Mon, 23 Jan 2012 15:01:57 +0000 Subject: [Dovecot] change smtp port In-Reply-To: <4F1D73A1.2010504@Media-Brokers.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <001601ccd9dc$9ff04d20$dfd0e760$@othman@cairosource.com> <4F1D73A1.2010504@Media-Brokers.com> Message-ID: On 2012-01-23 14:50, Charles Marcus wrote: > On 2012-01-23 9:41 AM, Giles Coochey wrote: >> On 2012-01-23 14:38, Amira Othman wrote: >>> And there is no way to receive incoming emails other than on port 25? > >> No. >> http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol > > Well, not precisely correct... > True, you can do anything you like internally, but if you want to listen and speak with the rest of the Internet, you should be RFC compliant. RFC821 Connection Establishment The SMTP transmission channel is a TCP connection established between the sender process port U and the receiver process port L. This single full duplex connection is used as the transmission channel. This protocol is assigned the service port 25 (31 octal), that is L=25. RFC5321 4.5.4.2. Receiving Strategy The SMTP server SHOULD attempt to keep a pending listen on the SMTP port (specified by IANA as port 25) at all times. This requires the support of multiple incoming TCP connections for SMTP. Some limit MAY be imposed, but servers that cannot handle more than one SMTP transaction at a time are not in conformance with the intent of this specification. As discussed above, when the SMTP server receives mail from a particular host address, it could activate its own SMTP queuing mechanisms to retry any mail pending for that host address.
From rasca at miamammausalinux.org Mon Jan 23 17:04:17 2012 From: rasca at miamammausalinux.org (RaSca) Date: Mon, 23 Jan 2012 16:04:17 +0100 Subject: [Dovecot] Quota is not working (Debian Squeeze - Dovecot 1.2) SOLVED In-Reply-To: <1326898601.11500.56.camel@innu> References: <4F13FF00.1050108@miamammausalinux.org> <1326898601.11500.56.camel@innu> Message-ID: <4F1D76F1.9070106@miamammausalinux.org> On Wed 18 Jan 2012 15:56:41 CET, Timo Sirainen wrote: [...] > You're using SQL only for passdb lookup. [...] > user_query isn't used, because you aren't using userdb sql. Hi Timo, thank you, I confirm everything you wrote. In order to help someone with the same problem: when using virtual profiles in mysql, both passdb sql (necessary to verify the authentication) and userdb sql (necessary to look up the user information) must be declared. For every value that does not have a user-specific override it is possible to declare a global value in the plugin area (and there must also be a "quota = maildir:User quota" declaration). In the end, with this configuration the quota plugin works (the sql file remains the same I first posted):
In the end, with this configuration the quota plugin works (the sql file remains the same I first posted): protocols = imap pop3 disable_plaintext_auth = no log_timestamp = "%Y-%m-%d %H:%M:%S " mail_location = maildir:/mail/mailboxes/%d/%u mail_privileged_group = mail #mail_debug = yes #auth_debug = yes mail_nfs_storage = yes mmap_disable=yes fsync_disable=no mail_nfs_index = yes protocol imap { mail_plugins = quota imap_quota } protocol pop3 { pop3_uidl_format = %08Xu%08Xv mail_plugins = quota } protocol managesieve { } protocol lda { auth_socket_path = /var/run/dovecot/auth-master postmaster_address = postmaster@ mail_plugins = sieve quota quota_full_tempfail = no } auth default { mechanisms = plain userdb sql { args = /etc/dovecot/dovecot-sql.conf } passdb sql { args = /etc/dovecot/dovecot-sql.conf } user = root socket listen { master { path = /var/run/dovecot/auth-master mode = 0600 user = vmail } client { path = /var/spool/postfix/private/auth mode = 0660 user = postfix group = postfix } } } plugin { quota = maildir:User quota quota2 = fs:Disk quota quota_rule = *:storage=1G quota_warning = storage=95%% /mail/scripts/quota-warning.sh 95 quota_warning2 = storage=80%% /mail/scripts/quota-warning.sh 80 sieve_global_path = /mail/sieve/globalsieverc } -- RaSca Mia Mamma Usa Linux: Niente ? impossibile da capire, se lo spieghi bene! rasca at miamammausalinux.org http://www.miamammausalinux.org From noeldude at gmail.com Mon Jan 23 18:14:11 2012 From: noeldude at gmail.com (Noel) Date: Mon, 23 Jan 2012 10:14:11 -0600 Subject: [Dovecot] change smtp port In-Reply-To: <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> Message-ID: <4F1D8753.9040900@gmail.com> On 1/23/2012 8:38 AM, Amira Othman wrote: > And there is no way to receive incoming emails not on port 25 ? > You can't randomly change the port you receive mail on because external MTAs have no way to find what port you're using. They will *always* use port 25 and nothing else. If your problem is that your Internet Service Provider is blocking port 25, you can contact them. Some ISPs will unblock port 25 on request, or might even have an online form you can fill out. If you can't get help from the ISP, you need a remailer service -- some outside proxy that accepts the mail for you and forwards connections to some different port on your computer. I don't know of any free services that do this; dyndns and others offer this for a fee, sometimes combined with spam/virus filtering. -- Noel Jones From moseleymark at gmail.com Mon Jan 23 21:13:56 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 23 Jan 2012 11:13:56 -0800 Subject: [Dovecot] Director questions Message-ID: In playing with dovecot director, a couple of things came up, one related to the other: 1) Is there an effective maximum of directors that shouldn't be exceeded? That is, even if technically possible, that I shouldn't go over? Since we're 100% NFS, we've scaled servers horizontally quite a bit. At this point, we've got servers operating as MTAs, servers doing IMAP/POP directly, and servers separately doing IMAP/POP as webmail backends. Works just dandy for our existing setup. But to director-ize all of them, I'm looking at a director ring of maybe 75-85 servers, which is a bit unnerving, since I don't know if the ring will be able to keep up. Is there a scale where it'll bog down? 
2) If it is too big, is there any way, that I might be missing, to use remote directors? It looks as if directors have to live locally on the same box as the proxy. For my MTAs, where they're not customer-facing, I'm much less worried about the latency it'd introduce. Likewise with my webmail servers, the extra latency would probably be trivial compared to the rest of the request--but then again, might not. But for direct IMAP, the latency would likely be more noticeable. So ideally I'd be able to make my IMAP servers (well, the frontside of the proxy, that is) be the director pool, while leaving my MTAs to talk to the director remotely, and possibly my webmail servers remote too. Is that a remote possibility?
From tss at iki.fi Mon Jan 23 21:37:02 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 23 Jan 2012 21:37:02 +0200 Subject: [Dovecot] Director questions In-Reply-To: References: Message-ID: On 23.1.2012, at 21.13, Mark Moseley wrote: > In playing with dovecot director, a couple of things came up, one > related to the other: > > 1) Is there an effective maximum of directors that shouldn't be > exceeded? That is, even if technically possible, that I shouldn't go > over? There's no definite number, but each director adds some extra traffic to the network and sometimes extra latency to lookups. So you should have only as many as you need.
It apears that every client in our network who has access to the > > storage machines can drop a message in a Maildir of any user on that > > storage server. > > Is it a real problem? Can't they just as easily drop messages to other users' maildirs simply by sending the mail via SMTP? > This is true, though, in that case messages or not passing our content scanners which is something we do not want. Hence the thought of configuring tcpwrappers, as can be done with the other two protocols, to only allow access to LMTP from our MX servers. > > To prevent this behaviour it would be nice to use > > libwrap, just as it can be used for POP3/IMAP protocols. > > This, however, seems to be impossible using the configuration as > > mentioned on the dovecot wiki: > > > > login_access_sockets = tcpwrap > > > > This seems to imply it only works for a login, and LMTP does not use > > that. The above works perfectly when trying to block access to IMAP or > > POP3 in /etc/hosts.deny, though a setting for LMTP is simply ignored. > > Right. I'm not sure if I'd even want to add such feature to LMTP. It doesn't really feel like it belongs there. > Would you rather implement something completely different to cater in access-control, or just leave things as they are now? > > Is there a configuration setting needed for this to work for LMTP, or is > > it simply not possible (yet) and does libwrap support for LMTP requires > > a patch? > > Not possible in Dovecot currently. You could use firewall rules. Yes indeed, using some firewall rules and perhaps an extra vlan sounds ok, though I would like to use something a little less low-level. From moseleymark at gmail.com Mon Jan 23 23:44:26 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 23 Jan 2012 13:44:26 -0800 Subject: [Dovecot] Director questions In-Reply-To: References: Message-ID: On Mon, Jan 23, 2012 at 11:37 AM, Timo Sirainen wrote: > On 23.1.2012, at 21.13, Mark Moseley wrote: > >> In playing with dovecot director, a couple of things came up, one >> related to the other: >> >> 1) Is there an effective maximum of directors that shouldn't be >> exceeded? That is, even if technically possible, that I shouldn't go >> over? > > There's no definite number, but each director adds some extra traffic to network and sometimes extra latency to lookups. So you should have only as many as you need. Ok. >> Since we're 100% NFS, we've scaled servers horizontally quite a >> bit. At this point, we've got servers operating as MTAs, servers doing >> IMAP/POP directly, and servers separately doing IMAP/POP as webmail >> backends. Works just dandy for our existing setup. But to director-ize >> all of them, I'm looking at a director ring of maybe 75-85 servers, >> which is a bit unnerving, since I don't know if the ring will be able >> to keep up. Is there a scale where it'll bog down? > > That's definitely too many directors. So far the largest installation I know of has 4 directors. Another one will maybe have 6-10 to handle 2Gbps traffic. Ok >> 2) If it is too big, is there any way, that I might be missing, to use >> remote directors? It looks as if directors have to live locally on the >> same box as the proxy. For my MTAs, where they're not customer-facing, >> I'm much less worried about the latency it'd introduce. Likewise with >> my webmail servers, the extra latency would probably be trivial >> compared to the rest of the request--but then again, might not. But >> for direct IMAP, the latency likely be more noticeable. 
So ideally I'd >> be able to make my IMAP servers (well, the frontside of the proxy, >> that is) be the director pool, while leaving my MTAs to talk to the >> director remotely, and possibly my webmail servers remote too. Is that >> a remote possibility? > > I guess that could be a possibility, but .. Why do you need so many proxies at all? Couldn't all of your traffic go through just a few dedicated proxy/director servers? I'm probably conceptualizing it wrongly. In our system, since it's NFS, we have everything pooled. For a given mailbox, any number of MTA (Exim) boxes could actually do the delivery, any number of IMAP servers can do IMAP for that mailbox, and any number of webmail servers could do IMAP too for that mailbox. So our horizontal scaling, server-wise, is just adding more servers to pools. This is on the order of a few million mailboxes, per datacenter. It's less messy than it probably sounds :) I was assuming that at any spot where a server touched the actual mailbox, it would need to instead proxy to a set of backend servers. Is that accurate or way off? If it is accurate, it sounds like we'd need to shuffle things up a bit. From janfrode at tanso.net Mon Jan 23 23:48:00 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 23 Jan 2012 22:48:00 +0100 Subject: [Dovecot] make imap search less verbose Message-ID: <20120123214800.GA3112@dibs.tanso.net> We have an imap-client (SOGo) that doesn't handle this status output while searching: * OK Searched 76% of the mailbox, ETA 0:50 Is there any way to disable this output on the dovecot-side? -jf From tss at iki.fi Mon Jan 23 23:56:49 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 23 Jan 2012 23:56:49 +0200 Subject: [Dovecot] make imap search less verbose In-Reply-To: <20120123214800.GA3112@dibs.tanso.net> References: <20120123214800.GA3112@dibs.tanso.net> Message-ID: On 23.1.2012, at 23.48, Jan-Frode Myklebust wrote: > We have an imap-client (SOGo) that doesn't handle this status output while > searching: > > * OK Searched 76% of the mailbox, ETA 0:50 > > Is there any way to disable this output on the dovecot-side? No way to disable it without modifying code. I think SOGo should fix it anyway.. From janfrode at tanso.net Tue Jan 24 00:19:05 2012 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 23 Jan 2012 23:19:05 +0100 Subject: [Dovecot] make imap search less verbose In-Reply-To: References: <20120123214800.GA3112@dibs.tanso.net> Message-ID: <20120123221905.GA3717@dibs.tanso.net> On Mon, Jan 23, 2012 at 11:56:49PM +0200, Timo Sirainen wrote: > > No way to disable it without modifying code. I think SOGo should fix it anyway.. > Ok, thanks. SOGo will get fixed. I was just looking for a quick workaround while we wait for updated sogo. -jf From tss at iki.fi Tue Jan 24 01:19:47 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 24 Jan 2012 01:19:47 +0200 Subject: [Dovecot] make imap search less verbose In-Reply-To: <20120123221905.GA3717@dibs.tanso.net> References: <20120123214800.GA3112@dibs.tanso.net> <20120123221905.GA3717@dibs.tanso.net> Message-ID: <6F0CE9DA-1344-4299-AC6C-616B22F54609@iki.fi> On 24.1.2012, at 0.19, Jan-Frode Myklebust wrote: > On Mon, Jan 23, 2012 at 11:56:49PM +0200, Timo Sirainen wrote: >> >> No way to disable it without modifying code. I think SOGo should fix it anyway.. >> > > Ok, thanks. SOGo will get fixed. I was just looking for a quick > workaround while we wait for updated sogo. 
With Dovecot you can do:

diff -r 759e879c4c42 src/lib-storage/index/index-search.c
--- a/src/lib-storage/index/index-search.c	Fri Jan 20 18:59:16 2012 +0200
+++ b/src/lib-storage/index/index-search.c	Tue Jan 24 01:19:18 2012 +0200
@@ -1200,9 +1200,9 @@
 			text = t_strdup_printf("Searched %d%% of the mailbox, "
 					       "ETA %d:%02d", (int)percentage,
 					       secs/60, secs%60);
-			box->storage->callbacks.
+			/*box->storage->callbacks.
 				notify_ok(box, text,
-					  box->storage->callback_context);
+					  box->storage->callback_context);*/
 		} T_END;
 	}
 	ctx->last_notify = ioloop_timeval;

From tss at iki.fi Tue Jan 24 03:58:23 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 24 Jan 2012 03:58:23 +0200 Subject: [Dovecot] dbox + SIS + zlib fixed Message-ID: I think a few people have complained about this combination being somewhat broken, resulting in bogus "cached message size wrong" errors sometimes. This fixes it: http://hg.dovecot.org/dovecot-2.0/rev/9b2931607063

From lists at necoro.eu Tue Jan 24 11:22:48 2012 From: lists at necoro.eu (René Neumann) Date: Tue, 24 Jan 2012 10:22:48 +0100 Subject: [Dovecot] Capabilities of imapc Message-ID: <4F1E7868.2060102@necoro.eu> Hi *, I can't find any decent information about the capabilities of imapc in the planned future dovecot releases. As I think about using imapc, I'll just give the two use-cases I see for me. Will this be possible with imapc? 1) One (or more) folders in a mailbox which are proxied? 2) Proxy a whole mailbox _and use the folders in it as shared folders_. That means account X on Server 1 (the dovecot box) is proxied via imapc to Server 2 (some other server). The folders of this account on Server 1 are then shared with account Y. When account Y uses these folders they are always up-to-date (so no action of account X is required). The second use-case is just some (ugly) workaround in case the first one is not possible. Thanks, René

From tss at iki.fi Tue Jan 24 11:31:27 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 24 Jan 2012 11:31:27 +0200 Subject: [Dovecot] Capabilities of imapc In-Reply-To: <4F1E7868.2060102@necoro.eu> References: <4F1E7868.2060102@necoro.eu> Message-ID: <89981E22-65F4-415A-995C-E460093BE21B@iki.fi> On 24.1.2012, at 11.22, René Neumann wrote: > I can't find any decent information about the capabilities of imapc in > the planned future dovecot releases. Mainly it's about adding support for more IMAP commands (e.g. SEARCH), so that it doesn't necessarily have to be used as a rather dummy storage. (Although it always has to be possible to be used as a dummy storage, like it is now.) > As I think about using imapc, I'll just give the two use-cases I see for > me. Will this be possible with imapc? > > 1) One (or more) folders in a mailbox which are proxied? Currently because imapc_* settings are global, you can't have more than one imapc destination. This will be fixed at some point. Otherwise this works the same way as other storage backends: You create namespace(s) for the folders you want to proxy. > 2) Proxy a whole mailbox _and use the folders in it as shared folders_. > That means account X on Server 1 (the dovecot box) is proxied via imapc > to Server 2 (some other server). The folders of this account on Server 1 > are then shared with account Y. When account Y uses these folders they > are always up-to-date (so no action of account X is required). This should be possible, yes.
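A rough sketch of what the namespace part could look like once per-namespace imapc settings exist (the host, credentials and prefix below are made-up examples, and the per-namespace variant of the imapc_* settings is not final syntax):

namespace {
  prefix = Remote/
  separator = /
  # local index/cache files for the proxied folders
  location = imapc:~/imapc-cache
}
# these are global for now; per-namespace variants are what's missing
imapc_host = imap.example.com
imapc_port = 993
imapc_ssl = imaps
imapc_user = %u
imapc_password = secret

A client would then see the remote server's folders under Remote/, while everything else stays in local storage.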
From lists at necoro.eu Tue Jan 24 12:15:48 2012 From: lists at necoro.eu (René Neumann) Date: Tue, 24 Jan 2012 11:15:48 +0100 Subject: [Dovecot] Capabilities of imapc In-Reply-To: <89981E22-65F4-415A-995C-E460093BE21B@iki.fi> References: <4F1E7868.2060102@necoro.eu> <89981E22-65F4-415A-995C-E460093BE21B@iki.fi> Message-ID: <4F1E84D4.20102@necoro.eu> On 24.01.2012 10:31, Timo Sirainen wrote: >> As I think about using imapc, I'll just give the two use-cases I see for >> me. Will this be possible with imapc? >> >> 1) One (or more) folders in a mailbox which are proxied? > > Currently because imapc_* settings are global, you can't have more than one imapc destination. This will be fixed at some point. Otherwise this works the same way as other storage backends: You create namespace(s) for the folders you want to proxy. Ah - this sounds good. I'll try as soon as dovecot-2.1 is released (because 2.0.17 does not include imapc, right?) Thanks, René

From CMarcus at Media-Brokers.com Tue Jan 24 13:23:14 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 24 Jan 2012 06:23:14 -0500 Subject: [Dovecot] change smtp port In-Reply-To: <4F1D8753.9040900@gmail.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> Message-ID: <4F1E94A2.6050409@Media-Brokers.com> On 2012-01-23 11:14 AM, Noel wrote: > If your problem is that your Internet Service Provider is blocking > port 25, you can contact them. Some ISPs will unblock port 25 on > request, or might even have an online form you can fill out. The OP specifically said that *he* had changed the port from 25 to 587... obviously he doesn't understand how smtp works... -- Best regards, Charles

From joshua at hybrid.pl Tue Jan 24 13:51:28 2012 From: joshua at hybrid.pl (Jacek Osiecki) Date: Tue, 24 Jan 2012 12:51:28 +0100 (CET) Subject: [Dovecot] change smtp port In-Reply-To: <4F1E94A2.6050409@Media-Brokers.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> Message-ID: On Tue, 24 Jan 2012, Charles Marcus wrote: > On 2012-01-23 11:14 AM, Noel wrote: >> If your problem is that your Internet Service Provider is blocking >> port 25, you can contact them. Some ISPs will unblock port 25 on >> request, or might even have an online form you can fill out. > The OP specifically said that *he* had changed the port from 25 to 587... > obviously he doesn't understand how smtp works... Most probably he wanted to enable his users to send emails via his mail server using port 587, because some may have blocked access to port 25. The proper solution is to additionally open port 587 and require users to authenticate in order to send mails through the server. If that is too complicated in postfix, the admin can simply map port 587 to 25 - most probably that would work well. Best regards, -- Jacek Osiecki joshua at ceti.pl GG:3828944 I don't want something I need. I want something I want.
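A minimal master.cf sketch of the submission setup described above (stock postfix syntax; the override list is illustrative, season to taste):

submission inet n       -       n       -       -       smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject

Port 25 then stays reserved for server-to-server mail, and authenticated users submit on 587.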
From CMarcus at Media-Brokers.com Tue Jan 24 14:18:46 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 24 Jan 2012 07:18:46 -0500 Subject: [Dovecot] change smtp port In-Reply-To: References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> Message-ID: <4F1EA1A6.2080007@Media-Brokers.com> On 2012-01-24 6:51 AM, Jacek Osiecki wrote: > On Tue, 24 Jan 2012, Charles Marcus wrote: >> On 2012-01-23 11:14 AM, Noel wrote: >>> If your problem is that your Internet Service Provider is blocking >>> port 25, you can contact them. Some ISPs will unblock port 25 on >>> request, or might even have an online form you can fill out. >> The OP specifically said that *he* had changed the port from 25 to >> 587... obviously he doesn't understand how smtp works... > Most probably he wanted to enable his users to send emails via his mail > server using port 587, because some may have blocked access to port 25. Which obviously means he has not even a basic understanding of how smtp works. > Proper solution is to open additionally port 587 and require users to > authenticate in order to send mails through the server. If it is too > complicated in postfix, Which is precisely why I (and a few others) gave him those instructions... > admin can simply map port 587 to 25 - most probably that would work well. Of course it will work... but it is most definitely *not* recommended, and not only that, will totally defeat achieving the goal of using the submission port (because *all* port 587 traffic would be routed to port 25)... I only mentioned that this could be done in answer to someone who said it couldn't... -- Best regards, Charles From a.othman at cairosource.com Tue Jan 24 14:51:59 2012 From: a.othman at cairosource.com (Amira Othman) Date: Tue, 24 Jan 2012 14:51:59 +0200 Subject: [Dovecot] change smtp port In-Reply-To: <4F1EA1A6.2080007@Media-Brokers.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> <4F1EA1A6.2080007@Media-Brokers.com> Message-ID: <001801ccda96$f843e350$e8cba9f0$@othman@cairosource.com> Thanks for reply The problem that ISP for some reason port 25 is not stable and refuse connection for several times so I tried to change port to 587 instead of 25 to keep sending emails. And I though that I can stop using port 25 as it's not always working from ISP -----Original Message----- From: dovecot-bounces at dovecot.org [mailto:dovecot-bounces at dovecot.org] On Behalf Of Charles Marcus Sent: Tuesday, January 24, 2012 2:19 PM To: dovecot at dovecot.org Subject: Re: [Dovecot] change smtp port On 2012-01-24 6:51 AM, Jacek Osiecki wrote: > On Tue, 24 Jan 2012, Charles Marcus wrote: >> On 2012-01-23 11:14 AM, Noel wrote: >>> If your problem is that your Internet Service Provider is blocking >>> port 25, you can contact them. Some ISPs will unblock port 25 on >>> request, or might even have an online form you can fill out. >> The OP specifically said that *he* had changed the port from 25 to >> 587... obviously he doesn't understand how smtp works... > Most probably he wanted to enable his users to send emails via his mail > server using port 587, because some may have blocked access to port 25. 
Which obviously means he has not even a basic understanding of how smtp works. > Proper solution is to open additionally port 587 and require users to > authenticate in order to send mails through the server. If it is too > complicated in postfix, Which is precisely why I (and a few others) gave him those instructions... > admin can simply map port 587 to 25 - most probably that would work well. Of course it will work... but it is most definitely *not* recommended, and not only that, will totally defeat achieving the goal of using the submission port (because *all* port 587 traffic would be routed to port 25)... I only mentioned that this could be done in answer to someone who said it couldn't... -- Best regards, Charles From noeldude at gmail.com Tue Jan 24 15:39:43 2012 From: noeldude at gmail.com (Noel) Date: Tue, 24 Jan 2012 07:39:43 -0600 Subject: [Dovecot] change smtp port In-Reply-To: <4F1E94A2.6050409@Media-Brokers.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> Message-ID: <4F1EB49F.4090300@gmail.com> On 1/24/2012 5:23 AM, Charles Marcus wrote: > On 2012-01-23 11:14 AM, Noel wrote: >> If your problem is that your Internet Service Provider is blocking >> port 25, you can contact them. Some ISPs will unblock port 25 on >> request, or might even have an online form you can fill out. > > The OP specifically said that *he* had changed the port from 25 to > 587... ... because port 25 didn't work. > obviously he doesn't understand how smtp works... > and we can assume he's here to learn, not to get flamed. Anyway, this is OT for dovecot. Over and out. -- Noel Jones From devurandom at gmx.net Tue Jan 24 16:43:22 2012 From: devurandom at gmx.net (Dennis Schridde) Date: Tue, 24 Jan 2012 15:43:22 +0100 Subject: [Dovecot] Trying to get metadata plugin working In-Reply-To: <201201161651.46232.thomas@koch.ro> References: <201201161651.46232.thomas@koch.ro> Message-ID: <2007528.Wh0gVP3DHS@samson> Hi Thomas and List! Am Montag, 16. Januar 2012, 16:51:45 schrieb Thomas Koch: > dict: Error: file dict commit: file_dotlock_open(~/Maildir/shared-metadata) > failed: No such file or directory The dovecot-metadata is still a work in progress, despite my earlier message reading differently. I assumed because Akonadi began to work (my telnet tests were already successful since a while), that the dovecot plugin would also work, but noticed later that everything was a coincidence. Anyway, my config is: plugin { metadata_dict = proxy::metadata } dict { metadata = file:/var/lib/dovecot/shared-metadata } This appears to work for me - I think the key is the proxy::. --Dennis -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. 
URL: From CMarcus at Media-Brokers.com Tue Jan 24 16:58:29 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 24 Jan 2012 09:58:29 -0500 Subject: [Dovecot] change smtp port In-Reply-To: <001801ccda96$f843e350$e8cba9f0$@othman@cairosource.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> <4F1EA1A6.2080007@Media-Brokers.com> <001801ccda96$f843e350$e8cba9f0$@othman@cairosource.com> Message-ID: <4F1EC715.8020700@Media-Brokers.com> On 2012-01-24 7:51 AM, Amira Othman wrote: > Thanks for reply > > The problem that ISP for some reason port 25 is not stable and refuse > connection for several times so I tried to change port to 587 instead > of 25 to keep sending emails. And I though that I can stop using port > 25 as it's not always working from ISP As I said, you obviously do not understand how smtp works. This is made obvious by your questions, and failure to understand that port 25 is *the* port for receiving email on the public internet. Period. If your main problem with port 25 is *sending* (relaying outbound) mails, then you will need to take this up with your ISP. If they are unable or unwilling to address the problem, one option would be to setup your system to relay through some other smtp relay service on the internet using port 587 as you apparently read somwehere, but you don't do this by changing the main smtpd daemon to port 587, because as you discovered, you won't be able to receive *any* emails like this. That said, I fail to see any relevance to dovecot in this thread... -- Best regards, Charles From CMarcus at Media-Brokers.com Tue Jan 24 17:07:04 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Tue, 24 Jan 2012 10:07:04 -0500 Subject: [Dovecot] change smtp port In-Reply-To: <4F1EB49F.4090300@gmail.com> References: <000801ccd9db$91671960$b4354c20$@othman@cairosource.com> <20120123143343.GI29761@charite.de> <4f1d70f6.4d25cc0a.5325.ffff88a8SMTPIN_ADDED@mx.google.com> <4F1D8753.9040900@gmail.com> <4F1E94A2.6050409@Media-Brokers.com> <4F1EB49F.4090300@gmail.com> Message-ID: <4F1EC918.2060003@Media-Brokers.com> On 2012-01-24 8:39 AM, Noel wrote: > On 1/24/2012 5:23 AM, Charles Marcus wrote: >> The OP specifically said that *he* had changed the port from 25 to >> 587... > ... because port 25 didn't work. For *sending*... And his complaint was that changing the port for the main smtpd process caused him to not be able to *receive* email... >> obviously he doesn't understand how smtp works... > and we can assume he's here to learn, not to get flamed. What!? Please point out how simply pointing out the obvious - that someone doesn't understand something - is the same as *flaming* them... Please... > Anyway, this is OT for dovecot. Over and out. Agreed on that one... nip/tuck From divizio at exentrica.it Tue Jan 24 17:58:34 2012 From: divizio at exentrica.it (Luca Di Vizio) Date: Tue, 24 Jan 2012 16:58:34 +0100 Subject: [Dovecot] [PATCH] autoconf small fix Message-ID: Hi Timo, the attached patch seems to solve a warning from autoconf: libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.in and libtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree. Best regards, Luca -------------- next part -------------- A non-text attachment was scrubbed... 
Name: autoconf.patch Type: text/x-patch Size: 279 bytes Desc: not available URL: From support at palatineweb.com Tue Jan 24 18:35:10 2012 From: support at palatineweb.com (Palatine Web Support) Date: Tue, 24 Jan 2012 16:35:10 +0000 Subject: [Dovecot] =?utf-8?q?Imap_Quota_Exceeded_-_But_Still_Receiving_Ema?= =?utf-8?q?ils=3F?= Message-ID: Hello I am trying to setup dovecot maildir quota, but even though it seems to be working fine, I am still receiving emails into my inbox even though I have exceeded my quota. Here is my dovecot config: plugin { quota = maildir:User Quota quota_rule2 = Trash:storage=+100M } And my SQL config file for Dovecot (dovecot-sql.conf): user_query = SELECT '/var/vmail/%d/%n' as home, 'maildir:/var/vmail/%d/%n' as mail, 150 AS uid, 8 AS gid, CONCAT('*:storage=', quota) AS quota_rule FROM mailbox WHERE username = '%u' AND active = '1' CONCAT('*:storage=', quota) AS quota_rule quota_rule = *:storage=3M So it picks up my set quota of 3MB but dovecot is not rejecting emails if I am over my quota. Can anyone help? Thanks. Carl From lists at wildgooses.com Wed Jan 25 00:06:55 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 24 Jan 2012 22:06:55 +0000 Subject: [Dovecot] Password auth scheme question with mysql Message-ID: <4F1F2B7F.3070005@wildgooses.com> Hi, I have a current auth database using mysql with a "password" column in plain text. The config has "default_pass_scheme = PLAIN" specified In preparation for a more adaptable system I changed a password entry from "asdf" to "{PLAIN}asdf", but now auth fails. Works fine if I change it back to just "asdf". (I don't believe it's a caching problem) What might I be missing? I was under the impression that the password column can include a {scheme} prefix to indicate the password scheme (presumably this also means a password cannot start with a "{"?). Is this still true when using mysql and default_pass_scheme ? Thanks for any hints? Ed W From lists at wildgooses.com Wed Jan 25 00:51:31 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 24 Jan 2012 22:51:31 +0000 Subject: [Dovecot] Password auth scheme question with mysql In-Reply-To: <4F1F2B7F.3070005@wildgooses.com> References: <4F1F2B7F.3070005@wildgooses.com> Message-ID: <4F1F35F3.9070303@wildgooses.com> On 24/01/2012 22:06, Ed W wrote: > Hi, I have a current auth database using mysql with a "password" > column in plain text. The config has "default_pass_scheme = PLAIN" > specified > > In preparation for a more adaptable system I changed a password entry > from "asdf" to "{PLAIN}asdf", but now auth fails. Works fine if I > change it back to just "asdf". (I don't believe it's a caching problem) > > What might I be missing? I was under the impression that the password > column can include a {scheme} prefix to indicate the password scheme > (presumably this also means a password cannot start with a "{"?). Is > this still true when using mysql and default_pass_scheme ? 
Hmm, so I try: # doveadm pw -p asdf -s sha256 {SHA256}8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts= I enter this hash into my database column, then enabling debug logging I see this in the logs: Jan 24 22:40:44 mail1 dovecot: auth: Debug: cache(demo at mailasail.com,1.2.24.129): SHA256({PLAIN}asdf) != '8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts=' Jan 24 22:40:44 mail1 dovecot: auth-worker: Debug: sql(demo at blah.com,1.2.24.129): query: SELECT NULLIF(mail_host, '1.2.24.129') as proxy, NULLIF(mail_host, '1.2.24.129') as host, email as user, password, password as pass, home userdb_home, concat(home, '/', maildir) as userdb_mail, 200 as userdb_uid, 200 as userdb_gid FROM users WHERE email = if('blah.com'<>'','demo at blah.com','demo at blah.com@mailasail.com') and flag_active=1 Jan 24 22:40:44 mail1 dovecot: auth-worker: sql(demo at blah.com,1.2.24.129): Password mismatch (given password: {PLAIN}asdf) Jan 24 22:40:44 mail1 dovecot: auth-worker: Error: md5_verify(demo at mailasail.com): Not a valid MD5-CRYPT or PLAIN-MD5 password Jan 24 22:40:44 mail1 dovecot: auth-worker: Error: ssha256_verify(demo at mailasail.com): SSHA256 password too short Jan 24 22:40:44 mail1 dovecot: auth-worker: Error: ssha512_verify(demo at mailasail.com): SSHA512 password too short Jan 24 22:40:44 mail1 dovecot: auth-worker: Warning: Invalid OTP data in passdb Jan 24 22:40:44 mail1 dovecot: auth-worker: Warning: Invalid OTP data in passdb Jan 24 22:40:44 mail1 dovecot: auth-worker: Debug: sql(demo at blah.com,1.2.24.129): SHA256({PLAIN}asdf) != '8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts=' Forgot to say. this is with dovecot 2.0.17 Thanks for any pointers Ed W From lists at wildgooses.com Wed Jan 25 01:09:53 2012 From: lists at wildgooses.com (Ed W) Date: Tue, 24 Jan 2012 23:09:53 +0000 Subject: [Dovecot] Password auth scheme question with mysql In-Reply-To: <4F1F35F3.9070303@wildgooses.com> References: <4F1F2B7F.3070005@wildgooses.com> <4F1F35F3.9070303@wildgooses.com> Message-ID: <4F1F3A41.8020206@wildgooses.com> On 24/01/2012 22:51, Ed W wrote: > Hmm, so I try: > > # doveadm pw -p asdf -s sha256 > {SHA256}8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts= > > I enter this hash into my database column, then enabling debug logging > I see this in the logs: > .. > Jan 24 22:40:44 mail1 dovecot: auth-worker: Debug: > sql(demo at blah.com,1.2.24.129): SHA256({PLAIN}asdf) != > '8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts=' Gah. Ok, so I discovered the "doveadm auth" command: # doveadm auth -x service=pop3 demo asdf passdb: demo auth succeeded extra fields: user=demo at blah.com proxy host=1.2.24.129 pass={SHA256}8OTC92xYkW7CWPJGhRvqCR0U1CR6L8PhhpRGGxgW4Ts= So why do I get an auth failed and the log files I showed in my last email when I use "telnet localhost 110" and then the commands: user demo pass asdf Help please...? Ed W From lists at wildgooses.com Wed Jan 25 02:03:35 2012 From: lists at wildgooses.com (Ed W) Date: Wed, 25 Jan 2012 00:03:35 +0000 Subject: [Dovecot] Password auth scheme question with mysql In-Reply-To: <4F1F2B7F.3070005@wildgooses.com> References: <4F1F2B7F.3070005@wildgooses.com> Message-ID: <4F1F46D7.7050600@wildgooses.com> On 24/01/2012 22:06, Ed W wrote: > Hi, I have a current auth database using mysql with a "password" > column in plain text. The config has "default_pass_scheme = PLAIN" > specified > > In preparation for a more adaptable system I changed a password entry > from "asdf" to "{PLAIN}asdf", but now auth fails. Works fine if I > change it back to just "asdf". 
(I don't believe it's a caching problem) > > What might I be missing? I was under the impression that the password > column can include a {scheme} prefix to indicate the password scheme > (presumably this also means a password cannot start with a "{"?). Is > this still true when using mysql and default_pass_scheme ? Bahh. Partly figured this out now - sorry for the noise - looks like a config error on my side: I have traced this to my proxy setup, which appears not to work as expected. Basically all works fine when I test to the main server IP, but fails when I test "localhost", since it triggers me to be proxied to the main IP address (same machine, just using the external IP). The error seems to be that I set the "pass" variable in my password_query to set the master password for the upstream proxied to server. I can't actually remember now why this was required, but it was necessary to allow the proxy to work correctly in the past. I guess this assumption needs revisiting now since it can't be used if the plain password isn't in the database... For interest, here is my auth setup: password_query = SELECT NULLIF(mail_host, '%l') as proxy, NULLIF(mail_host, '%l') as host, \ email as user, password, \ password as pass, \ home userdb_home, concat(home, '/', maildir) as userdb_mail, \ 1234 as userdb_uid, 1234 as userdb_gid \ FROM users \ WHERE email = if('%d'<>'','%u','%u at mailasail.com') and flag_active=1 "mail_host" in this case holds the IP of the machine holding the users mailbox (hence it's easy to push mailboxes to a specific machine and the users get proxied to it) Sorry for the noise Ed W From jd.beaubien at gmail.com Wed Jan 25 05:22:10 2012 From: jd.beaubien at gmail.com (Jean-Daniel Beaubien) Date: Tue, 24 Jan 2012 22:22:10 -0500 Subject: [Dovecot] Persistence of UIDs Message-ID: Hi everyone, I have a question concerning UIDs. How persistant are they? I am thinking about building some form of webmail specialized for some specific business purpose and I am thinking of building a sort of cache in a DB by storing the email addr, date, subject and UID for quick lookups and search of correspondance. I am doing this because I am having issue with multiple people searching thru email folders that have 100k+ emails (which is another problem in itself, searches don't seem to scale well when folder goes above 60k emails). So to come back to my question, can I store the UIDs and reuse those UIDs later on to obtain the body of the email??? Or can the UIDs change on the server and they will not be valid anymore?. My setup is: - dovecot 1.x (will migrate to 2.x soon) - maildir - everything stored on an intel 320 SSD (index and maildir folder) Thanks, -JD From slusarz at curecanti.org Wed Jan 25 07:27:02 2012 From: slusarz at curecanti.org (Michael M Slusarz) Date: Tue, 24 Jan 2012 22:27:02 -0700 Subject: [Dovecot] Persistence of UIDs In-Reply-To: References: Message-ID: <20120124222702.Horde.UpAiY4F5lbhPH5KmSLoWegA@bigworm.curecanti.org> Quoting Jean-Daniel Beaubien : > I have a question concerning UIDs. How persistant are they? [snip] > So to come back to my question, can I store the UIDs and reuse those UIDs > later on to obtain the body of the email??? Or can the UIDs change on the > server and they will not be valid anymore?. You really need to read RFC 3501 (http://tools.ietf.org/html/rfc3501), specifically section 2.3.1.1. Short answer: UIDs will almost always be persistent, but you always need to check UIDVALIDITY in the tiny chance that they may be invalidated. 
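Concretely, a cached UID is only safe to reuse while the mailbox still reports the UIDVALIDITY that was recorded with it (an illustrative exchange; the values are made up):

C: a SELECT INBOX
S: * OK [UIDVALIDITY 1043337600] UIDs valid
S: a OK [READ-WRITE] SELECT completed.
C: b UID FETCH 17 (BODY.PEEK[HEADER.FIELDS (SUBJECT)])

If the advertised UIDVALIDITY differs from the cached one, every stored UID for that mailbox must be discarded and the cache rebuilt.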
michael From dmiller at amfes.com Wed Jan 25 07:38:47 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Tue, 24 Jan 2012 21:38:47 -0800 Subject: [Dovecot] Imap Quota Exceeded - But Still Receiving Emails? In-Reply-To: References: Message-ID: On 1/24/2012 8:35 AM, Palatine Web Support wrote: > > Here is my dovecot config: > > plugin { > quota = maildir:User Quota > quota_rule2 = Trash:storage=+100M > } [..] > > So it picks up my set quota of 3MB but dovecot is not rejecting emails > if I am over my quota. > > Can anyone help? > Is the quota plugin being loaded? What is the output of: doveconf | grep -B 2 plug -- Daniel From dovecot at bravenec.eu Wed Jan 25 09:05:47 2012 From: dovecot at bravenec.eu (Petr Bravenec) Date: Wed, 25 Jan 2012 08:05:47 +0100 Subject: [Dovecot] Dovecot antispam plugint got an empty message Message-ID: <201201250805.47430.dovecot@bravenec.eu> Few weeks ago I upgraded dovecot from 1.2 to 2.0.16 and antispam plugin to 2.0_pre20101222. Since the upgrade I'm not able to move messages to my Junk folder. In the maillog I have found this message: dspam[25060]: empty message (no data received) Message is copied from my INBOX to Junk folder, but dspam got an empty message and sent an error return code. So the moving operation is not successfull and the original message in INBOX was not deleted. The dspam was not trained (got an empty message). Looking to source code of dspam and antispam plugin I suspect the dovecot not to sending any content to plugin. Can you help me, please? Petr Bravenec -------------- next part -------------- # 2.0.16: /etc/dovecot/dovecot.conf # OS: Linux 3.1.6-gentoo x86_64 Gentoo Base System release 2.0.3 ext4 auth_mechanisms = plain login base_dir = /var/run/dovecot/ dict { acl = pgsql:/etc/dovecot/dovecot-acl.conf } disable_plaintext_auth = no first_valid_gid = 98 first_valid_uid = 98 last_valid_gid = 98 last_valid_uid = 98 listen = *, [::] mail_location = maildir:/home/dovecot/%u/maildir managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave namespace { inbox = yes location = prefix = separator = . type = private } namespace { inbox = no list = children location = maildir:/home/dovecot/%%n/maildir:INDEX=/home/dovecot/%n/shared/%%n prefix = Ostatni.%%n. separator = . subscriptions = no type = shared } namespace { inbox = no list = children location = maildir:/home/dovecot/Sdilene/maildir:INDEX=/home/dovecot/%n/public prefix = Sdilene. separator = . 
subscriptions = no type = public } passdb { args = session=yes driver = pam } plugin { acl = vfile acl_shared_dict = proxy::acl antispam_backend = dspam antispam_dspam_args = --user;%u;--source=error antispam_dspam_binary = /usr/bin/dspam antispam_dspam_notspam = --class=innocent antispam_dspam_result_header = X-DSPAM-Result antispam_dspam_spam = --class=spam antispam_mail_tmpdir = /tmp antispam_signature = X-DSPAM-Signature antispam_signature_missing = move antispam_spam = Junk antispam_trash = Trash antispam_unsure = sieve = /home/dovecot/%u/sieve.default sieve_before = /etc/dovecot/sieve/dspam.sieve sieve_dir = /home/dovecot/%u/sieve } protocols = imap sieve service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix } unix_listener auth-master { group = vmails mode = 0660 user = dspam } unix_listener auth-userdb { group = vmails mode = 0660 user = dspam } user = root } service dict { unix_listener dict { group = vmails mode = 0660 user = dspam } } ssl_cert =

Hi, I am using dovecot 2.0.16, and assigned globally procmailrc (/etc/procmailrc) which delivers mails to the user's home directory in maildir format. Also I assigned quota to the user through the setquota (edquota) command. If the quota is exceeded, then in this case the user's mail is stored in /var/spool/mail/user. After increasing the quota, how can these mails be delivered to the user's home dir in maildir format automatically? Thanks & Regards, Arun Kumar Gupta -- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean.

From tss at iki.fi Wed Jan 25 14:45:31 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 25 Jan 2012 14:45:31 +0200 Subject: [Dovecot] Persistence of UIDs In-Reply-To: References: Message-ID: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> On 25.1.2012, at 5.22, Jean-Daniel Beaubien wrote: > I have a question concerning UIDs. How persistant are they? With Dovecot persistent enough. But as Michael said, check UIDVALIDITY. > I am thinking about building some form of webmail specialized for some > specific business purpose and I am thinking of building a sort of cache in > a DB by storing the email addr, date, subject and UID for quick lookups and > search of correspondance. Dovecot should already have such cache. If there are problems with that, I think it would be better to fix it on Dovecot's side rather than adding a second cache. > I am doing this because I am having issue with multiple people searching > thru email folders that have 100k+ emails (which is another problem in > itself, searches don't seem to scale well when folder goes above 60k > emails). Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.)

From jd.beaubien at gmail.com Wed Jan 25 15:34:59 2012 From: jd.beaubien at gmail.com (Jean-Daniel Beaubien) Date: Wed, 25 Jan 2012 08:34:59 -0500 Subject: [Dovecot] Persistence of UIDs In-Reply-To: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> Message-ID: On Wed, Jan 25, 2012 at 7:45 AM, Timo Sirainen wrote: > On 25.1.2012, at 5.22, Jean-Daniel Beaubien wrote: > > > I have a question concerning UIDs. How persistant are they? > > With Dovecot persistent enough. But as Michael said, check UIDVALIDITY. > > > I am thinking about building some form of webmail specialized for some > > specific business purpose and I am thinking of building a sort of cache > in > > a DB by storing the email addr, date, subject and UID for quick lookups > and > > search of correspondance.
> > Dovecot should already have such cache. If there are problems with that, I > think it would be better to fix it on Dovecot's side rather than adding a > second cache. > Very true. Has there been many search/index improvements since 1.0.9? I read thru the release notes but nothing jumped out at me. > > > I am doing this because I am having issue with multiple people searching > > thru email folders that have 100k+ emails (which is another problem in > > itself, searches don't seem to scale well when folder goes above 60k > > emails). > > Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.) > I was under the impression that lucene was for full-text search. I'm just doing simple from/to field searches. I will get a few numbers together about folder_size --> search time and I will post them tonight. -jd From CMarcus at Media-Brokers.com Wed Jan 25 15:40:18 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Wed, 25 Jan 2012 08:40:18 -0500 Subject: [Dovecot] Persistence of UIDs In-Reply-To: References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> Message-ID: <4F200642.4020008@Media-Brokers.com> On 2012-01-25 8:34 AM, Jean-Daniel Beaubien wrote: > On Wed, Jan 25, 2012 at 7:45 AM, Timo Sirainen wrote: >> Dovecot should already have such cache. If there are problems with that, I >> think it would be better to fix it on Dovecot's side rather than adding a >> second cache. > Very true. Has there been many search/index improvements since 1.0.9? I > read thru the release notes but nothing jumped out at me. Seriously?? 1.0.9 is *very* old, and even no longer really supported. Upgrade. Really. It isn't that hard. There is zero reason to stay on an unsupported version. >>> I am doing this because I am having issue with multiple people searching >>> thru email folders that have 100k+ emails (which is another problem in >>> itself, searches don't seem to scale well when folder goes above 60k >>> emails). >> Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.) > I was under the impression that lucene was for full-text search. I'm just > doing simple from/to field searches. > > I will get a few numbers together about folder_size --> search time and I > will post them tonight. Don't waste your time testing such an old and unsupported version, I'm sure Timo has no interest in any such numbers - *unless* you are planning on doing said tests on *both* the 1.0.9 version *and* the latest 2.0.x or 2.1 build and provide a *comparison* - *that* may be interesting... -- Best regards, Charles From tss at iki.fi Wed Jan 25 15:47:28 2012 From: tss at iki.fi (Timo Sirainen) Date: Wed, 25 Jan 2012 15:47:28 +0200 Subject: [Dovecot] Persistence of UIDs In-Reply-To: References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> Message-ID: <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi> On 25.1.2012, at 15.34, Jean-Daniel Beaubien wrote: >>> I am thinking about building some form of webmail specialized for some >>> specific business purpose and I am thinking of building a sort of cache >> in >>> a DB by storing the email addr, date, subject and UID for quick lookups >> and >>> search of correspondance. >> >> Dovecot should already have such cache. If there are problems with that, I >> think it would be better to fix it on Dovecot's side rather than adding a >> second cache. >> > > Very true. Has there been many search/index improvements since 1.0.9? I > read thru the release notes but nothing jumped out at me. Disk I/O usage is the same probably, CPU usage is less in newer versions. 
>>> I am doing this because I am having issue with multiple people searching >>> thru email folders that have 100k+ emails (which is another problem in >>> itself, searches don't seem to scale well when folder goes above 60k >>> emails). >> >> Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.) >> > > I was under the impression that lucene was for full-text search. I'm just > doing simple from/to field searches. In v2.1 from/to fields are also searched via FTS.

From Juergen.Obermann at hrz.uni-giessen.de Wed Jan 25 16:43:11 2012 From: Juergen.Obermann at hrz.uni-giessen.de (Jürgen Obermann) Date: Wed, 25 Jan 2012 15:43:11 +0100 Subject: [Dovecot] problem compiling imaptest under solaris Message-ID: <89f61bff49f4c5343be06dd45459b14a@imapproxy.hrz> Hello, today I tried to compile imaptest under Solaris 10 with the Studio 11 compiler and got the following error:

gmake[2]: Entering directory `/net/fileserv/export/sunsrc/src/imaptest-20111119/src'
source='client.c' object='client.o' libtool=no \
DEPDIR=.deps depmode=none /bin/bash ../depcomp \
cc -DHAVE_CONFIG_H -I. -I. -I.. -I/opt/local/include/dovecot -I/usr/local/include -fast -xarch=v8plusa -I/usr/sfw/include -c client.c
"/opt/local/include/dovecot/imap-util.h", line 6: warning: useless declaration
"client-state.h", line 6: warning: useless declaration
"client.c", line 655: operand cannot have void type: op "=="
"client.c", line 655: operands have incompatible types: const void "==" int
cc: acomp failed for client.c

what can I do? Thanks for any help, Jürgen -- Jürgen Obermann Hochschulrechenzentrum der Justus-Liebig-Universität Gießen Heinrich-Buff-Ring 44 Tel. 0641-9913054

From tom at whyscream.net Wed Jan 25 18:19:18 2012 From: tom at whyscream.net (Tom Hendrikx) Date: Wed, 25 Jan 2012 17:19:18 +0100 Subject: [Dovecot] Dovecot antispam plugint got an empty message In-Reply-To: <201201250805.47430.dovecot@bravenec.eu> References: <201201250805.47430.dovecot@bravenec.eu> Message-ID: <4F202B86.9000102@whyscream.net> On 25-01-12 08:05, Petr Bravenec wrote: > Few weeks ago I upgraded dovecot from 1.2 to 2.0.16 and antispam plugin to > 2.0_pre20101222. Since the upgrade I'm not able to move messages to my Junk > folder. In the maillog I have found this message: > > dspam[25060]: empty message (no data received) > Gentoo has included the antispam plugin from Johannes historically, but added the fork by Eugene to support upgrades to dovecot 2.0. What is not really made clear by the gentoo ebuild is that the forked plugin needs a slightly different config.
I use the config below with dovecot 2.0.17 and a git checkout for dovecot-antispam: ===8<======== plugin { antispam_signature = X-DSPAM-Signature antispam_signature_missing = move antispam_spam_pattern_ignorecase = Junk;Junk.* antispam_trash_pattern_ignorecase = Trash;Deleted Items;Deleted Messages # Backend specific antispam_backend = dspam antispam_dspam_binary = /usr/bin/dspamc antispam_dspam_args = --user;%u;--deliver=;--source=error;--signature=%%s antispam_dspam_spam = --class=spam antispam_dspam_notspam = --class=innocent #antispam_dspam_result_header = X-DSPAM-Result } -- Regards, Tom From CMarcus at Media-Brokers.com Wed Jan 25 18:42:39 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Wed, 25 Jan 2012 11:42:39 -0500 Subject: [Dovecot] move mails from spool to users home dir(maildir formate) automatically In-Reply-To: References: Message-ID: <4F2030FF.1080304@Media-Brokers.com> On 2012-01-25 3:19 AM, Arun Gupta wrote: > I am using dovecot 2.0.16, and assigend globally procmailrc > (/etc/procmailrc) which delivers mails to user's home directory in > maildir formate. Also I assined quota to User through setquota (edquota) > command, If the quota excedded then this case user's mail store to > /var/spool/mail/user. After incresing quota how to delivered these mails > to user's home dir in maildir formate automatically. Best practice is to reject mail for users over quota (as long as you do this during the smtp transaction... Otherwise, whats the point? (they can still fill up your server)... -- Best regards, Charles From dmiller at amfes.com Wed Jan 25 18:55:09 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Wed, 25 Jan 2012 08:55:09 -0800 Subject: [Dovecot] Imap Quota Exceeded - But Still Receiving Emails? In-Reply-To: <58f41e2e84d4befd5b09a1cb913e57b4@palatineweb.com> References: <4F1F9567.1030804@amfes.com> <58f41e2e84d4befd5b09a1cb913e57b4@palatineweb.com> Message-ID: On 1/25/2012 1:39 AM, Palatine Web Support wrote: > On 2012-01-25 05:38, Daniel L. Miller wrote: >> On 1/24/2012 8:35 AM, Palatine Web Support wrote: >>> >>> Here is my dovecot config: >>> >>> plugin { >>> quota = maildir:User Quota >>> quota_rule2 = Trash:storage=+100M >>> } >> [..] >>> >>> So it picks up my set quota of 3MB but dovecot is not rejecting >>> emails if I am over my quota. >>> >>> Can anyone help? >>> >> Is the quota plugin being loaded? What is the output of: >> >> doveconf | grep -B 2 plug > > Hi Daniel > > I tried the command and it returned the command was not found. I have > installed: > > apt-get install dovecot-common > apt-get install dovecot-dev > apt-get install dovecot-imapd > > > Which package does the binary doveconf come from? You need to make sure to reply to the list - not just to me. If you don't have doveconf...what version of Dovecot are you using? -- Daniel From dmiller at amfes.com Wed Jan 25 19:01:30 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Wed, 25 Jan 2012 09:01:30 -0800 Subject: [Dovecot] Imap Quota Exceeded - But Still Receiving Emails? In-Reply-To: <747f97172fd71affd2ee5b5ebcc5d16c@palatineweb.com> References: <4F1F9567.1030804@amfes.com> <747f97172fd71affd2ee5b5ebcc5d16c@palatineweb.com> Message-ID: On 1/25/2012 2:01 AM, Palatine Web Support wrote: > On 2012-01-25 05:38, Daniel L. Miller wrote: >> On 1/24/2012 8:35 AM, Palatine Web Support wrote: >>> >>> Here is my dovecot config: >>> >>> plugin { >>> quota = maildir:User Quota >>> quota_rule2 = Trash:storage=+100M >>> } >> [..] 
>>> >>> So it picks up my set quota of 3MB but dovecot is not rejecting >>> emails if I am over my quota. >>> >>> Can anyone help? >>> >> Is the quota plugin being loaded? What is the output of: >> >> doveconf | grep -B 2 plug > > The modules are being loaded. From the log file with debugging enabled: > > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Loading modules from > directory: /usr/lib/dovecot/modules/imap > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Module loaded: > /usr/lib/dovecot/modules/imap/lib10_quota_plugin.so > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Module loaded: > /usr/lib/dovecot/modules/imap/lib11_imap_quota_plugin.so > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Effective uid=150, > gid=8, home=/var/vmail/xxx.com/support > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota root: name=User > Quota backend=dirsize args= > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota rule: root=User > Quota mailbox=* bytes=3145728 messages=0 > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota rule: root=User > Quota mailbox=Trash bytes=104857600 messages=0 > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): maildir: > data=/var/vmail/xxx.com/support > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): maildir++: > root=/var/vmail/xxx.com/support, index=, control=, > inbox=/var/vmail/xxx.com/support > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Namespace : Using > permissions from /var/vmail/xxx.com/support: mode=0700 gid=-1 > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Disconnected: Logged > out bytes=82/573 > Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Disconnected: Logged > out bytes=269/8243 > I don't know if it makes any difference, but in your config file, try changing: plugin { quota = maildir:User Quota to plugin { quota = maildir:User quota (lowercase the "quota") -- Daniel From tss at iki.fi Thu Jan 26 01:03:58 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 01:03:58 +0200 Subject: [Dovecot] v2.1.rc5 released Message-ID: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc5.tar.gz http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc5.tar.gz.sig I'm still lagging behind reading emails. v2.1.0 will be released after I've finished that. RC5 is already stable and used in production, but I want to make sure that I haven't missed anything important that was reported previously. Most of the recent fixed bugs existed also in v2.0 series. Changes since rc3: * Temporary authentication failures sent to IMAP/POP3 clients now includes the server's hostname and timestamp. This makes it easier to find the error message from logs. + auth: Implemented support for Postfix's "TCP map" sockets for user existence lookups. + auth: Idling auth worker processes are now stopped. This reduces error messages about MySQL disconnections. - director: With >2 directors ring syncing might have stalled during director connect/disconnect, causing logins to fail. - LMTP client/proxy: Fixed potential hanging when sending (big) mails - Compressed mails with external attachments (dbox + SIS + zlib) failed sometimes with bogus "cached message size wrong" errors. (I skipped rc4 release, because I accidentally tagged it too early in hg.) 
From tss at iki.fi Thu Jan 26 01:15:31 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 01:15:31 +0200 Subject: [Dovecot] FOSDEM Message-ID: <91D95FB6-D651-4A82-BC16-241F4DDAEF78@iki.fi> I'll be in FOSDEM giving a small lightning talk about Dovecot: http://fosdem.org/2012/schedule/event/dovecot I'll also be around in FOSDEM the whole time, so if you're there and want to talk to me about anything, send me an email at some point. Poll to dovecot-news list people: Do you want to see these kind of news about my upcoming talks sent to the list? Probably happens a few times/year. A simple "yes" or "no" reply to this mail privately to me is enough. From petr at bravenec.eu Wed Jan 25 23:17:38 2012 From: petr at bravenec.eu (Petr Bravenec) Date: Wed, 25 Jan 2012 22:17:38 +0100 Subject: [Dovecot] Dovecot antispam plugint got an empty message In-Reply-To: <4F202B86.9000102@whyscream.net> References: <201201250805.47430.dovecot@bravenec.eu> <4F202B86.9000102@whyscream.net> Message-ID: <7860878.6BHtT8IiNC@hrabos> Thank you, I have reconfigured my dovecot on gentoo and it looks now that it worked properly. Regards, Petr Bravenec Dne Wednesday 25 of January 2012 17:19:18 Tom Hendrikx napsal(a): > On 25-01-12 08:05, Petr Bravenec wrote: > > Few weeks ago I upgraded dovecot from 1.2 to 2.0.16 and antispam plugin > > to 2.0_pre20101222. Since the upgrade I'm not able to move messages to > > my Junk folder. In the maillog I have found this message: > > > > dspam[25060]: empty message (no data received) > > Gentoo has included the antispam plugin from Johannes historically, but > added the fork by Eugene to support upgrades to dovecot 2.0. It is not > really made clear by the gentoo ebuild is that the forked plugin needs a > slightly different config. > > I use the config below with dovecot 2.0.17 and a git checkout for > dovecot-antispam: > > ===8<======== > plugin { > antispam_signature = X-DSPAM-Signature > antispam_signature_missing = move > antispam_spam_pattern_ignorecase = Junk;Junk.* > antispam_trash_pattern_ignorecase = Trash;Deleted Items;Deleted > Messages > > # Backend specific > antispam_backend = dspam > antispam_dspam_binary = /usr/bin/dspamc > antispam_dspam_args = > --user;%u;--deliver=;--source=error;--signature=%%s > antispam_dspam_spam = --class=spam > antispam_dspam_notspam = --class=innocent > #antispam_dspam_result_header = X-DSPAM-Result > } > > > -- > Regards, > Tom From user+dovecot at localhost.localdomain.org Thu Jan 26 01:24:50 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Thu, 26 Jan 2012 00:24:50 +0100 Subject: [Dovecot] FOSDEM In-Reply-To: <91D95FB6-D651-4A82-BC16-241F4DDAEF78@iki.fi> References: <91D95FB6-D651-4A82-BC16-241F4DDAEF78@iki.fi> Message-ID: <4F208F42.4020007@localhost.localdomain.org> On 01/26/2012 12:15 AM Timo Sirainen wrote: > I'll be in FOSDEM giving a small lightning talk about Dovecot: http://fosdem.org/2012/schedule/event/dovecot > > I'll also be around in FOSDEM the whole time, so if you're there and want to talk to me about anything, send me an email at some point. I'll be there too. > Poll to dovecot-news list people: Do you want to see these kind of news about my upcoming talks sent to the list? Probably happens a few times/year. A simple "yes" or "no" reply to this mail privately to me is enough. yes Regards, Pascal -- The trapper recommends today: f007ba11.1202600 at localdomain.org From dmiller at amfes.com Thu Jan 26 01:37:16 2012 From: dmiller at amfes.com (Daniel L. 
Miller) Date: Wed, 25 Jan 2012 15:37:16 -0800 Subject: [Dovecot] Crash on mail folder delete Message-ID: Attempting to delete a folder from within the trash folder using Thunderbird. I see the following in the log: Jan 25 15:36:22 bubba dovecot: imap(dmiller at amfes.com): Panic: file mailbox-list-fs.c: line 156 (fs_list_get_path): assertion failed: (mailbox_list_is_valid_pattern(_list, name)) Jan 25 15:36:22 bubba dovecot: imap(dmiller at amfes.com): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x3efba) [0x7f5fe9f86fba] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3f006) [0x7f5fe9f87006] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x17f5a) [0x7f5fe9f5ff5a] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(+0x47287) [0x7f5fea214287] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6c71) [0x7f5fe8b9cc71] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6d47) [0x7f5fe8b9cd47] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(acl_mailbox_allocated+0x9e) [0x7f5fe8ba061e] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(hook_mailbox_allocated+0x62) [0x7f5fea2085b2] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(mailbox_alloc+0xb2) [0x7f5fea2073d2] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](cmd_delete+0x72) [0x409922] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](command_exec+0x3d) [0x4109ad] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40f97e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40fa5d] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_handle_input+0x135) [0x40fc85] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_input+0x5f) [0x4105af] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7f5fe9f93406] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f5fe9f9448f] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f5fe9f933a8] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f5fe9f803b3] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](main+0x301) [0x418a61] -> /lib/libc.so.6(__libc_start_main+0xfe) [0x7f5fe9be3d8e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x4083f9] Jan 25 15:36:23 bubba dovecot: imap(dmiller at amfes.com): Panic: file mailbox-list-fs.c: line 156 (fs_list_get_path): assertion failed: (mailbox_list_is_valid_pattern(_list, name)) Jan 25 15:36:23 bubba dovecot: imap(dmiller at amfes.com): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x3efba) [0x7f33673dafba] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3f006) [0x7f33673db006] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x17f5a) [0x7f33673b3f5a] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(+0x47287) [0x7f3367668287] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6c71) [0x7f3365ff0c71] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6d47) [0x7f3365ff0d47] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(acl_mailbox_allocated+0x9e) [0x7f3365ff461e] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(hook_mailbox_allocated+0x62) [0x7f336765c5b2] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(mailbox_alloc+0xb2) [0x7f336765b3d2] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](cmd_delete+0x72) [0x409922] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](command_exec+0x3d) [0x4109ad] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40f97e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40fa5d] -> dovecot/imap [dmiller at amfes.com 
192.168.0.91 delete](client_handle_input+0x135) [0x40fc85] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_input+0x5f) [0x4105af] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7f33673e7406] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f33673e848f] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f33673e73a8] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f33673d43b3] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](main+0x301) [0x418a61] -> /lib/libc.so.6(__libc_start_main+0xfe) [0x7f3367037d8e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x4083f9] Jan 25 15:36:23 bubba dovecot: imap(dmiller at amfes.com): Fatal: master: service(imap): child 6074 killed with signal 6 (core dumps disabled) Jan 25 15:36:23 bubba dovecot: imap(dmiller at amfes.com): Fatal: master: service(imap): child 6589 killed with signal 6 (core dumps disabled) -- Daniel From doctor at doctor.nl2k.ab.ca Thu Jan 26 01:39:30 2012 From: doctor at doctor.nl2k.ab.ca (The Doctor) Date: Wed, 25 Jan 2012 16:39:30 -0700 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> Message-ID: <20120125233930.GA17183@doctor.nl2k.ab.ca> On Thu, Jan 26, 2012 at 01:03:58AM +0200, Timo Sirainen wrote: > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc5.tar.gz > http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc5.tar.gz.sig > > I'm still lagging behind reading emails. v2.1.0 will be released after I've finished that. RC5 is already stable and used in production, but I want to make sure that I haven't missed anything important that was reported previously. Most of the recent fixed bugs existed also in v2.0 series. > > Changes since rc3: > > * Temporary authentication failures sent to IMAP/POP3 clients > now includes the server's hostname and timestamp. This makes it > easier to find the error message from logs. > > + auth: Implemented support for Postfix's "TCP map" sockets for > user existence lookups. > + auth: Idling auth worker processes are now stopped. This reduces > error messages about MySQL disconnections. > - director: With >2 directors ring syncing might have stalled during > director connect/disconnect, causing logins to fail. > - LMTP client/proxy: Fixed potential hanging when sending (big) mails > - Compressed mails with external attachments (dbox + SIS + zlib) failed > sometimes with bogus "cached message size wrong" errors. > > (I skipped rc4 release, because I accidentally tagged it too early in hg.) All right, can you get configure to detect --as-needed flag for ld? This is show stopping for me. -- Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca God, Queen and country! Never Satan President Republic! Beware AntiChrist rising! https://www.fullyfollow.me/rootnl2k Birthdate : 29 Jan 1969 Croydon, Surrey, UK From tss at iki.fi Thu Jan 26 01:42:11 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 01:42:11 +0200 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: <20120125233930.GA17183@doctor.nl2k.ab.ca> References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> Message-ID: On 26.1.2012, at 1.39, The Doctor wrote: > All right, can you get configure to detect --as-needed flag for ld? > > This is show stopping for me. It should only be used with GNU ld. What ld and OS do you use? 
configure --without-gnu-ld probably works also? From tss at iki.fi Thu Jan 26 01:42:46 2012 From: tss at iki.fi (Timo Sirainen) Date: Thu, 26 Jan 2012 01:42:46 +0200 Subject: [Dovecot] Crash on mail folder delete In-Reply-To: References: Message-ID: <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> On 26.1.2012, at 1.37, Daniel L. Miller wrote: > Attempting to delete a folder from within the trash folder using Thunderbird. I see the following in the log: Dovecot version? From dmiller at amfes.com Thu Jan 26 01:43:26 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Wed, 25 Jan 2012 15:43:26 -0800 Subject: [Dovecot] Crash on mail folder delete In-Reply-To: <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> References: <4F20922C.60206@amfes.com> <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> Message-ID: On 1/25/2012 3:42 PM, Timo Sirainen wrote: > On 26.1.2012, at 1.37, Daniel L. Miller wrote: > >> Attempting to delete a folder from within the trash folder using Thunderbird. I see the following in the log: > Dovecot version? > 2.1.rc3. I'm compiling rc5 now... -- Daniel From doctor at doctor.nl2k.ab.ca Thu Jan 26 02:01:26 2012 From: doctor at doctor.nl2k.ab.ca (The Doctor) Date: Wed, 25 Jan 2012 17:01:26 -0700 Subject: [Dovecot] v2.1.rc5 released In-Reply-To: References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> Message-ID: <20120126000126.GA19765@doctor.nl2k.ab.ca> On Thu, Jan 26, 2012 at 01:42:11AM +0200, Timo Sirainen wrote: > On 26.1.2012, at 1.39, The Doctor wrote: > > > All right, can you get configure to detect --as-needed flag for ld? > > > > This is show stopping for me. > > It should only be used with GNU ld. What ld and OS do you use? configure --without-gnu-ld probably works also? My /usr/bin/ld GNU ld version 2.13.1 Copyright 2002 Free Software Foundation, Inc. This program is free software; you may redistribute it under the terms of the GNU General Public License. This program has absolutely no warranty. on BSD/OS 4.3.1 -- Member - Liberal International This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca God, Queen and country! Never Satan President Republic! Beware AntiChrist rising! https://www.fullyfollow.me/rootnl2k Birthdate : 29 Jan 1969 Croydon, Surrey, UK From dmiller at amfes.com Thu Jan 26 02:04:08 2012 From: dmiller at amfes.com (Daniel L. Miller) Date: Wed, 25 Jan 2012 16:04:08 -0800 Subject: [Dovecot] Crash on mail folder delete In-Reply-To: <4F20939E.4010903@amfes.com> References: <4F20922C.60206@amfes.com> <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi> <4F20939E.4010903@amfes.com> Message-ID: On 1/25/2012 3:43 PM, Daniel L. Miller wrote: > On 1/25/2012 3:42 PM, Timo Sirainen wrote: >> On 26.1.2012, at 1.37, Daniel L. Miller wrote: >> >>> Attempting to delete a folder from within the trash folder using >>> Thunderbird. I see the following in the log: >> Dovecot version? >> > 2.1.rc3. I'm compiling rc5 now... > Error still there on rc5. 
Jan 25 16:03:47 bubba dovecot: imap(dmiller at amfes.com): Panic: file mailbox-list-fs.c: line 156 (fs_list_get_path): assertion failed: (mailbox_list_is_valid_pattern(_list, name)) Jan 25 16:03:47 bubba dovecot: imap(dmiller at amfes.com): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x3f1ba) [0x7f7c3f0331ba] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3f206) [0x7f7c3f033206] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x1804a) [0x7f7c3f00c04a] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(+0x47317) [0x7f7c3f2c0317] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6c71) [0x7f7c3dc48c71] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6d47) [0x7f7c3dc48d47] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(acl_mailbox_allocated+0x9e) [0x7f7c3dc4c61e] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(hook_mailbox_allocated+0x62) [0x7f7c3f2b4662] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(mailbox_alloc+0xb2) [0x7f7c3f2b3482] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](cmd_delete+0x72) [0x409972] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](command_exec+0x3d) [0x4109fd] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40f9ce] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40faad] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_handle_input+0x135) [0x40fcd5] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_input+0x5f) [0x4105ff] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7f7c3f03f5d6] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f7c3f04065f] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f7c3f03f578] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f7c3f02c593] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](main+0x2a5) [0x418a55] -> /lib/libc.so.6(__libc_start_main+0xfe) [0x7f7c3ec8fd8e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x408449] Jan 25 16:03:48 bubba dovecot: imap(dmiller at amfes.com): Panic: file mailbox-list-fs.c: line 156 (fs_list_get_path): assertion failed: (mailbox_list_is_valid_pattern(_list, name)) Jan 25 16:03:48 bubba dovecot: imap(dmiller at amfes.com): Error: Raw backtrace: /usr/local/lib/dovecot/libdovecot.so.0(+0x3f1ba) [0x7f9e52e211ba] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x3f206) [0x7f9e52e21206] -> /usr/local/lib/dovecot/libdovecot.so.0(+0x1804a) [0x7f9e52dfa04a] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(+0x47317) [0x7f9e530ae317] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6c71) [0x7f9e51a36c71] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(+0x6d47) [0x7f9e51a36d47] -> /usr/local/lib/dovecot/lib01_acl_plugin.so(acl_mailbox_allocated+0x9e) [0x7f9e51a3a61e] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(hook_mailbox_allocated+0x62) [0x7f9e530a2662] -> /usr/local/lib/dovecot/libdovecot-storage.so.0(mailbox_alloc+0xb2) [0x7f9e530a1482] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](cmd_delete+0x72) [0x409972] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](command_exec+0x3d) [0x4109fd] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40f9ce] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x40faad] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_handle_input+0x135) [0x40fcd5] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](client_input+0x5f) [0x4105ff] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) 
[0x7f9e52e2d5d6] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f9e52e2e65f] -> /usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f9e52e2d578] -> /usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f9e52e1a593] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete](main+0x2a5) [0x418a55] -> /lib/libc.so.6(__libc_start_main+0xfe) [0x7f9e52a7dd8e] -> dovecot/imap [dmiller at amfes.com 192.168.0.91 delete]() [0x408449]
Jan 25 16:03:48 bubba dovecot: imap(dmiller at amfes.com): Fatal: master: service(imap): child 3300 killed with signal 6 (core dumps disabled)
Jan 25 16:03:48 bubba dovecot: imap(dmiller at amfes.com): Fatal: master: service(imap): child 3267 killed with signal 6 (core dumps disabled)
--
Daniel

From jd.beaubien at gmail.com Thu Jan 26 03:40:16 2012
From: jd.beaubien at gmail.com (Jean-Daniel Beaubien)
Date: Wed, 25 Jan 2012 20:40:16 -0500
Subject: [Dovecot] Persistence of UIDs
In-Reply-To: <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi>
References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi>
Message-ID:

On Wed, Jan 25, 2012 at 8:47 AM, Timo Sirainen wrote:
> On 25.1.2012, at 15.34, Jean-Daniel Beaubien wrote:
>
>>>> I am thinking about building some form of webmail specialized for some
>>>> specific business purpose and I am thinking of building a sort of cache
>>>> in a DB by storing the email addr, date, subject and UID for quick
>>>> lookups and search of correspondance.
>>>
>>> Dovecot should already have such cache. If there are problems with that,
>>> I think it would be better to fix it on Dovecot's side rather than
>>> adding a second cache.
>>
>> Very true. Have there been many search/index improvements since 1.0.9? I
>> read thru the release notes but nothing jumped out at me.
>
> Disk I/O usage is the same probably, CPU usage is less in newer versions.
>
>>>> I am doing this because I am having issues with multiple people
>>>> searching thru email folders that have 100k+ emails (which is another
>>>> problem in itself, searches don't seem to scale well when a folder goes
>>>> above 60k emails).
>>>
>>> Maybe enable fts-solr or fts-lucene? (Both work much better in v2.1.)
>>
>> I was under the impression that lucene was for full-text search. I'm just
>> doing simple from/to field searches.
>
> In v2.1 from/to fields are also searched via FTS.

Ok, I managed to compile 2.1 rc5 on an old ubuntu 8.04 without any issue. However, the config file is giving me a bit of a hard time; I'll figure that part out tomorrow.

I'd just like to confirm that there is no risk to the actual mail data if something is badly configured when I start dovecot 2.1. I am managing this old server in my spare time for a friend, so I don't want to lose 2 million+ emails and have to deal with those consequences :)

From gedalya at gedalya.net Thu Jan 26 06:31:20 2012
From: gedalya at gedalya.net (Gedalya)
Date: Wed, 25 Jan 2012 23:31:20 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
Message-ID: <4F20D718.9010805@gedalya.net>

Hello all,

I'm facing the need to migrate from a proprietary IMAP server to Dovecot. The migration must be as smooth and transparent as possible.

The mailbox format I would want to use is Maildir++.

The storage format used by the current server is unknown, and I don't look forward to trying to reverse-engineer it. This leaves me with the option of reading the mailboxes using IMAP.
There are tools like offlineimap or mbsync, and they do store the UID and UIDVALIDITY info. The last piece of the puzzle is a process to properly create the dovecot-uidlist and dovecot-uidvalidity files. So far I wasn't able to find anything on this. Are there any tips? Are there any tools available to do this job, or part of it?

In any case I need this done, and I'll have to create whatever I can't find ready-made. If there isn't anything out there that I'm simply not yet aware of, then I'm looking at creating something like an offlineimap post-processing routine.

Any help would be much appreciated.

Gedalya
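For reference, the two files being asked about are plain text. A dovecot-uidlist for one Maildir++ folder looks roughly like this - a sketch based on the version-3 format that current Dovecot releases write, with made-up values; verify against the target Dovecot version before generating these:

3 V1327554451 N18 G3085f01b7f11094c501100008c4a11c1
15 :1327540212.M102532P4125.imap01,S=2475,W=2534
16 :1327540912.M927214P4125.imap01,S=1201,W=1240
17 :1327541503.M351925P4125.imap01,S=8830,W=8961

The header holds the format version (3), V (the folder's UIDVALIDITY), N (the next UID to assign) and G (the mailbox GUID); each following record maps a UID to a message's maildir base filename. The dovecot-uidvalidity file (together with an empty dovecot-uidvalidity.<value-in-hex> marker file) lives in the maildir root and tracks the value Dovecot uses when allocating UIDVALIDITYs.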
From arung at cdac.in Thu Jan 26 07:13:07 2012
From: arung at cdac.in (Arun Gupta)
Date: Thu, 26 Jan 2012 10:43:07 +0530 (IST)
Subject: [Dovecot] dovecot Digest, Vol 105, Issue 57
In-Reply-To: References: Message-ID:

Dear Sir,

Thanks for your reply, and I do agree with your point about 'reject mail for users over quota', but I would rather not reject the mail if that can be avoided. Is it possible to deliver the mails already queued in the spool to the user's home directory automatically? Kindly provide a solution; I will be highly obliged to all of you.

--
Thanks & Regards,

Arun Kumar Gupta

> format) automatically
> Message-ID:
> Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII
>
> Hi,
>
> I am using dovecot 2.0.16, and have assigned a global procmailrc
> (/etc/procmailrc) which delivers mails to the user's home directory in
> maildir format. I also assigned quotas to users through the setquota
> (edquota) command. If the quota is exceeded, the user's mail is stored in
> /var/spool/mail/user. After increasing the quota, how can these mails be
> delivered to the user's home dir in maildir format automatically?
>
> Thanks & Regards,
>
> Arun Kumar Gupta
>
> Best practice is to reject mail for users over quota (as long as you do
> this during the smtp transaction... Otherwise, whats the point? (they can
> still fill up your server)...
>
> --
> Best regards,
> Charles

-- This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean.

From mark.zealey at webfusion.com Thu Jan 26 12:14:57 2012
From: mark.zealey at webfusion.com (Mark Zealey)
Date: Thu, 26 Jan 2012 10:14:57 +0000
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
Message-ID: <4F2127A1.2010302@webfusion.com>

Hi there,

I'm using dovecot 2.0.16 with a mysql user database. From time to time when we have a big influx of messages (perhaps more than 30 concurrent rcpt to:<> sessions at the same time so no auth-workers free?) or when we have a transient issue connecting to the database server, we see the message:

Jan 25 16:38:23 mailbox dovecot: auth-worker: sql(foo at bar.com,1.2.3.4): Unknown user

and the lmtp process returns:

550 5.1.1 User doesn't exist: foo at bar.com

This would be correct for a permanent error where the user doesn't exist in our database; however, it seems to be doing this on transient errors too. Is this an issue with the code or perhaps some setting I have missed?

Thanks,

Mark

From CMarcus at Media-Brokers.com Thu Jan 26 14:03:56 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 26 Jan 2012 07:03:56 -0500
Subject: [Dovecot] Persistence of UIDs
In-Reply-To: References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi>
Message-ID: <4F21412C.9060105@Media-Brokers.com>

On 2012-01-25 8:40 PM, Jean-Daniel Beaubien wrote:
> I'd just like to confirm that there is no risk to the actual mail data if
> something is badly configured when I start dovecot 2.1. I am managing
> this old server in my spare time for a friend, so I don't want to lose
> 2 million+ emails and have to deal with those consequences :)

There are *always* risks associated with things like this... maybe the chance is low, but no guarantees...

As always, it is *your* responsibility to *backup* *first*...

--
Best regards,

Charles

From CMarcus at Media-Brokers.com Thu Jan 26 14:06:44 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 26 Jan 2012 07:06:44 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F20D718.9010805@gedalya.net>
References: <4F20D718.9010805@gedalya.net>
Message-ID: <4F2141D4.806@Media-Brokers.com>

On 2012-01-25 11:31 PM, Gedalya wrote:
> This leaves me with the option of reading the mailboxes using IMAP.
> There are tools like offlineimap or mbsync,

Not familiar with those, but I think imapsync will do what you want?

http://imapsync.lamiral.info/

I do see that it references those two though...

--
Best regards,

Charles

From support at palatineweb.com Thu Jan 26 14:09:09 2012
From: support at palatineweb.com (Palatine Web Support)
Date: Thu, 26 Jan 2012 12:09:09 +0000
Subject: [Dovecot] Imap Quota Exceeded - But Still Receiving Emails?
In-Reply-To: References: <4F1F9567.1030804@amfes.com> <747f97172fd71affd2ee5b5ebcc5d16c@palatineweb.com>
Message-ID: <32bb69a587decb1d09d618792dc1ed8d@palatineweb.com>

On 2012-01-25 17:01, Daniel L. Miller wrote:
> On 1/25/2012 2:01 AM, Palatine Web Support wrote:
>> On 2012-01-25 05:38, Daniel L. Miller wrote:
>>> On 1/24/2012 8:35 AM, Palatine Web Support wrote:
>>>>
>>>> Here is my dovecot config:
>>>>
>>>> plugin {
>>>>   quota = maildir:User Quota
>>>>   quota_rule2 = Trash:storage=+100M
>>>> }
>>> [..]
>>>>
>>>> So it picks up the quota I set of 3MB, but dovecot is not rejecting
>>>> emails if I am over my quota.
>>>>
>>>> Can anyone help?
>>>>
>>> Is the quota plugin being loaded? What is the output of:
>>>
>>> doveconf | grep -B 2 plug
>>
>> The modules are being loaded.
>> From the log file with debugging enabled:
>>
>> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Loading modules from directory: /usr/lib/dovecot/modules/imap
>> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Module loaded: /usr/lib/dovecot/modules/imap/lib10_quota_plugin.so
>> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Module loaded: /usr/lib/dovecot/modules/imap/lib11_imap_quota_plugin.so
>> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Effective uid=150, gid=8, home=/var/vmail/xxx.com/support
>> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota root: name=User Quota backend=dirsize args=
>> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota rule: root=User Quota mailbox=* bytes=3145728 messages=0
>> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Quota rule: root=User Quota mailbox=Trash bytes=104857600 messages=0
>> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): maildir: data=/var/vmail/xxx.com/support
>> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): maildir++: root=/var/vmail/xxx.com/support, index=, control=, inbox=/var/vmail/xxx.com/support
>> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Namespace : Using permissions from /var/vmail/xxx.com/support: mode=0700 gid=-1
>> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Disconnected: Logged out bytes=82/573
>> Jan 25 09:59:58 mail dovecot: IMAP(xxx at xxx.com): Disconnected: Logged out bytes=269/8243
>
> I don't know if it makes any difference, but in your config file, try changing:
>
> plugin {
>   quota = maildir:User Quota
>
> to
>
> plugin {
>   quota = maildir:User quota
>
> (lowercase the "quota")

The quota is working fine now. The problem was I had my transport agent set to virtual when it should have been set to dovecot. Thanks.
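For anyone hitting the same symptom: the fix above means Postfix's virtual transport was writing mail straight into the maildirs itself, so Dovecot's quota plugin never ran at delivery time. The usual wiring hands delivery to Dovecot's LDA instead - a rough sketch, not the poster's actual config (which isn't shown); the deliver path and the vmail user are assumptions:

# main.cf
virtual_transport = dovecot

# master.cf
dovecot   unix  -       n       n       -       -       pipe
  flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -f ${sender} -d ${recipient}

With delivery going through Dovecot's LDA, over-quota messages are rejected at delivery instead of being dropped into the maildir regardless of quota.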
From tss at iki.fi Thu Jan 26 14:21:32 2012
From: tss at iki.fi (Timo Sirainen)
Date: Thu, 26 Jan 2012 14:21:32 +0200
Subject: [Dovecot] Persistence of UIDs
In-Reply-To: <4F21412C.9060105@Media-Brokers.com>
References: <9EDDD923-760E-4D24-BFB7-290172418A62@iki.fi> <469769DA-F849-4AE3-AB82-BB4AE05E0F11@iki.fi> <4F21412C.9060105@Media-Brokers.com>
Message-ID: <055C0680-BAF5-4617-918D-E12C09266006@iki.fi>

On 26.1.2012, at 14.03, Charles Marcus wrote:
> On 2012-01-25 8:40 PM, Jean-Daniel Beaubien wrote:
>> I'd just like to confirm that there is no risk to the actual mail data if
>> something is badly configured when I start dovecot 2.1. I am managing
>> this old server in my spare time for a friend, so I don't want to lose
>> 2 million+ emails and have to deal with those consequences :)
>
> There are *always* risks associated with things like this... maybe the chance is low, but no guarantees...

Risks of some trouble, yes .. but you have to be highly creative if you want to accidentally lose any mails. I can't think of any way to do that without explicitly deleting files from the filesystem or via an IMAP/POP3 client.

From tss at iki.fi Thu Jan 26 14:27:15 2012
From: tss at iki.fi (Timo Sirainen)
Date: Thu, 26 Jan 2012 14:27:15 +0200
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F20D718.9010805@gedalya.net>
References: <4F20D718.9010805@gedalya.net>
Message-ID:

On 26.1.2012, at 6.31, Gedalya wrote:
> I'm facing the need to migrate from a proprietary IMAP server to Dovecot. The migration must be as smooth and transparent as possible.
>
> The mailbox format I would want to use is Maildir++.
>
> The storage format used by the current server is unknown, and I don't look forward to trying to reverse-engineer it. This leaves me with the option of reading the mailboxes using IMAP. There are tools like offlineimap or mbsync, and they do store the UID and UIDVALIDITY info. The last piece of the puzzle is a process to properly create the dovecot-uidlist and dovecot-uidvalidity files. So far I wasn't able to find anything on this. Are there any tips? Are there any tools available to do this job, or part of it?

Get Dovecot v2.1 and configure it to work. Then for migration add to dovecot.conf:

imapc_host = imap.example.com
imapc_port = 993
imapc_ssl = imaps
imapc_ssl_ca_dir = /etc/ssl/certs
mail_prefetch_count = 50

And do the migration one user at a time:

doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc:
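To drive that over a whole user base, a small shell loop is enough. A sketch only, assuming a hypothetical root-readable users.txt of whitespace-separated "user password" pairs, and adding the -u flag to select the destination user, as in the later messages of this thread (use imapc:/tmp/imapc instead of imapc: if the pre-fix index workaround discussed below is still needed):

while read user pass; do
    doveadm -o imapc_user="$user" -o imapc_password="$pass" \
        backup -u "$user" -R imapc:
done < users.txt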
From tss at iki.fi Thu Jan 26 14:31:43 2012
From: tss at iki.fi (Timo Sirainen)
Date: Thu, 26 Jan 2012 14:31:43 +0200
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
In-Reply-To: <4F2127A1.2010302@webfusion.com>
References: <4F2127A1.2010302@webfusion.com>
Message-ID: <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi>

On 26.1.2012, at 12.14, Mark Zealey wrote:
> I'm using dovecot 2.0.16 with a mysql user database. From time to time when we have a big influx of messages (perhaps more than 30 concurrent rcpt to:<> sessions at the same time so no auth-workers free?) or when we have a transient issue connecting to the database server, we see the message:
>
> Jan 25 16:38:23 mailbox dovecot: auth-worker: sql(foo at bar.com,1.2.3.4): Unknown user

This happens only when the SQL query doesn't return any rows, but does return success.

> and the lmtp process returns:
>
> 550 5.1.1 User doesn't exist: foo at bar.com
>
> This would be correct for a permanent error where the user doesn't exist in our database; however, it seems to be doing this on transient errors too. Is this an issue with the code or perhaps some setting I have missed?

The problem is that temporary errors are returning "unknown user". Can you reproduce this somehow? Like if you stop MySQL it always returns that "Unknown user"?

From ar-dovecotlist at acrconsulting.co.uk Thu Jan 26 14:38:17 2012
From: ar-dovecotlist at acrconsulting.co.uk (Andrew Richards)
Date: 26 Jan 2012 12:38:17 +0000
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F20D718.9010805@gedalya.net>
References: <4F20D718.9010805@gedalya.net>
Message-ID: <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk>

On Thursday 26 January 2012 04:31:20 Gedalya wrote:
> I'm facing the need to migrate from a proprietary IMAP server to
> Dovecot. The migration must be as smooth and transparent as possible.

Ignoring the migration of individual mailboxes addressed in other replies, I trust you've met Perdition - very useful for this sort of situation:

http://horms.net/projects/perdition/

It provides an IMAP "server" (actually a proxy) that knows where the real mailboxes are located, and directs connections accordingly. That way you can cut users over one by one as you migrate their mailboxes, which is helpful for testing a few mailboxes first without affecting the bulk of users' mailboxes at all.

cheers,

Andrew.

From gedalya at gedalya.net Thu Jan 26 15:11:32 2012
From: gedalya at gedalya.net (Gedalya)
Date: Thu, 26 Jan 2012 08:11:32 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F2141D4.806@Media-Brokers.com>
References: <4F20D718.9010805@gedalya.net> <4F2141D4.806@Media-Brokers.com>
Message-ID: <4F215104.4000409@gedalya.net>

On 01/26/2012 07:06 AM, Charles Marcus wrote:
> On 2012-01-25 11:31 PM, Gedalya wrote:
>> This leaves me with the option of reading the mailboxes using IMAP.
>> There are tools like offlineimap or mbsync,
>
> Not familiar with those, but I think imapsync will do what you want?
>
> http://imapsync.lamiral.info/
>
> I do see that it references those two though...

As I understand it, there is no way an IMAP-to-IMAP process can preserve UIDs, since new UIDs are assigned for every message by the target server. Also, imapsync found 0 messages in all mailboxes on my evil to-be-eliminated server, something I didn't bother troubleshooting much. Timo's idea sounds interesting, time to look into 2.1!

From gedalya at gedalya.net Thu Jan 26 15:18:32 2012
From: gedalya at gedalya.net (Gedalya)
Date: Thu, 26 Jan 2012 08:18:32 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk>
References: <4F20D718.9010805@gedalya.net> <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk>
Message-ID: <4F2152A8.2040302@gedalya.net>

On 01/26/2012 07:38 AM, Andrew Richards wrote:
> On Thursday 26 January 2012 04:31:20 Gedalya wrote:
>> I'm facing the need to migrate from a proprietary IMAP server to
>> Dovecot. The migration must be as smooth and transparent as possible.
> Ignoring the migration of individual mailboxes addressed in other replies, I
> trust you've met Perdition - very useful for this sort of situation,
> http://horms.net/projects/perdition/
>
> to provide an IMAP "server" (actually a proxy) that knows where the real
> mailboxes are located, and directs connections accordingly. That way you can
> cut users over one by one as you migrate their mailboxes, helpful to test a few
> mailboxes first without affecting the bulk of users' mailboxes at all.
>
> cheers,
>
> Andrew.

Sounds very cool. I already have dovecot set up as a proxy, working, and it should allow me to forcefully disconnect users and lock them out while they are being migrated; once they are done they'll be served locally rather than proxied. My main problem is that most connections are simply coming directly to the old server, using the deprecated hostname. I either need all clients to switch to the right hostnames, or I have to clog up this new server with redirectors and proxies for all the junk done on the old server.. bummer. What I might want to look into is actually setting up a proxy like this on the evil (windows) server itself - to get *him* to pass on those requests he shouldn't be handling.

From CMarcus at Media-Brokers.com Thu Jan 26 16:11:27 2012
From: CMarcus at Media-Brokers.com (Charles Marcus)
Date: Thu, 26 Jan 2012 09:11:27 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F215104.4000409@gedalya.net>
References: <4F20D718.9010805@gedalya.net> <4F2141D4.806@Media-Brokers.com> <4F215104.4000409@gedalya.net>
Message-ID: <4F215F0F.9010106@Media-Brokers.com>

On 2012-01-26 8:11 AM, Gedalya wrote:
> As I understand it, there is no way an IMAP-to-IMAP process can preserve
> UIDs, since new UIDs are assigned for every message by the target server.
> Also, imapsync found 0 messages in all mailboxes on my evil
> to-be-eliminated server, something I didn't bother troubleshooting much.
> Timo's idea sounds interesting, time to look into 2.1!

Yep, it definitely sounds like the way to go...
--
Best regards,

Charles

From Mark.Zealey at webfusion.com Thu Jan 26 16:37:49 2012
From: Mark.Zealey at webfusion.com (Mark Zealey)
Date: Thu, 26 Jan 2012 14:37:49 +0000
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
In-Reply-To: <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi>
References: <4F2127A1.2010302@webfusion.com>, <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi>
Message-ID:

I've tried reproducing by having long-running auth queries in the sql and KILLing them on the server, restarting the mysql service, and setting max auth workers to 1 and running 2 sessions at the same time (with long-running auth queries), but to no effect. There must be something else going on here; I saw it in particular when exim on our frontend servers had queued a large number of messages and suddenly released them all at once - hence the auth-worker hypothesis, although the log messages do not support it. I'll try to see if I can trigger this manually, although we have done some massively parallel testing previously and not seen this.

Mark

From lists at wildgooses.com Thu Jan 26 18:02:28 2012
From: lists at wildgooses.com (Ed W)
Date: Thu, 26 Jan 2012 16:02:28 +0000
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F2152A8.2040302@gedalya.net>
References: <4F20D718.9010805@gedalya.net> <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk> <4F2152A8.2040302@gedalya.net>
Message-ID: <4F217914.1050501@wildgooses.com>

Hi

> Sounds very cool. I already have dovecot set up as a proxy, working,
> and it should allow me to forcefully disconnect users and lock them
> out while they are being migrated; once they are done they'll be
> served locally rather than proxied. My main problem is that most
> connections are simply coming directly to the old server, using the
> deprecated hostname. I either need all clients to switch to the right
> hostnames, or I have to clog up this new server with redirectors and
> proxies for all the junk done on the old server.. bummer.

Why not move the old server's IP over to the new machine, then give the old machine some new temp IP in order to proxy back to it? That way you can do the proxying on the dovecot machine, which as you already established is working ok?
Good luck

Ed W

From lists at wildgooses.com Thu Jan 26 18:06:24 2012
From: lists at wildgooses.com (Ed W)
Date: Thu, 26 Jan 2012 16:06:24 +0000
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
In-Reply-To: References: <4F2127A1.2010302@webfusion.com>, <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi>
Message-ID: <4F217A00.8090504@wildgooses.com>

On 26/01/2012 14:37, Mark Zealey wrote:
> I've tried reproducing by having long-running auth queries in the sql and KILLing them on the server, restarting the mysql service, and setting max auth workers to 1 and running 2 sessions at the same time (with long-running auth queries), but to no effect. There must be something else going on here; I saw it in particular when exim on our frontend servers had queued a large number of messages and suddenly released them all at once - hence the auth-worker hypothesis, although the log messages do not support it. I'll try to see if I can trigger this manually, although we have done some massively parallel testing previously and not seen this.

Could it be a *timeout* rather than lack of worker processes? The theory would be that disk starvation causes other processes to take a long time to respond, hence the worker is *alive* but doesn't return a response quickly enough, which in turn causes the "unknown user" message?

You could try a different disk io scheduler, or ionice, to control the effect of these big bursts of disk activity on other processes? (Most MTA programs such as postfix and qmail do a lot of fsyncs - this will cause a lot of IO activity and could easily starve other processes on the same box?)

Good luck

Ed W

From gedalya at gedalya.net Thu Jan 26 18:30:53 2012
From: gedalya at gedalya.net (Gedalya)
Date: Thu, 26 Jan 2012 11:30:53 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F217914.1050501@wildgooses.com>
References: <4F20D718.9010805@gedalya.net> <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk> <4F2152A8.2040302@gedalya.net> <4F217914.1050501@wildgooses.com>
Message-ID: <4F217FBD.6070908@gedalya.net>

On 01/26/2012 11:02 AM, Ed W wrote:
> Hi
>
>> Sounds very cool. I already have dovecot set up as a proxy, working,
>> and it should allow me to forcefully disconnect users and lock them
>> out while they are being migrated; once they are done they'll be
>> served locally rather than proxied. My main problem is that most
>> connections are simply coming directly to the old server, using the
>> deprecated hostname. I either need all clients to switch to the right
>> hostnames, or I have to clog up this new server with redirectors and
>> proxies for all the junk done on the old server.. bummer.
>
> Why not move the old server's IP over to the new machine, then give
> the old machine some new temp IP in order to proxy back to it? That
> way you can do the proxying on the dovecot machine, which as you
> already established is working ok?
>
> Good luck
>
> Ed W

Yeah, that's what I'm going to do, except that I would have to proxy more than just IMAP and POP - it's a one-does-it-all kind of machine: accepting mail delivered from the outside, relaying outgoing mail, doing webmail, and doing all these things very poorly... I have the choice of forcing all users to change to the new, dedicated servers for these services, or reimplementing / proxying all of this on my new dovecot server, which I so desperately want to keep neat and tidy...
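The IP shuffle suggested above is just per-service NAT on whichever box ends up holding the old address. A sketch with iptables, using made-up addresses - 192.0.2.10 for the old server's IP and 192.0.2.20 for the new dovecot box:

# forward IMAP and POP3 arriving at the old IP to the new dovecot server
iptables -t nat -A PREROUTING -d 192.0.2.10 -p tcp --dport 143 -j DNAT --to-destination 192.0.2.20:143
iptables -t nat -A PREROUTING -d 192.0.2.10 -p tcp --dport 110 -j DNAT --to-destination 192.0.2.20:110

Other services (SMTP, webmail) can be pointed at their own machines the same way; IP forwarding must be enabled and return routing (or SNAT) arranged so replies flow back through the same box.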
From tss at iki.fi Thu Jan 26 18:50:06 2012
From: tss at iki.fi (Timo Sirainen)
Date: Thu, 26 Jan 2012 18:50:06 +0200
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
In-Reply-To: <4F217A00.8090504@wildgooses.com>
References: <4F2127A1.2010302@webfusion.com>, <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi> <4F217A00.8090504@wildgooses.com>
Message-ID: <85410DCB-B5A8-44F3-A942-031C5E4C932C@iki.fi>

On 26.1.2012, at 18.06, Ed W wrote:
> Could it be a *timeout* rather than lack of worker processes?

The message in the log was "Unknown user". The only reason this happens is if the MySQL library's query functions returned success without any rows. No timeouts, crashes, or anything else can give that error message. So I'd say the problem is either in the MySQL library or the MySQL server.

Try if the attached patch gives any crashes. If it does, it means that the mysql library returned mysql_errno()=0 (success) even though it should have returned a failure. Or you could even change it to only:

i_assert(result->result != NULL);

if you're not using MySQL for anything else than auth. The other possibility is if in driver_mysql_result_next_row() the mysql_fetch_row() returns NULL, but also there I'm checking mysql_errno().

-------------- next part --------------
A non-text attachment was scrubbed...
Name: diff
Type: application/octet-stream
Size: 435 bytes
Desc: not available
URL:
-------------- next part --------------
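The scrubbed diff isn't recoverable from the archive, but from the description the added check is presumably along these lines - a sketch only, not the actual patch; i_assert() is Dovecot's internal assertion macro named above, and mysql_store_result()/mysql_errno() are the standard MySQL C API:

/* sketch of the sanity check described above, for driver-mysql.c;
 * "mysql" stands in for the driver's connection handle */
MYSQL_RES *result = mysql_store_result(mysql);
if (result == NULL) {
        /* a SELECT must produce a result set; getting NULL while
           mysql_errno()==0 would mean the library claimed success
           without returning any rows at all */
        i_assert(mysql_errno(mysql) != 0);
}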
From lists at wildgooses.com Thu Jan 26 22:08:18 2012
From: lists at wildgooses.com (Ed W)
Date: Thu, 26 Jan 2012 20:08:18 +0000
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F217FBD.6070908@gedalya.net>
References: <4F20D718.9010805@gedalya.net> <201201261238.17182.ar-dovecotlist@acrconsulting.co.uk> <4F2152A8.2040302@gedalya.net> <4F217914.1050501@wildgooses.com> <4F217FBD.6070908@gedalya.net>
Message-ID: <4F21B2B2.6030505@wildgooses.com>

Hi

> Yeah, that's what I'm going to do, except that I would have to proxy
> more than just IMAP and POP - it's a one-does-it-all kind of machine:
> accepting mail delivered from the outside, relaying outgoing mail,
> doing webmail, and doing all these things very poorly... I have the
> choice of forcing all users to change to the new, dedicated servers
> for these services, or reimplementing / proxying all of this on my
> new dovecot server, which I so desperately want to keep neat and tidy...

In that case I would suggest that the IP is taken over by a dedicated firewall box (running the OS of your choice). The firewall could then port-forward the services to the individual machines responsible for each service. This would give you the benefit that you could easily move other services off or around later.

We are clearly off topic for dovecot... Plenty of good firewall options. If you want small, compact and low power, then you can pick up a bunch of intel-compatible boards around the low couple-hundred-pounds mark fairly easily. Run your favourite distro and firewall on them. If you hadn't seen them before, I quite like Lanner for appliances, eg:

http://www.lannerinc.com/x86_Network_Appliances/x86_Desktop_Appliances

For example, if you added a small appliance running linux which holds that IP, then you could add intrusion detection, bounce the web traffic to the windows box (or even just certain URLs - other URLs could go to some hypothetical linux box, etc), and port-forward the mail to the new dovecot box, etc. The incremental price would be surprisingly low, but lots of extra flexibility?

Just a thought

Good luck

Ed W

From stan at hardwarefreak.com Thu Jan 26 22:51:02 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Thu, 26 Jan 2012 14:51:02 -0600
Subject: [Dovecot] v2.1.rc5 released
In-Reply-To: <20120126000126.GA19765@doctor.nl2k.ab.ca>
References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi> <20120125233930.GA17183@doctor.nl2k.ab.ca> <20120126000126.GA19765@doctor.nl2k.ab.ca>
Message-ID: <4F21BCB6.6030908@hardwarefreak.com>

On 1/25/2012 6:01 PM, The Doctor wrote:
> BSD/OS 4.3.1

A defunct/dead operating system, last released in 2003, support withdrawn in 2004. BSDI went belly up. Wind River acquired and then killed BSD/OS. You're using a dead, 9 year old OS that hasn't seen official updates for 8 years.

Do you think it's fair to ask application developers to support the oddities of your one-of-a-kind, ancient patchwork of a platform?

We've had this discussion before, and I don't believe you ever provided a sane rationale for continuing to use an OS that's been officially dead for 8 years. What is the reason you are unable or unwilling to migrate to a newer and supported no-cost BSD variant, or Linux distro? You're trying to run bleeding edge Dovecot, compiling it from source, on an 8 year old platform...

--
Stan

From Mark.Zealey at webfusion.com Thu Jan 26 23:35:24 2012
From: Mark.Zealey at webfusion.com (Mark Zealey)
Date: Thu, 26 Jan 2012 21:35:24 +0000
Subject: [Dovecot] auth-worker temporary failures causing lmtp 500 rejection
In-Reply-To: <420B5E34BFEE9646B7198438F9978AE223E4CB48@mail01.internal.webfusion.com>
References: <4F2127A1.2010302@webfusion.com>, <1BEC91E0-E617-4EDA-88E8-A1877CF4C3D9@iki.fi>, <420B5E34BFEE9646B7198438F9978AE223E4CB48@mail01.internal.webfusion.com>
Message-ID:

Hi Timo, thanks for the patch; I have now analyzed network dumps and discovered that the cause is actually our frontend mail servers, not dovecot - we were delivering to the wrong lmtp port, which we then use in the mysql query, hence getting empty records. Sorry about this!

Mark
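To make that failure mode concrete: if the password_query keys on the delivery port - e.g. something like the hypothetical query below, where %a should expand to the local (listening) port in Dovecot's auth variables (check the Variables wiki page for your version) - then an LMTP session arriving on the wrong port matches zero rows, and auth reports that as "unknown user" rather than as a temporary failure:

password_query = SELECT password FROM mailbox \
  WHERE username = '%u' AND lmtp_port = '%a'

The table and column names here are made up for illustration; Mark's actual query isn't shown in the thread.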
From gedalya at gedalya.net Fri Jan 27 01:42:05 2012
From: gedalya at gedalya.net (Gedalya)
Date: Thu, 26 Jan 2012 18:42:05 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: References: <4F20D718.9010805@gedalya.net>
Message-ID: <4F21E4CD.3070001@gedalya.net>

On 01/26/2012 07:27 AM, Timo Sirainen wrote:
> On 26.1.2012, at 6.31, Gedalya wrote:
>
>> I'm facing the need to migrate from a proprietary IMAP server to Dovecot. The migration must be as smooth and transparent as possible.
>>
>> The mailbox format I would want to use is Maildir++.
>>
>> The storage format used by the current server is unknown, and I don't look forward to trying to reverse-engineer it. This leaves me with the option of reading the mailboxes using IMAP. There are tools like offlineimap or mbsync, and they do store the UID and UIDVALIDITY info. The last piece of the puzzle is a process to properly create the dovecot-uidlist and dovecot-uidvalidity files. So far I wasn't able to find anything on this. Are there any tips? Are there any tools available to do this job, or part of it?
>
> Get Dovecot v2.1 and configure it to work. Then for migration add to dovecot.conf:
>
> imapc_host = imap.example.com
> imapc_port = 993
> imapc_ssl = imaps
> imapc_ssl_ca_dir = /etc/ssl/certs
> mail_prefetch_count = 50
>
> And do the migration one user at a time:
>
> doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc:

Still working on it on my side, but for now:

# doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc:
Segmentation fault

syslog:

Jan 26 18:34:29 imap01 kernel: [ 9055.766548] doveadm[8015]: segfault at 4 ip b7765752 sp bff90600 error 4 in libdovecot-storage.so.0.0.0[b769a000+ff000]
Jan 26 18:34:53 imap01 kernel: [ 9078.883024] doveadm[8046]: segfault at 4 ip b7828752 sp bf964450 error 4 in libdovecot-storage.so.0.0.0[b775d000+ff000]

(I tried twice)

Also, I happen to have no idea what I'm doing, but still, segfault.. This is a debian testing "wheezy" machine I put up to do the initial playing around, i386, using Dovecot prebuilt binary packages from http://xi.rename-it.nl/debian/pool/testing-auto/dovecot-2.1/

From tss at iki.fi Fri Jan 27 01:46:16 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 27 Jan 2012 01:46:16 +0200
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F21E4CD.3070001@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> Message-ID: On 27.1.2012, at 1.42, Gedalya wrote: >> doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc: >> > Still working on it on my side, but for now: > > # doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: > Segmentation fault gdb backtrace would be helpful. You should be able to get that by running (as root): gdb --args doveadm ... bt full (assuming you haven't changed base_dir, otherwise it might fail) From gedalya at gedalya.net Fri Jan 27 02:00:44 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 19:00:44 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> Message-ID: <4F21E92C.4090509@gedalya.net> On 01/26/2012 06:46 PM, Timo Sirainen wrote: > On 27.1.2012, at 1.42, Gedalya wrote: > >>> doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc: >>> >> Still working on it on my side, but for now: >> >> # doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >> Segmentation fault > gdb backtrace would be helpful. You should be able to get that by running (as root): > > gdb --args doveadm ... > bt full > > (assuming you haven't changed base_dir, otherwise it might fail) > Does this help? GNU gdb (GDB) 7.3-debian Copyright (C) 2011 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "i486-linux-gnu". For bug reporting instructions, please see: ... Reading symbols from /usr/bin/doveadm...Reading symbols from /usr/lib/debug/usr/bin/doveadm...done. done. (gdb) run Starting program: /usr/bin/doveadm -o imapc_user=jedi at example.com -o imapc_password=**** backup -u jedi at example.com -R imapc: [Thread debugging using libthread_db enabled] Program received signal SIGSEGV, Segmentation fault. mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 213 mailbox-log.c: No such file or directory. in mailbox-log.c (gdb) bt full #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 No locals. #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 iter = 0x80cbd90 #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, worker=0x80c3138) at dsync-worker-local.c:316 log = iter = 0x8 rec = #3 dsync_worker_get_mailbox_log (worker=0x80c3138) at dsync-worker-local.c:386 ns = 0x80a5f90 ret = #4 0x0807032f in dsync_worker_get_mailbox_log (worker=0x80c3138) at dsync-worker-local.c:372 No locals. #5 local_worker_mailbox_iter_init (_worker=0x80c3138) at dsync-worker-local.c:410 worker = 0x80c3138 iter = 0x80b6920 patterns = {0x8076124 "*", 0x0} #6 0x08065a2f in dsync_brain_mailbox_list_init (brain=0x80b68e8, worker=0x80c3138) at dsync-brain.c:141 list = 0x80c5940 pool = 0x80c5930 #7 0x0806680f in dsync_brain_sync (brain=0x80b68e8) at dsync-brain.c:827 No locals. #8 dsync_brain_sync (brain=0x80b68e8) at dsync-brain.c:813 No locals. 
#9 0x08067038 in dsync_brain_sync_all (brain=0x80b68e8) at dsync-brain.c:895 old_state = DSYNC_STATE_GET_MAILBOXES __FUNCTION__ = "dsync_brain_sync_all" #10 0x08064cfd in cmd_dsync_run (_ctx=0x8098ec0, user=0x80a9e98) at doveadm-dsync.c:237 ctx = 0x8098ec0 worker1 = 0x80c3138 worker2 = 0x80aedb8 workertmp = brain = 0x80b68e8 #11 0x0805371e in doveadm_mail_next_user (error_r=0xbffffa1c, ctx=0x8098ec0, input=) at doveadm-mail.c:221 ret = #12 doveadm_mail_next_user (ctx=0x8098ec0, input=, error_r=0xbffffa1c) at doveadm-mail.c:187 error = ret = #13 0x08053b2e in doveadm_mail_single_user (ctx=0x8098ec0, input=0xbffffa6c) at doveadm-mail.c:242 ---Type to continue, or q to quit--- error = 0x0 ret = __FUNCTION__ = "doveadm_mail_single_user" #14 0x08053f58 in doveadm_mail_cmd (cmd=0x8096f60, argc=, argv=0x80901e4) at doveadm-mail.c:425 input = {module = 0x0, service = 0x8076b3a "doveadm", username = 0x8090242 "jedi at example.com", local_ip = {family = 0, u = { ip6 = {__in6_u = {__u6_addr8 = '\000' , __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, ip4 = {s_addr = 0}}}, remote_ip = {family = 0, u = {ip6 = {__in6_u = {__u6_addr8 = '\000' , __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, ip4 = {s_addr = 0}}}, local_port = 0, remote_port = 0, userdb_fields = 0x0, flags_override_add = 0, flags_override_remove = 0, no_userdb_lookup = 0} ctx = 0x8098ec0 getopt_args = wildcard_user = 0x0 c = #15 0x080543d9 in doveadm_mail_try_run (cmd_name=0x8090238 "backup", argc=5, argv=0x80901d4) at doveadm-mail.c:482 cmd__foreach_end = 0x8096f9c cmd = 0x8096f60 cmd_name_len = 6 __FUNCTION__ = "doveadm_mail_try_run" #16 0x08053347 in main (argc=5, argv=0x80901d4) at doveadm.c:352 cmd_name = i = quick_init = false c = From tss at iki.fi Fri Jan 27 02:06:22 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 27 Jan 2012 02:06:22 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21E92C.4090509@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> Message-ID: On 27.1.2012, at 2.00, Gedalya wrote: >>> # doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >>> Segmentation fault >> gdb backtrace would be helpful. You should be able to get that by running (as root): >> > 213 mailbox-log.c: No such file or directory. > in mailbox-log.c > (gdb) bt full > #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 > No locals. > #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 > iter = 0x80cbd90 > #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, worker=0x80c3138) at dsync-worker-local.c:316 Ah, right, dsync really wants index files. Of course it shouldn't crash, I'll fix that, but you should be able to work around it: rm -rf /tmp/imapc doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc:/tmp/imapc From gedalya at gedalya.net Fri Jan 27 02:17:42 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 19:17:42 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? 
In-Reply-To: References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> Message-ID: <4F21ED26.6020908@gedalya.net> On 01/26/2012 07:06 PM, Timo Sirainen wrote: > On 27.1.2012, at 2.00, Gedalya wrote: > >>>> # doveadm -o imapc_user=gedalya at thisdomain.com -o imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >>>> Segmentation fault >>> gdb backtrace would be helpful. You should be able to get that by running (as root): >>> >> 213 mailbox-log.c: No such file or directory. >> in mailbox-log.c >> (gdb) bt full >> #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 >> No locals. >> #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 >> iter = 0x80cbd90 >> #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, worker=0x80c3138) at dsync-worker-local.c:316 > Ah, right, dsync really wants index files. Of course it shouldn't crash, I'll fix that, but you should be able to work around it: > > rm -rf /tmp/imapc > doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc:/tmp/imapc > # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** backup -u jedi at example.com -R imapc:/tmp/imapc dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS cannot access mailbox Drafts dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying to run backup in wrong direction. Source is empty and destination is not. To be clear, I am trying to pull all the mailboxes from the old server on to this dovecot server, which has no mailboxes populated yet. It looks like this command would be pushing the messages from here to the imapc_host rather than pulling? From gedalya at gedalya.net Fri Jan 27 02:33:46 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 19:33:46 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21ED26.6020908@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> Message-ID: <4F21F0EA.5090700@gedalya.net> On 01/26/2012 07:17 PM, Gedalya wrote: > On 01/26/2012 07:06 PM, Timo Sirainen wrote: >> On 27.1.2012, at 2.00, Gedalya wrote: >> >>>>> # doveadm -o imapc_user=gedalya at thisdomain.com -o >>>>> imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >>>>> Segmentation fault >>>> gdb backtrace would be helpful. You should be able to get that by >>>> running (as root): >>>> >>> 213 mailbox-log.c: No such file or directory. >>> in mailbox-log.c >>> (gdb) bt full >>> #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 >>> No locals. >>> #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 >>> iter = 0x80cbd90 >>> #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, >>> worker=0x80c3138) at dsync-worker-local.c:316 >> Ah, right, dsync really wants index files. Of course it shouldn't >> crash, I'll fix that, but you should be able to work around it: >> >> rm -rf /tmp/imapc >> doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R >> imapc:/tmp/imapc >> > # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** > backup -u jedi at example.com -R imapc:/tmp/imapc > dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS > cannot access mailbox Drafts > dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying > to run backup in wrong direction. Source is empty and destination is not. 
> > To be clear, I am trying to pull all the mailboxes from the old server > on to this dovecot server, which has no mailboxes populated yet. It > looks like this command would be pushing the messages from here to the > imapc_host rather than pulling? > This got me somewhere... # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=2 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=3 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=4 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=5 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=6 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=7 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=8 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=9 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=10 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=11 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=12 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=13 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=14 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=15 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=16 failed: Message GUID not available in this server (guid) doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=17 failed: Message GUID not available in this server (guid) Should I / how can I disable this message GUID thing? From gedalya at gedalya.net Fri Jan 27 02:44:01 2012 From: gedalya at gedalya.net (Gedalya) Date: Thu, 26 Jan 2012 19:44:01 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21ED26.6020908@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> Message-ID: <4F21F351.3090907@gedalya.net> On 01/26/2012 07:17 PM, Gedalya wrote: > On 01/26/2012 07:06 PM, Timo Sirainen wrote: >> On 27.1.2012, at 2.00, Gedalya wrote: >> >>>>> # doveadm -o imapc_user=gedalya at thisdomain.com -o >>>>> imapc_password=***** backup -u gedalya at thisdomain.com -R imapc: >>>>> Segmentation fault >>>> gdb backtrace would be helpful. You should be able to get that by >>>> running (as root): >>>> >>> 213 mailbox-log.c: No such file or directory. >>> in mailbox-log.c >>> (gdb) bt full >>> #0 mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 >>> No locals. 
>>> #1 0xb7fa7dd4 in mailbox_log_iter_init (log=0x0) at mailbox-log.c:239 >>> iter = 0x80cbd90 >>> #2 0x0806ffd3 in dsync_worker_get_list_mailbox_log (list=0x80b6180, >>> worker=0x80c3138) at dsync-worker-local.c:316 >> Ah, right, dsync really wants index files. Of course it shouldn't >> crash, I'll fix that, but you should be able to work around it: >> >> rm -rf /tmp/imapc >> doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R >> imapc:/tmp/imapc >> > # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** > backup -u jedi at example.com -R imapc:/tmp/imapc > dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS > cannot access mailbox Drafts > dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying > to run backup in wrong direction. Source is empty and destination is not. > > To be clear, I am trying to pull all the mailboxes from the old server > on to this dovecot server, which has no mailboxes populated yet. It > looks like this command would be pushing the messages from here to the > imapc_host rather than pulling? > Sorry, my bad. That was a malfunction on the old IMAP server - that mailbox is inaccessible. Tried with another account: doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** backup -u jedi1 at example.com -R imapc:/tmp/imapc dsync(jedi1 at example.com): Error: msg guid lookup failed: Message GUID not available in this server dsync(jedi1 at example.com): Error: msg guid lookup failed: Message GUID not available in this server dsync(jedi1 at example.com): Panic: file dsync-brain.c: line 901 (dsync_brain_sync_all): assertion failed: (brain->state != old_state) dsync(jedi1 at example.com): Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x3e98a) [0xb756a98a] -> /usr/lib/dovecot/libdovecot.so.0(default_fatal_handler+0x41) [0xb756aa91] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0xb753f66b] -> doveadm() [0x8067095] -> doveadm() [0x8064cfd] -> doveadm() [0x805371e] -> doveadm(doveadm_mail_single_user+0x5e) [0x8053b2e] -> doveadm() [0x8053f58] -> doveadm(doveadm_mail_try_run+0x139) [0x80543d9] -> doveadm(main+0x3a7) [0x8053347] -> /lib/i386-linux-gnu/i686/cmov/libc.so.6(__libc_start_main+0xe6) [0xb73e8e46] -> doveadm() [0x8053519] Aborted So there :D From tss at iki.fi Fri Jan 27 02:45:45 2012 From: tss at iki.fi (Timo Sirainen) Date: Fri, 27 Jan 2012 02:45:45 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21F0EA.5090700@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net> <4F21F0EA.5090700@gedalya.net> Message-ID: <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi> On 27.1.2012, at 2.33, Gedalya wrote: >> # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** backup -u jedi at example.com -R imapc:/tmp/imapc >> dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS cannot access mailbox Drafts Apparently your server doesn't like sending STATUS command to Drafts mailbox and returns a failure. This isn't very nice from it. >> dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying to run backup in wrong direction. Source is empty and destination is not. The -R parameter reversed the direction. It possibly fails because of the STATUS error. Or maybe some other problem, I'd need to look into it. You could try giving "-m INBOX" parameter to see if it works for one mailbox. > This got me somewhere... 
> > # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all
> > doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid)

Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61

But doveadm import doesn't preserve UIDs.

From gedalya at gedalya.net  Fri Jan 27 02:57:39 2012
From: gedalya at gedalya.net (Gedalya)
Date: Thu, 26 Jan 2012 19:57:39 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi>
References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net>
 <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net>
 <4F21F0EA.5090700@gedalya.net> <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi>
Message-ID: <4F21F683.3080200@gedalya.net>

On 01/26/2012 07:45 PM, Timo Sirainen wrote:
> On 27.1.2012, at 2.33, Gedalya wrote:
>
>>> # doveadm -o imapc_user=jedi at example.com -o imapc_password=***** backup -u jedi at example.com -R imapc:/tmp/imapc
>>> dsync(jedi at example.com): Error: Failed to sync mailbox Drafts: STATUS cannot access mailbox Drafts
> Apparently your server doesn't like the STATUS command being sent to the Drafts mailbox and returns a failure. That isn't very nice of it.
>
This particular account is broken - I'm pretty sure it doesn't do this for other accounts.

>>> dsync(jedi at example.com): Fatal: dsync backup: Looks like you're trying to run backup in wrong direction. Source is empty and destination is not.
> The -R parameter reversed the direction. It possibly fails because of the STATUS error. Or maybe it's some other problem; I'd need to look into it. You could try giving the "-m INBOX" parameter to see if it works for one mailbox.

Must be that broken account.

>> This got me somewhere...
>>
>> # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all
>> doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid)
> Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61
>
> But doveadm import doesn't preserve UIDs.

OK - I got a different error from running doveadm backup on a non-broken account - see my other email :)

From tss at iki.fi  Fri Jan 27 03:00:15 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 27 Jan 2012 03:00:15 +0200
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F21F683.3080200@gedalya.net>
References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net>
 <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net>
 <4F21F0EA.5090700@gedalya.net> <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi>
 <4F21F683.3080200@gedalya.net>
Message-ID: 

On 27.1.2012, at 2.57, Gedalya wrote:

>>>> # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all
>>>> doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid)
>>> Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61
>>>
>>> But doveadm import doesn't preserve UIDs.
>> OK - I got a different error from running doveadm backup on a non-broken account - see my other email :)

The GUID error is the same. The crash is probably the result of it. Try if upgrading fixes it.

From gedalya at gedalya.net  Fri Jan 27 03:03:33 2012
From: gedalya at gedalya.net (Gedalya)
Date: Thu, 26 Jan 2012 20:03:33 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: 
References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net>
 <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net>
 <4F21F0EA.5090700@gedalya.net> <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi>
 <4F21F683.3080200@gedalya.net>
Message-ID: <4F21F7E5.1020606@gedalya.net>

On 01/26/2012 08:00 PM, Timo Sirainen wrote:
> On 27.1.2012, at 2.57, Gedalya wrote:
>
>>>> # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all
>>>> doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid)
>>> Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61
>>>
>>> But doveadm import doesn't preserve UIDs.
>> OK - I got a different error from running doveadm backup on a non-broken account - see my other email :)
> The GUID error is the same. The crash is probably the result of it. Try if upgrading fixes it.
>
OK. Thank you very very much for everything so far. I'm going to wait for the changes to pop up in the prebuilt binary repository - I assume it's a matter of hours? For now I need to go eat something :-) and get back to this later; I'll post the results then.

From gedalya at gedalya.net  Fri Jan 27 06:16:42 2012
From: gedalya at gedalya.net (Gedalya)
Date: Thu, 26 Jan 2012 23:16:42 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: 
References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net>
 <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net>
 <4F21F0EA.5090700@gedalya.net> <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi>
 <4F21F683.3080200@gedalya.net>
Message-ID: <4F22252A.4070204@gedalya.net>

On 01/26/2012 08:00 PM, Timo Sirainen wrote:
> On 27.1.2012, at 2.57, Gedalya wrote:
>
>>>> # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all
>>>> doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid)
>>> Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61
>>>
>>> But doveadm import doesn't preserve UIDs.
>> OK - I got a different error from running doveadm backup on a non-broken account - see my other email :)
> The GUID error is the same. The crash is probably the result of it. Try if upgrading fixes it.
>
Yeap. Worked impeccably (doveadm backup)!! Pretty fast, too. Very impressed!

I'll have to do some very thorough testing with various clients etc., and will post interesting findings if any come up.

From alexis.lelion at gmail.com  Fri Jan 27 12:59:02 2012
From: alexis.lelion at gmail.com (Alexis Lelion)
Date: Fri, 27 Jan 2012 11:59:02 +0100
Subject: [Dovecot] LMTP : Can't handle mixed proxy/non-proxy destinations
Message-ID: 

Hello,

In my current setup, I use two mail servers to handle the user connections, and the mail is stored on a remote server using NFS (maildir layout). Dovecot is both my IMAP server and the delivery agent (LMTP via Postfix). To avoid indexing issues related to NFS, proxying is enabled on both IMAP and LMTP.
But when a mail is sent to users that are spread across the two servers, I get the error mentioned in the subject in the logs:

Jan 25 09:05:12 mail01 postfix/lmtp[23934]: A92709300DB: to=<user_on_mail02 at domain.com>, relay=mail01.domain.com[private/dovecot-lmtp], delay=0.07, delays=0.01/0/0/0.06, dsn=4.3.0, status=deferred (host mail01.domain.com[private/dovecot-lmtp] said: 451 4.3.0 <user_on_mail02 at domain.com> Can't handle mixed proxy/non-proxy destinations (in reply to RCPT TO command))

From what I saw, the mail is then put in the queue and waits until the next time Postfix scans the queue. The mail will then be correctly delivered on "mail02". However, the "queue_run_delay" Postfix parameter is set to 900, which means that the mail will be delivered with a lag of up to 15 minutes.

I was wondering if there was another way of handling this, for example by triggering an immediate queue lookup from postfix or forwarding a copy of the mail to the other server. Note that the Postfix "queue_run_delay" was increased to 15 min on purpose, so I cannot change that.

I'm using dovecot 2.0.15 on Debian Squeeze, kernel 2.6.32-5-amd64.

Thanks,

Alexis

From clube33-mail at yahoo.com  Fri Jan 27 14:32:17 2012
From: clube33-mail at yahoo.com (Gustavo)
Date: Fri, 27 Jan 2012 04:32:17 -0800 (PST)
Subject: [Dovecot] Problem with Postfix + Dovecot + MySQL + Squirrelmail
Message-ID: <1327667537.79787.YahooMailNeo@web65309.mail.ac2.yahoo.com>

Dear friends,

I'm trying to configure webmail on my server using Postfix + Dovecot + MySQL + Squirrelmail. My system is Debian 6 and the Dovecot version is:

# dovecot --version
1.2.15

But when I try to access an account in Squirrelmail I receive this message:

"ERROR: Error connecting to IMAP server: localhost. 111: Connection refused"

Looking for the problem I found this:

# service dovecot start
Starting IMAP/POP3 mail server: dovecot
Last died with error (see error log for more information): Auth process died too early - shutting down
If you have trouble with authentication failures, enable auth_debug setting. See http://wiki.dovecot.org/WhyDoesItNotWork
This message goes away after the first successful login.

And the status of dovecot is:

# service dovecot status
dovecot is not running ... failed!

The other services seem to be OK:

# service postfix status
postfix is running.

# service mysql status
/usr/bin/mysqladmin  Ver 8.42 Distrib 5.1.49, for debian-linux-gnu on x86_64
Copyright 2000-2008 MySQL AB, 2008 Sun Microsystems, Inc.
This software comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to modify and redistribute it under the GPL license

Server version     5.1.49-3
Protocol version   10
Connection         Localhost via UNIX socket
UNIX socket        /var/run/mysqld/mysqld.sock
Uptime:            32 days 14 hours 23 min 39 sec

Threads: 1  Questions: 6743  Slow queries: 0  Opens: 385  Flush tables: 1  Open tables: 47  Queries per second avg: 0.2

Looking at dovecot.conf I found some inconsistencies. In dovecot.conf:

protocol lda {
  sendmail_path = /usr/lib/sendmail
  auth_socket_path = /var/run/dovecot/auth-master
}

socket listen {
  master {
    path = /var/run/dovecot/auth-master
    mode = 0600
    user = vmail
    group = mail
  }
  client {
    path = /var/run/dovecot/auth-client
    mode = 0660
    user = vmail
    group = mail
  }
}

But I can't find these files on the system!

/var/run/dovecot# ls
total 20K
drwxr-xr-x 3 root root    4.0K Jan 27 11:35 .
drwxr-xr-x 8 root root    4.0K Jan 27 09:33 ..
srw------- 1 root root       0 Jan 27 11:35 auth-worker.26163
srwxrwxrwx 1 root root       0 Jan 27 11:35 dict-server
lrwxrwxrwx 1 root root      25 Jan 27 11:35 dovecot.conf -> /etc/dovecot/dovecot.conf
drwxr-x--- 2 root dovecot 4.0K Jan 27 11:35 login
-rw------- 1 root root      43 Jan 27 11:35 master-fatal.lastlog
-rw------- 1 root root       6 Jan 27 11:35 master.pid

/var/run/dovecot# ls login/
total 12K
drwxr-x--- 2 root dovecot 4.0K Jan 27 11:35 .
drwxr-xr-x 3 root root    4.0K Jan 27 11:35 ..
srw-rw---- 1 root dovecot    0 Jan 27 11:35 default
-rw-r--r-- 2 root root     230 Jan 23 19:12 ssl-parameters.dat

I think maybe that is the problem. Does anyone know how I can fix it? Or what is the real problem?

Thanks for any help!

--
Gustavo

From odhiambo at gmail.com  Fri Jan 27 17:28:50 2012
From: odhiambo at gmail.com (Odhiambo Washington)
Date: Fri, 27 Jan 2012 18:28:50 +0300
Subject: [Dovecot] v2.1.rc5 released
In-Reply-To: <4F21BCB6.6030908@hardwarefreak.com>
References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi>
 <20120125233930.GA17183@doctor.nl2k.ab.ca>
 <20120126000126.GA19765@doctor.nl2k.ab.ca>
 <4F21BCB6.6030908@hardwarefreak.com>
Message-ID: 

On Thu, Jan 26, 2012 at 23:51, Stan Hoeppner wrote:

> On 1/25/2012 6:01 PM, The Doctor wrote:
> > BSD/OS 4.3.1
>
> A defunct/dead operating system, last released in 2003, support
> withdrawn in 2004. BSDI went belly up. Wind River acquired and then
> killed BSD/OS. You're using a dead, 9 year old OS, that hasn't seen
> official updates for 8 years.
>
> Do you think it's fair to ask application developers to support the
> oddities of your one-of-a-kind, ancient, patchwork of a platform?
>
> We've had this discussion before. And I don't believe you ever provided
> a sane rationale for continuing to use an OS that's been officially dead
> for 8 years. What is the reason you are unable or unwilling to migrate
> to a newer and supported no-cost BSD variant, or Linux distro?
>
> You're trying to run bleeding edge Dovecot, compiling it from source, on
> an 8 year old platform...
>
Maybe "The Doctor" has no idea how to migrate. I see no other sane reason to continue running that OS.

--
Best regards,
Odhiambo WASHINGTON,
Nairobi,KE
+254733744121/+254722743223
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
I can't hear you -- I'm using the scrambler.
Please consider the environment before printing this email.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 652 bytes
Desc: not available
URL: 

From mcazzador at gmail.com  Fri Jan 27 18:48:31 2012
From: mcazzador at gmail.com (Matteo Cazzador)
Date: Fri, 27 Jan 2012 17:48:31 +0100
Subject: [Dovecot] dovecot imap cluster
Message-ID: 

Hello, I'm using Postfix as my SMTP server, and I need to choose an IMAP server with some special features.

I have a customer with 3 different geographic locations.

Every location has a mail server for the same domain (example.com).

If user1 at example.com receives mail from outside, the mail goes to every location's server.

My problem now: is it possible to synchronize the state (mail flags) of user1's IMAP folder mails on every location's server?

For example, if user1 reads a mail on server 1, is it possible to change the flag of the same mail file on server 2 and server 3?

Is it possible to use dsync for it?

I need something like an IMAP cluster. Or a post-processing action when an IMAP mail is read.

I can't use a distributed file system.

Thanks a lot

--
Respect the environment: if it's not necessary, don't print this mail.
******************************************
Ing. Matteo Cazzador
Email: mcazzador at gmail.com
******************************************

From info at simonecaruso.com  Fri Jan 27 20:11:59 2012
From: info at simonecaruso.com (Simone Caruso)
Date: Fri, 27 Jan 2012 19:11:59 +0100
Subject: [Dovecot] dovecot imap cluster
In-Reply-To: 
References: 
Message-ID: <4F22E8EF.7070609@simonecaruso.com>

On 27/01/2012 17:48, Matteo Cazzador wrote:
> Hello, I'm using Postfix as my SMTP server, and I need to choose an IMAP
> server with some special features.
>
> I have a customer with 3 different geographic locations.
>
> Every location has a mail server for the same domain (example.com).
>
> If user1 at example.com receives mail from outside, the mail goes to
> every location's server.
>
> My problem now: is it possible to synchronize the state (mail flags)
> of user1's IMAP folder mails on every location's server?
>
> For example, if user1 reads a mail on server 1, is it possible to change
> the flag of the same mail file on server 2 and server 3?
>
> Is it possible to use dsync for it?
>
> I need something like an IMAP cluster. Or a post-processing action when
> an IMAP mail is read.
>
> I can't use a distributed file system.
>
> Thanks a lot
>
Synchronize your storage with DRBD (or asynchronous replication like rsync) and use dovecot director for connection persistence.

--
Simone Caruso

From me at junc.org  Fri Jan 27 22:53:02 2012
From: me at junc.org (Benny Pedersen)
Date: Fri, 27 Jan 2012 21:53:02 +0100
Subject: [Dovecot] v2.1.rc5 released
In-Reply-To: <4F21BCB6.6030908@hardwarefreak.com>
References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi>
 <20120125233930.GA17183@doctor.nl2k.ab.ca>
 <20120126000126.GA19765@doctor.nl2k.ab.ca>
 <4F21BCB6.6030908@hardwarefreak.com>
Message-ID: 

On Thu, 26 Jan 2012 14:51:02 -0600, Stan Hoeppner wrote:

> You're trying to run bleeding edge Dovecot, compiling it from source,
> on an 8 year old platform...

I remember FreeBSD 4.9 installed from two 1440 kB floppy disks. Why is it so hard to keep upgrading without reinstalling? Gentoo/Funtoo keeps "emerge world" going forever, and portage exists on FreeBSD.

From tss at iki.fi  Fri Jan 27 22:57:05 2012
From: tss at iki.fi (Timo Sirainen)
Date: Fri, 27 Jan 2012 22:57:05 +0200
Subject: [Dovecot] dovecot imap cluster
In-Reply-To: <4F22E8EF.7070609@simonecaruso.com>
References: <4F22E8EF.7070609@simonecaruso.com>
Message-ID: <3797A713-4DA9-4AEC-A155-006E3574BB6C@iki.fi>

On 27.1.2012, at 20.11, Simone Caruso wrote:

>> I have a customer with 3 different geographic locations.
>>
>> Every location has a mail server for the same domain (example.com).
>>
>> If user1 at example.com receives mail from outside, the mail goes to
>> every location's server.
>>
>> My problem now: is it possible to synchronize the state (mail flags)
>> of user1's IMAP folder mails on every location's server?
>
> Synchronize your storage with DRBD (or asynchronous replication like rsync)
> and use dovecot director for connection persistence.

There are a couple of problems with DRBD and most (all?) other filesystem-based solutions when doing multi-master replication across wide geographic locations:

1. Multi-master requires synchronous replication -> latency may be very high -> performance probably is bad enough that the system is unusable.
2. Network outages are still common -> you can't handle split-brain situations at the filesystem level without either a) loss of availability (everyone's email down) or b) data loss/corruption (what do you do when multiple sites have modified the same file?)

With dsync-based replication it's possible to avoid both of these problems, because application-level replication can intelligently handle situations where asynchronous replication results in data conflicts. (This kind of conflict resolution is also what I hope to do with some NoSQL database in the future when Dovecot supports them.) I've been working on dsync-based easy-to-use replication recently, and it's almost in a condition where I'm going to start using it myself (maybe this weekend).

From doctor at doctor.nl2k.ab.ca  Fri Jan 27 23:03:11 2012
From: doctor at doctor.nl2k.ab.ca (The Doctor)
Date: Fri, 27 Jan 2012 14:03:11 -0700
Subject: [Dovecot] v2.1.rc5 released
In-Reply-To: 
References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi>
 <20120125233930.GA17183@doctor.nl2k.ab.ca>
 <20120126000126.GA19765@doctor.nl2k.ab.ca>
 <4F21BCB6.6030908@hardwarefreak.com>
Message-ID: <20120127210310.GA2218@doctor.nl2k.ab.ca>

On Fri, Jan 27, 2012 at 09:53:02PM +0100, Benny Pedersen wrote:
> On Thu, 26 Jan 2012 14:51:02 -0600, Stan Hoeppner wrote:
>
>> You're trying to run bleeding edge Dovecot, compiling it from source, on
>> an 8 year old platform...
>
> I remember FreeBSD 4.9 installed from two 1440 kB floppy disks. Why is it
> so hard to keep upgrading without reinstalling? Gentoo/Funtoo keeps
> "emerge world" going forever, and portage exists on FreeBSD.

I got 2.1rc to work on this old work horse, just that the --as-needed flag needs to be edited out of 21 files. It might be easier for the configure step to look up which version of ld you have, and whether it needs the --as-needed flag.

--
Member - Liberal International  This is doctor at nl2k.ab.ca Ici doctor at nl2k.ab.ca
God, Queen and country! Never Satan President Republic! Beware AntiChrist rising!
https://www.fullyfollow.me/rootnl2k
Birthdate : 29 Jan 1969 Croydon, Surrey, UK

From me at junc.org  Fri Jan 27 23:31:55 2012
From: me at junc.org (Benny Pedersen)
Date: Fri, 27 Jan 2012 22:31:55 +0100
Subject: [Dovecot] v2.1.rc5 released
In-Reply-To: <20120127210310.GA2218@doctor.nl2k.ab.ca>
References: <901DE631-5790-41BF-89A9-73BBB544FDF7@iki.fi>
 <20120125233930.GA17183@doctor.nl2k.ab.ca>
 <20120126000126.GA19765@doctor.nl2k.ab.ca>
 <4F21BCB6.6030908@hardwarefreak.com>
 <20120127210310.GA2218@doctor.nl2k.ab.ca>
Message-ID: <70d5df2910c161d844f6dbb7aa8fef8c@junc.org>

On Fri, 27 Jan 2012 14:03:11 -0700, The Doctor wrote:

> It might be easier for the configure step to look up
> which version of ld you have, and whether it needs the --as-needed
> flag.

Reply sent privately. Keep up the good work on FreeBSD :=)

From me at junc.org  Sat Jan 28 00:05:57 2012
From: me at junc.org (Benny Pedersen)
Date: Fri, 27 Jan 2012 23:05:57 +0100
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: <4F20D718.9010805@gedalya.net>
References: <4F20D718.9010805@gedalya.net>
Message-ID: <2989c8bf4cccf90002e99389385c97d8@junc.org>

On Wed, 25 Jan 2012 23:31:20 -0500, Gedalya wrote:

> I'm facing the need to migrate from a proprietary IMAP server to
> Dovecot. The migration must be as smooth and transparent as possible.
Set up Dovecot and make it listen on 127.0.0.2 only, and modify your current server to listen only on 127.0.0.1, so that you can have the 2 IMAP servers running at the same time.

The next step is here:
http://www.howtoforge.com/how-to-migrate-mailboxes-between-imap-servers-with-imapsync

When all accounts are transferred, stop the old server and make Dovecot listen on any IP. Done. It worked for me when I changed from Courier-IMAP to Dovecot.

From gedalya at gedalya.net  Sat Jan 28 00:35:40 2012
From: gedalya at gedalya.net (Gedalya)
Date: Fri, 27 Jan 2012 17:35:40 -0500
Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs?
In-Reply-To: 
References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net>
 <4F21E92C.4090509@gedalya.net> <4F21ED26.6020908@gedalya.net>
 <4F21F0EA.5090700@gedalya.net> <59F2504B-1B62-4036-A4DD-F476C9D0A02A@iki.fi>
 <4F21F683.3080200@gedalya.net>
Message-ID: <4F2326BC.608@gedalya.net>

On 01/26/2012 08:00 PM, Timo Sirainen wrote:
> On 27.1.2012, at 2.57, Gedalya wrote:
>
>>>> # doveadm -o imapc_user=jedi1 at example.com -o imapc_password=***** import -u jedi1 at example.com imapc: "" all
>>>> doveadm(jedi1 at example.com): Error: Copying box=INBOX uid=1 failed: Message GUID not available in this server (guid)
>>> Fixed: http://hg.dovecot.org/dovecot-2.1/rev/f3e000992f61
>>>
>>> But doveadm import doesn't preserve UIDs.
>> OK - I got a different error from running doveadm backup on a non-broken account - see my other email :)
> The GUID error is the same. The crash is probably the result of it. Try if upgrading fixes it.
>
This is what I ended up doing. I have the production machine acting as a dovecot imap server, and as a proxy for accounts not yet migrated. It's running dovecot 2.0.15, with 6 TB of directly attached storage.

Per Timo's instructions, I set up a quick VM running debian wheezy and the latest dovecot 2.1, copied the config from the production server with tiny modifications, and connected it to the same mysql user database. I gave this machine the same hostname as the production machine, just so that the maildir filenames end up looking neat. I don't know if this has anything more than psychological value :-)

I mounted the storage from the production machine (sshfs surprisingly didn't seem slower than NFS) and set up dovecot 2.1 to find the mailboxes under there. Then things like

doveadm -o imapc_user=jedi1 at example.com -o imapc_password=****** backup -u jedi1 at example.com -R imapc:

started doing the job. No output, no problems.

So far the only glitch I noticed: I have dovecot autocreate a Spam folder, and when I start Windows Live Mail on an account that used to be proxied, after it has been migrated and is served by dovecot, it doesn't find the Spam folder until I click "Download all folders". We have thousands of mailboxes being read from every conceivable client, so there will be more tiny issues like this. Can't wait to test a BlackBerry.

Other than that, things work as intended - UID and UIDVALIDITY seem to be preserved, and the clients don't seem to notice the migration or react to it in any way.

What's left is to wrap a proper process around this to lock the mailbox - essentially to put the right things in the database at the beginning and at the end of the process. Looks beautiful.

From kyle.lafkoff at cpanel.net  Sat Jan 28 00:57:15 2012
From: kyle.lafkoff at cpanel.net (Kyle Lafkoff)
Date: Fri, 27 Jan 2012 16:57:15 -0600
Subject: [Dovecot] Test suite?
Message-ID: <5319F037-A973-45EE-9129-93489C026619@cpanel.net>

Hi,

I am building an RPM for Dovecot.
Is there a test suite available I could use during the build to verify proper functionality? Thanks!

Kyle

From tss at iki.fi  Sat Jan 28 01:15:53 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 01:15:53 +0200
Subject: [Dovecot] Test suite?
In-Reply-To: <5319F037-A973-45EE-9129-93489C026619@cpanel.net>
References: <5319F037-A973-45EE-9129-93489C026619@cpanel.net>
Message-ID: <37E8F766-8456-49D2-8360-DB70288E7A8A@iki.fi>

On 28.1.2012, at 0.57, Kyle Lafkoff wrote:

> I am building an RPM for Dovecot. Is there a test suite available I could use during the build to verify proper functionality? Thanks!

It would be nice to have a proper, finished test suite testing all kinds of functionality. Unfortunately I haven't had time to write such a thing, and no one's tried to help create one.

There is "make check" that you can run, which goes through some unit tests, but it's not very useful in catching bugs.

There is also the imaptest tool (http://imapwiki.org/ImapTest), which is very useful in catching bugs. I've been planning on creating a comprehensive test suite by creating Dovecot-specific scripts for imaptest and running them against many different Dovecot configurations (mbox/maildir/sdbox/mdbox formats, each against different kinds of namespaces, as well as many other tests). That plan has existed for several years now, but unfortunately only in my head. Perhaps soon I can hire someone else to do that via my company. :)

From stan at hardwarefreak.com  Sat Jan 28 02:23:35 2012
From: stan at hardwarefreak.com (Stan Hoeppner)
Date: Fri, 27 Jan 2012 18:23:35 -0600
Subject: [Dovecot] dovecot imap cluster
In-Reply-To: <3797A713-4DA9-4AEC-A155-006E3574BB6C@iki.fi>
References: <4F22E8EF.7070609@simonecaruso.com>
 <3797A713-4DA9-4AEC-A155-006E3574BB6C@iki.fi>
Message-ID: <4F234007.9030907@hardwarefreak.com>

On 1/27/2012 2:57 PM, Timo Sirainen wrote:
> On 27.1.2012, at 20.11, Simone Caruso wrote:
>
>>> I have a customer with 3 different geographic locations.
>>>
>>> Every location has a mail server for the same domain (example.com).
>>>
>>> If user1 at example.com receives mail from outside, the mail goes to
>>> every location's server.
>>>
>>> My problem now: is it possible to synchronize the state (mail flags)
>>> of user1's IMAP folder mails on every location's server?
>>
>> Synchronize your storage with DRBD (or asynchronous replication like rsync)
>> and use dovecot director for connection persistence.
>
> There are a couple of problems with DRBD and most (all?) other filesystem-based solutions when doing multi-master replication across wide geographic locations:
>
> 1. Multi-master requires synchronous replication -> latency may be very high -> performance probably is bad enough that the system is unusable.
>
> 2. Network outages are still common -> you can't handle split-brain situations at the filesystem level without either a) loss of availability (everyone's email down) or b) data loss/corruption (what do you do when multiple sites have modified the same file?)
>
> With dsync-based replication it's possible to avoid both of these problems, because application-level replication can intelligently handle situations where asynchronous replication results in data conflicts. (This kind of conflict resolution is also what I hope to do with some NoSQL database in the future when Dovecot supports them.) I've been working on dsync-based easy-to-use replication recently, and it's almost in a condition where I'm going to start using it myself (maybe this weekend).
Can you provide a basic diagram/high-level description of how this dsync replication would be configured to work over a 2 node wide area network? Are we looking at something like periodic scripts, something more automatic, a replication daemon?

--
Stan

From tss at iki.fi  Sat Jan 28 02:32:15 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 02:32:15 +0200
Subject: [Dovecot] dovecot imap cluster
In-Reply-To: <4F234007.9030907@hardwarefreak.com>
References: <4F22E8EF.7070609@simonecaruso.com>
 <3797A713-4DA9-4AEC-A155-006E3574BB6C@iki.fi>
 <4F234007.9030907@hardwarefreak.com>
Message-ID: 

On 28.1.2012, at 2.23, Stan Hoeppner wrote:

>> With dsync-based replication it's possible to avoid both of these problems, because application-level replication can intelligently handle situations where asynchronous replication results in data conflicts. (This kind of conflict resolution is also what I hope to do with some NoSQL database in the future when Dovecot supports them.) I've been working on dsync-based easy-to-use replication recently, and it's almost in a condition where I'm going to start using it myself (maybe this weekend).
>
> Can you provide a basic diagram/high-level description of how this dsync
> replication would be configured to work over a 2 node wide area network?

I'll write a description at some point.. In any case it's meant to be more scalable than just 2 nodes, so the idea is to have the userdb lookup return the 2 (or more) replicas.

> Are we looking at something like periodic scripts, something more
> automatic, a replication daemon?

It's a replication daemon that basically calls "doveadm sync" when needed (via a doveadm server connection). Initially it's not as optimal from a performance point of view as it could be, but it should get better. :)

From dmiller at amfes.com  Sat Jan 28 09:15:33 2012
From: dmiller at amfes.com (Daniel L. Miller)
Date: Fri, 27 Jan 2012 23:15:33 -0800
Subject: [Dovecot] Crash on mail folder delete
In-Reply-To: <4F209878.5040505@amfes.com>
References: <4F20922C.60206@amfes.com> <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi>
 <4F20939E.4010903@amfes.com> <4F209878.5040505@amfes.com>
Message-ID: 

On 1/25/2012 4:04 PM, Daniel L. Miller wrote:
> On 1/25/2012 3:43 PM, Daniel L. Miller wrote:
>> On 1/25/2012 3:42 PM, Timo Sirainen wrote:
>>> On 26.1.2012, at 1.37, Daniel L. Miller wrote:
>>>
>>>> Attempting to delete a folder from within the trash folder using
>>>> Thunderbird. I see the following in the log:
>>> Dovecot version?
>>>
>> 2.1.rc3. I'm compiling rc5 now...
>>
> Error still there on rc5.
>
Can I do anything to help find this? Folders are still shown in Trash - unable to delete.

--
Daniel

From user+dovecot at localhost.localdomain.org  Sat Jan 28 17:34:16 2012
From: user+dovecot at localhost.localdomain.org (Pascal Volk)
Date: Sat, 28 Jan 2012 16:34:16 +0100
Subject: [Dovecot] v2.1.rc5 (85a9b5236b6c) Error: lmtp client: DNS lookup of $FQDN failed: connect(dns-client) failed: No such file or directory
Message-ID: <4F241578.2090904@localhost.localdomain.org>

When the Sieve plugin tries to send a vacation message or redirect a message to another address, it fails.
dovecot: lmtp(6412, user at example.com): Error: lmtp client: DNS lookup of orange.example.com failed: connect(dns-client) failed: No such file or directory
dovecot: lmtp(6412, user at example.com): Error: dAIOClYSJE8MGQAAhQ0vrQ: sieve: msgid=<4F241255.2060900 at example.com>: failed to redirect message to (refer to server log for more information)

But the dns-client sockets are created when Dovecot starts up:

# find /usr/local/var/run/dovecot -name dns-client -exec ls -l {} +
srw-rw-rw- 1 root staff 0 Jan 28 16:15 /usr/local/var/run/dovecot/dns-client
srw-rw-rw- 1 root root  0 Jan 28 16:15 /usr/local/var/run/dovecot/login/dns-client

Hum, is it Dovecot or Pigeonhole (dovecot-2.1-pigeonhole 1600:b2a456e15ed5)?

Regards,
Pascal

--
The trapper recommends today: c01dcofe.1202816 at localdomain.org

From adrian.minta at gmail.com  Sat Jan 28 17:48:53 2012
From: adrian.minta at gmail.com (Adrian Minta)
Date: Sat, 28 Jan 2012 17:48:53 +0200
Subject: [Dovecot] XFS Developer Takes Shots At Btrfs, EXT4
Message-ID: <4F2418E5.2020107@gmail.com>

Nice article about XFS improvements:
http://tinyurl.com/7pvr9ju

From jd.beaubien at gmail.com  Sat Jan 28 17:59:39 2012
From: jd.beaubien at gmail.com (Jean-Daniel Beaubien)
Date: Sat, 28 Jan 2012 10:59:39 -0500
Subject: [Dovecot] maildir vs mdbox
Message-ID: 

Hi,

I am planning on running a test between maildir and mdbox to see which is a better fit for my use case, and I'm just looking for general advice/recommendations. I will post any results I obtain here.

Important question: I have multiple users hitting the same email account at the same time (either via thunderbird or with custom webmail apps). Can this be a problem with mdbox? I remember having huge issues with mbox a decade ago because of this. Maildir fixed this... will mdbox reintroduce this problem? This is a very important point for me.

Here is my use case:
- Ubuntu server (any specific recommendations on FS to use?)
- Standard PC hardware (core i5 or i7, a few gigs of RAM, HDDs at first, probably SSDs afterwards, nothing very fancy)
- Serving only a handful of email accounts, but some of the accounts have over 3 million emails in them (with individual mail folders holding 100k+ emails)
- Will use latest dovecot (2.1 when it comes out)
- fts-lucene or fts-solr?

-jd

From tss at iki.fi  Sat Jan 28 18:05:48 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 18:05:48 +0200
Subject: [Dovecot] maildir vs mdbox
In-Reply-To: 
References: 
Message-ID: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi>

On 28.1.2012, at 17.59, Jean-Daniel Beaubien wrote:

> I am planning on running a test between maildir and mdbox to see which is
> a better fit for my use case, and I'm just looking for general
> advice/recommendations. I will post any results I obtain here.

Maildir is good for reliability, since it's just about impossible to corrupt, and even in case of filesystem corruption it's easier to recover than other formats. mdbox is good if you want the best performance.

> Important question: I have multiple users hitting the same email account at
> the same time. Can this be a problem with mdbox?

No problem.

> - Serving only a handful of email accounts, but some of the accounts have
> over 3 million emails in them (with individual mail folders holding 100k+
> emails)

Maildir gets slow with that many mails in one folder.

> - fts-lucene or fts-solr?

fts-lucene uses the latest CLucene version, which is a little old. With fts-solr you can use the latest Solr/Lucene.
So as long as you don't mind setting up a Solr instance, it should be better. The good thing about fts-lucene is that you can simply enable it and it works without any external servers.

From jd.beaubien at gmail.com  Sat Jan 28 18:13:59 2012
From: jd.beaubien at gmail.com (Jean-Daniel Beaubien)
Date: Sat, 28 Jan 2012 11:13:59 -0500
Subject: [Dovecot] maildir vs mdbox
In-Reply-To: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi>
References: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi>
Message-ID: 

Wow, incredible response time :)

I have 1 more question which I forgot to put in the initial post. Considering my use case (small number of accounts but a lot of emails per account, and I should add that they are mostly small emails, most under 5k, a lot under 30k), what mdbox settings would you recommend I start testing with (mdbox_rotate_size and mdbox_rotate_interval)?

-JD

On Sat, Jan 28, 2012 at 11:05 AM, Timo Sirainen wrote:

> On 28.1.2012, at 17.59, Jean-Daniel Beaubien wrote:
>
> > I am planning on running a test between maildir and mdbox to see which is
> > a better fit for my use case, and I'm just looking for general
> > advice/recommendations. I will post any results I obtain here.
>
> Maildir is good for reliability, since it's just about impossible to
> corrupt, and even in case of filesystem corruption it's easier to recover
> than other formats. mdbox is good if you want the best performance.
>
> > Important question: I have multiple users hitting the same email account
> > at the same time. Can this be a problem with mdbox?
>
> No problem.
>
> > - Serving only a handful of email accounts, but some of the accounts have
> > over 3 million emails in them (with individual mail folders holding 100k+
> > emails)
>
> Maildir gets slow with that many mails in one folder.
>
> > - fts-lucene or fts-solr?
>
> fts-lucene uses the latest CLucene version, which is a little old. With
> fts-solr you can use the latest Solr/Lucene. So as long as you don't mind
> setting up a Solr instance, it should be better. The good thing about
> fts-lucene is that you can simply enable it and it works without any
> external servers.

From tss at iki.fi  Sat Jan 28 18:37:19 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 18:37:19 +0200
Subject: [Dovecot] maildir vs mdbox
In-Reply-To: 
References: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi>
Message-ID: <548DDD91-D0F1-49F7-8E8D-3EA03DF72397@iki.fi>

On 28.1.2012, at 18.13, Jean-Daniel Beaubien wrote:

> Considering my use case (small number of accounts but a lot of emails per
> account, and I should add that they are mostly small emails, most under 5k,
> a lot under 30k), what mdbox settings would you recommend I start testing
> with (mdbox_rotate_size and mdbox_rotate_interval)?

mdbox_rotate_interval is useful only if you want smaller incremental backups (so files that are backed up no longer change unless messages are deleted). Its default is 0 (I just fixed example-config, which showed it as 1day).

I don't really know about mdbox_rotate_size. It would be nice if someone were to test different values over a longer period and report how it affects disk IO.
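To make that test concrete, here is a minimal configuration sketch for one of the two test boxes; the values below are illustrative assumptions taken from the discussion, not recommendations:

mail_location = mdbox:~/mdbox
# the second test box would use 80M:
mdbox_rotate_size = 20M
# the default; try e.g. 1d only if smaller incremental backups matter:
mdbox_rotate_interval = 0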
From jd.beaubien at gmail.com  Sat Jan 28 19:02:46 2012
From: jd.beaubien at gmail.com (Jean-Daniel Beaubien)
Date: Sat, 28 Jan 2012 12:02:46 -0500
Subject: [Dovecot] maildir vs mdbox
In-Reply-To: <548DDD91-D0F1-49F7-8E8D-3EA03DF72397@iki.fi>
References: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi>
 <548DDD91-D0F1-49F7-8E8D-3EA03DF72397@iki.fi>
Message-ID: 

On Sat, Jan 28, 2012 at 11:37 AM, Timo Sirainen wrote:

> On 28.1.2012, at 18.13, Jean-Daniel Beaubien wrote:
>
> > Considering my use case (small number of accounts but a lot of emails per
> > account, and I should add that they are mostly small emails, most under
> > 5k, a lot under 30k), what mdbox settings would you recommend I start
> > testing with (mdbox_rotate_size and mdbox_rotate_interval)?
>
> mdbox_rotate_interval is useful only if you want smaller incremental
> backups (so files that are backed up no longer change unless messages are
> deleted). Its default is 0 (I just fixed example-config, which showed it as
> 1day).
>
To be honest, the smaller incremental backup part is interesting. That, along with auto-gzip of the mdbox files, is very interesting for me.

> I don't really know about mdbox_rotate_size. It would be nice if someone
> were to test different values over a longer period and report how it
> affects disk IO.
>
I was thinking of doing a test with 20MB and 80MB, looking at the results, and going from there.

Btw, when I migrate my emails from Maildir to mdbox, dsync should take into account the rotate_size parameter. If I want to change the rotate_size parameter, I simply edit the config file, change the parameter (erase the mdbox folder?) and re-run dsync. Is that correct?

From tss at iki.fi  Sat Jan 28 19:27:53 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 19:27:53 +0200
Subject: [Dovecot] Crash on mail folder delete
In-Reply-To: 
References: <4F20922C.60206@amfes.com> <54B5D728-EC26-4633-A927-7EC040043BF5@iki.fi>
 <4F20939E.4010903@amfes.com> <4F209878.5040505@amfes.com>
Message-ID: 

On 28.1.2012, at 9.15, Daniel L. Miller wrote:

> Can I do anything to help find this? Folders are still shown in Trash - unable to delete.

gdb backtrace would be helpful: http://dovecot.org/bugreport.html and doveconf -n and the folder name.

From tss at iki.fi  Sat Jan 28 19:29:18 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 19:29:18 +0200
Subject: [Dovecot] v2.1.rc5 (85a9b5236b6c) Error: lmtp client: DNS lookup of $FQDN failed: connect(dns-client) failed: No such file or directory
In-Reply-To: <4F241578.2090904@localhost.localdomain.org>
References: <4F241578.2090904@localhost.localdomain.org>
Message-ID: <64779967-206F-4F44-8F01-32810EE0795A@iki.fi>

On 28.1.2012, at 17.34, Pascal Volk wrote:

> When the Sieve plugin tries to send a vacation message or redirect
> a message to another address, it fails.
>
> dovecot: lmtp(6412, user at example.com): Error: lmtp client: DNS lookup of orange.example.com failed: connect(dns-client) failed: No such file or directory

Fixed:
http://hg.dovecot.org/dovecot-2.1/rev/bc2eea348f55
http://hg.dovecot.org/dovecot-2.1/rev/32318f1588d4

The same problem exists in v2.0 also, but I didn't bother to fix it there. A workaround is to use IP instead of host in submission_host.
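For the workaround just mentioned, a sketch of the relevant setting; the address is a placeholder for the MTA's IP, not a value from the thread:

# A literal IP avoids the failing DNS lookup on unfixed versions:
submission_host = 192.0.2.25:25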
From tss at iki.fi Sat Jan 28 19:30:05 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:30:05 +0200 Subject: [Dovecot] v2.1.rc5 (85a9b5236b6c) Error: lmtp client: DNS lookup of $FQDN failed: connect(dns-client) failed: No such file or directory In-Reply-To: <64779967-206F-4F44-8F01-32810EE0795A@iki.fi> References: <4F241578.2090904@localhost.localdomain.org> <64779967-206F-4F44-8F01-32810EE0795A@iki.fi> Message-ID: <36FD5315-1EB7-4F70-AB4E-9E6C1D535747@iki.fi> On 28.1.2012, at 19.29, Timo Sirainen wrote: > On 28.1.2012, at 17.34, Pascal Volk wrote: > >> When the Sieve plugin tries to send a vacation message or redirect >> a message to another address it fails. >> >> dovecot: lmtp(6412, user at example.com): Error: lmtp client: DNS lookup of orange.example.com failed: connect(dns-client) failed: No such file or directory > > Fixed: http://hg.dovecot.org/dovecot-2.1/rev/bc2eea348f55 http://hg.dovecot.org/dovecot-2.1/rev/32318f1588d4 > > The same problem exists in v2.0 also, but I didn't bother to fix it there. A workaround is to use IP instead of host in submission_host. Oh, clarification: With LMTP it just happens to work with v2.0, but with dovecot-lda it doesn't work. From tss at iki.fi Sat Jan 28 19:32:48 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:32:48 +0200 Subject: [Dovecot] LMTP : Can't handle mixed proxy/non-proxy destinations In-Reply-To: References: Message-ID: <33BD52FA-1FE0-46D5-A1E8-9A54C406BE64@iki.fi> On 27.1.2012, at 12.59, Alexis Lelion wrote: > Jan 25 09:05:12 mail01 postfix/lmtp[23934]: A92709300DB: to=< > user_on_mail02 at domain.com>, relay=mail01.domain.com[private/dovecot-lmtp], > delay=0.07, delays=0.01/0/0/0.06, dsn=4.3.0, status=deferred (host > mail01.domain.com[private/dovecot-lmtp] said: 451 4.3.0 < > user_on_mail02 at domain.com> Can't handle mixed proxy/non-proxy destinations > (in reply to RCPT TO command)) > > I was wondering if there was another way of handling this, for example > by triggering an immediate queue lookup from postfix or forwarding a > copy of the mail to the other server. Note that the postfix > "queue_run_delay" was increased to 15min on purpose, so I cannot change > that. It would be possible to change the code to support mixed destinations, but it's probably not a simple change and I have other things to do.. Maybe you could work around it so that LMTP always proxies the mails, to localhost as well, but to a different port which doesn't do proxying at all. From tss at iki.fi Sat Jan 28 19:45:13 2012 From: tss at iki.fi (Timo Sirainen) Date: Sat, 28 Jan 2012 19:45:13 +0200 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: <4F21E92C.4090509@gedalya.net> References: <4F20D718.9010805@gedalya.net> <4F21E4CD.3070001@gedalya.net> <4F21E92C.4090509@gedalya.net> Message-ID: <3F3C09E9-1E8F-4243-BC39-BAEA38AF5300@iki.fi> On 27.1.2012, at 2.00, Gedalya wrote: > Starting program: /usr/bin/doveadm -o imapc_user=jedi at example.com -o imapc_password=**** backup -u jedi at example.com -R imapc: > > Program received signal SIGSEGV, Segmentation fault. > mailbox_log_iter_open_next (iter=0x80cbd90) at mailbox-log.c:213 > 213 mailbox-log.c: No such file or directory. 
> in mailbox-log.c

This crash is now fixed, so there's no need to give the /tmp/imapc path anymore: http://hg.dovecot.org/dovecot-2.1/rev/7b94d1c8a6e7

From tss at iki.fi  Sat Jan 28 19:51:08 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 19:51:08 +0200
Subject: [Dovecot] Problem with Postfix + Dovecot + MySQL + Squirrelmail
In-Reply-To: <1327667537.79787.YahooMailNeo@web65309.mail.ac2.yahoo.com>
References: <1327667537.79787.YahooMailNeo@web65309.mail.ac2.yahoo.com>
Message-ID: <8C57281B-2C18-4C19-9F80-57BDF77D83B4@iki.fi>

On 27.1.2012, at 14.32, Gustavo wrote:

> # service dovecot start
> Starting IMAP/POP3 mail server: dovecot
> Last died with error (see error log for more information): Auth process died too early - shutting down

No need to keep guessing the problem. "See error log for more information" like it says. http://wiki.dovecot.org/Logging

From tss at iki.fi  Sat Jan 28 19:55:09 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 19:55:09 +0200
Subject: [Dovecot] problem compiling imaptest under solaris
In-Reply-To: <89f61bff49f4c5343be06dd45459b14a@imapproxy.hrz>
References: <89f61bff49f4c5343be06dd45459b14a@imapproxy.hrz>
Message-ID: <3A621688-A7AE-4C08-96EA-D9668ECA02D1@iki.fi>

On 25.1.2012, at 16.43, Jürgen Obermann wrote:

> today I tried to compile imaptest under solaris 10 with studio 11 compiler and got the following error:
>
> gmake[2]: Entering directory `/net/fileserv/export/sunsrc/src/imaptest-20111119/src'
> source='client.c' object='client.o' libtool=no \
> DEPDIR=.deps depmode=none /bin/bash ../depcomp \
> cc -DHAVE_CONFIG_H -I. -I. -I.. -I/opt/local/include/dovecot -I/usr/local/include -fast -xarch=v8plusa -I/usr/sfw/include -c client.c
> "/opt/local/include/dovecot/imap-util.h", line 6: warning: useless declaration
> "client-state.h", line 6: warning: useless declaration
> "client.c", line 655: operand cannot have void type: op "=="
> "client.c", line 655: operands have incompatible types:
> const void "==" int
> cc: acomp failed for client.c

http://hg.dovecot.org/imaptest/rev/7e490e59f1ee should fix it?

From tss at iki.fi  Sat Jan 28 19:57:29 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 19:57:29 +0200
Subject: [Dovecot] Password auth scheme question with mysql
In-Reply-To: <4F1F46D7.7050600@wildgooses.com>
References: <4F1F2B7F.3070005@wildgooses.com> <4F1F46D7.7050600@wildgooses.com>
Message-ID: <143E640C-EE04-4B5B-B5A5-991AF3C2D567@iki.fi>

On 25.1.2012, at 2.03, Ed W wrote:

> The error seems to be that I set the "pass" variable in my password_query to set the master password for the upstream proxied-to server. I can't actually remember now why this was required, but it was necessary to allow the proxy to work correctly in the past. I guess this assumption needs revisiting now since it can't be used if the plain password isn't in the database...

I'm not sure if I understand correctly, but if you need the user's plaintext password it's in the %w variable (assuming plaintext authentication).
So a common configuration is to use:

'%w' AS pass

From tss at iki.fi  Sat Jan 28 19:58:55 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 19:58:55 +0200
Subject: [Dovecot] [PATCH] autoconf small fix
In-Reply-To: 
References: 
Message-ID: <1BE2A6DE-DC86-4BC4-BFBC-E58A57361368@iki.fi>

On 24.1.2012, at 17.58, Luca Di Vizio wrote:

> the attached patch seems to solve a warning from autoconf:
>
> libtoolize: Consider adding `AC_CONFIG_MACRO_DIR([m4])' to configure.in and
> libtoolize: rerunning libtoolize, to keep the correct libtool macros in-tree.

I have considered it before, but I remember at one point there was some reason why I didn't want to do it. I just can't remember the reason anymore, maybe there isn't any.. But I don't really understand why libtoolize keeps complaining about that, since it works just fine as it is.

From tss at iki.fi  Sat Jan 28 20:06:01 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 20:06:01 +0200
Subject: [Dovecot] Quota-warning and setresgid
In-Reply-To: 
References: 
Message-ID: <480D0593-2405-42B5-8EA9-9A66CD8F3B97@iki.fi>

On 10.1.2012, at 11.34, l.chelchowski wrote:

> Jan 10 10:15:06 lda: Debug: auth input: tester at domain.eu home=/home/vmail/domain.eu/tester/ mail=maildir:/home/vmail/domain.eu/tester/:INDEX=/var/mail/vmail/domain.eu/tester at domain.eu/index/public uid=101 gid=12 quota_rule=*:storage=2097 acl_groups=

Note that the userdb lookup returns gid=12(mail).

> Jan 10 10:15:06 lda(tester at domain.eu): Fatal: setresgid(12(mail),12(mail),101(vmail)) failed with euid=101(vmail): Operation not permitted

But you're running it with gid=101(vmail).

> mail_gid = vmail
> mail_privileged_group = vmail
> mail_uid = vmail

Here you're also using gid=101(vmail). (The mail_privileged_group=vmail is a useless setting BTW.)

> userdb {
>   args = /usr/local/etc/dovecot/dovecot-sql.conf
>   driver = sql
> }

My guess for the best fix: change the user_query not to return uid or gid fields at all.

From tss at iki.fi  Sat Jan 28 20:23:12 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 20:23:12 +0200
Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command
In-Reply-To: <201201231355.15051.jesus.navarro@bvox.net>
References: <201201201724.41631.jesus.navarro@bvox.net> <4F19BD71.9000603@iki.fi>
 <201201231355.15051.jesus.navarro@bvox.net>
Message-ID: <30046BB5-6E1C-41E5-9B04-787F568DE604@iki.fi>

On 23.1.2012, at 14.55, Jesús M. Navarro wrote:

>>> I'm having problems on a maildir due to dovecot returning an UID 0 to an
>>> UID THREAD REFS command:
>
> I'm sending to your personal address a whole maildir that reproduces the bug
> (it's very short) to avoid having it published in the mail archives.

Thanks, I finally looked at this. The problem happens only when the THREADing isn't done for all messages. I thought this would have been a much more complex bug. Fixed: http://hg.dovecot.org/dovecot-2.0/rev/57498cad6ab9

From tss at iki.fi  Sat Jan 28 20:29:36 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 20:29:36 +0200
Subject: [Dovecot] where is subscribed list stored?
In-Reply-To: <4F1C1536.1000407@makuch.org>
References: <4F1C1536.1000407@makuch.org>
Message-ID: <544CBFAD-1A55-422A-9292-2D876E65AE53@iki.fi>

On 22.1.2012, at 15.55, Michael Makuch wrote:

> I use dovecot locally for internal-only access to my email archives, of which I have many gigs. Over time I end up subscribing to a couple dozen different IMAP email folders.
> Problem is that periodically my list of subscribed folders gets zapped to none, and I have to go and re-subscribe to a dozen or two folders again.
>
> Has anyone seen this happen? It looks like the list of subscribed folders is in ~/Mail/.subscriptions, and I can see in my daily backup that it reflects what appears in TBird. What might be zapping it? I use multiple email clients simultaneously on different hosts. (IOW I leave them open.) Is this a problem? Does dovecot manage that in some way? Or is that my problem? I don't think this is the problem, since it only occurs a few times per year. If it were the problem I'd expect it to occur much more frequently.

No idea, but you could prevent it by making sure that Dovecot can't change the subscriptions:

mail_location = mbox:~/Mail:CONTROL=~/mail-subscriptions

mkdir ~/mail-subscriptions
mv ~/Mail/.subscriptions ~/mail-subscriptions
chmod 0500 ~/mail-subscriptions

I thought Dovecot would also log an error if the client tried to change subscriptions, but it looks like it doesn't. It only returns a failure to the client:

a unsubscribe INBOX
a NO [NOPERM] No permission to modify subscriptions

From tss at iki.fi  Sat Jan 28 22:07:02 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 22:07:02 +0200
Subject: [Dovecot] MySQL server has gone away
In-Reply-To: 
References: 
Message-ID: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi>

On 13.1.2012, at 20.29, Mark Moseley wrote:

> If there are multiple hosts, it seems like the most robust thing to do
> would be to exhaust the existing connections and if none of those
> succeed, then start a new connection to one of them. It will probably
> result in much more convoluted logic but it'd probably match better
> what people expect from a retry.

Done: http://hg.dovecot.org/dovecot-2.0/rev/4e7676b890f1

From tss at iki.fi  Sat Jan 28 22:24:49 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 22:24:49 +0200
Subject: [Dovecot] imap-login process_limit reached
In-Reply-To: <4F158473.1000901@orlitzky.com>
References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com>
Message-ID: <2D5E0681-DF1F-4798-83BF-54648B2DAFB4@iki.fi>

On 17.1.2012, at 16.23, Michael Orlitzky wrote:

> First of all, feature request:
>
> doveconf -d
>   show the default value of all settings

Done: http://hg.dovecot.org/dovecot-2.1/rev/41cb0217b7c3

From tss at iki.fi  Sat Jan 28 22:42:21 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 22:42:21 +0200
Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): dsync umlaut problems
In-Reply-To: <4F0FA0A7.10909@localhost.localdomain.org>
References: <4F0FA0A7.10909@localhost.localdomain.org>
Message-ID: <7D563028-0149-4A06-A7DF-9A3F7B84F805@iki.fi>

On 13.1.2012, at 5.10, Pascal Volk wrote:

> All umlauts in mailbox names are lost after converting mbox/Maildir
> mailboxes to mdbox.

Looks like it was a generic problem in v2.1 dsync.
Fixed: http://hg.dovecot.org/dovecot-2.1/rev/ef6f3b7f6038

From tss at iki.fi  Sat Jan 28 22:44:45 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 22:44:45 +0200
Subject: [Dovecot] moving mail out of alt storage
In-Reply-To: 
References: <87sjnya3z5.fsf@algae.riseup.net> <1316077133.12936.18.camel@hurina>
 <87obylafsw.fsf_-_@algae.riseup.net>
Message-ID: 

On 12.1.2012, at 20.32, Mark Moseley wrote:

>>> On Wed, 2011-09-14 at 23:17 -0400, Micah Anderson wrote:
>>>> I moved some mail into the alt storage:
>>>>
>>>> doveadm altmove -u johnd at example.com seen savedbefore 1w
>>>>
>>>> and now I want to move it back to the regular INBOX, but I can't see how
>>>> I can do that with either 'altmove' or 'mailbox move'.
>>>
>>> Is this sdbox or mdbox? With sdbox you could simply "mv" the files. Or
>>> apply patch: http://hg.dovecot.org/dovecot-2.0/rev/1910c76a6cc9
>>
>> This is mdbox, which is why I am not sure how to operate, because I am
>> used to individual files as with maildir.
>>
>> micah
>
> I'm curious about this too. Is moving the m.# file out of the ALT
> path's storage/ directory into the non-ALT storage/ directory
> sufficient? Or will that cause odd issues?

You can manually move m.* files to alt storage and back. Just make sure that the same file isn't being simultaneously modified by Dovecot or you'll corrupt it.

From tss at iki.fi  Sat Jan 28 23:04:24 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 23:04:24 +0200
Subject: [Dovecot] dovecot 2.0.15 - purge errors
In-Reply-To: <87hb00run6.fsf@alfa.kjonca>
References: <87hb00run6.fsf@alfa.kjonca>
Message-ID: <88D79565-2FEC-4B69-88F3-FC6F6AAB435A@iki.fi>

On 13.1.2012, at 8.20, Kamil Jońca wrote:

> Dovecot 2.0.15, debian package. Have I lost some mails? How can I check
> what is in the *.broken file?

You can look at the .broken file with a text editor, for example :)

> --8<---------------cut here---------------start------------->8---
> $ doveadm -v purge
> doveadm(kjonca): Error: Corrupted dbox file /home/kjonca/Mail/0/storage/m.6469 (around offset=291530): purging found mismatched offsets (291500 vs 299692, 60/215)

299692 - 291500 = 8192 = the output stream's buffering size. I guess what happened is that at some point earlier Dovecot crashed while it was saving a message, but it had managed to write 8192 bytes. Now purging notices the extra 8192 bytes and wonders what to do about them, so it starts an index rebuild, which probably adds it as a new message to the mailbox.

In the future this check should probably be done before appending the next message to mdbox, so it's noticed earlier, and it should probably delete the message instead of adding a partially saved message to the mailbox.

> doveadm(kjonca): Error: Corrupted dbox file /home/kjonca/Mail/0/storage/m.6469 (around offset=599914): metadata header has bad magic value

This is about the same error as above.

So, in short: nothing to worry about. Although you could look into why the earlier saving crashed in the first place.
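Following that advice, a sketch of how one might inspect and recover; the .broken filename is an assumption derived from the path in the log above, and the force-resync step is only needed if errors persist after the purge:

# Look at the saved copy of the corrupted file:
less /home/kjonca/Mail/0/storage/m.6469.broken
# Re-run the purge, then force an index/storage rebuild if needed:
doveadm -v purge -u kjonca
doveadm force-resync -u kjonca INBOX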
From robert at schetterer.org  Sat Jan 28 23:06:01 2012
From: robert at schetterer.org (Robert Schetterer)
Date: Sat, 28 Jan 2012 22:06:01 +0100
Subject: [Dovecot] MySQL server has gone away
In-Reply-To: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi>
References: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi>
Message-ID: <4F246339.708@schetterer.org>

Am 28.01.2012 21:07, schrieb Timo Sirainen:
> On 13.1.2012, at 20.29, Mark Moseley wrote:
>
>> If there are multiple hosts, it seems like the most robust thing to do
>> would be to exhaust the existing connections and if none of those
>> succeed, then start a new connection to one of them. It will probably
>> result in much more convoluted logic but it'd probably match better
>> what people expect from a retry.
>
> Done: http://hg.dovecot.org/dovecot-2.0/rev/4e7676b890f1

Hi Timo, doc/example-config/dovecot-sql.conf.ext from hg has something like:

# Database connection string. This is driver-specific setting.
# HA / round-robin load-balancing is supported by giving multiple host
# settings, like: host=sql1.host.org host=sql2.host.org

but I don't find it in http://wiki2.dovecot.org/AuthDatabase/SQL

--
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria

From tss at iki.fi  Sat Jan 28 23:47:56 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 23:47:56 +0200
Subject: [Dovecot] dovecot 2.0.15 - purge errors
In-Reply-To: <88D79565-2FEC-4B69-88F3-FC6F6AAB435A@iki.fi>
References: <87hb00run6.fsf@alfa.kjonca> <88D79565-2FEC-4B69-88F3-FC6F6AAB435A@iki.fi>
Message-ID: <3F7BA98D-9295-4823-80E5-A647FBD71D68@iki.fi>

On 28.1.2012, at 23.04, Timo Sirainen wrote:

> 299692 - 291500 = 8192 = the output stream's buffering size. I guess what happened is that at some point earlier Dovecot crashed while it was saving a message, but it had managed to write 8192 bytes. Now purging notices the extra 8192 bytes and wonders what to do about them, so it starts an index rebuild, which probably adds it as a new message to the mailbox.
>
> In the future this check should probably be done before appending the next message to mdbox, so it's noticed earlier

Done: http://hg.dovecot.org/dovecot-2.1/rev/bde005e302e0

> and it should probably delete the message instead of adding a partially saved message to the mailbox.

Not done. Safer to not delete any data.

From tss at iki.fi  Sat Jan 28 23:54:17 2012
From: tss at iki.fi (Timo Sirainen)
Date: Sat, 28 Jan 2012 23:54:17 +0200
Subject: [Dovecot] howto disable indexing on dovecot-lda ?
In-Reply-To: <4F0DC747.4070505@gmail.com>
References: <4F06D5D9.20001@gmail.com> <4F06DFF5.40707@hardwarefreak.com>
 <4F06F0E7.904@gmail.com> <4F0DC747.4070505@gmail.com>
Message-ID: 

On 11.1.2012, at 19.30, Adrian Minta wrote:

> Hello,
>
> I tested with "mail_location = whatever-you-have-now:INDEX=MEMORY" and it seems to help, but in the meantime I found another option, completely undocumented, that seems to do exactly what I wanted:
>
> protocol lda {
>   mailbox_list_index_disable = yes
> }
>
> Does anyone know exactly what "mailbox_list_index_disable" does and if it is still available in the 2.0 and 2.1 branches?

mailbox_list_index_disable does absolutely nothing in v2.0, and it defaults to "no" in v2.1 also. It's about a different kind of index.
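For reference, the INDEX=MEMORY approach that did help in the quoted test can be scoped to delivery only; a sketch, with the maildir path as a placeholder for whatever mail_location is otherwise in use:

protocol lda {
  # Deliveries update only in-memory indexes; IMAP keeps using the on-disk ones.
  mail_location = maildir:~/Maildir:INDEX=MEMORY
}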
From tss at iki.fi Sun Jan 29 00:00:27 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:00:27 +0200 Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH In-Reply-To: <20120111155746.BD7BDDA030B2B@bmail06.one.com> References: <20120111155746.BD7BDDA030B2B@bmail06.one.com> Message-ID: On 11.1.2012, at 17.57, Anders wrote: > Sorry, apparently I was a bit too fast there. ADDTO and REMOVEFROM should not > be sent by a client, but I think that a client can send CONTEXT as a hint to > the server, see > > http://tools.ietf.org/html/rfc5267#section-4.2 Yes, that was a bug. Thanks, fixed: http://hg.dovecot.org/dovecot-2.0/rev/fd16e200f0f7 From tss at iki.fi Sun Jan 29 00:04:16 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:04:16 +0200 Subject: [Dovecot] sieve under lmtp using wrong homedir ? In-Reply-To: References: Message-ID: On 11.1.2012, at 17.35, Frank Post wrote: > All is working well except lmtp. Sieve scripts are correctly saved under > /var/vmail/test.com/test/sieve, but under lmtp sieve will use > /var/vmail//testuser/ > Uid testuser has mail=test at test.com configured in ldap. > > As I could see in the debug logs, there is a difference between the auth > "master out" lines, but why ? .. > Jan 11 14:39:53 auth: Debug: master in: USER 1 testuser > service=lmtp lip=10.234.201.9 rip=10.234.201.4 This means that Dovecot LMTP got: RCPT TO:<testuser> instead of: RCPT TO:<test at test.com> You probably should fix your userdb lookup so that it would return "unknown user" instead of accepting it. But the real problem is in your MTA setup anyway. From tss at iki.fi Sun Jan 29 00:17:44 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:17:44 +0200 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <4F246339.708@schetterer.org> References: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi> <4F246339.708@schetterer.org> Message-ID: <9B43B5C1-8375-43E9-8CA3-722F601846A2@iki.fi> On 28.1.2012, at 23.06, Robert Schetterer wrote: > doc/example-config/dovecot-sql.conf.ext > from hg > has something like > > # Database connection string. This is driver-specific setting. > # HA / round-robin load-balancing is supported by giving multiple host > # settings, like: host=sql1.host.org host=sql2.host.org > > but I don't find it in > http://wiki2.dovecot.org/AuthDatabase/SQL I added something about it there. From tss at iki.fi Sun Jan 29 00:20:06 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:20:06 +0200 Subject: [Dovecot] maildir vs mdbox In-Reply-To: References: <959D4700-F076-45A4-9C42-47ED218A50BC@iki.fi> <548DDD91-D0F1-49F7-8E8D-3EA03DF72397@iki.fi> Message-ID: On 28.1.2012, at 19.02, Jean-Daniel Beaubien wrote: > Btw, when I migrate my emails from Maildir to mdbox, dsync should take into > account the rotate_size parameter. If I want to change the rotate_size > parameter, I simply edit the config file, change the parameter (erase the > mdbox folder?) and re-run dsync. Is that correct? Yes. You can also give the -o mdbox_rotate_size=X parameter to dsync to override the config. The new mdbox_rotate_size takes effect immediately, so if you increase it Dovecot may start appending new mails to old files. The existing files aren't immediately shrunk, but during purge, when new files are written, the files can become smaller.
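A concrete example of the override (the username and size here are hypothetical; command syntax as in v2.0/v2.1):

$ dsync -u jdoe -o mdbox_rotate_size=50M backup mdbox:~/mdbox
$ doveadm -o mdbox_rotate_size=50M purge -u jdoe   # purge rewrites storage files, so they honor the new limit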
From tss at iki.fi Sun Jan 29 00:26:01 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:26:01 +0200 Subject: [Dovecot] compressed mboxes very slow In-Reply-To: <8739blw6gl.fsf@alfa.kjonca> References: <87iptnoans.fsf@alfa.kjonca> <8739blw6gl.fsf@alfa.kjonca> Message-ID: <0C550F94-3CAE-4B0E-9E95-B6E1A708DBA0@iki.fi> I wonder if this patch helps here: http://hg.dovecot.org/dovecot-2.0/rev/9b2931607063 At least I can't now see any slowness with either v2.1 or the latest v2.0. But I don't know if I would have slowness with older versions either.. From robert at schetterer.org Sun Jan 29 00:27:22 2012 From: robert at schetterer.org (Robert Schetterer) Date: Sat, 28 Jan 2012 23:27:22 +0100 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <9B43B5C1-8375-43E9-8CA3-722F601846A2@iki.fi> References: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi> <4F246339.708@schetterer.org> <9B43B5C1-8375-43E9-8CA3-722F601846A2@iki.fi> Message-ID: <4F24764A.1080207@schetterer.org> Am 28.01.2012 23:17, schrieb Timo Sirainen: > On 28.1.2012, at 23.06, Robert Schetterer wrote: > >> doc/example-config/dovecot-sql.conf.ext >> from hg >> has something like >> >> # Database connection string. This is driver-specific setting. >> # HA / round-robin load-balancing is supported by giving multiple host >> # settings, like: host=sql1.host.org host=sql2.host.org >> >> but i dont find it in >> http://wiki2.dovecot.org/AuthDatabase/SQL > > I added something about it there. > cool thanks ! -- Best Regards MfG Robert Schetterer Germany/Munich/Bavaria From tss at iki.fi Sun Jan 29 00:36:25 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:36:25 +0200 Subject: [Dovecot] 2.0.17: Index lost -> SAVEDON lost as well? In-Reply-To: <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi> References: <20120109074057.GC22506@charite.de> <17E5EF90-C133-4F70-A377-0D93694181DE@iki.fi> Message-ID: On 9.1.2012, at 16.57, Timo Sirainen wrote: >> After that, the SAVEDON date for all mails was reset to today: > > Yeah. The "save date" is stored only in index. And index rebuild drops all those fields. I guess this could/should be fixed in index rebuild. Fixed: http://hg.dovecot.org/dovecot-2.0/rev/c30ea8aec902 From tss at iki.fi Sun Jan 29 00:38:54 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:38:54 +0200 Subject: [Dovecot] Attribute Cache flush errors on FreeBSD 8.2 In-Reply-To: <4F079021.4090001@kernick.org> References: <4F079021.4090001@kernick.org> Message-ID: On 7.1.2012, at 2.21, Phil Kernick wrote: > I'm running dovecot 2.0.16 on FreeBSD 8.2 with the mail spool and indexes on an NFS server. > > Lines like the following keep appearing in syslog for access to each mailbox: > > Error: nfs_flush_attr_cache_fd_locked: fchown(/home/philk/Mail/Deleted) failed: Bad file descriptor I've given up on trying to make mail_nfs_* settings work. If you have only one Dovecot server, you don't need these settings at all. If you have more than one Dovecot server, use director (and then you also don't need these settings). 
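For comparison, the usual baseline for a single Dovecot server on NFS looks roughly like this (a sketch of the wiki's NFS guidance, not a drop-in config; with a director setup the mail_nfs_* switches likewise stay off):

mmap_disable = yes
mail_fsync = always
mail_nfs_storage = no
mail_nfs_index = no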
From tss at iki.fi Sun Jan 29 00:40:53 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 00:40:53 +0200 Subject: [Dovecot] 2.1.rc1 (056934abd2ef): virtual plugin mailbox search pattern In-Reply-To: <4EF4BB6C.3050902@gmx.de> References: <4EF4BB6C.3050902@gmx.de> Message-ID: <1F065FD5-11B7-44C0-A4CB-96B346801986@iki.fi> On 23.12.2011, at 19.33, e-frog wrote: > For testing purposes I created the following folders with each containing one unread message > > INBOX, INBOX/level1 and INBOX/level1/level2 .. > Result: virtual/unread shows only 1 unseen message. Further tests showed it's the one from INBOX. The mails from the deeper levels are not found. What mailbox format are you using? Maybe I fixed this with http://hg.dovecot.org/dovecot-2.1/rev/54e74090fb42 From ronald at rmacd.com Sun Jan 29 01:16:19 2012 From: ronald at rmacd.com (Ronald MacDonald) Date: Sat, 28 Jan 2012 18:16:19 -0500 Subject: [Dovecot] Migration to multi-dbox and SiS Message-ID: Dear list, A huge thank-you first of all for all the work that's gone into Dovecot itself. I'm rebuilding a mail server next week and so am taking the rare opportunity to reconsider all the options I've had running over the past couple of years. Around the time of the last re-build (2010), there had been some discussion on single instance storage, which was quite new on Dovecot around then. I chickened out of setting it up though. Now with it having been in the wild for a couple of years, I wonder, how have people found SiS to behave? Additionally, though there was talk of the prospect of it being merged with 2.x, am I right in thinking it's not yet in the main project? I couldn't find any 2.x changelogs that could confirm this. With best wishes, Ronald. From tss at iki.fi Sun Jan 29 01:53:59 2012 From: tss at iki.fi (Timo Sirainen) Date: Sun, 29 Jan 2012 01:53:59 +0200 Subject: [Dovecot] Migration to multi-dbox and SiS In-Reply-To: References: Message-ID: <7E4C5ED4-BE84-4638-8D2E-51D25FF88EB5@iki.fi> On 29.1.2012, at 1.16, Ronald MacDonald wrote: > Around the time of the last re-build (2010), there had been some discussion on single instance storage, which was quite new on Dovecot around then. I chickened out of setting it up though. Now with it having been in the wild for a couple of years, I wonder, how have people found SiS to behave? Additionally, though there was talk of the prospect of it being merged with 2.x, am I right in thinking it's not yet in the main project? I couldn't find any 2.x changelogs that could confirm this. It's in v2.0 and used by at least a few installations. Apparently it works quite well. As long as you have a pretty typical setup it should work fine. It gets more complex if you want to spread the data across multiple mount points. Backups may also be more difficult, since filesystem snapshots are pretty much the only 100% safe way to do them. BTW. SIS, not SiS ("Instance", not "in") From stan at hardwarefreak.com Sun Jan 29 02:25:50 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Sat, 28 Jan 2012 18:25:50 -0600 Subject: [Dovecot] XFS Developer Takes Shots At Btrfs, EXT4 In-Reply-To: <4F2418E5.2020107@gmail.com> References: <4F2418E5.2020107@gmail.com> Message-ID: <4F24920E.6080500@hardwarefreak.com> On 1/28/2012 9:48 AM, Adrian Minta wrote: > Nice article about XFS improvements: > http://tinyurl.com/7pvr9ju The "article" is strictly a badly written summary of the video. But, the video was great.
Until watching this I'd never seen Dave in a photo or video, though I correspond with him regularly on the XFS list. Nice to finally put a face and voice to a name. One of many reasons the summary is badly written is the use of present tense when referring to XFS deficiencies, specifically the part about EXT4 being 20-50x faster with some metadata operations. The author writes as if this was the current state of affairs right up to Dave's recent presentation. The author misread or misinterpreted the slides or Dave's speech, and apparently has no personal knowledge of Linux filesystem development. This 20-50x EXT4 advantage disappeared in 2009, almost 3 years ago. I've mentioned many of these "new" improvements on this list over the past 2-3 years. They're not "new". We have an "author" writing about something he knows nothing about, and making lots of mistakes in his summary. This seems to be a trend with Phoronix. They are decidedly desktop-only in orientation. Thus when they attempt to write about the big stuff they fail badly. And the title? A juvenile attempt to draw readers. Pretty pathetic. The "article" was all about Dave's presentation. Dave's 50-minute presentation took 2 "shots" of 10-15 seconds each at EXT4 and BTRFS. A better title would have been simply something like "XFS dev details improvements at Linux.Conf.Au 2012." -- Stan From moseleymark at gmail.com Sun Jan 29 06:04:44 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Sat, 28 Jan 2012 20:04:44 -0800 Subject: [Dovecot] MySQL server has gone away In-Reply-To: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi> References: <02F8EC25-CB7D-4AD1-8ED8-5CB4950B6EC5@iki.fi> Message-ID: On Sat, Jan 28, 2012 at 12:07 PM, Timo Sirainen wrote: > On 13.1.2012, at 20.29, Mark Moseley wrote: > >> If there are multiple hosts, it seems like the most robust thing to do >> would be to exhaust the existing connections and if none of those >> succeed, then start a new connection to one of them. It will probably >> result in much more convoluted logic but it'd probably match better >> what people expect from a retry. > > Done: http://hg.dovecot.org/dovecot-2.0/rev/4e7676b890f1 > Excellent, thanks! From e-frog at gmx.de Sun Jan 29 10:33:01 2012 From: e-frog at gmx.de (e-frog) Date: Sun, 29 Jan 2012 09:33:01 +0100 Subject: [Dovecot] 2.1.rc1 (056934abd2ef): virtual plugin mailbox search pattern In-Reply-To: <1F065FD5-11B7-44C0-A4CB-96B346801986@iki.fi> References: <4EF4BB6C.3050902@gmx.de> <1F065FD5-11B7-44C0-A4CB-96B346801986@iki.fi> Message-ID: <4F25043D.7000501@gmx.de> On 28.01.2012 23:40, wrote Timo Sirainen: > On 23.12.2011, at 19.33, e-frog wrote: >> For testing purposes I created the following folders with each containing one unread message >> >> INBOX, INBOX/level1 and INBOX/level1/level2 > .. >> Result: virtual/unread shows only 1 unseen message. Further tests showed it's the one from INBOX. The mails from the deeper levels are not found. > What mailbox format are you using? mdbox > Maybe I fixed this with http://hg.dovecot.org/dovecot-2.1/rev/54e74090fb42 Just tested and yes it works with the above-mentioned patch. Thanks a lot Timo!
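Since the thread never shows the folder definition itself, a minimal virtual "unread" folder would look something like this (a sketch based on the virtual plugin's documented file format; it assumes the virtual plugin is loaded and a virtual namespace pointing at ~/virtual, both assumptions here):

# ~/virtual/unread/dovecot-virtual
INBOX
INBOX/*
  unseen

The unindented lines are the mailbox patterns to search and the indented final line is the search query; the INBOX/* pattern is what should pick up the deeper levels that the linked fix repairs.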
From adrian.minta at gmail.com Sun Jan 29 12:08:44 2012 From: adrian.minta at gmail.com (Adrian Minta) Date: Sun, 29 Jan 2012 12:08:44 +0200 Subject: [Dovecot] XFS Developer Takes Shots At Btrfs, EXT4 In-Reply-To: <4F24920E.6080500@hardwarefreak.com> References: <4F2418E5.2020107@gmail.com> <4F24920E.6080500@hardwarefreak.com> Message-ID: <4F251AAC.4050803@gmail.com> On 01/29/12 02:25, Stan Hoeppner wrote: > The "article" is strictly a badly written summary of the video. But, > the video was great. Until watching this I'd never seen Dave in a photo > or video, though I correspond with him regularly on the XFS list. Nice > to finally put a face and voice to a name. Yes, the video is very nice. From CMarcus at Media-Brokers.com Sun Jan 29 20:00:49 2012 From: CMarcus at Media-Brokers.com (Charles Marcus) Date: Sun, 29 Jan 2012 13:00:49 -0500 Subject: [Dovecot] imap-login process_limit reached In-Reply-To: <2D5E0681-DF1F-4798-83BF-54648B2DAFB4@iki.fi> References: <4F14A7AA.8010507@easystreet.net> <4F158473.1000901@orlitzky.com> <2D5E0681-DF1F-4798-83BF-54648B2DAFB4@iki.fi> Message-ID: <4F258951.20006@Media-Brokers.com> On 2012-01-28 3:24 PM, Timo Sirainen wrote: > On 17.1.2012, at 16.23, Michael Orlitzky wrote: > >> First of all, feature request: >> >> doveconf -d >> show the default value of all settings > > Done: http://hg.dovecot.org/dovecot-2.1/rev/41cb0217b7c3 Awesome, thanks Timo! This makes it much easier to make sure that you aren't specifying anything which would be the same as the default, which minimizes doveconf -n 'noise'... -- Best regards, Charles From user+dovecot at localhost.localdomain.org Mon Jan 30 01:36:24 2012 From: user+dovecot at localhost.localdomain.org (Pascal Volk) Date: Mon, 30 Jan 2012 00:36:24 +0100 Subject: [Dovecot] 2.1.rc3 (1a722c7676bb): Panic: file ostream.c: line 173 (o_stream_sendv): assertion failed: (stream->stream_errno != 0) In-Reply-To: <4F10AA71.6030901@localhost.localdomain.org> References: <4F10AA71.6030901@localhost.localdomain.org> Message-ID: <4F25D7F8.7060609@localhost.localdomain.org> Looks like http://hg.dovecot.org/dovecot-2.1/rev/3c0bd1fd035b has solved the problem. -- The trapper recommends today: fabaceae.1203000 at localdomain.org From bryder at wetafx.co.nz Mon Jan 30 02:05:58 2012 From: bryder at wetafx.co.nz (Bill Ryder) Date: Mon, 30 Jan 2012 13:05:58 +1300 Subject: [Dovecot] A namespace error on 2.1rc5 Message-ID: <4F25DEE6.7020402@wetafx.co.nz> Hello all, I'm not sure if this is a bug. It's probably just an upgrade note. In summary I had no namespace section in my 2.0.17 config. When trying out 2.1rc5 no user could log in because of a namespace error. 2.1rc5 adds a default namespace clause which broke my logins (it was noted in the changelog). I seemed to fix it by just putting this in the config file: namespace inbox { inbox = yes } Long story: I've recently been testing Dovecot against Cyrus to decide where we should go for our next mail server(s). I loaded up the mail server with mail delivered via postfix, all on Dovecot 2.0.15 (I've since moved to 2.0.17) I have three dovecot directors, two backends on the same NFS mail store. With dovecot 2.0.xx the tester works fine (it's just a script which logs in and emulates Thunderbird when a user is idle - without using IDLE, so the client asks for mail every few minutes). When I moved to 2.1rc5 I got namespace errors and the user cannot log in.
The server said: dovecot-error.log-20120128:Jan 27 13:37:59 imap(ethab01): Error: user ethab01: Initialization failed: namespace configuration error: inbox=yes namespace missing The client says: * BYE Internal error occurred. Refer to server log for more information. The session looks like 0.000000 192.168.121.37 -> 192.168.121.2 TCP 33213 > imap [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=1056649457 TSER=0 WS=5 0.000036 192.168.121.2 -> 192.168.121.37 TCP imap > 33213 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 TSV=3264407631 TSER=1056649457 WS=7 0.000187 192.168.121.37 -> 192.168.121.2 TCP 33213 > imap [ACK] Seq=1 Ack=1 Win=5856 Len=0 TSV=1056649458 TSER=3264407631 0.006338 192.168.121.2 -> 192.168.121.37 IMAP Response: * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN] Dovecot ready. 0.006889 192.168.121.37 -> 192.168.121.2 TCP 33213 > imap [ACK] Seq=1 Ack=124 Win=5856 Len=0 TSV=1056649465 TSER=3264407637 0.006973 192.168.121.37 -> 192.168.121.2 IMAP Request: I ID ("x-originating-ip" "192.168.114.249" "x-originating-port" "49403" "x-connected-ip" "192.168.121.37" "x-connected-port" "143") 0.006980 192.168.121.2 -> 192.168.121.37 TCP imap > 33213 [ACK] Seq=124 Ack=178 Win=6912 Len=0 TSV=3264407638 TSER=1056649465 0.007086 192.168.121.2 -> 192.168.121.37 IMAP Response: * ID NIL 0.018471 192.168.121.2 -> 192.168.121.37 IMAP Response: * BYE Internal error occurred. Refer to server log for more information. (interestingly the tshark output strips out the user name and password which is convenient but which may mean there's not enough information?) I rolled back to 2.0.17 and it was fine again. It's the same config files for both, same maildirs etc etc. All I did was change the dovecot version from 2.0.17 to 2.1rc5 However I see from the changelog that 2.1rc5 added a default namespace inbox: diff doveconf-n.2.0.17 doveconf-n.2.1-rc5 1c1 < # 2.0.17 (684381041dc4+): /etc/dovecot/dovecot.conf --- > # 2.1.rc5: /etc/dovecot/dovecot.conf 20a21,39 > namespace inbox { > location = > mailbox Drafts { > special_use = \Drafts > } > mailbox Junk { > special_use = \Junk > } > mailbox Sent { > special_use = \Sent > } > mailbox "Sent Messages" { > special_use = \Sent > } > mailbox Trash { > special_use = \Trash > } > prefix = > } We had this section commented out in 2.0.x so there was no namespace inbox anywhere. 
============== doveconf -n for 2.0.17 (for the backends) # 2.0.17 (684381041dc4+): /etc/dovecot/dovecot.conf # OS: Linux 2.6.32-131.6.1.el6.x86_64 x86_64 Scientific Linux release 6.1 (Carbon) nfs auth_mechanisms = plain login auth_username_format = %n auth_verbose = yes debug_log_path = /var/log/dovecot/dovecot-debug.log disable_plaintext_auth = no first_valid_uid = 200 info_log_path = /var/log/dovecot/dovecot-info.log log_path = /var/log/dovecot/dovecot-error.log mail_debug = yes mail_fsync = always mail_gid = vmail mail_location = maildir:/vol/dt_mailstore1/spool/%n:INDEX=/var/indexes/%n mail_nfs_storage = yes mail_plugins = " fts fts_solr mail_log notify quota" mail_uid = vmail maildir_very_dirty_syncs = yes managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date ihave passdb { driver = pam } plugin { autocreate = Trash autocreate2 = Drafts autocreate3 = Sent autocreate4 = Templates autosubscribe = Trash autosubscribe2 = Drafts autosubscribe3 = Sent autosubscribe4 = Templates fts = solr fts_solr = break-imap-search debug url=http://dovecot-solr1.wetafx.co.nz:8080/solr/ mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename sieve = ~/.dovecot.sieve sieve_dir = ~/sieve } protocols = imap pop3 lmtp sieve service auth { unix_listener auth-userdb { group = vmail user = vmail } } service lmtp { inet_listener lmtp { address = 192.168.121.2 127.0.0.1 port = 24 } process_min_avail = 20 unix_listener /var/spool/postfix/private/dovecot-lmtp { group = postfix mode = 0660 user = postfix } } service managesieve-login { inet_listener sieve { port = 4190 } inet_listener sieve_deprecated { port = 2000 } } ssl_cert = hi folks, hi timo, hi master of "Fu". I just migrated my emails from Maildir to mbox format. I did it as I was having mail-reading speed problems with my webmail, and I did it in order to optimize. Does my current config work for me? sincerely -- http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xC2626742 gpg --keyserver pgp.mit.edu --recv-key C2626742 http://urlshort.eu fakessh @ http://gplus.to/sshfake http://gplus.to/sshswilting http://gplus.to/john.swilting https://lists.fakessh.eu/mailman/ This list is moderated by me, but all applications will be accepted provided they receive a note of presentation -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Ceci est une partie de message numériquement signée URL: From gedalya at gedalya.net Mon Jan 30 05:29:51 2012 From: gedalya at gedalya.net (Gedalya) Date: Sun, 29 Jan 2012 22:29:51 -0500 Subject: [Dovecot] IMAP to Maildir Migration preserving UIDs? In-Reply-To: References: <4F20D718.9010805@gedalya.net> Message-ID: <4F260EAF.4090408@gedalya.net> On 01/26/2012 07:27 AM, Timo Sirainen wrote: > On 26.1.2012, at 6.31, Gedalya wrote: > >> I'm facing the need to migrate from a proprietary IMAP server to Dovecot. The migration must be as smooth and transparent as possible. >> >> The mailbox format I would want to use is Maildir++. >> >> The storage format used by the current server is unknown, and I don't look forward to trying to reverse-engineer it. This leaves me with the option of reading the mailboxes using IMAP. There are tools like offlineimap or mbsync, and they do store the UID and UIDVALIDITY info.
The last piece of the puzzle is a process to properly create the dovecot-uidlist and dovecot-uidvalidity files. So far I wasn't able to find anything on this. Are there any tips? Are there any tools available to do this job, or part of it? > Get Dovecot v2.1 and configure it to work. Then for migration add to dovecot.conf: > > imapc_host = imap.example.com > imapc_port = 993 > imapc_ssl = imaps > imapc_ssl_ca_dir = /etc/ssl/certs > mail_prefetch_count = 50 > > And do the migration one user at a time: > > doveadm -o imapc_user=USERNAME -o imapc_password=PASSWORD backup -R imapc: > Now, to the issue of POP3. The old system uses the message filename for UIDL, but we need to migrate via IMAP in order to preserve IMAP info and UIDs (which have nothing to do with the POP3 UIDL in this case). So I've just finished writing a script to insert X-UIDL headers, and pop3_reuse_xuidl is doing the job. Question: Since the system currently serves in excess of 10 pop3 connections per second, would there be any performance gain from using pop3_save_uidl? Would it be faster or slower to fetch the UIDL list from the uidlist rather than look up the X-UIDL in the index? Just wondering. Also, what order does Dovecot return the UIDLs in? From tss at iki.fi Mon Jan 30 08:31:39 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 30 Jan 2012 08:31:39 +0200 Subject: [Dovecot] Mountpoints Message-ID: <563DC292-C26B-42FD-9E0D-119A5ECC451B@iki.fi> I've been thinking about mountpoints recently. There have been a few problems related to them: - If dbox mails and indexes are in different filesystems, and index fs isn't mounted and mailbox is accessed -> Dovecot rebuilds indexes from scratch, which changes UIDVALIDITY, which causes the client to redownload mails. All mails will also show up as unread. Once index fs gets mounted again, the UIDVALIDITY changes again and client again redownloads mails. What should happen instead is that Dovecot simply refuses to rebuild indexes when the index fs isn't mounted. This isn't as critical for mbox/maildir, but probably a good idea to do there as well. - If dbox's alternative storage isn't mounted and someone tries to access a mail from there -> Dovecot rebuilds indexes and sees that all mails in alt path are gone, so Dovecot deletes them from the indexes as well. Once alt fs is mounted again, the mails in there won't come back without manual index rebuild and then they have also lost flags and have updated UIDs causing clients to redownload them. So again what should happen is that Dovecot won't rebuild indexes while alt fs isn't mounted. - For dsync-based replication I need to keep a state of each mountpoint (online, offline, failover) to determine how to access a user's mails. So in the first two cases the main problem is: How does Dovecot know where a mountpoint begins? If the mountpoint is actually mounted there is no problem, because there are functions to find it (e.g. from /etc/mtab). So how to find a mountpoint that should exist, but doesn't? In some OSes Dovecot could maybe read and parse /etc/fstab, but that doesn't exist in all OSes, and do all installations even have all of the filesystems listed there anyway? (They could be in some startup script.) So, I was thinking about adding doveadm commands to explicitly tell Dovecot about the mountpoints that it needs to care about. When no mountpoints are defined Dovecot would behave as it does now.
doveadm mount add|remove <path> - add/remove mountpoint doveadm mount state [<path> [<state>]] - get/set state of mountpoint (used by replication) - if path isn't given list states of all mountpoints List of mountpoints is kept in /var/lib/dovecot/mounts. But because the dovecot directory is only accessible to root (and probably too much trouble to change that), there's another list in /var/run/dovecot/mounts. This one also contains the states of the mounts. When Dovecot starts up and can't find the mounts from rundir, it creates the list from vardir's mounts. When mail processes notice that a directory is missing, they usually autocreate it. With mountpoints enabled, Dovecot first finds the root mountpoint for the directory. The mount root is stat()ed and its parent is stat()ed. If their device numbers are equal, the filesystem is currently unmounted, and Dovecot fails instead of creating a new directory. Similar logic is used to avoid doing a dbox rebuild if its alt dir is currently in an unmounted filesystem. The main problem I see with all this is how to make sysadmins remember to use these commands when they add/remove mountpoints?.. Perhaps the additions could be automatic at startup. Whenever Dovecot sees a new mountpoint, it's added. If an old mountpoint doesn't exist at startup a warning is logged about it. Of course many of the mountpoints aren't intended for mail storage. They could be hidden from the "mount state" list by setting their state to "ignore". Dovecot could also skip some of the common known mountpoints, such as where type is proc/tmpfs/sysfs. Thoughts? From tss at iki.fi Mon Jan 30 08:34:08 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 30 Jan 2012 08:34:08 +0200 Subject: [Dovecot] Mountpoints In-Reply-To: <563DC292-C26B-42FD-9E0D-119A5ECC451B@iki.fi> References: <563DC292-C26B-42FD-9E0D-119A5ECC451B@iki.fi> Message-ID: <8CA38742-F346-4925-82D7-B282E6B284FF@iki.fi> On 30.1.2012, at 8.31, Timo Sirainen wrote: > The main problem I see with all this is how to make sysadmins remember to use these commands when they add/remove mountpoints?.. Perhaps the additions could be automatic at startup. Whenever Dovecot sees a new mountpoint, it's added. If an old mountpoint doesn't exist at startup a warning is logged about it. Of course many of the mountpoints aren't intended for mail storage. They could be hidden from the "mount state" list by setting their state to "ignore". Dovecot could also skip some of the common known mountpoints, such as where type is proc/tmpfs/sysfs. I wonder how automounts would work with this.. Probably rather randomly.. From Juergen.Obermann at hrz.uni-giessen.de Mon Jan 30 09:57:22 2012 From: Juergen.Obermann at hrz.uni-giessen.de (=?UTF-8?Q?J=C3=BCrgen_Obermann?=) Date: Mon, 30 Jan 2012 08:57:22 +0100 Subject: [Dovecot] problem compiling imaptest under solaris In-Reply-To: <3A621688-A7AE-4C08-96EA-D9668ECA02D1@iki.fi> References: <89f61bff49f4c5343be06dd45459b14a@imapproxy.hrz> <3A621688-A7AE-4C08-96EA-D9668ECA02D1@iki.fi> Message-ID: <1dfabd64651b84755e914d4510ba1310@imapproxy.hrz> Am 28.01.2012 18:55, schrieb Timo Sirainen: > On 25.1.2012, at 16.43, Jürgen Obermann wrote: >> today I tried to compile imaptest under solaris 10 with studio 11 >> compiler and got the following error: >> >> gmake[2]: Entering directory >> `/net/fileserv/export/sunsrc/src/imaptest-20111119/src' >> source='client.c' object='client.o' libtool=no \ >> DEPDIR=.deps depmode=none /bin/bash ../depcomp \ >> cc -DHAVE_CONFIG_H -I. -I. -I..
-I/opt/local/include/dovecot >> -I/usr/local/include -fast -xarch=v8plusa -I/usr/sfw/include -c >> client.c >> "/opt/local/include/dovecot/imap-util.h", line 6: warning: useless declaration >> "client-state.h", line 6: warning: useless declaration >> "client.c", line 655: operand cannot have void type: op "==" >> "client.c", line 655: operands have incompatible types: >> const void "==" int >> cc: acomp failed for client.c > > http://hg.dovecot.org/imaptest/rev/7e490e59f1ee should fix it? Yes it does. Thank you, Jürgen Obermann From f.bonnet at esiee.fr Mon Jan 30 10:37:59 2012 From: f.bonnet at esiee.fr (Frank Bonnet) Date: Mon, 30 Jan 2012 09:37:59 +0100 Subject: [Dovecot] converting from mbox to maildir ? Message-ID: <4F2656E7.8060501@esiee.fr> Hello We are planning to convert our mailhub ( freebsd 7.4 ) from mbox format to maildir format. I've read the documentation and performed some tests on another machine; it is a bit long ... I would like some feedback from guys who have done this operation and need some advice on what to convert first? - first convert INBOX then convert IMAP folders ? - first convert IMAP folders then convert INBOX ? The machine uses real users thru openldap ( pam_ldap + nss_ldap ). Another problem is disk space. The users' email data takes about 2 terabytes and I cannot duplicate it as I only have 3 TB on the raid array of the server. My idea is to use one of our NetApp NFS filers during the conversion, writing the result of the conversion to an NFS-mounted directory. Has anyone done this before? If yes I would be greatly interested in their experience. Thank you From alexis.lelion at gmail.com Mon Jan 30 11:24:02 2012 From: alexis.lelion at gmail.com (Alexis Lelion) Date: Mon, 30 Jan 2012 10:24:02 +0100 Subject: [Dovecot] LMTP : Can't handle mixed proxy/non-proxy destinations In-Reply-To: <33BD52FA-1FE0-46D5-A1E8-9A54C406BE64@iki.fi> References: <33BD52FA-1FE0-46D5-A1E8-9A54C406BE64@iki.fi> Message-ID: On 1/28/12, Timo Sirainen wrote: > On 27.1.2012, at 12.59, Alexis Lelion wrote: > >> Jan 25 09:05:12 mail01 postfix/lmtp[23934]: A92709300DB: to=< >> user_on_mail02 at domain.com>, relay=mail01.domain.com[private/dovecot-lmtp], >> delay=0.07, delays=0.01/0/0/0.06, dsn=4.3.0, status=deferred (host >> mail01.domain.com[private/dovecot-lmtp] said: 451 4.3.0 < >> user_on_mail02 at domain.com> Can't handle mixed proxy/non-proxy destinations >> (in reply to RCPT TO command)) >> >> I was wondering if there was another way of handling this, for example >> by triggering an immediate queue lookup from postfix or forwarding a >> copy of the mail to the other server. Note that the postfix >> "queue_run_delay" was increased to 15min on purpose, so I cannot change >> that. > > It would be possible to change the code to support mixed destinations, but > it's probably not a simple change and I have other things to do.. Yes I understand, this is quite a specific request, and not that impacting actually. But it would be cool if you could keep this request somewhere in your queue :-) > > Maybe you could work around it so that LMTP always proxies the mails, to > localhost as well, but to a different port which doesn't do proxying at all. Actually this was my first try, but I had proxying loops because unlike for IMAP, the LMTP server doesn't seem to support 'proxy_maybe' option yet, does it?
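Timo's suggested workaround could look something like the sketch below (untested; the second listener name, the port numbers and the %a trick are all assumptions, not a known-good config). The MTA delivers to port 24, where the passdb returns a proxy field pointing at port 26 for every user; lookups arriving via port 26 must not return proxy, which an SQL passdb could decide on with the local-port variable %a:

service lmtp {
  inet_listener lmtp {
    port = 24    # MTA delivers here; passdb returns proxy for every user
  }
  inet_listener lmtp_direct {
    port = 26    # proxy target; passdb returns no proxy field for this port
  }
}

# in the password_query, something like (assumed, untested):
#   IF('%a' = '24', 'Y', NULL) AS proxy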
From jesus.navarro at bvox.net Mon Jan 30 12:48:43 2012 From: jesus.navarro at bvox.net (=?iso-8859-1?q?Jes=FAs_M=2E?= Navarro) Date: Mon, 30 Jan 2012 11:48:43 +0100 Subject: [Dovecot] UID 0 problem while issuing an UID THREAD REFS command In-Reply-To: <30046BB5-6E1C-41E5-9B04-787F568DE604@iki.fi> References: <201201201724.41631.jesus.navarro@bvox.net> <201201231355.15051.jesus.navarro@bvox.net> <30046BB5-6E1C-41E5-9B04-787F568DE604@iki.fi> Message-ID: <201201301148.43979.jesus.navarro@bvox.net> Hi Timo: On Sábado, 28 de Enero de 2012 19:23:12 Timo Sirainen escribió: > On 23.1.2012, at 14.55, Jesús M. Navarro wrote: > >>> I'm having problems on a maildir due to dovecot returning an UID 0 to > >>> an > > > >>> UID THREAD REFS command: > > I'm sending to your personal address a whole maildir that reproduces the > > bug (it's very short) to avoid having it published in the mail archives. > > Thanks, I finally looked at this. The problem happens only when the > THREADing isn't done for all messages. I thought this would have been a > much more complex bug. Fixed: > http://hg.dovecot.org/dovecot-2.0/rev/57498cad6ab9 Thank you very much. Do you have an expected date for new packages covering this issue to be published at xi.rename-it.nl? From mark.zealey at webfusion.com Mon Jan 30 15:32:33 2012 From: mark.zealey at webfusion.com (Mark Zealey) Date: Mon, 30 Jan 2012 15:32:33 +0200 Subject: [Dovecot] Director to keep redirecting users to the same server even after all sessions closed? Message-ID: <4F269BF1.8010607@webfusion.com> Hi there, Just wondering how easy it would be to make the director continue to send a user to the same server (assuming it's still in the pool) for say 90 seconds after they have last been active (ie lmtp or pop/imap)? Basically we are working in quite a heavily cached environment so it takes perhaps 60-90 seconds for our imap servers to properly flush to our network storage, meaning if the user got put on a different server in that time we would see some issues. Presently we have fixed proxying, but I'd really like to use the director if possible to allow us to more easily add & remove servers. Mark From tss at iki.fi Mon Jan 30 15:58:37 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 30 Jan 2012 15:58:37 +0200 Subject: [Dovecot] Director to keep redirecting users to the same server even after all sessions closed? In-Reply-To: <4F269BF1.8010607@webfusion.com> References: <4F269BF1.8010607@webfusion.com> Message-ID: On 30.1.2012, at 15.32, Mark Zealey wrote: > Just wondering how easy it would be to make the director continue to send a user to the same server (assuming it's still in the pool) for say 90 seconds after they have last been active (ie lmtp or pop/imap)? Basically we are working in quite a heavily cached environment so it takes perhaps 60-90 seconds for our imap servers to properly flush to our network storage, meaning if the user got put on a different server in that time we would see some issues. Presently we have fixed proxying, but I'd really like to use the director if possible to allow us to more easily add & remove servers. Already done, and enabled by default: # How long to redirect users to a specific server after it no longer has # any connections. #director_user_expire = 15 min I added this mainly to make sure that all attribute caches have timed out.
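Given the 60-90 second cache window Mark describes, even an expiry much shorter than the default would do, for example (illustrative value only, set in the director's config):

director_user_expire = 2 min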
From Mark.Zealey at webfusion.com Mon Jan 30 16:07:16 2012 From: Mark.Zealey at webfusion.com (Mark Zealey) Date: Mon, 30 Jan 2012 14:07:16 +0000 Subject: [Dovecot] Director to keep redirecting users to the same server even after all sessions closed? In-Reply-To: References: <4F269BF1.8010607@webfusion.com>, Message-ID: Brilliant; I had read the director page in the wiki but didn't see it there & a search of the wiki text doesn't turn up the option - perhaps you could add it or is there another place to see a list of director options? Mark ________________________________________ From: Timo Sirainen [tss at iki.fi] Sent: 30 January 2012 13:58 To: Mark Zealey Cc: dovecot at dovecot.org Subject: Re: [Dovecot] Director to keep redirecting users to the same server even after all sessions closed? On 30.1.2012, at 15.32, Mark Zealey wrote: > Just wondering how easy it would be to make the director continue to send a user to the same server (assuming it's still in the pool) for say 90 seconds after they have last been active (ie lmtp or pop/imap)? Basically we are working in quite a heavily cached environment so it takes perhaps 60-90 seconds for our imap servers to properly flush to our network storage, meaning if the user got put on a different server in that time we would see some issues. Presently we have fixed proxying, but I'd really like to use the director if possible to allow us to more easily add & remove servers. Already done, and enabled by default: # How long to redirect users to a specific server after it no longer has # any connections. #director_user_expire = 15 min I added this mainly to make sure that all attribute caches have timed out. From f.bonnet at esiee.fr Mon Jan 30 19:29:04 2012 From: f.bonnet at esiee.fr (Frank Bonnet) Date: Mon, 30 Jan 2012 18:29:04 +0100 Subject: [Dovecot] INBOX and IMAP folders on different machines ? Message-ID: <4F26D360.303@esiee.fr> Hello In mbox format, would it be possible with Dovecot 2 to have two machines, one containing the INBOX and the other containing the IMAP folders? Of course this needs a frontend, but would it be possible? thanks From tss at iki.fi Mon Jan 30 22:03:50 2012 From: tss at iki.fi (Timo Sirainen) Date: Mon, 30 Jan 2012 22:03:50 +0200 Subject: [Dovecot] INBOX and IMAP folders on different machines ? In-Reply-To: <4F26D360.303@esiee.fr> References: <4F26D360.303@esiee.fr> Message-ID: <7D7B1E45-9ED4-4E34-BF1C-EE14671F15AD@iki.fi> On 30.1.2012, at 19.29, Frank Bonnet wrote: > In mbox format, would it be possible with Dovecot 2 to have two machines, > one containing the INBOX and the other containing the IMAP folders? > > Of course this needs a frontend, but would it be possible? With v2.1 I guess you could in theory do this with imapc backend. From jtam.home at gmail.com Tue Jan 31 02:03:45 2012 From: jtam.home at gmail.com (Joseph Tam) Date: Mon, 30 Jan 2012 16:03:45 -0800 (PST) Subject: [Dovecot] Mountpoints In-Reply-To: References: Message-ID: On Mon, 30 Jan 2012, dovecot-request at dovecot.org wrote: > So, I was thinking about adding doveadm commands to explicitly tell > Dovecot about the mountpoints that it needs to care about. When no > mountpoints are defined Dovecot would behave as it does now. Maybe I don't understand the subtlety of your question, but are you trying to disambiguate between a mounted filesystem and a failed mount that presents the underlying filesystem (which looks like an uninitialized index directory)?
Couldn't you write some cookie file "/mount/.../dovecot-data-root/.dovemount", whose existence will tell you whether the FS is mounted without trying to find the mount root? Oh, but then again if you have per-user mounts, that's going to get messy. Joseph Tam From deepa.malleeswaran at gmail.com Mon Jan 30 19:12:00 2012 From: deepa.malleeswaran at gmail.com (Deepa Malleeswaran) Date: Mon, 30 Jan 2012 12:12:00 -0500 Subject: [Dovecot] Help required Message-ID: Hi I use dovecot on CentOS. It was installed and configured by some other person who doesn't work here anymore. I am trying to renew ssl. But the command works fine and restarts the dovecot. But the license shows the same old expiry. Can you please help me with the same. When I type in dovecot --version, I get command not found. Please guide me! Regards, -- Deepa Malleeswaran From tss at iki.fi Tue Jan 31 02:42:33 2012 From: tss at iki.fi (Timo Sirainen) Date: Tue, 31 Jan 2012 02:42:33 +0200 Subject: [Dovecot] Mountpoints In-Reply-To: References: Message-ID: On 31.1.2012, at 2.03, Joseph Tam wrote: > On Mon, 30 Jan 2012, dovecot-request at dovecot.org wrote: > >> So, I was thinking about adding doveadm commands to explicitly tell >> Dovecot about the mountpoints that it needs to care about. When no >> mountpoints are defined Dovecot would behave as it does now. > > Maybe I don't understand the subtlety of your question, but are you > trying to disambiguate between a mounted filesystem and a failed mount > that presents the underlying filesystem (which looks like an uninitialized > index directory)? Yes. A mounted filesystem where a directory doesn't exist vs. an accidentally unmounted filesystem. > Couldn't you write some cookie file "/mount/.../dovecot-data-root/.dovemount", > whose existence will tell you whether the FS is mounted without trying to > find the mount root? This would require that existing installations create such a file or start failing after upgrade. Or that it's made optional and most people wouldn't use this functionality at all.. And I'm sure many people with a single filesystem wouldn't be all that happy creating /.dovemount or /home/.dovemount or such files. From moseleymark at gmail.com Tue Jan 31 03:24:12 2012 From: moseleymark at gmail.com (Mark Moseley) Date: Mon, 30 Jan 2012 17:24:12 -0800 Subject: [Dovecot] moving mail out of alt storage In-Reply-To: References: <87sjnya3z5.fsf@algae.riseup.net> <1316077133.12936.18.camel@hurina> <87obylafsw.fsf_-_@algae.riseup.net> Message-ID: On Sat, Jan 28, 2012 at 12:44 PM, Timo Sirainen wrote: > On 12.1.2012, at 20.32, Mark Moseley wrote: > >>>> On Wed, 2011-09-14 at 23:17 -0400, Micah Anderson wrote: >>>>> I moved some mail into the alt storage: >>>>> >>>>> doveadm altmove -u johnd at example.com seen savedbefore 1w >>>>> >>>>> and now I want to move it back to the regular INBOX, but I can't see how >>>>> I can do that with either 'altmove' or 'mailbox move'. >>>> >>>> Is this sdbox or mdbox? With sdbox you could simply "mv" the files. Or >>>> apply patch: http://hg.dovecot.org/dovecot-2.0/rev/1910c76a6cc9 >>> >>> This is mdbox, which is why I am not sure how to operate because I am >>> used to individual files as is with maildir. >>> >>> micah >>> >> >> I'm curious about this too. Is moving the m.# file out of the ALT >> path's storage/ directory into the non-ALT storage/ directory >> sufficient? Or will that cause odd issues? > > You can manually move m.* files to alt storage and back.
Just make sure that the same file isn't being simultaneously modified by Dovecot or you'll corrupt it. > Cool, good to know. Thanks! From stan at hardwarefreak.com Tue Jan 31 12:30:46 2012 From: stan at hardwarefreak.com (Stan Hoeppner) Date: Tue, 31 Jan 2012 04:30:46 -0600 Subject: [Dovecot] Help required In-Reply-To: References: Message-ID: <4F27C2D6.5070508@hardwarefreak.com> On 1/30/2012 11:12 AM, Deepa Malleeswaran wrote: > I use dovecot on CentOS. It was installed and configured by some other > person who doesn't work here anymore. I am trying to renew ssl. But the > command works fine and restarts the dovecot. But the license shows the same > old expiry. Can you please help me with the same. Please be much more specific. We need details. Log entries of errors would be very useful as well. > When I type in dovecot --version, I get command not found. Please guide me! That's strange. Are you sure you're on the right machine? What version of CentOS? -- Stan From nmilas at noa.gr Tue Jan 31 14:07:29 2012 From: nmilas at noa.gr (Nikolaos Milas) Date: Tue, 31 Jan 2012 14:07:29 +0200 Subject: [Dovecot] Renaming user account / mailbox Message-ID: <4F27D981.7060304@noa.gr> Hello, I am running dovecot-2.0.13-1_128.el5 x86_64 RPM on CentOS 5.7. All accounts are virtual, hosted on LDAP Server. We are using Maildir mailboxes. The question: What is the process to rename an existing account/mailbox? I would like to rename userx with email: userx at example.com to ux at example.com with a mailbox of ux (currently: userx) Of course the idea is that new mail will continue to be delivered to the same mailbox, although it has been renamed. How can I achieve it? Would it be enough (after changing the associated data in the associated LDAP entry) to simply rename the virtual user directory name, e.g. from /home/vmail/userx to /home/vmail/ux ? Thanks in advance, Nick From ath at b-one.net Tue Jan 31 14:36:13 2012 From: ath at b-one.net (Anders) Date: Tue, 31 Jan 2012 13:36:13 +0100 Subject: [Dovecot] A small bug and a question about CONTEXT=SEARCH Message-ID: <20120131123613.49B53A7952BCD@bmail02.one.com> Hi, My colleague just pointed me to the recent fix of this issue, thanks! From la at iki.fi Tue Jan 31 17:48:39 2012 From: la at iki.fi (Lauri Alanko) Date: Tue, 31 Jan 2012 17:48:39 +0200 Subject: [Dovecot] force-resync fails to recover all messages in mdbox Message-ID: <20120131174839.13512v46jc7ur23b.lealanko@webmail.helsinki.fi> To my understanding, when using mdbox, doveadm force-resync should be able to recover all the messages from the storage files alone, though of course losing all metadata except the initial delivery folder. However, this does not seem to be the case. For me, force-resync creates only partial indices that lose messages. The message contents are of course still in the storage files, but dovecot just doesn't seem to be aware of some of them after recreating the indices. Here is an example. 
I created a test mdbox by syncing a mailing list folder from a mbox location: $ dsync -m haskell-cafe backup mdbox:~/dbox Then I switched the location to the new mdbox: $ /usr/sbin/dovecot -n # 2.0.15: /etc/dovecot/dovecot.conf # OS: Linux 3.2.0-0.bpo.1-amd64 x86_64 Debian wheezy/sid mail_fsync = never mail_location = mdbox:~/dbox mail_plugins = zlib passdb { driver = pam } plugin { sieve = ~/etc/sieve/dovecot.sieve sieve_dir = ~/etc/sieve zlib_save = bz2 zlib_save_level = 9 } protocols = " imap" ssl_cert = References: <20120131174839.13512v46jc7ur23b.lealanko@webmail.helsinki.fi> Message-ID: <38EB3A30-DFD5-484B-852B-327BDA5E936E@iki.fi> On 31.1.2012, at 17.48, Lauri Alanko wrote: > $ doveadm search all | wc > 93236 186472 3625098 .. > Then I removed all the indices and rebuilt them: > > $ doveadm search all | wc > 43864 87728 1699590 > > Somehow dovecot lost over half of the messages! There may be a bug, and I just yesterday noticed something weird in the rebuilding code. I'll have to look into that. But anyway, "search all" isn't the proper way to test this. Try instead with: doveadm fetch guid all | sort | uniq | wc When you removed indexes Dovecot no longer knew about copies of messages. From la at iki.fi Tue Jan 31 18:34:45 2012 From: la at iki.fi (Lauri Alanko) Date: Tue, 31 Jan 2012 18:34:45 +0200 Subject: [Dovecot] force-resync fails to recover all messages in mdbox In-Reply-To: <38EB3A30-DFD5-484B-852B-327BDA5E936E@iki.fi> References: <20120131174839.13512v46jc7ur23b.lealanko@webmail.helsinki.fi> <38EB3A30-DFD5-484B-852B-327BDA5E936E@iki.fi> Message-ID: <20120131183445.545717eennh24eg5.lealanko@webmail.helsinki.fi> Quoting "Timo Sirainen" : > Try instead with: > > doveadm fetch guid all | sort | uniq | wc > > When you removed indexes Dovecot no longer knew about copies of messages. Well, well, well. This is interesting. Back with the indices created by dsync: $ doveadm fetch guid all | grep guid: | sort | uniq -c | sort -n | tail 17 guid: 1b28b22d4b2ee2885b5b81221c41201d 17 guid: 730c692395661dd62f82088804b85652 17 guid: 865e1537fddba6698e010d0b9dbddd02 17 guid: d271b6ba8af0e7fa39c16ea8ed13abcf 17 guid: d2cd391e837cf51cc85991bde814dc54 17 guid: ebce8373da6ffb134b58aca7906d61f1 18 guid: 1222b6c222ecb53fdbbec407400cba36 18 guid: 65695586efc69adc2d7294216ea88e55 19 guid: 4288f61ebbdcd44870c670439a97693b 20 guid: 080ec72aa49e2a01c8e249fe127605f6 This would explain why rebuilding the indices reduced the number of messages. 
However, those guid assignments seem really weird, because: $ doveadm fetch hdr guid 080ec72aa49e2a01c8e249fe127605f6 | grep -i '^Message-ID: ' Message-ID: <4B1ACA53.7040503 at rkit.pp.ru> Message-ID: <29bf512f0912051251u74d246afxafdfb9e5ea24342c at mail.gmail.com> Message-ID: <5e0214850912051300r3ebd0e44n61a4d6e020c94f4c at mail.gmail.com> Message-ID: <4B1ACD40.3040507 at btinternet.com> Message-Id: <200912052220.00317.daniel.is.fischer at web.de> Message-Id: <200912052225.28597.daniel.is.fischer at web.de> Message-ID: <20091205212848.GA23711 at seas.upenn.edu> Message-Id: <200912051336.13792.hgolden at socal.rr.com> Message-Id: <200912052243.03144.daniel.is.fischer at web.de> Message-Id: <0B59A706-8C41-47B9-A858-5ACE297581E1 at cs.uu.nl> Message-ID: <20091205215707.GA6161 at protagoras.phil.berkeley.edu> Message-ID: <471726.55822.qm at web113106.mail.gq1.yahoo.com> Message-ID: <4B1AD7FB.8050704 at btinternet.com> Message-ID: <5fdc56d70912051400h663a25a9w4f9b2e065a5b395e at mail.gmail.com> Message-Id: <1B613EE3-B4F8-4F6E-8A36-74BACF0C86FC at yandex.ru> Message-ID: <4B1ADA0E.5070207 at btinternet.com> Message-Id: <36C40624-B050-4A8C-8CAF-F15D84467180 at phys.washington.edu> Message-ID: Message-id: Message-ID: <29bf512f0912051423safd7842ka39c8b8b6dee1ac0 at mail.gmail.com> So all these completely unrelated messages have somehow received the same guid? And that guid is stored even in the storage files themselves so they cannot be cleaned up even with force-resync? Something is _seriously_ wrong. The complexity and opaqueness of the mdbox format is worrisome. It would ease my mind quite a bit if there were a simple tool that would just dump out the plain message contents that are stored inside the storage files, without involving any of Dovecot's index machinery. Then I would at least know that whatever happens, as long as the storage files stay intact, I can always migrate my mails into some other format. Lauri
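Two partial answers to that wish exist outside the index machinery (a sketch; both commands are assumptions about this particular setup and version, not a verified recipe). doveadm dump decodes a dbox storage file's framing and prints the per-message offsets and metadata directly from the file, and since the config above uses zlib_save = bz2, each message body inside an m.* file is an individual bzip2 stream, which generic bzip2 recovery tooling can often salvage:

$ doveadm dump ~/dbox/storage/m.1   # decode dbox file header and per-message metadata, no index needed
$ bzip2recover m.1                  # split out the embedded bz2 streams as rec*.bz2
$ bunzip2 rec*.bz2                  # plain RFC822 message text, assuming one stream per message

Note that bzip2recover splits on compressed blocks rather than streams, so very large messages may come out in pieces and need reassembly.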