[Dovecot] v2.0.9 -segfault in lib21_fts_solr_plugin.so (G.Nau)
interfaSys sàrl
interfasys at gmail.com
Sun Jan 16 21:25:37 EET 2011
Interesting, since imap_acl also makes Dovecot segfault.
Maybe there is a problem in the way Dovecot loads plugins?
Cheers,
Olivier
On 16/01/2011 19:09, dovecot-request at dovecot.org wrote:
> Send dovecot mailing list submissions to
> dovecot at dovecot.org
>
> Message: 1
> Date: Sun, 16 Jan 2011 12:50:17 +0100
> From: "G.Nau" <b404_r66 at yahoo.de>
> Subject: [Dovecot] v2.0.9 -segfault in lib21_fts_solr_plugin.so
> To: dovecot at dovecot.org
> Message-ID: <4D32DB79.8060005 at yahoo.de>
> Content-Type: text/plain; charset=ISO-8859-15
>
> After installing Dovecot 2.0.8 (just updated to 2.0.9) I installed the
> fts plugin 2.0.9, and a few seconds after reloading Dovecot I get segfaults.
>
> Jan 16 12:43:04 lin kernel: [601979.190764] imap[13313]: segfault at 18
> ip 00007f5094cd99d6 sp 00007fffb10e96a0 error 4 in
> lib21_fts_solr_plugin.so[7f5094cd6000+7000]
> Jan 16 12:43:04 lin kernel: [601979.404712] imap[13315]: segfault at 18
> ip 00007ff1f248a9d6 sp 00007fff500b0a90 error 4 in
> lib21_fts_solr_plugin.so[7ff1f2487000+7000]
> Jan 16 12:43:04 lin kernel: [601979.628830] imap[13317]: segfault at 18
> ip 00007f4b5a2809d6 sp 00007fff8d012260 error 4 in
> lib21_fts_solr_plugin.so[7f4b5a27d000+7000]
> Jan 16 12:45:30 lin kernel: [602124.948358] show_signal_msg: 6 callbacks
> suppressed
> Jan 16 12:45:30 lin kernel: [602124.948369] imap[13960]: segfault at 18
> ip 00007fa4f92ea9d6 sp 00007fff65212ee0 error 4 in
> lib21_fts_solr_plugin.so[7fa4f92e7000+7000]
> Jan 16 12:45:30 lin kernel: [602125.167202] imap[13962]: segfault at 18
> ip 00007f07a1a8d9d6 sp 00007fff3183cb10 error 4 in
> lib21_fts_solr_plugin.so[7f07a1a8a000+7000]
>
> Regards
> Gunther
>
>
> ------------------------------
>
> Message: 2
> Date: Sun, 16 Jan 2011 13:58:59 +0100
> From: Jan Škoda <lefty at multihost.cz>
> Subject: [Dovecot] fs quota backend bug
> To: dovecot at dovecot.org
> Message-ID: <4D32EB93.8070300 at multihost.cz>
> Content-Type: text/plain; charset=ISO-8859-2
>
> Hi,
>
> I maintain a Postfix mail server with a sudo-enabled Dovecot LDA using
> multiple UIDs. I want to enforce quotas per Unix user (each of which can
> have multiple domains with multiple emails set up using postfixadmin), so I
> have set up the fs quota backend with the imap_quota plugin. Unfortunately,
> while the quota appears to work according to dovecot-info.log (when I
> disable chrooting), nothing is sent over IMAP. I found several topics
> related to this bug in this mailing list, but without any reply.
>
> All files in the Maildir belong to the Unix user ofight (1042):
>
> # sudo -u ofight quota
> Disk quotas for user ofight (uid 1042):
> Filesystem blocks quota limit files quota limit
> /dev/mapper/vg-home
> 2058888 2150400 2365440 35432 537600 591360
>
> Info: Loading modules from directory: /usr/lib64/dovecot/imap
> Info: Module loaded: /usr/lib64/dovecot/imap/lib10_quota_plugin.so
> Info: Module loaded: /usr/lib64/dovecot/imap/lib11_imap_quota_plugin.so
> Info: Effective uid=1042, gid=102,
> home=/home/vmail/ofight.org/team at ofight.org/
> Info: Quota root: name=Kvota backend=fs args=mount=/home
> Info: Namespace: type=private, prefix=, sep=., inbox=yes, hidden=no,
> list=yes, subscriptions=yes
> Info: maildir: data=/home/vmail/ofight.org/team at ofight.org/
> Info: maildir++: root=/home/vmail/ofight.org/team at ofight.org, index=,
> control=, inbox=/home/vmail/ofight.org/team at ofight.org
> Info: fs quota add storage dir = /home/vmail/ofight.org/team at ofight.org
> Info: fs quota block device = /dev/mapper/vg-home
> Info: fs quota mount point = /home
> Info: fs quota mount type = jfs
> Info: Namespace: type=private, prefix=INBOX., sep=., inbox=no,
> hidden=yes, list=no, subscriptions=yes
> Info: maildir: data=/home/vmail/ofight.org/team at ofight.org/
> Info: maildir++: root=/home/vmail/ofight.org/team at ofight.org, index=,
> control=, inbox=
> Info: Namespace : Using permissions from
> /home/vmail/ofight.org/team at ofight.org: mode=0700 gid=-1
> Info: Disconnected: Logged out bytes=133/834
>
> Part of IMAP conversation captured with wireshark:
>
> getquotaroot "INBOX"
> QUOTAROOT "INBOX"
> Ok Getquotaroot completed
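For comparison, a server that has attached a quota root to the INBOX answers GETQUOTAROOT along these lines (per RFC 2087; the root name and STORAGE values below are illustrative, not from this trace):

```
a1 GETQUOTAROOT "INBOX"
* QUOTAROOT "INBOX" "Kvota"
* QUOTA "Kvota" (STORAGE 2058888 2365440)
a1 OK Getquotaroot completed.
```

In the captured session the untagged QUOTAROOT line names no quota root at all, which is why the client displays no quota even though dovecot-info.log shows the fs backend being set up.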
>
> Part of strace:
> read(0, "357 getquotaroot \"INBOX\"\r\n", 1259) = 26
> stat("/home/vmail/ofight.org/team at ofight.org/tmp", {st_dev=makedev(253,
> 2), st_ino=475158, st_mode=S_IFDIR|0700, st_nlink=2, st_uid=1042,
> st_gid=102, st_blksize=4096, st_blocks=0, st_size=1, st_atime=2011/01/16-10:07:59,
> st_mtime=2011/01/16-10:07:59, st_ctime=2011/01/16-10:07:59}) = 0
> stat("/home/vmail/ofight.org/team at ofight.org", {st_dev=makedev(253, 2),
> st_ino=475157, st_mode=S_IFDIR|0700, st_nlink=11, st_uid=1042,
> st_gid=102, st_blksize=4096, st_blocks=16, st_size=4096, st_atime=2009/05/18-12:07:57,
> st_mtime=2011/01/16-11:36:32, st_ctime=2011/01/16-11:36:32}) = 0
> stat("/home/vmail/ofight.org/team at ofight.org", {st_dev=makedev(253, 2),
> st_ino=475157, st_mode=S_IFDIR|0700, st_nlink=11, st_uid=1042,
> st_gid=102, st_blksize=4096, st_blocks=16, st_size=4096, st_atime=2009/05/18-12:07:57,
> st_mtime=2011/01/16-11:36:32, st_ctime=2011/01/16-11:36:32}) = 0
> stat("/home/vmail/ofight.org/team at ofight.org/dovecot-shared",
> 0x3f5aa9cf710) = -1 ENOENT (No such file or directory)
> stat("/home/vmail/ofight.org/team at ofight.org", {st_dev=makedev(253, 2),
> st_ino=475157, st_mode=S_IFDIR|0700, st_nlink=11, st_uid=1042,
> st_gid=102, st_blksize=4096, st_blocks=16, st_size=4096, st_atime=2009/05/18-12:07:57,
> st_mtime=2011/01/16-11:36:32, st_ctime=2011/01/16-11:36:32}) = 0
> stat("/home/vmail/ofight.org/team at ofight.org", {st_dev=makedev(253, 2),
> st_ino=475157, st_mode=S_IFDIR|0700, st_nlink=11, st_uid=1042,
> st_gid=102, st_blksize=4096, st_blocks=16, st_size=4096, st_atime=2009/05/18-12:07:57,
> st_mtime=2011/01/16-11:36:32, st_ctime=2011/01/16-11:36:32}) = 0
> access("/home/vmail/ofight.org/team at ofight.org/cur", W_OK) = 0
> setsockopt(1, SOL_TCP, TCP_CORK, [1], 4) = 0
> write(1, "* QUOTAROOT \"INBOX\"\r\n357 OK Getq"..., 53) = 53
>
> # 1.2.16: /etc/dovecot/dovecot.conf
> # OS: Linux 2.6.32-hardened-r31 x86_64 Gentoo Base System release 2.0.0 jfs
> base_dir: /var/run/dovecot/
> log_path: /var/log/dovecot.log
> info_log_path: /var/log/dovecot-info.log
> protocols: imap imaps pop3 pop3s managesieve
> listen: *, [::]
> ssl_cert_file: /etc/apache2/ssl/server.pem
> ssl_key_file: /etc/apache2/ssl/server.key
> disable_plaintext_auth: no
> shutdown_clients: no
> login_dir: /var/run/dovecot//login
> login_executable(default): /usr/libexec/dovecot/imap-login
> login_executable(imap): /usr/libexec/dovecot/imap-login
> login_executable(pop3): /usr/libexec/dovecot/pop3-login
> login_executable(managesieve): /usr/libexec/dovecot/managesieve-login
> login_greeting: Dovecot ready. Welcome to Multihost.cz
> login_process_per_connection: no
> login_processes_count: 2
> login_max_processes_count: 32
> login_max_connections: 128
> valid_chroot_dirs: /home/vmail
> max_mail_processes: 150
> first_valid_uid: 102
> mail_uid: vmail
> mail_gid: vmail
> mail_location: maildir:/home/vmail/%d/%u
> mail_debug: yes
> mail_executable(default): /usr/libexec/dovecot/imap
> mail_executable(imap): /usr/libexec/dovecot/imap
> mail_executable(pop3): /usr/libexec/dovecot/pop3
> mail_executable(managesieve): /usr/libexec/dovecot/managesieve
> mail_plugins(default): quota imap_quota
> mail_plugins(imap): quota imap_quota
> mail_plugins(pop3): quota
> mail_plugins(managesieve):
> mail_plugin_dir(default): /usr/lib64/dovecot/imap
> mail_plugin_dir(imap): /usr/lib64/dovecot/imap
> mail_plugin_dir(pop3): /usr/lib64/dovecot/pop3
> mail_plugin_dir(managesieve): /usr/lib64/dovecot/managesieve
> imap_client_workarounds(default): outlook-idle delay-newmail
> imap_client_workarounds(imap): outlook-idle delay-newmail
> imap_client_workarounds(pop3):
> imap_client_workarounds(managesieve):
> pop3_client_workarounds(default):
> pop3_client_workarounds(imap):
> pop3_client_workarounds(pop3): outlook-no-nuls oe-ns-eoh
> pop3_client_workarounds(managesieve):
> namespace:
>   type: private
>   separator: .
>   inbox: yes
>   list: yes
>   subscriptions: yes
> namespace:
>   type: private
>   separator: .
>   prefix: INBOX.
>   hidden: yes
>   list: no
>   subscriptions: yes
> lda:
>   postmaster_address: postmaster at multihost.cz
>   mail_plugins: sieve quota
>   quota_full_tempfail: yes
> auth default:
>   mechanisms: plain login skey
>   cache_size: 256
>   user: nobody
>   verbose: yes
>   passdb:
>     driver: sql
>     args: /etc/dovecot/dovecot-sql.conf
>   userdb:
>     driver: sql
>     args: /etc/dovecot/dovecot-sql.conf
>   socket:
>     type: listen
>     client:
>       path: /var/spool/postfix/private/auth
>       mode: 432
>       user: postfix
>       group: mail
>     master:
>       path: /var/run/dovecot/auth-master
>       mode: 432
>       user: vmail
>       group: mail
> plugin:
>   quota: fs:Kvota:mount=/home
>   trash: /etc/dovecot/trash.conf
>   sieve_global_path: /etc/dovecot/global.sieve
>   sieve_global_dir: /etc/dovecot/sieve-repo/
>
> Thanks,
> Lefty
>
>
> ------------------------------
>
> Message: 3
> Date: Sun, 16 Jan 2011 09:49:19 -0400
> From: Cor Bosman <cor at xs4all.nl>
> Subject: Re: [Dovecot] SSD drives are really fast running Dovecot
> To: Dovecot Mailing List <dovecot at dovecot.org>
> Message-ID: <4076DC31-518B-4233-874B-7D9328F95986 at xs4all.nl>
> Content-Type: text/plain; charset=us-ascii
>
>> This discussion has been in the context of _storing_ user email. The assumption
>> is that an OP is smart/talented enough to get his spam filters/appliances
>> killing 99% before it reaches intermediate storage or mailboxes. Thus, in the
>> context of this discussion, the average size of a spam message is irrelevant,
>> because we're talking about what goes into the mail store.
>
> The fact is, we all live in different realities, so we're all arguing about apples and oranges. If you're managing a SOHO, small company, large company, university, or in our case, an ISP, the requirements are all different. We have about a million mailboxes, about 20K active at the same time, and people pay for it.
>
> Take for example Stan's spam quote above. In the real world of an ISP, killing 99% of all spam before it hits the storage is unthinkable. We only block spam that is guaranteed to be unwanted, mostly based on technical facts that can't ever happen in normal email. But email that our scanning system flags as probable spam is just that: probable spam. We cannot just throw that away, because in the real world there are always, and I mean always, false positives. It is unthinkable to throw false positives away. So we have to put these emails in a spam folder in case the user wants to look at them. We block about 40% of all spam on technical grounds; our total spam percentage is 90%, so still about 80% of all customer email reaching the storage is spam.
>
> But in other environments throwing away all probable spam may be perfectly fine. For my SOHO I'd have no problem throwing probable spam away. I never look in my spam folder anyway, so I can't be missing much.
>
> The same goes for SSD. We use SSD drives extensively in our company, currently mostly in database servers, but our experiences have been good enough that we're slowly starting to add them to more systems, even as boot drives. But we're not yet using them in email storage. Like Brad, we're using Netapp filers because, as far as I know, they're one of the few commercially available HA filesystem companies. We've looked at EMC and Sun as well, but haven't found a reason to move away from Netapp. In 12 years of Netapp we've only had 1 major outage that lasted half a day (and made the front page of national newspapers). So, understand that bit: major outages make it to national newspapers for us. HA, failover, etc. are kind of important to us.
>
> So why not build something ourselves and use SSD? I suppose we could, but it's not as easy as it sounds for us (your mileage may vary). It would take significant amounts of engineering time, testing, migrating, and so on, and the benefits are uncertain. We don't know if an open source HA alternative can give us another 12 years of virtually faultless operation. It may. It may not. Email is not something to start gambling with; people get kind of upset when their email disappears. We know what we've got with Netapp.
>
> I did dabble in using SSD for indexes for a while, and it looked very promising. Indexes are certainly a prime target for SSD drives. But when the director matured, we started using the director and the Netapp for indexes again. I may still build my own NFS server and use SSD drives just for indexes, simply to offload IOPS from the Netapp. Indexes are a little less scary to experiment with.
>
> So, if you're in the position to try out SSD drives for indexes or even for storage, go for it. I'm sure they will perform much better than spinning drives.
>
> Cor
>
>
>
> ------------------------------
>
> Message: 4
> Date: Sun, 16 Jan 2011 18:18:40 +0200
> From: Timo Sirainen <tss at iki.fi>
> Subject: [Dovecot] Smart IMAP proxying with imapc storage
> To: dovecot at dovecot.org
> Message-ID: <1295194720.3133.9.camel at hurina>
> Content-Type: text/plain; charset="iso-8859-15"
>
> I just committed a very early initial implementation of "imapc" storage
> backend to v2.1 hg: http://hg.dovecot.org/dovecot-2.1
>
> You can't really do anything except open INBOX and read mails from it,
> so it's currently only intended for initial testing. It sucks in many
> ways right now, but I'll be improving it.
>
> The idea is that you could set for example:
>
> mail_location = imapc:imap.gmail.com
>
> And then Dovecot could act as a proxy to gmail. It won't actually work
> currently with gmail, because there's no SSL support for outgoing
> connections.
>
> Currently index files are forcibly disabled, but it would be easy to
> enable them, allowing Dovecot to do caching locally to improve
> performance. In future perhaps it will also support caching message
> headers/bodies to avoid unnecessary traffic.
>
> Besides the mail_location setting, you'll also need to forward the
> user's password to the imap process in the "pass" userdb extra field. How to do
> that depends on what passdb/userdb you're using. While testing you could
> simply set a static password:
>
> plugin {
> pass = yourpassword
> }
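Put together, a minimal test setup might look like this (the hostname is a placeholder, and remember that outgoing SSL is not supported yet):

```
mail_location = imapc:imap.example.com
plugin {
  pass = yourpassword
}
```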
>
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: not available
> Type: application/pgp-signature
> Size: 198 bytes
> Desc: This is a digitally signed message part
> Url : http://dovecot.org/pipermail/dovecot/attachments/20110116/e29d4c32/attachment-0001.bin
>
> ------------------------------
>
> Message: 5
> Date: Sun, 16 Jan 2011 19:00:51 +0100
> From: Javier de Miguel Rodríguez <javierdemiguel at us.es>
> Subject: Re: [Dovecot] SSD drives are really fast running Dovecot
> To: dovecot at dovecot.org
> Message-ID: <4D333253.1060706 at us.es>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> On 13/01/11 17:01, David Woodhouse wrote:
>> On Wed, 2011-01-12 at 09:53 -0800, Marc Perkel wrote:
>>> I just replaced my drives for Dovecot using Maildir format with a pair
>>> of Solid State Drives (SSD) in a raid 0 configuration. It's really
>>> really fast. Kind of expensive but it's like getting 20x the speed for
>>> 20x the price. I think the big gain is in the 0 seek time.
>> You may find ramfs is even faster :)
> ramfs (tmpfs in Linux-land) is useful for indexes. If you lose the
> indexes, they will be created automatically the next time a user logs in.
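One way to set this up (paths illustrative) is to point only the index files at a tmpfs mount via the INDEX parameter of mail_location:

```
mail_location = maildir:/home/vmail/%d/%u:INDEX=/var/dovecot-indexes/%d/%u
```

with /var/dovecot-indexes mounted on tmpfs; any indexes lost across a reboot are simply rebuilt on the next login.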
>
> We are now trying the zlib plugin to lower the number of IOPS to our
> maildir storage systems. We are using gzip (bzip2 increases latency a
> lot). LZMA/xz seems interesting (high compression and rather good
> decompression speed), and lzo also seems interesting (blazing fast
> compression AND decompression, though not much compression savings).
>
> What kind of "tricks" do you use to lower the number of IOPS on
> your Dovecot servers?
>
> Regards
>
> Javier
>
>
>
>
>> I hope you have backups.
>>
>
>
>
> ------------------------------
>
> Message: 6
> Date: Sun, 16 Jan 2011 20:48:26 +0200
> From: Timo Sirainen <tss at iki.fi>
> Subject: Re: [Dovecot] SSD drives are really fast running Dovecot
> To: Stan Hoeppner <stan at hardwarefreak.com>
> Cc: dovecot at dovecot.org
> Message-ID: <1295203706.3133.19.camel at hurina>
> Content-Type: text/plain; charset="iso-8859-15"
>
> On Sun, 2011-01-16 at 00:05 -0600, Stan Hoeppner wrote:
>> Using O_DIRECT with mbox files, the IOPS
>> performance can be even greater. However, I don't know if this applies to
>> Dovecot because AFAIK MMAP doesn't work well with O_DIRECT...
>> ***Hey Timo, does/can Dovecot use Linux O_DIRECT for writing the mail files?
>
> mmap doesn't matter, because mbox files aren't read with mmap. But I
> doubt it's a good idea to use O_DIRECT for mbox files, because even if
> it gives higher IOPS, you end up using more IOPS, since you keep
> re-reading the same data from disk when it's not cached in memory.
>
> As for O_DIRECT writes... I don't know if it's such a good idea either.
> If a client is connected, it's often going to read the mail soon after it
> was written, so again it's a good idea that it stays in cache.
>
> I once wrote a patch to free message contents from OS cache once the
> message was read entirely, because it probably wouldn't be read again.
> No one ever reported if it gave any better or worse performance.
> http://dovecot.org/patches/1.1/fadvise.diff
>
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: not available
> Type: application/pgp-signature
> Size: 198 bytes
> Desc: This is a digitally signed message part
> Url : http://dovecot.org/pipermail/dovecot/attachments/20110116/689b2047/attachment-0001.bin
>
> ------------------------------
>
> Message: 7
> Date: Sun, 16 Jan 2011 21:05:43 +0200
> From: Timo Sirainen <tss at iki.fi>
> Subject: Re: [Dovecot] v2.0.9 released
> To: Holger Mauermann <holger at mauermann.org>
> Cc: dovecot at dovecot.org
> Message-ID: <1295204743.3133.21.camel at hurina>
> Content-Type: text/plain; charset="iso-8859-15"
>
> On Thu, 2011-01-13 at 23:22 +0100, Holger Mauermann wrote:
>
>>>>> Renaming a mailbox that has children still doesn't work for me with
>>>>> v2.0.9.... Any ideas?
>>>>
>> Ok, seems to be a bug in the listescape plugin. If I remove it from
>> mail_plugins, renaming works fine. Unfortunately, that's not an option
>> because some users have folders with '.' in their names.
>
> This bug has existed in previous 2.x versions also. And I'm not sure if
> there is any good way to fix it either. I tried a few ways but those
> failed. Maybe it can't be fully fixed before v2.1.
>
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: not available
> Type: application/pgp-signature
> Size: 198 bytes
> Desc: This is a digitally signed message part
> Url : http://dovecot.org/pipermail/dovecot/attachments/20110116/468c77bf/attachment-0001.bin
>
> ------------------------------
>
> Message: 8
> Date: Sun, 16 Jan 2011 21:09:12 +0200
> From: Timo Sirainen <tss at iki.fi>
> Subject: Re: [Dovecot] Can I get rid of the fchown log messages?
> To: Maarten Bezemer <mcbdovecot at robuust.nl>
> Cc: dovecot at dovecot.org
> Message-ID: <1295204952.3133.22.camel at hurina>
> Content-Type: text/plain; charset="iso-8859-15"
>
> On Sat, 2011-01-15 at 01:42 +0100, Maarten Bezemer wrote:
>
>> Jan 15 00:55:17 srv0303 dovecot: POP3(obm03): fchown(/home/obm/obm03/mail/.imap/INBOX/dovecot.index.tmp, -1, 8(mail)) failed: Operation not permitted (egid=1033(obm), group based on /var/mail/obm03)
>>
>> I know that this is because the mailbox in /var/mail has ownership
>> username:mail.
>> However, in this setup this is intentional, and quota-related (quota on
>> inbox is enforced by Exim, not Dovecot, and the kernel does group quotas
>> but not for group mail). Also, group read rights for group mail are
>> intentional.
>
> It's fine to have mail as the group, but does the group really need to
> have read or write permissions? chmod 0600 /var/mail/* would solve this.
>
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: not available
> Type: application/pgp-signature
> Size: 198 bytes
> Desc: This is a digitally signed message part
> Url : http://dovecot.org/pipermail/dovecot/attachments/20110116/27eccd3f/attachment.bin
>
> ------------------------------
>
> _______________________________________________
> dovecot mailing list
> dovecot at dovecot.org
> http://dovecot.org/cgi-bin/mailman/listinfo/dovecot
>
> End of dovecot Digest, Vol 93, Issue 61
> ***************************************