[Dovecot] Big sites using Dovecot
Hi,
Are there any big Dovecot sites (say 50,000-100,000 users) on this list?
What sort of hardware do you use?
What sort of problems do you see?
Many thanks, Jonathan.
* On 21/09/06 18:14 +0100, Jonathan wrote:
| Hi,
|
| Are there any big Dovecot sites (say 50,000-100,000 users) on this
| list?
Hmm, I doubt it, but there could be. My biggest deployment has 8,800 users.
A few of those could be inactive, as I need to do some housekeeping.
| What sort of hardware do you use?
HP DL380, single 3.0GHz CPU, 2GB RAM, SCSI disks.
| What sort of problems do you see?
Other than dovecot dying unexpectedly when compiled with kqueue support
on FreeBSD 6.1, nothing else. I had to disable kqueue support.
Some tuning is required though. So far I have only bumped up the value
of "login_max_processes_count" to 200. Since then, it's been running
non-stop for 10 days without any hiccup.
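For reference, the change is a one-liner in dovecot.conf (1.0-era setting name; 200 is just what suited this load, not a general recommendation):

    # dovecot.conf -- allow more simultaneous login processes
    # (the 1.0 default is 128; raise to cover peak concurrent logins)
    login_max_processes_count = 200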
-Wash
Jonathan wrote:
> Are there any big Dovecot sites (say 50,000-100,000 users) on this list?
> What sort of hardware do you use?
> What sort of problems do you see?
Small install: dovecot 1.0rc7 on FreeBSD 6.1 with kqueue. Sometimes an imap process eats up to 90% CPU; ktrace shows about 20,000 calls to gettimeofday() per second. I already mentioned this problem on this list.
--
Regards,
Taras Savchuk
Elantech LLC: IT outsourcing, web development
http://www.elantech.ru +7 (495) 589 68 81 +7 (926) 575 22 11
* On 21/09/06 23:38 +0400, Taras Savchuk wrote:
| Jonathan wrote:
| >Hi,
| >
| >Are there any big Dovecot sites (say 50,000-100,000 users) on this
| >list?
| >
| >What sort of hardware do you use?
| >
| >What sort of problems do you see?
| >
| >Many thanks,
| >Jonathan.
| Small install: dovecot 1.0rc7 on FreeBSD 6.1 with kqueue. Sometimes an
| imap process eats up to 90% CPU; ktrace shows about 20,000 calls to
| gettimeofday() per second. I already mentioned this problem on this
| list.
I have the same environment ;)
What happens if you compile using WITHOUT_KQUEUE=1?
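(For the archives, the rebuild via the ports tree would be roughly the following, assuming the mail/dovecot port honours the WITHOUT_KQUEUE knob:)

    # rebuild and reinstall the dovecot port without kqueue support
    cd /usr/ports/mail/dovecot
    make WITHOUT_KQUEUE=1 deinstall reinstall clean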
-Wash
Odhiambo WASHINGTON wrote:
> I have the same environment ;)
> What happens if you compile using WITHOUT_KQUEUE=1?
I haven't compiled dovecot without kqueue yet.
--
Regards,
Taras Savchuk
Elantech LLC: IT outsourcing, web development
http://www.elantech.ru +7 (495) 589 68 81 +7 (926) 575 22 11
On Thu, Sep 21, 2006 at 06:14:14PM +0100, Jonathan wrote:
Are there any big Dovecot sites (say 50,000-100,000 users) on this list?
Yes. Actually, more users than that.
What sort of hardware do you use?
2 x NetApp FAS2070c clustered filer heads w/ NFS (fast fibrechannel disk shelves); IBM x336 SMTP/POP3/IMAP front-ends, 4GB RAM, 2 x 3GHz CPUs. Load-balancing across front-ends via Cisco 6509 SLB. Gigabit Ethernet, jumbo frames.
What sort of problems do you see?
You need a stable NFS client. We're using Linux 2.6.16 with NFS client patches from Trond Myklebust.
nfs mount options: 10.x.x.x:/vol/mailstore /var/mailstore nfs rw,tcp,nfsvers=3,rsize=8192,wsize=8192,nosuid,nodev,soft,intr,noac,actimeo=0 0 0
dovecot.conf:
- mmap_disable = yes
- lock_method = fcntl
It's extremely fast & stable (server loads rarely above 1.0), serving a six-figure userbase their spam, their mailbombs, and just occasionally, some genuine mail :)
Other major platform components: OpenLDAP 2.3, Postfix 2.3.
The main bottleneck is Postfix spooling before delivery, but I have to generate synthetic mailbombs to make it seriously blink (and I'm still in the "Dovecot probably needs an LMTP speaker someday" camp). We can eliminate even that with /var/spool/postfix on a tmpfs, but I'm unwilling to go that far.
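(For completeness, the tmpfs experiment would be a single fstab line, something like the sketch below; the size is a placeholder, and the queue evaporating on reboot is exactly why I'm unwilling:)

    # /etc/fstab -- Postfix spool in RAM; any queued mail is LOST on reboot
    tmpfs  /var/spool/postfix  tmpfs  size=1g,mode=0755  0 0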
Note: without Trond M's NFS client patchset we see comedy VFS out-of-sync errors after a few days uptime, resulting in mistaken deadlocking of dovecot indices (usually only one user, but always a high-volume user) that needs a node reboot to fix, and some intervention with the "lock status / break" command on the OnTAP command line. With the patches it's been rock solid.
If only I could make anti-spam services run so fast! Timo, could you design a mail hygiene system next? Thanks :)
all the best JG
-- Josh "Koshua" Goodall "as modern as tomorrow afternoon" joshua@roughtrade.net - FW109
Joshua Goodall joshua@roughtrade.net wrote:
The main bottleneck is Postfix spooling before delivery, but I have to generate synthetic mailbombs to make it seriously blink (and I'm still in the "Dovecot probably needs an LMTP speaker someday" camp). We can eliminate even that with /var/spool/postfix on a tmpfs, but I'm unwilling to go that far.
Several years ago I found out that using ext3 with journaled data (the data=journal mount option) gives very good performance in this case: when spooled mail is delivered before the journal fills, and the file can then be unlinked, the filesystem can skip writing to the real storage altogether. No slow seeking is necessary, just streaming to the journal.
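A sketch of the setup (device, mountpoint and journal size are placeholders; a larger journal gives short-lived spool files more room to be created and unlinked without ever touching the main filesystem):

    # create the spool filesystem with a generously sized journal (128 MB)
    mke2fs -j -J size=128 /dev/sdb1

    # /etc/fstab -- mount it with full data journaling
    /dev/sdb1  /var/spool/postfix  ext3  data=journal,noatime  0 2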
-- Antti Salmela
On 9/21/2006 Joshua Goodall (joshua@roughtrade.net) wrote:
If only I could make anti-spam services run so fast! Timo, could you design a mail hygiene system next? Thanks
Add ASSP in front of Postfix, and you'd reduce the load on Postfix dramatically...
--
Best regards,
Charles
On 2006-09-22 08:09:38 -0400, Charles Marcus wrote:
On 9/21/2006 Joshua Goodall (joshua@roughtrade.net) wrote:
If only I could make anti-spam services run so fast! Timo, could you design a mail hygiene system next? Thanks
Add ASSP in front of Postfix, and you'd reduce the load on Postfix dramatically...
Having all the various default header checks enabled in Postfix already cleans up a lot. For the rest I run DSPAM, but the load on DSPAM is very low compared to what Postfix already rejects.
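(For anyone who hasn't set those up: the mechanism is a lookup table referenced from main.cf, along the lines below. The patterns are purely illustrative, not a recommended ruleset.)

    # /etc/postfix/main.cf
    header_checks = pcre:/etc/postfix/header_checks

    # /etc/postfix/header_checks -- example patterns only
    /^Subject:.*\byour account\b.*\bsuspended\b/i  REJECT phishing pattern
    /^X-Mailer:.*Bulk Mailer/i                     REJECT bulk-mailer fingerprint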
darix
-- openSUSE - SUSE Linux is my linux openSUSE is good for you www.opensuse.org
Charles Marcus wrote:
Add ASSP in front of Postfix, and you'd reduce the load on Postfix dramatically...
In the spirit of TMTOWTDI, you can also run instances of Qpsmtpd [1] in front of your actual SMTP server and reject as much as possible during the initial SMTP transaction. All of the perl.[com|org] domains, as well as all apache.org mail, go through Qpsmtpd. You have multiple run methods: forking, tcpserver, pre-fork (still beta), or even running under Apache. And it's written in Perl with a robust plugin architecture, so you can tailor your setup to an alarming degree.
[1] http://smtpd.develooper.com
John
- I'm one of the developers of Qpsmtpd, and I authorize this message ;-)
-- John Peacock Director of Information Research and Technology Rowman & Littlefield Publishing Group 4501 Forbes Boulevard Suite H Lanham, MD 20706 301-459-3366 x.5010 fax 301-429-5748
On Fri, Sep 22, 2006 at 08:09:38AM -0400, Charles Marcus wrote:
On 9/21/2006 Joshua Goodall (joshua@roughtrade.net) wrote:
If only I could make anti-spam services run so fast! Timo, could you design a mail hygiene system next? Thanks
Add ASSP in front of Postfix, and you'd reduce the load on Postfix dramatically...
Postfix is not the bottleneck in mail hygiene. When I said Postfix was a potential bottleneck, I was talking only about mail delivery. AV/AS scanning is on a separate cluster of servers and is a whole different kettle of worms :-)
I do question the wisdom of anything front-ending a high-quality MTA - one of the reasons I use Postfix is *because* the protocol speakers are written so well and are so high performing.
JG
I do question the wisdom of anything front-ending a high-quality MTA - one of the reasons I use Postfix is *because* the protocol speakers are written so well and are so high performing.
In general, I agree - but once you've seen ASSP in action, you'll never look back. It is *so* much more effective than SpamAssassin, and even DSPAM - and is very simple to install and set up - and, best of all, *very* light on resources.
--
Best regards,
Charles
On Tue, 2006-09-26 at 06:29, Charles Marcus wrote:
I do question the wisdom of anything front-ending a high-quality MTA - one of the reasons I use Postfix is *because* the protocol speakers are written so well and are so high performing.
In general, I agree - but once you've seen ASSP in action, you'll never look back. It is *so* much more effective than SpamAssassin, and even DSPAM - and is very simple to install and set up - and, best of all, *very* light on resources.
MIMEDefang running as a sendmail milter can do whatever checks you want, controlled by a small snippet of Perl, and it intelligently multiplexes the operations so it doesn't serialize everything behind the slow ones. Even if you run SpamAssassin with it, you can get better performance by running the faster checks (virus, network RBL, etc.) that are likely to reject messages first.
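(The sendmail side of that is a single milter declaration in sendmail.mc, roughly as below; the socket path and timeouts are the ones from the MIMEDefang documentation and may differ on your install. The policy itself lives in the Perl filter_* subroutines of /etc/mail/mimedefang-filter.)

    dnl sendmail.mc -- hook MIMEDefang in as a milter
    INPUT_MAIL_FILTER(`mimedefang', `S=unix:/var/spool/MIMEDefang/mimedefang.sock, F=T, T=S:360s;R:360s;E:5m')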
-- Les Mikesell lesmikesell@gmail.com
Matt wrote:
In general, I agree - but once you've seen ASSP in action, you'll never look back.
Does ASSP support vhosted mailboxes? Can I allow each user to configure their own white/black lists like SpamAssassin allows?
ASSP is an anti-spam PROXY... it will block most spam at the connection stage, long before any filtering needs to be done, but it has only one 'user-specific' setting: individual users can be configured to receive all mail.
I have found it to be so effective that nothing is needed behind it. The lone spam that slips through from time to time is easily dealt with via the delete button, and I haven't had a (known/discoverable) false positive - well, not since the first few weeks of training (over a year ago)...
You can also set it to forward all blocked messages to an address, 'just in case'...
In case you haven't already, you can read more about how it works here:
If you really want/need user-specific settings/quarantine, then you could use DSPAM or SpamAssassin AFTER ASSP. Putting ASSP in front of everything simply drastically reduces the load that your real servers have to bear.
--
Best regards,
Charles
On Tuesday 26 September 2006 11:57, Matt wrote:
In general, I agree - but once you've seen ASSP in action, you'll never look back.
Does ASSP support vhosted mailboxes? Can I allow each user to configure their own white/black lists like SpamAssassin allows?
This is going way off topic ... okay, not going, it's gone. :) But I contribute to the problem by tossing out this bit:
I don't see the point in per-user spam controls. Spam is spam. If a site's sending out UBE, that is a spam site, period. Block it.
Spam is not "mail that $USER doesn't want to see". Spam is UBE. If
$USER subscribed to something and confirmed it before the mailings
started, that's not spam. If $USER happens to be interested in the
stock tips or pharmaceuticals or other such spammer spew, it's STILL
spam, and should be treated as such.
Furthermore, users rarely understand how mail works. They think that sender addresses really mean something. You got spam from some sender address, so you should blacklist that address? Well duh, it probably wasn't really that sender. Maybe you just blocked a real person, an innocent victim of spammers.
Whitelisting by sender address is just as bad for the same reasons. Suppose you whitelist an Outhouse Distress user who gets a virus, and the virus goes out to everyone in the address book.
End users occasionally discover the great FUSSP of C/R systems ... and thus join the legions of spammers. All out of ignorance. Spam "solutions" implemented by people who don't understand spam and SMTP always make the problem worse for everyone.
I'm not paternalistic, at least I don't think so, but I'd like to see movement away from user spam controls and toward *clueful* server-side spam abatement. Maybe that would indeed be a FUSSP? No telling, because it will never happen.
Offlist mail to this address is discarded unless
"/dev/rob0" or "not-spam" is in Subject: header
/dev/rob0 wrote:
On Tuesday 26 September 2006 11:57, Matt wrote:
In general, I agree - but once you've seen ASSP in action, you'll never look back.
Does ASSP support vhosted mailboxes? Can I allow each user to configure their own white/black lists like SpamAssassin allows?
This is going way off topic ... okay, not going, it's gone. :) But I contribute to the problem by tossing out this bit:
I don't see the point in per-user spam controls. Spam is spam. If a site's sending out UBE, that is a spam site, period. Block it.
Spam is not "mail that $USER doesn't want to see". Spam is UBE. If $USER subscribed to something and confirmed it before the mailings started, that's not spam. If $USER happens to be interested in the stock tips or pharmaceuticals or other such spammer spew, it's STILL spam, and should be treated as such.
Furthermore, users rarely understand how mail works. They think that sender addresses really mean something. You got spam from some sender address, so you should blacklist that address? Well duh, it probably wasn't really that sender. Maybe you just blocked a real person, an innocent victim of spammers.
Whitelisting by sender address is just as bad for the same reasons. Suppose you whitelist an Outhouse Distress user who gets a virus, and the virus goes out to everyone in the address book.
I'd disagree with this point. A spam whitelist doesn't have to allow viruses to pass. Also, most automated server-side anti-spam systems, especially content-based ones like SpamAssassin, need per-user whitelists to let the system adjust to the needs of individual users. This may not be true for corporate users, where policy reigns supreme, but for a general email provider, like an ISP, per-user whitelists are a necessary evil. Without them, content-based filters would have to be very lame.
Ken A. Pacific.Net
On Tue, 2006-09-26 at 12:29 -0500, /dev/rob0 wrote:
I don't see the point in per-user spam controls. Spam is spam. If a site's sending out UBE, that is a spam site, period. Block it.
Spam is not "mail that $USER doesn't want to see". Spam is UBE. If
$USER subscribed to something and confirmed it before the mailings started, that's not spam.
Yes - hence it's a per-user attribute: two users receive the same thing, and one of them doesn't remember signing up or giving his email address to a salesperson and asking for more information.
-- Les Mikesell lesmikesell@gmail.com
On Tue, Sep 26, 2006 at 12:29:08PM -0500, /dev/rob0 wrote:
Furthermore, users rarely understand how mail works. They think that sender addresses really mean something. You got spam from some sender address, so you should blacklist that address? Well duh, it probably wasn't really that sender. Maybe you just blocked a real person, an innocent victim of spammers.
Here's why we have white/blacklists at ISPs at all.
Whitelisting at ISPs is usually provided for users who receive mail from broken automated systems that trip over heuristic spam-detection filters (e.g. SpamAssassin and co). Telco fax/SMS gateways are typical culprits, as are badly coded feedback forms at websites.
Blacklisting at ISPs is usually provided for people to block spew. It is a mistake to provide it alongside anti-spam controls.
Both are cheap hack solutions, but it is never worth an ISP's time coming up with a sound and correct solution (usually: fixing an external, customer-side third-party source) for what is commonly an isolated case. So these imperfect but generic tools suffice.
JG
On Fri, 22 Sep 2006, Joshua Goodall wrote:
On Thu, Sep 21, 2006 at 06:14:14PM +0100, Jonathan wrote:
Are there any big Dovecot sites (say 50,000-100,000 users) on this list?
Yes. Actually, more users than that.
What sort of hardware do you use?
2 x NetApp FAS2070c clustered filer heads w/ NFS (fast fibrechannel disk shelves); IBM x336 SMTP/POP3/IMAP front-ends, 4GB RAM, 2 x 3GHz CPUs. Load-balancing across front-ends via Cisco 6509 SLB. Gigabit Ethernet, jumbo frames.
What sort of problems do you see?
You need a stable NFS client. We're using Linux 2.6.16 with NFS client patches from Trond Myklebust.
nfs mount options: 10.x.x.x:/vol/mailstore /var/mailstore nfs rw,tcp,nfsvers=3,rsize=8192,wsize=8192,nosuid,nodev,soft,intr,noac,actimeo=0 0 0
We, too, are hoping to put ~20,000 accounts onto a set of Linux IMAP servers, NFS-mounting from NetApp.
Hitherto we have been using Washington IMAP on these machines; Washington strongly discourages NFS access, but we seem to have been more or less OK with such "noac"-like options (no known lost mail or mailbox corruption).
Two main reasons prompt our thoughts to migrate from Washington to dovecot:
- Performance has been sluggish: high load average, probably caused by NFS stat activity (itself because of "noac"?).
- Although older Linuxes (e.g. Red Hat 9, 2.4.20-43.9) have been OK, more recent releases (e.g. FC5, 2.6.16-1) introduced some nasty deadlocking, requiring a machine reboot every day. (Unacceptable!)
We hope dovecot will improve matters.
Any advice or comments or experiences?
Also, in such a set-up (multiple IMAP/Linux machines NFS-mounting from NetApp), where should the dovecot index files be? NFS from the NetApp? Or on each Linux machine (if so, on disk or ramdisk)?
We plan to use dovecot LDA (rather than "mail" or Washington "tmail") delivery.
Usually all activity (IMAP, delivery) for a given user should take place within one machine. But this cannot be guaranteed.
Any other hints welcome!
dovecot.conf:
- mmap_disable = yes
- lock_method = fcntl
Thanks. Got that! Nice to have it confirmed.
Note: without Trond M's NFS client patchset we see comedy VFS out-of-sync errors after a few days uptime, resulting in mistaken deadlocking of dovecot indices (usually only one user, but always a high-volume user) that needs a node reboot to fix, and some intervention with the "lock status / break" command on the OnTAP command line. With the patches it's been rock solid.
Could you provide more details? (I wonder if these are related to the deadlock problems we see with Washington IMAP on 2.6.16?) Are these patches in the process of being pushed into the relevant source trees so that they will ultimately be unnecessary?
--
David Lee, Senior Systems Programmer
I.T. Service, Computer Centre, Durham University
South Road, Durham DH1 3LE, U.K.
Phone: +44 191 334 2752
http://www.dur.ac.uk/t.d.lee/
On Fri, Sep 22, 2006 at 03:28:03PM +0100, David Lee wrote:
Performance has been sluggish: high load average, probably caused by NFS stat activity (itself because of "noac"?).
Although older Linuxes (e.g. Redhat 9, 2.4.20-43.9) have been OK, more recent releases (e.g. FC5, 2.6.16-1) introduced some nasty deadlocking, requiring machine reboot every day. (Unacceptable!)
We hope dovecot will improve matters.
It will (it should).
Any advice or comments or experiences?
Use maildir on NFS, not mbox, especially for random-access IMAP services; it isn't shabby in POP3 slurpdown scenarios either. The performance difference is an order of magnitude, and converting mailboxes between the two formats is trivially scriptable (see the sketch below). The only practical downside to maildir I've experienced is that backing up lots of tiny files (maildir) is more expensive than one big file (mbox).
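"Trivially scriptable" meaning something on the order of the loop below, e.g. with the widely used mb2md script; the user list and paths are placeholders, and you would of course rehearse on copies first:

    # convert each user's mbox inbox to Maildir -- illustrative sketch only
    for u in $(cat userlist); do
        mb2md -s /var/mail/$u -d /home/$u/Maildir
    done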
Also, in such a set-up (multiple IMAP/Linux machines NFS-mounting from NetApp), where should the dovecot index files be? NFS from the NetApp? Or on each Linux machine (if so, on disk or ramdisk)?
I put indices on the filer. Access serializes on dotlocking of dovecot-uidlist anyway.
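(If you did want them local instead, it's the INDEX part of the mail location that controls this; a sketch in 1.0-era syntax, with a made-up local path:

    # dovecot.conf -- maildir on NFS, per-user indexes on local disk
    mail_location = maildir:~/Maildir:INDEX=/var/dovecot/index/%u

but as I said, I keep them on the filer.)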
Note: without Trond M's NFS client patchset we see comedy VFS out-of-sync errors after a few days uptime, resulting in mistaken deadlocking of dovecot indices (usually only one user, but always a high-volume user) that needs a node reboot to fix, and some intervention with the "lock status / break" command on the OnTAP command line. With the patches it's been rock solid.
Could you provide more details? (I wonder if these are related to the deadlock problems we see with Washington IMAP on 2.6.16?) Are these patches in the process of being pushed into the relevant source trees so that they will ultimately be unnecessary?
They could well be related issues. I'm no expert on the Linux NFS client code itself and I wouldn't wish such a role on anyone :)
Unpatched, we see (after a day or two of uptime) the following kernel gripe: "do_vfs_lock: VFS is out of sync with lock manager!". This repeats a few times, and shortly afterwards the box locks up hard.
Patched, our mailstores are currently at more than two months' uptime, and the last reboot was for routine maintenance. The only problem with this is that nfsstat's counters (e.g. for getattr) have reached 2^31-1 and stopped turning :)
Trond's code is regularly incorporated into the mainline kernel. I have always tracked his patchsets on production platforms that were reliant upon NFS, and have for years found his code to make a difference in stability.
Also note that he works for NetApp, so you can guess what his interoperability testing might include.
JG
-- Josh "Koshua" Goodall "as modern as tomorrow afternoon" joshua@roughtrade.net - FW109
Hi,
Having asked if there are any big sites (50,000-100,000 users) it seems there are a few. I'd like to ask some fairly general questions.
I have inherited responsibility for a Cyrus mail store, at a UK university.
It is front-ended by a pair of mail gateways running Exim which handle spam, A/V etc.
Local delivery is a dedicated Suse Linux box running Postfix feeding Cyrus over LMTP. There are around 80,000 accounts, with around 20,000 active (one or more messages per day, often many more). I suspect we peak at around 500 simultaneous users. The message store is around 600 GB.
Cyrus back-end storage is a fibre-channel SAN. We use most of the Cyrus functionality, including Sieve, quotas and shared mailboxes. Clients access the mail store using their choice of client, predominantly IMAP/SSL from Horde/IMP or Outlook, although some of us use Thunderbird.
In theory we have a stand-by box, which is a similar configuration (but with a local RAID array). The two used to be connected by DRBD, which was replaced by rsync - I believe this is because, following any comms failure, the entire mail store had to be resynced.
Backups run over the net to tape and take around 24 hours to complete.
A small number of users are on an Exchange server instead of Cyrus. They will not be moving. User authentication runs over LDAP and there is an attribute in LDAP which identifies whether the user is a Cyrus user or an Exchange user, so that Exim knows which mail store to send their mail to, and Webmail knows whether to redirect them to Horde or to a Microsoft Outlook Web server.
It is time for a refresh which needs to take place seamlessly, and in short order (complete roll-out in the next couple of months). We need to add a few extras into the equation...
It is corporate policy to move all storage to a NetApp filer which is replicated using frequent snap-mirrors to a second site over a shared 1Gb link. (Due to possible bandwidth issues, the two filers do not update synchronously, but the backup NetApp should be no more than a couple of minutes behind, and this much loss of data would be tolerated in the event of a disaster recovery deployment.)
NFS is preferred over iSCSI, due to file recovery and disk space utilisation on the NetApp.
- The two servers (or two clusters, if we go that way) will be sited one at each site. In the event of a data centre failure, we need to have quick and effective failover to the other site (manual intervention is acceptable). It is possible that the redundant link between the sites could fail, leading to the servers losing touch with each other but both still running.
We have user communities at both sites. Currently they both talk to the single Cyrus server at "HQ".
- Clustered servers would be preferred so we can do rolling upgrades by removing individual machines for OS patches etc. We have layer 4 load balancers available.
- Our preferred corporate platform is Suse Linux Enterprise Server 9 running on Intel hardware.
Cyrus generally is seen as a very competent solution, and greatly preferred to the UW IMAP server it replaced (this may be to do with the NFS servers UW used).
I am very nervous about comments on this list concerning NFS lock-ups. This system has to be bullet-proof, 24/7. I would consider Solaris x86 (or possibly FreeBSD) if the NFS implementation is robust out of the box. Management would like the warm feeling that a vendor-supported operating system would give them (so Suse and Sun are preferred).
My gut feeling is that I would like to split the users into two communities, with half on each NetApp, and with the two NetApps mirroring to each other. In practice users will work from both sites (and remotely) but each one has a "home" site in terms of their home directory, etc. At each site, I'd like 2 identical Dovecot boxes. I'll call this a 2 x 2 solution.
All users (Exchange users excepted) have the same address wired into their e-mail client for IMAP/SSL and SMTP/SSL, so there would have to be some magic to ensure that the user ended up talking to a Dovecot server which could see the appropriate NetApp. I don't think the load balancers are clever enough to be able to do this. I think I've read that it's possible for an IMAP server to hand a user off to a different IMAP server, but can Dovecot do this, and is there client support? Or should I just proxy users who hit the wrong server? Or should I just put everyone on the same NetApp and use 4 servers? I'll call this a 4 x 1 solution.
If we lost a site with a live NetApp, I would expect the surviving site to mount the latest snap-mirror and serve it. In the case we are running 2 x 2 it would become 1 x 2. If we were running 4 x 1 it would become 2 x 1 which is arguably more robust.
Does anyone have any comments on any of this? If it were your site, what would you be doing? What kit would you use? Which operating system? How will it play with our load balancers? 4 x 1 or 2 x 2? Would anyone else in UK academia like to compare notes?
Many thanks, Jonathan.
Jonathan wrote:
Hi,
Having asked if there are any big sites (50,000-100,000 users) it seems there are a few. I'd like to ask some fairly general questions.
I have inherited responsibility for a Cyrus mail store, at a UK university.
And I have responsibility for a Dovecot mail store at a UK university :)
It is front-ended by a pair of mail gateways running Exim which handle spam, A/V etc.
Similarly, though we have four (two for internal-only mail, two doing the spam-scoring for externally-originated mail).
Local delivery is a dedicated Suse Linux box running Postfix feeding Cyrus over LMTP. There are around 80,000 accounts, with around 20,000 active (one or more messages per day, often many more). I suspect we peak at around 500 simultaneous users. The message store is around 600 GB.
We've got about 30,000 accounts, about half of which are "active", I guess, though I haven't counted. The message store is about 1.2 TB, served from some Sun T3 and T4 RAID5 fibre-channel arrays. There is no backup store, yet, but we're expecting to do something soon. Our inbox backups take nearly 24 hours to DLT tape too, but we hope to replace that with backup-to-disk (a NetApp R200) soon. Folders take a couple of days, but we do incremental backups of them during the week.
A small number of users are on an Exchange server instead of Cyrus. They will not be moving. User authentication runs over LDAP and there is an attribute in LDAP which identifies whether the user is a Cyrus user or an Exchange user, so that Exim knows which mail store to send their mail to, and Webmail knows whether to redirect them to Horde or to a Microsoft Outlook Web server.
It is time for a refresh which needs to take place seamlessly, and in short order (complete roll-out in the next couple of months). We need to add a few extras into the equation...
It is corporate policy to move all storage to a NetApp filer which is replicated using frequent snap-mirrors to a second site over a shared 1Gb link. (Due to possible bandwidth issues, the two filers do not update synchronously, but the backup NetApp should be no more than a couple of minutes behind, and this much loss of data would be tolerated in the event of a disaster recovery deployment.)
NFS is preferred over iSCSI, due to file recovery and disk space utilisation on the NetApp.
We too have two NetApps, now (at last, two years late!) in different machine rooms, replicating asynchronously via snapmirror. Only one of them is "live", though I've considered dividing the load between them.
Our Exchange service is served off the live NetApp via iSCSI (with the file recovery/disk space issues!), but it has deep pockets as far as management are concerned.
We're supposed to be migrating all our staff users to Exchange over the next year, hopefully without them noticing. Redundancy of the remaining Dovecot "student" service is considered much less important (my boss has even been heard to mutter about outsourcing the student service :o )
Having said that, I still have an aspiration to move the Dovecot mailstore to the NetApps, probably converting to Maildir in the process.
- The two servers (or two clusters, if we go that way) will be sited one at each site. In the event of a data centre failure, we need to have quick and effective fail over to the other site (manual intervention is acceptable). It is possible that the redundant link between the sites could fail, leading to the servers losing touch with each other but both still running.
Similarly, failover is expected to be manual and we'd be prepared to lose a few minutes worth of data.
- Clustered servers would be preferred so we can do rolling upgrades by removing individual machines for OS patches etc. We have layer 4 load balancers available.
I guess we could do this with Dovecot, subject to NFS limitations. Our load balancers can remember which server an IP was sent to, so as long as users didn't log in from two different machines simultaneously, we'd probably get away even with having the Dovecot indexes on the NetApp. Even if they did get connected through more than one server, I'd expect Dovecot not to get too upset. See the Dovecot Wiki page on NFS.
- Our preferred corporate platform is Suse Linux Enterprise Server 9 running on Intel hardware.
For Unix, this is becoming the preferred platform, but the Dovecot service is currently running on Solaris 8 on a Sun V480. Management prefers Windows :(
Cyrus generally is seen as a very competent solution, and greatly preferred to the UW IMAP server it replaced (this may be to do with the NFS servers UW used).
We too migrated from UW, but we kept the mail in mbox format (I managed to make Dovecot behave identically to UW so we could just switch between them as necessary).
Cyrus has more features (ACLs, quotas, Sieve), which Dovecot is gradually implementing as plugins, but I'd expect the Cyrus versions to be more fully-featured/robust (e.g. the IMAP ACL extension isn't yet supported in Dovecot, so clients can't see/change the ACLs).
Reasons for leaving Cyrus are (1) NFS and (2) replication - although I understand the Cyrus 2.3 tree has some good support for keeping multiple servers loosely synchronised over a WAN.
Before Dovecot came to my attention, I was planning a move to Cyrus, similar to the migration that Cambridge went through. Cyrus is probably more "mature" (though the code seemed quite messy) but the transparency of the move to Dovecot was worth the risk!
I am very nervous about comments on this list concerning NFS lock-ups. This system has to be bullet-proof, 24/7. I would consider Solaris x86 (or possibly FreeBSD) if the NFS implementation is robust out of the box. Management would like the warm feeling that a vendor-supported operating system would give them (so Suse and Sun are preferred).
Likewise, I'm nervous, but some NFS patches went into SuSE Enterprise 9 this week; presumably the ones mentioned for kernel 2.6.16. I'd also expect the NetApp to be more forgiving than most servers (you might even be able to force Cyrus onto it). I've wondered whether NFSv4 would be better in any way (but that would probably have to be Solaris 10 as the client).
My gut feeling is that I would like to split the users into two communities, with half on each NetApp, and with the two NetApps mirroring to each other. In practice users will work from both sites (and remotely) but each one has a "home" site in terms of their home directory, etc. At each site, I'd like 2 identical Dovecot boxes. I'll call this a 2 x 2 solution.
All users (Exchange users excepted) have the same address wired into their e-mail client for IMAP/SSL and SMTP/SSL, so there would have to be some magic to ensure that the user ended up talking to a Dovecot server which could see the appropriate NetApp. I don't think the load balancers are clever enough to be able to do this. I think I've read that it's possible for an IMAP server to hand a user off to a different IMAP server, but can Dovecot do this, and is there client support? Or should I just proxy users who hit the wrong server? Or should I just put everyone on the same NetApp and use 4 servers? I'll call this a 4 x 1 solution.
Yes, Dovecot can act as a proxy. I set up a test one this afternoon, as it happens! You may also want to look at Perdition, which is the "market leader" in this sort of proxying. Both support various databases, but you'd probably need to modify the code to look at your existing LDAP attribute.
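For a flavour of the mechanism: Dovecot's proxying is driven by extra fields returned from the passdb, so with (say) an SQL passdb it looks roughly like the sketch below. The table and column names are invented, and an LDAP setup would need to return the equivalent host/proxy attributes:

    # dovecot-sql.conf -- the passdb returns a destination host plus a proxy flag
    password_query = SELECT password, home_server AS host, 'Y' AS proxy FROM users WHERE userid = '%u'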
If we lost a site with a live NetApp, I would expect the surviving site to mount the latest snap-mirror and serve it. In the case we are running 2 x 2 it would become 1 x 2. If we were running 4 x 1 it would become 2 x 1 which is arguably more robust.
Does anyone have any comments on any of this? If it were your site, what would you be doing? What kit would you use? Which operating system? How will it play with our load balancers? 4 x 1 or 2 x 2? Would anyone else in UK academia like to compare notes?
Many thanks, Jonathan.
Best Wishes, Chris
--
Christopher Wakelin, c.d.wakelin@reading.ac.uk
IT Services Centre, The University of Reading
Whiteknights, Reading, RG6 2AF, UK
Tel: +44 (0)118 378 8439  Fax: +44 (0)118 975 3094
participants (14)
- /dev/rob0
- Antti Salmela
- Charles Marcus
- Chris Wakelin
- David Lee
- John Peacock
- Jonathan
- Joshua Goodall
- Ken A
- Les Mikesell
- Marcus Rueckert
- Matt
- Odhiambo WASHINGTON
- Taras Savchuk