[Dovecot] Dovecot performance on GFS clustered filesystem
Hello All,
We are using Dovecot 1.1.3 to serve IMAP on a pair of clustered Postfix servers which share a fiber array via the GFS clustered filesystem. This all works very well for the most part, with the exception that certain operations are so inefficient on GFS that they generate significant I/O load and hurt performance. We are using the Maildir format on disk. We're also using Dovecot's deliver from Postfix to handle local delivery.
As best I can determine, the worst problems occur when certain users with very large Inboxes (~10k messages) receive new mail and their client looks up information about that message. GFS doesn't seem to efficiently handle the large directories that contain folders like this. As a result, lots of I/O ops are generated and performance suffers for everyone.
I am beginning to wonder if it might be more efficient to revert to the old mbox format, with one file per folder (plus whatever indices are created). It seems that this ought to work better with GFS, which is geared toward smaller numbers of larger files. Is anyone on the list currently doing that? Alternately, any thoughts regarding tuning or other options would be appreciated.
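(For anyone weighing the same trade-off: in Dovecot the two on-disk layouts are selected via mail_location. The paths below are illustrative examples, not our actual configuration:)

```
# Maildir: one file per message (a large folder means a large directory)
mail_location = maildir:~/Maildir

# mbox: one file per folder, plus Dovecot's separate index files
mail_location = mbox:~/mail:INBOX=/var/mail/%u
```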
Thanks, Allen
-- Allen Belletti allen@isye.gatech.edu 404-894-6221 Phone Industrial and Systems Engineering 404-385-2988 Fax Georgia Institute of Technology
I've read somewhere that one of GFS2's goals was to improve performance for directory access with many files.
I tested this by doing a simple ls in a directory containing many empty test files: on GFS it was _really_ slow, while the same ls on GFS2 with the same number of empty files is actually fast.
But when I tested GFS2 with bonnie++ I got lower sequential I/O throughput than on GFS (note that I tested a beta version of GFS2 some months ago; things may be better now).
So my conclusion from those tests was that GFS is best with mbox, and the GFS2 beta with Maildir.
But, again, I haven't tested the recent GFS2 improvements.
Regards, Diego.
On Wed, Sep 24, 2008 at 9:18 PM, Diego Liziero diegoliz@gmail.com wrote:
> I've read somewhere that one of GFS2's goals was to improve performance for directory access with many files.
> I tested this by doing a simple ls in a directory containing many empty test files: on GFS it was _really_ slow, while the same ls on GFS2 with the same number of empty files is actually fast.
> But when I tested GFS2 with bonnie++ I got lower sequential I/O throughput than on GFS (note that I tested a beta version of GFS2 some months ago; things may be better now).
This seems true only for sequential write speed.
> So my conclusion from those tests was that GFS is best with mbox, and the GFS2 beta with Maildir.
> But, again, I haven't tested the recent GFS2 improvements.
Regards, Diego.
I launched bonnie++ on the same lvm2 partition formatted with different filesystems to test their speed. Note that I ran the test just once per filesystem, so the numbers should not be taken as precise; assume some margin of error.
With the clustered filesystems (gfs and gfs2) the test was run on a single box, but with the filesystem mounted on two nodes. Options used to format gfs and gfs2: "-r 2048 -p lock_dlm -j 4"
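(For anyone trying to reproduce this, the mkfs invocations implied by those options would look roughly like the following; the device path, cluster name, and lock-table name here are placeholders, not the actual test setup:)

```
# GFS  ("alpha" and "bench" are made-up cluster/lock-table names)
gfs_mkfs  -p lock_dlm -t alpha:bench -r 2048 -j 4 /dev/vg0/bench
# GFS2, same parameters
mkfs.gfs2 -p lock_dlm -t alpha:bench -r 2048 -j 4 /dev/vg0/bench
```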
The test box has two dual-core Opteron processors, 8 GB of RAM, two 4 Gb Fibre Channel HBAs, Gigabit Ethernet (for the distributed lock manager connection), and CentOS 5.2 x86_64 installed.
With this configuration it seems that GFS is faster than GFS2 when writing sequential blocks, and much faster at creating and deleting files; GFS2 seems to have faster read speeds.
Regards, Diego.
bonnie++ -s 16g (Version 1.03c)

                 ------Sequential Output------  --Sequential Input-  --Random-
                 -Per Chr- --Block-- -Rewrite-  -Per Chr- --Block--  --Seeks--
Filesystem  Size K/sec %CP K/sec %CP K/sec %CP  K/sec %CP K/sec %CP  /sec  %CP
ext2         16G 69230  95 149430 47 54849  25  71681 90 215568  54  473.3   1
xfs          16G 62828  94 135482 54 64010  28  71841 92 238351  51  632.3   2
ext3         16G 56485  93 115398 68 56051  32  73536 92 211219  48  552.0   2
gfs          16G 47079  98 124123 82 42651  53  65692 91 189533  65  431.5   3
gfs2         16G 40203  77  74620 53 52596  39  73187 93 226909  58  496.2   2

                 ------Sequential Create------  --------Random Create--------
                 -Create-- --Read--- -Delete--  -Create-- --Read--- -Delete--
           files  /sec %CP /sec %CP   /sec %CP   /sec %CP /sec %CP   /sec %CP
ext2          16  3810  99 +++++ +++ +++++ +++   2395  99 +++++ +++ 10099  99
xfs           16  2446  51 +++++ +++  1073  24   2019  59 +++++ +++   796  21
ext3          16  5597  77 +++++ +++ 16764  99  19450  99 +++++ +++ +++++ +++
gfs           16   882  40 +++++ +++  4408  82   1001  56 +++++ +++  2734  69
gfs2          16   457   0 +++++ +++   994   0    477   0 +++++ +++   995  29
On Sep 24, 2008, at 10:03 PM, Allen Belletti wrote:
> As best I can determine, the worst problems occur when certain users with very large Inboxes (~10k messages) receive new mail and their client looks up information about that message. GFS doesn't seem to efficiently handle the large directories that contain folders like this. As a result, lots of I/O ops are generated and performance suffers for everyone.
> I am beginning to wonder if it might be more efficient to revert to the old mbox format, with one file per folder (plus whatever indices are created). It seems that this ought to work better with GFS, which is geared toward smaller numbers of larger files. Is anyone on the list currently doing that? Alternately, any thoughts regarding tuning or other options would be appreciated.
One possibility would be to use dbox format with hashed directories, so for each mailbox it could create n directories in which to store the messages. Two problems here though:
- dbox code hasn't been tested all that much yet in the real world (but it works well in my stress tests)
- dbox doesn't yet support directory hashing, but it would be pretty easy to implement.
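(To illustrate what directory hashing of that sort might look like, here's a minimal sketch; the bucket count and function names are hypothetical, not Dovecot code:)

```python
# Hypothetical sketch of per-mailbox directory hashing: each message
# file goes into one of N_BUCKETS subdirectories chosen by a stable
# hash of its filename, so no single directory grows huge.
import hashlib

N_BUCKETS = 128  # illustrative bucket count, not a real Dovecot setting

def bucket_for(filename: str) -> str:
    # Stable hash -> bucket index -> zero-padded hex subdirectory name.
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    return format(int(digest, 16) % N_BUCKETS, "02x")

# A message would then live at <mailbox>/<bucket>/<filename>, e.g.:
print(bucket_for("1222277890.M12345P6789.host"))
```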
Hi All,
I wanted to follow up my own message from September now that I've got more information.
As of RHEL 5.3, GFS2 was finally advertised as "production ready" and the servers discussed below have been upgraded from GFS to GFS2. The difference is night and day. Essentially GFS2 has completely eliminated the long periods of heavy I/O load that were seen before. In addition, the user experience is markedly better.
For anyone who is considering something like this, feel free to contact me as I'll be glad to pass along whatever wisdom I've accumulated.
Thanks, Allen
Allen Belletti wrote:
> Hello All,
> We are using Dovecot 1.1.3 to serve IMAP on a pair of clustered Postfix servers which share a fiber array via the GFS clustered filesystem. This all works very well for the most part, with the exception that certain operations are so inefficient on GFS that they generate significant I/O load and hurt performance. We are using the Maildir format on disk. We're also using Dovecot's deliver from Postfix to handle local delivery.
> As best I can determine, the worst problems occur when certain users with very large Inboxes (~10k messages) receive new mail and their client looks up information about that message. GFS doesn't seem to efficiently handle the large directories that contain folders like this. As a result, lots of I/O ops are generated and performance suffers for everyone.
> I am beginning to wonder if it might be more efficient to revert to the old mbox format, with one file per folder (plus whatever indices are created). It seems that this ought to work better with GFS, which is geared toward smaller numbers of larger files. Is anyone on the list currently doing that? Alternately, any thoughts regarding tuning or other options would be appreciated.
> Thanks, Allen
> -- Allen Belletti allen@isye.gatech.edu 404-894-6221 Phone Industrial and Systems Engineering 404-385-2988 Fax Georgia Institute of Technology
participants (3)
- Allen Belletti
- Diego Liziero
- Timo Sirainen