I've been thinking about mountpoints recently. There have been a few problems related to them:
1. If dbox mails and indexes are on different filesystems and the index fs isn't mounted when a mailbox is accessed, Dovecot rebuilds the indexes from scratch. This changes UIDVALIDITY, which causes clients to redownload all mails, and all mails also show up as unread. Once the index fs gets mounted again, the UIDVALIDITY changes once more and clients redownload the mails yet again. What should happen instead is that Dovecot simply refuses to rebuild indexes while the index fs isn't mounted. This isn't as critical for mbox/maildir, but it's probably a good idea to do there as well.
2. If dbox's alternative storage isn't mounted and a mail stored there is accessed, Dovecot rebuilds the indexes, sees that all mails in the alt path are gone, and deletes them from the indexes as well. Once the alt fs is mounted again, the mails there won't come back without a manual index rebuild, and even then they have lost their flags and have new UIDs, causing clients to redownload them. So again, what should happen is that Dovecot doesn't rebuild indexes while the alt fs isn't mounted.
3. For dsync-based replication I need to keep a state for each mountpoint (online, offline, failover) to determine how to access users' mails.
So in the first two cases the main problem is: how does Dovecot know where a mountpoint begins? If the mountpoint is actually mounted, there's no problem, because there are functions to find it (e.g. by reading /etc/mtab). But how do you find a mountpoint that should exist but doesn't? On some OSes Dovecot could maybe read and parse /etc/fstab, but that doesn't exist on all OSes, and do all installations even have all of their filesystems listed there anyway? (They could be mounted from some startup script.)
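For the mounted case, a minimal sketch of that lookup on Linux would be to scan /etc/mtab with getmntent() and pick the longest mount directory that is a prefix of the path. This is only an illustration of the idea, not Dovecot's actual code (find_mount_root() is a made-up name, and other OSes would need getmntinfo() or similar):

    #include <stdio.h>
    #include <string.h>
    #include <mntent.h>

    /* Sketch: find the mountpoint containing the given path by picking the
       longest mnt_dir in /etc/mtab that is a prefix of the path. */
    static int find_mount_root(const char *path, char *root, size_t root_size)
    {
        struct mntent *m;
        FILE *f;
        size_t len, best_len = 0;

        f = setmntent("/etc/mtab", "r");
        if (f == NULL)
            return -1;
        while ((m = getmntent(f)) != NULL) {
            len = strlen(m->mnt_dir);
            /* mnt_dir must be the path itself, one of its parent
               directories, or "/" */
            if (strncmp(path, m->mnt_dir, len) != 0)
                continue;
            if (path[len] != '/' && path[len] != '\0' && len != 1)
                continue;
            if (len > best_len && len < root_size) {
                strcpy(root, m->mnt_dir);
                best_len = len;
            }
        }
        endmntent(f);
        return best_len > 0 ? 0 : -1;
    }

This only works for mountpoints that are currently mounted, which is exactly why the unmounted case needs the explicit list below.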
So, I was thinking about adding doveadm commands to explicitly tell Dovecot about the mountpoints it needs to care about. When no mountpoints are defined, Dovecot would behave as it does now.
doveadm mount add|remove <path>
- add/remove mountpoint
doveadm mount state [<path> [<state>]]
- get/set state of mountpoint (used by replication)
- if path isn't given, list the states of all mountpoints
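For example (the paths here are only illustrative):

    doveadm mount add /srv/mail/index
    doveadm mount state /srv/mail/index offline
    doveadm mount state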
The list of mountpoints is kept in /var/lib/dovecot/mounts. But because the dovecot directory is only accessible to root (and it's probably too much trouble to change that), there's another list in /var/run/dovecot/mounts. This one also contains the states of the mounts. When Dovecot starts up and can't find the mounts file in the rundir, it creates it from the vardir's mounts.
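The on-disk format isn't decided yet; the rundir file could be something as simple as one mountpoint and its state per line, e.g. (purely hypothetical):

    /srv/mail/index online
    /srv/mail/alt offline
    /proc ignore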
When a mail process notices that a directory is missing, it usually autocreates it. With mountpoints enabled, Dovecot first finds the root mountpoint for the directory. The mount root is stat()ed and so is its parent. If their device numbers are equal, the filesystem is currently unmounted, and Dovecot fails instead of creating a new directory. Similar logic is used to avoid doing a dbox rebuild if its alt dir is on a currently unmounted filesystem.
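A minimal sketch of that check (again just an illustration, not Dovecot's actual code):

    #include <stdbool.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    /* Sketch: a mount root whose device number equals its parent
       directory's device number is just a plain directory, i.e. nothing
       is currently mounted there. */
    static bool mount_root_is_mounted(const char *mount_root)
    {
        struct stat st_root, st_parent;
        char parent[4096];
        const char *p;
        size_t len;

        if (strcmp(mount_root, "/") == 0)
            return true; /* the root fs is always mounted */

        /* build the parent directory's path */
        p = strrchr(mount_root, '/');
        if (p == NULL)
            return false; /* expects an absolute path */
        len = (p == mount_root ? 1 : (size_t)(p - mount_root));
        if (len >= sizeof(parent))
            return false;
        memcpy(parent, mount_root, len);
        parent[len] = '\0';

        if (stat(mount_root, &st_root) < 0 ||
            stat(parent, &st_parent) < 0)
            return false; /* treat errors as "not mounted" */

        return st_root.st_dev != st_parent.st_dev;
    }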
The main problem I see with all this is: how do you make sysadmins remember to use these commands when they add or remove mountpoints? Perhaps the additions could be automatic at startup. Whenever Dovecot sees a new mountpoint, it's added; if an old mountpoint doesn't exist at startup, a warning is logged about it. Of course, many of the mountpoints aren't intended for mail storage. They could be hidden from the "mount state" list by setting their state to "ignore". Dovecot could also skip some of the common known mountpoints, such as those whose type is proc/tmpfs/sysfs.
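A rough sketch of that startup scan, where mounts_add() is a hypothetical helper that records a mountpoint unless it's already known, and the list of skipped types is only an example:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <mntent.h>

    /* hypothetical helper, declared elsewhere */
    void mounts_add(const char *mount_root);

    /* Sketch: scan the currently mounted filesystems at startup and
       record the ones that could contain mail storage. */
    static void mounts_autodiscover(void)
    {
        static const char *skip_types[] = { "proc", "sysfs", "tmpfs", NULL };
        struct mntent *m;
        FILE *f;
        unsigned int i;
        bool skip;

        f = setmntent("/etc/mtab", "r");
        if (f == NULL)
            return;
        while ((m = getmntent(f)) != NULL) {
            skip = false;
            for (i = 0; skip_types[i] != NULL; i++) {
                if (strcmp(m->mnt_type, skip_types[i]) == 0)
                    skip = true;
            }
            if (!skip)
                mounts_add(m->mnt_dir);
        }
        endmntent(f);
    }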
Thoughts?
On 30.1.2012, at 8.31, Timo Sirainen wrote:
The main problem I see with all this is: how do you make sysadmins remember to use these commands when they add or remove mountpoints? Perhaps the additions could be automatic at startup. Whenever Dovecot sees a new mountpoint, it's added; if an old mountpoint doesn't exist at startup, a warning is logged about it. Of course, many of the mountpoints aren't intended for mail storage. They could be hidden from the "mount state" list by setting their state to "ignore". Dovecot could also skip some of the common known mountpoints, such as those whose type is proc/tmpfs/sysfs.
I wonder how automounts would work with this.. Probably rather randomly..