Re: [Dovecot] I stream read - stale NFS file handle (reboot of server)
In the old days an NFS shared path had a static handle (i.e. a number), normally based on a number pulled out of the file system/inode. To fix (well, work around) a security issue, for about 10+ years now, when an NFS server reboots it generates a new random handle for the NFS share. (The server may generate a new random handle per mount request.)
The NFS stale handle error happens when the client is still using the old NFS handle; the only fix is a remount.
Some NFSv3 servers let you fix the handle id for an NFS share so it does not change over reboots. (This is how NFS clustering works; sometimes you need to install cluster software if the exportfs/share command does not let you hard-code the fsid by hand.)
The other option is to use NFSv4, which is designed to handle server reboots.
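On Linux, for instance, the handle can be pinned with the fsid= export option. A minimal sketch of an /etc/exports entry, assuming a hypothetical export path /srv/mail and client network 192.168.0.0/24 (check your server's exports(5) for the exact syntax):

```shell
# /etc/exports -- fsid=1 fixes the export's file handle across reboots,
# so clients do not see "stale NFS file handle" after the server restarts
/srv/mail  192.168.0.0/24(rw,sync,fsid=1)

# re-export after editing
exportfs -ra
```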
On Wed, 2010-03-17 at 01:59 +1100, Damon Atkins wrote:
> In the old days NFS Shared Path had a static handle (ie a number), normal based on some number pulled out of the file system/inode. To fix (well work around) a security issue, for about 10+ years now, when a NFS server reboots, it generates a new random handle for the NFS Share. (sever may generate a new random handle per mount request)
> The NFS Stale Handle happens when the client is still using the old NFS handle, the only fix is a remount
But that applies only to file handles that were opened before reboot, not to new fds opened after reboot, right? At least I can't see it working any other way.
> To fix (well work around) a security issue, for about 10+ years now, when a NFS server reboots, it generates a new random handle for the NFS Share. (sever may generate a new random handle per mount request)

I don't concur.
NFS is stateless and designed to survive server reboots (why would you have statd otherwise?). What you are describing is inode (generation number) randomization on the file system backing the NFS export.
You get those stale handles when someone on the client has a file on the mount open and the file gets deleted some other way (by another client, or directly on the server). Since NFS is stateless, the server knows nothing about the file being open. If the file were open on a local file system, the kernel wouldn't actually free the inode, because there's still a reference (albeit with no directory entry) to it. But NFS lacks this reference. So clients can work around this by converting unlinks into renames to .nfsXXXX names and sending an unlink to the server only on the last local close. Of course, this works only within a single client.
Just to make sure I remembered all this correctly, I confirmed with The Book: see the fourth paragraph of "The NFS Protocol" in Section 9.2 of McKusick/Bostic/Karels/Quarterman (in my edition it's on page 317).
Well, the gist of my setup is this: I have two Dovecot installs on two different machines, each running within a jail. The vmail directory is NFS-mounted via AMD on the host machine and exported to the jailed Dovecots via a nullfs mount. The two Dovecot servers both handle incoming mail, but only one of the machines actually services IMAP clients. I don't believe that the handful of users I have would have more than one computer logged into Dovecot at a time, and if they did it would only be on the mail01 server and never mail02.
I've tried restarting the jails and the problem has continued. I have not tried unmounting the nullfs mount or the NFS mount via AMD. I have another few jails running web servers. These web servers also use a nullfs mount (which is actually an NFS mount via the host machine) to get the content they serve to the reverse proxy, and they write their log files to the same nullfs NFS mount. Between the two machines there is about 180 Mbps of traffic coming off the NFS server. None of the web servers have issues getting the content or writing the log files.
I'm not sure if this helps at all, but it is how I have things set up.
On 17.3.2010, at 0.11, Joe wrote:
> I'm not sure if this helps at but it is how i have things set up.
Well, if you can't get it fixed on the OS side, you can always patch Dovecot. I'm just not sure if there's a way I could make this work well for everyone.

diff -r a1177c6cf8c7 src/deliver/deliver.c
--- a/src/deliver/deliver.c	Wed Jan 20 11:44:06 2010 +0200
+++ b/src/deliver/deliver.c	Wed Mar 17 00:13:57 2010 +0200
@@ -659,7 +659,7 @@
 	int fd;
 
 	path = t_str_new(128);
-	str_append(path, mail_user_get_temp_prefix(user));
+	str_append(path, "/tmp/deliver");
 	fd = safe_mkstemp(path, 0600, (uid_t)-1, (gid_t)-1);
 	if (fd == -1 && errno == ENOENT) {
 		dir = str_c(path);
On 3/17/2010 2:59 PM, Edgar Fuß wrote:
> These web servers also use the nullfs mount which is actually a NFS mount via the host machine to get the content that it serves to the reverse proxy.
But Web servers usually don't delete or rename files, do they?
Some can, depending on whether any of the sites use PHP's session handler. Generally speaking, none of the sites I host do any deleting/renaming of files. I should have mentioned, for what it's worth, that I also use cronolog to manage the Apache log files.
participants (4)
-
Damon Atkins
-
Edgar Fuß
-
Joe
-
Timo Sirainen