Multiple backends with NFSv4.1 (supports file locking): should work without Director, right?
Hi Dovecot community,
We're looking at running multiple Dovecot backend servers in parallel, all using the same shared NFSv4.1 mount to store mailboxes in Maildir format.
We've read in multiple places that running multiple backends against a shared NFS mount can result in issues like index file corruption. The standard solution seems to be the Director feature, or some kind of IP-based proxy/load balancer.
But: 1 - The Director feature will be removed in future free versions of Dovecot (https://dovecot.org/mailman3/archives/list/dovecot@dovecot.org/thread/ILA3C6...). 2 - NFSv4 and above support file locking (flock and fcntl, with flock emulated using fcntl). 3 - Dovecot does appear to use file locking, though we're unsure whether it does so everywhere, and in particular on index files.
Thus, we are wondering whether Director is still needed with NFSv4. Shouldn't it work without Director, thanks to file locking? Has anyone tried it? We suspect the documentation and the various threads on the subject may be outdated, written for NFSv3 and lower (no in-protocol file locking).
If it doesn't work, does anybody know why? Isn't file locking there precisely to handle concurrency?
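(For clarity, by file locking we mean POSIX fcntl byte-range locks - the primitive NFSv4 carries in-protocol, and what Dovecot's lock_method=fcntl setting uses. A minimal local sketch of the call we have in mind:)

```python
import fcntl
import tempfile

# Minimal sketch: take an exclusive POSIX (fcntl) byte-range lock,
# write inside the critical section, then release. On NFSv4 the same
# fcntl call is served by the protocol's built-in locking support.
with tempfile.NamedTemporaryFile(mode="w+") as f:
    fcntl.lockf(f, fcntl.LOCK_EX)   # blocks until the lock is granted
    f.write("exclusive update\n")   # critical section
    f.flush()
    f.seek(0)
    content = f.read()
    fcntl.lockf(f, fcntl.LOCK_UN)   # release

print(content, end="")              # -> exclusive update
```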
Thanks!
Hi Pierre,
When we tested NFSv4 a couple of years ago, we found that NFSv4 has a caching feature which delegates file caching to a specific client. This was a problem with the same share mounted on multiple servers: the contention exploded the load on the clients due to I/O waits, and in some cases crashed the Dovecot servers.
We didn't use Dovecot Director at that time, since NFSv3 behaved more nicely and just worked in our tests.
It seems that some NFSv4 mount flags exist that could mitigate this behaviour, making it resemble NFSv3, but we didn't test them.
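For the record, what we had in mind looks roughly like the fstab entry below (the server name and paths are made up, and the option names should be checked against your distribution's nfs(5) man page - again, we did not test these):

```
# Untested sketch: NFSv4.1 mount with client-side caching turned down,
# to make it behave more like an NFSv3 setup.
nfsserver:/export/mail  /var/vmail  nfs4  vers=4.1,noac,lookupcache=none  0  0
```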
On 5/19/23 17:21, pierre.alletru@gmail.com wrote:
dovecot mailing list -- dovecot@dovecot.org To unsubscribe send an email to dovecot-leave@dovecot.org
-- Best regards, Adrian Minta
+1, NFSv3 has always been more stable in our testing.
Will have to put it on the roadmap to run full testing again, but you know the old adage: if it ain't broke, don't fix it.. ;)
On 2023-05-19 08:23, Adrian Minta wrote:
-- "Catch the Magic of Linux..."
Michael Peddemors, President/CEO LinuxMagic Inc. Visit us at http://www.linuxmagic.com @linuxmagic A Wizard IT Company - For More Info http://www.wizard.ca "LinuxMagic" a Registered TradeMark of Wizard Tower TechnoServices Ltd.
604-682-0300 Beautiful British Columbia, Canada
This email and any electronic data contained are confidential and intended solely for the use of the individual or entity to which they are addressed. Please note that any views or opinions presented in this email are solely those of the author and are not intended to represent those of the company.
Thanks for the input!
Great to know that you got clusters working with at least some version of NFS without using Director. Were you using NLM (Network Lock Manager), dotlock, or something else for file locking with NFSv3?
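(For context, these are the NFS-related settings we have gathered so far from the Dovecot documentation - names as we understand them for Dovecot 2.3, so please correct us if any are wrong:)

```
# Settings the Dovecot NFS documentation suggests for shared storage
# (verify against doc.dovecot.org/configuration_manual/nfs/)
mmap_disable = yes       # don't mmap index files over NFS
mail_fsync = always      # flush writes so other backends see them
mail_nfs_storage = yes   # flush NFS caches for mail files
mail_nfs_index = yes     # same, when index files are on NFS too
lock_method = fcntl      # fcntl locking; dotlock is the fallback
```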
The delegation feature of NFSv4 that Adrian mentioned can be disabled (https://docs.oracle.com/cd/E19253-01/816-4555/rfsrefer-140/index.html#:~:tex....). Perhaps without it, things would run just as well as with NFSv3.
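(If the storage server is a Linux box running the kernel NFS server, our understanding - untested on our side - is that delegations can be switched off via the leases sysctl before nfsd starts:)

```
# /etc/sysctl.d/90-nfs-delegations.conf -- untested assumption:
# knfsd implements NFSv4 delegations on top of file leases,
# so disabling leases should disable delegations.
fs.leases-enable = 0
```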
On 20/05/2023 01:23, Adrian Minta wrote:
NFSv4, a dozen frontends to an EMC backend. With v4 we added "noac lookupcache=none" to the mount options in the very early days - not sure if they are still needed.
Otherwise, just like when using NFSv3: no problems, and we never used Director. Real (hardware) load balancers are actually smart, and exponentially more reliable and robust than server-based ones :)
-- Regards, Noel Butler
This Email, including attachments, may contain legally privileged
information, therefore at all times remains confidential and subject to
copyright protected under international law. You may not disseminate
this message without the authors express written authority to do so.
If you are not the intended recipient, please notify the sender then
delete all copies of this message including attachments immediately.
Confidentiality, copyright, and legal privilege are not waived or lost
by reason of the mistaken delivery of this message.
On EMC Unity there is a NAS server parameter that can be changed to disable NFSv4 delegations, using the following command:
svc_nas -param -facility nfsv4 -modify delegationsEnabled -value 0
On Sun, May 21, 2023 at 7:34 AM Noel Butler noel.butler@ausics.net wrote:
Nice to know; a similar option doesn't exist on VNXs, though.
On 22/05/2023 17:30, Adrian M wrote:
On EMC Unity there is a NAS server parameter that can be changed to disable NFSv4 delegations using the following command: svc_nas -param -facility nfsv4 -modify delegationsEnabled -value 0
-- Regards, Noel Butler
On 22/05/2023 22:33, Marc wrote:
real (hardware) load balancers are actually smart and exponentially more reliable and robust than server based :)
because no software runs on them, right ...
this statement here shows what a clueless newbie you are
-- Regards, Noel Butler
On 22/05/2023 22:36, Marc wrote:
On EMC Unity there is a NAS server parameter that can be changed to
Maybe a bit too off-topic, but why EMC and not something like Ceph? You rarely see any interesting comparisons online (except of course the stupid ones just listing features).
there is a reason these things cost more than you'll earn in a year.
second post in a row showing your lack of knowledge of actual networks. Before you make an even bigger ass out of yourself, how about getting some experience in the real world, or spending some time researching from actual information - not blogs.
-- Regards, Noel Butler
Since when has there ever been a relationship between money and being good, or money and intelligence? Second, I have a hard time believing there are still companies out there that hardwire millions of logic circuits to create a load balancer that meets current-day standards without the use of any software, with updates shipped as circuit boards or not at all because it was perfect from the start. Noel, the only dumb ass here seems to be you. You are certainly not a good advocate for the EMC product, compared to institutions like NASA and CERN that have >4000 drives in Ceph solutions.
On 23/05/2023 17:23, Marc wrote:
there is a reason these things cost more than you'll earn in a year.
second post in a row showing your lack of knowledge in actual networks, before you make an even bigger ass out of yourself, how about getting some experience in the real world or spending some time researching from actual information - not blogs
Since when has there ever been a relationship between money and being good, money and intelligence etc. 2nd I have a hard time
welcome to reality; time for you to jump back into your short narrow-minded bubble, if that's your belief.
believing that are still companies out there that hardwire millions of logic circuits to create a load balancer that meets current day standards without the use of any software, and the
perhaps open your dark curtains some day; but since when do companies have to explain to a troll like you why they do things the way they do?
Noel the only dumb ass here seems to be you. You are certainly not a good advocate for the EMC product compared to institutions like NASA and CERN that have >4000 drives in ceph solutions.
oh, I hope you're happy, I'm gonna lose a lot of sleep over that piss-poor, pathetic attempt to disparage me ... n o t ... better people have tried and failed over the past 30 years.
final words: I don't care how NASA, CERN or whoever run their networks; Christ, I'm not even in the same country as them, so why would I care? And the fact that they have a name most, but not all, would recognise means nothing. Microsoft is a big name too, as is Google - bigger and better known - and they have made some monumental fuck-ups. I get it, you're a fangirl, and you can never reason with people like you.
the end.
Just my experience: you can use multiple IMAP proxies in front of the real IMAP server, which has powerful hardware - a strong CPU, lots of RAM, fast disks and a high-throughput network.
Because an IMAP proxy can offload the clients' connections, and reduce the number of connections to the backend server (the real IMAP server) via long-lived connections, it should improve the performance of the whole cluster a lot.
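With Dovecot's own proxy support, a minimal sketch looks like the passdb below (the backend hostname is made up, and in a real setup the host would usually come from a per-user lookup rather than a static entry):

```
# Sketch: accept the login on the proxy and forward the session
# to one fixed backend, keeping long-lived connections to it.
passdb {
  driver = static
  args = proxy=y host=backend1.example.com nopassword=y
}
```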
Thanks.
-- sent from https://dkinbox.com/
Thanks Tom. Are you referring to any proxy software in particular (e.g. Dovecot proxy, Nginx, ...)? Do you mean having a single proxy in front of all the backends?
We'd prefer to avoid that if possible, as it makes the proxy a single point of failure. But it does seem to be the recommended way to run a cluster (https://doc.dovecot.org/configuration_manual/nfs/#clustering-without-directo...).
I ran NFSv3 with Dovecot using dotlock and then NLM locking since 2008, never had an issue, using Maildir.
I moved to Director around 2015, and then to mdbox to fix several performance issues.
I moved to NFSv4 about 2 years ago, but am still using Director and mdbox.
For me to move without Director would require a user-aware load balancer, as my clients log in from many IPs at the same time.
But I have been thinking of removing NFS, and maybe the Directors, and just handling it directly on the NFS servers, as the move to mdbox and everything else is really trimming my requirements.
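A "user aware" balancer of that kind boils down to hashing the login name, so every connection for a user lands on the same backend whatever IP it comes from - a toy sketch (the backend names are made up):

```python
import hashlib

BACKENDS = ["backend1", "backend2", "backend3"]  # hypothetical backend names

def backend_for(user: str) -> str:
    # Stable hash of the username -> always the same backend for a user,
    # which is what keeps one user's index files from being written by
    # two servers at once.
    h = int(hashlib.sha1(user.encode()).hexdigest(), 16)
    return BACKENDS[h % len(BACKENDS)]

print(backend_for("alice@example.com"))
```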
Quoting pierre.alletru@gmail.com:
participants (10)
- Adrian M
- Adrian Minta
- D D
- Eirik Rye
- Marc
- Michael Peddemors
- Noel Butler
- Patrick Domack
- pierre.alletru@gmail.com
- Tom Reed