Replication to wrong mailbox
It has now happened twice that replication created folders and mails in the wrong mailbox :(
Here's the architecture we use:
- 2 Dovecot (2.2.32) backends in two different datacenters replicating via a VPN connection
- Dovecot directors in both datacenters talk to both backends, with a vhost_count of 100 for the local backend vs. 1 for the remote one
- backends use a proxy dict via a Unix domain socket and socat to talk via TCP to a dict on a different server (Kubernetes cluster); a rough sketch of that bridge follows below this list
- backends have a local SQLite userdb for iteration (it also contains the home directories, as a userdb that does iteration only is not possible)
- serving around 7000 mailboxes in roughly 200 different domains
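To make the socat part of the picture concrete, the bridge on each backend looks conceptually like the following sketch; the dict hostname, port and socket options here are placeholders, not our real values:

  # hostname, port and socket options are placeholders
  socat \
      UNIX-LISTEN:/var/run/dovecot_auth_proxy/socket,fork,mode=0600,user=dovecot \
      TCP:dict.example.internal:2001

The Unix socket path is the one referenced by the proxy dict uri in the configuration excerpt further down.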
Everything works as expected until the dict is not reachable, e.g. due to a server failure or a planned reboot of a node of the Kubernetes cluster. In that situation it can happen that some requests are not answered, even with Kubernetes running multiple instances of the dict. I can only speculate about what happens then: it seems the connection failure to the remote dict is not handled correctly and leads to a situation in which the last mailbox/home directory is used for the replication :(
When it happened the first time, we attributed it to the fact that the SQLite database contained no home directory information at that time, which we fixed afterwards. This first incident (a server failure) lasted a couple of minutes and led to many mailboxes containing mostly folders, but also some newly arrived mails, belonging to other mailboxes/users. We could only resolve that situation by rolling back to a ZFS snapshot from before the downtime.
The second time was last Friday night, during a (much shorter) reboot of a Kubernetes node, and led to only a single mailbox containing folders and mails of other mailboxes. That was verified by looking at the timestamps of directories below $home/mdbox/mailboxes and files in $home/mdbox/storage. I cannot tell whether adding the home directories to the SQLite database or the shorter duration of the failure limited the wrong replication to a single mailbox.
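For anyone who wants to do a similar check, something along these lines works; the mailbox path and the cut-off time are placeholders:

  # path and cut-off time are placeholders for the affected mailbox and incident window
  find /var/vmail/example.org/someuser/mdbox/mailboxes -maxdepth 1 -type d -newermt '2017-10-27 22:00' -ls
  find /var/vmail/example.org/someuser/mdbox/storage -type f -newermt '2017-10-27 22:00' -ls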
Can someone with more knowledge of the Dovecot code please check/verify how replication deals with failures in the proxy dict? I'm of course happy to provide more information about our configuration if needed.
Here is an excerpt of our configuration (full doveconf -n is attached):
passdb {
  args = /etc/dovecot/dovecot-dict-master-auth.conf
  driver = dict
  master = yes
}
passdb {
  args = /etc/dovecot/dovecot-dict-auth.conf
  driver = dict
}
userdb {
  driver = prefetch
}
userdb {
  args = /etc/dovecot/dovecot-dict-auth.conf
  driver = dict
}
userdb {
  args = /etc/dovecot/dovecot-sql.conf
  driver = sql
}

dovecot-dict-auth.conf:
  uri = proxy:/var/run/dovecot_auth_proxy/socket:backend
  password_key = passdb/%u/%w
  user_key = userdb/%u
  iterate_disable = yes

dovecot-dict-master-auth.conf:
  uri = proxy:/var/run/dovecot_auth_proxy/socket:backend
  password_key = master/%{login_user}/%u/%w
  iterate_disable = yes

dovecot-sql.conf:
  driver = sqlite
  connect = /etc/dovecot/users.sqlite
  user_query = SELECT home, NULL AS uid, NULL AS gid FROM users WHERE userid = '%n' AND domain = '%d'
  iterate_query = SELECT userid AS username, domain FROM users
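As an illustration, a single lookup against that SQLite userdb can be tested directly with the sqlite3 command line tool; the user and domain below are just example values:

  # 'alice' and 'example.org' are example values
  sqlite3 /etc/dovecot/users.sqlite \
      "SELECT home FROM users WHERE userid = 'alice' AND domain = 'example.org';"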
--
Ralf Becker
EGroupware GmbH [www.egroupware.org]
Handelsregister HRB Kaiserslautern 3587
Managing directors: Birgit and Ralf Becker
Leibnizstr. 17, 67663 Kaiserslautern, Germany
Phone +49 631 31657-0
Does no one have any idea?
Replication into the wrong mailboxes caused by an unavailable proxy dict backend is a serious privacy and/or security problem!
Ralf
Can you somehow reproduce this issue with auth_debug=yes and mail_debug=yes and provide those logs?
Aki
Hi Aki,
On 02.11.17 at 09:57, Aki Tuomi wrote:
Can you somehow reproduce this issue with auth_debug=yes and mail_debug=yes and provide those logs?
I will try, now that I know someone will look at the logs.
It might take a couple of days. I'll come back once I have the logs.
Ralf
On 30 Oct 2017, at 11.05, Ralf Becker rb@egroupware.org wrote:
[...] it seems the connection failure to the remote dict is not handled correctly and leads to a situation in which the last mailbox/home directory is used for the replication :(
It sounds to me like a userdb lookup changes the username during a dict failure. Although I can't really think of how that could happen. The only thing that comes to my mind is auth_cache, but in that case I'd expect the same problem to happen even when there aren't dict errors.
For testing you could see if it's reproducible with:
- get a random username
- run doveadm user <user>
- verify that the result contains the same input user
Then do that rapidly in a loop and restart your test Kubernetes once in a while.
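Roughly something like this as a sketch; a users.txt file with one username per line is just an assumption, use whatever user source you have:

  # rough sketch: users.txt is assumed to contain one username per line
  while true; do
      user=$(shuf -n 1 users.txt)
      out=$(doveadm user "$user" 2>&1)
      if ! printf '%s\n' "$out" | grep -qF "$user"; then
          echo "$(date '+%F %T') MISMATCH for $user:"
          printf '%s\n' "$out"
      fi
      sleep 0.1
  done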
Hi Timo,
On 02.11.17 at 10:34, Timo Sirainen wrote:
It sounds to me like a userdb lookup changes the username during a dict failure. Although I can't really think of how that could happen.
Me neither.
Users are in multiple MariaDB databases on a Galera cluster. We have no problems or unexpected changes there.
The dict is running in multiple instances, but that does not guarantee that no single request fails.
OK, I'll give that a try. It would be a lot easier than testing the whole replication setup.
Ralf
participants (3): Aki Tuomi, Ralf Becker, Timo Sirainen