The end of Dovecot Director?

hi at zakaria.website
Thu Oct 27 10:45:57 UTC 2022


On 2022-10-27 08:31, William Edwards wrote:
> 
>> On 27 Oct 2022 at 04:25, Timo Sirainen <timo at sirainen.com> 
>> wrote:
>> 
>> Director never worked especially well, and for most use cases it's 
>> just unnecessarily complex. I think usually it could be replaced with:
>> 
>> * Database (sql/ldap/whatever) containing user -> backend table.
>> * Configure Dovecot proxy to use this database as passdb.
>> * For HA, change dovemon to update the database when a backend is 
>> down, moving users elsewhere.
>> * When a backend comes back up, move users onto it: set the 
>> delay_until extra field for the user in the passdb to 5 seconds into 
>> the future and kick the user on its old backend (e.g. via the doveadm 
>> HTTP API).
>> 
>> All this can be done with existing Dovecot. Should be much easier to 
>> build a project doing this than forking director.
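
To make those last two bullets concrete: a minimal Python sketch, 
assuming a user_backend table with invented column names (sqlite3 or 
any DB-API connection stands in for the shared database). delay_until 
is a real passdb extra field; the exact "kick" parameter name should be 
checked against your Dovecot version's doveadm HTTP API docs.

import base64
import time

import requests

DOVEADM_PORT = 8080         # hypothetical doveadm HTTP API port
DOVEADM_API_KEY = "secret"  # doveadm_api_key from dovecot.conf

def move_user(db, user, new_backend_host, old_backend_host):
    """Repoint a user at a new backend and kick their old sessions."""
    # Update the user -> backend row that the proxy's passdb query
    # reads, with delay_until set 5 seconds into the future as suggested.
    db.execute(
        "UPDATE user_backend SET backend = ?, delay_until = ? "
        "WHERE username = ?",
        (new_backend_host, int(time.time()) + 5, user))
    db.commit()
    # Kick the user's sessions on the old backend via the doveadm HTTP API.
    auth = base64.b64encode(DOVEADM_API_KEY.encode()).decode()
    requests.post(
        "http://%s:%d/doveadm/v1" % (old_backend_host, DOVEADM_PORT),
        json=[["kick", {"user": user}, "tag1"]],
        headers={"Authorization": "X-Dovecot-API " + auth},
        timeout=10)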
> 
> This is my train of thought as well. I believe the following would 
> suffice for most setups.
> 
> A database with:
> 
> - Current vhost count per backend server. Alternatively, count the 
> temporary user mappings.
> - Backend servers.
> - Temporary user mappings between user and backend server.
> 
> This database is accessible by all Dovecot proxies, in case there are 
> multiple.
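
As a rough illustration, the schema could be as small as this (sqlite3 
stands in here for the shared database; all names are invented):

import sqlite3

db = sqlite3.connect("director.db")  # stand-in for a shared SQL cluster
db.executescript("""
CREATE TABLE IF NOT EXISTS backends (
    id          INTEGER PRIMARY KEY,
    host        TEXT    NOT NULL,            -- address we proxy to
    vhost_count INTEGER NOT NULL DEFAULT 0,  -- current assignment count
    up          INTEGER NOT NULL DEFAULT 1   -- maintained by the monitor
);
CREATE TABLE IF NOT EXISTS user_mappings (
    username   TEXT PRIMARY KEY,             -- temporary user -> backend
    backend_id INTEGER NOT NULL REFERENCES backends(id),
    created_at INTEGER NOT NULL              -- to expire stale rows
);
""")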
> 
> Steps when receiving a login:
> 
> - Check if a temporary user mapping exists.
> - If so, proxy to the backend server in the temporary mapping. (To do: 
> clean up mappings.)
> - If not, pick the backend server with the lowest vhost count, create a 
> temporary mapping, then increase the vhost count of the chosen backend 
> server.
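
A minimal sketch of those login steps, assuming the tables from the 
previous sketch:

import time

def pick_backend(db, username):
    """Return the backend host to proxy this login to."""
    row = db.execute(
        "SELECT b.host FROM user_mappings m "
        "JOIN backends b ON b.id = m.backend_id "
        "WHERE m.username = ? AND b.up = 1", (username,)).fetchone()
    if row:
        return row[0]  # existing temporary mapping wins
    # No usable mapping: pick the backend with the lowest vhost count.
    backend_id, host = db.execute(
        "SELECT id, host FROM backends WHERE up = 1 "
        "ORDER BY vhost_count LIMIT 1").fetchone()
    db.execute(
        "INSERT OR REPLACE INTO user_mappings "
        "(username, backend_id, created_at) VALUES (?, ?, ?)",
        (username, backend_id, int(time.time())))
    db.execute("UPDATE backends SET vhost_count = vhost_count + 1 "
               "WHERE id = ?", (backend_id,))
    db.commit()
    return host

With multiple proxies, the lookup and insert would have to run in one 
transaction (or as an atomic upsert) so two proxies cannot map the same 
user to different backends at the same time.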
> 
> A monitoring service marks backend servers up or down, e.g. by checking 
> the port that we proxy to on each backend server. When a backend server 
> is set to down, kick its users to force a reconnection. (Is that how 
> Director ‘moves’ users?)
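
Sketching that monitor under the same assumptions; the caller would 
kick the returned users, e.g. via the doveadm HTTP API as above:

import socket

def check_backend(db, backend_id, host, port=143):
    """Mark a backend up/down based on whether its proxied port
    accepts TCP connections."""
    try:
        with socket.create_connection((host, port), timeout=5):
            up = 1
    except OSError:
        up = 0
    db.execute("UPDATE backends SET up = ? WHERE id = ?", (up, backend_id))
    kicked = []
    if not up:
        # Users mapped to this backend must reconnect elsewhere: drop
        # their mappings so the next login picks a healthy backend.
        kicked = [r[0] for r in db.execute(
            "SELECT username FROM user_mappings WHERE backend_id = ?",
            (backend_id,))]
        db.execute("DELETE FROM user_mappings WHERE backend_id = ?",
                   (backend_id,))
    db.commit()
    return kicked  # kick these users to force reconnection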

Here is my alternative input as well, using a database cluster or a 
synced file.

Create a connection mappings table in the database cluster, where each 
row contains a user ID, a backend ID, a frontend ID and an agent hash; 
alternatively, a mappings file containing the same information, synced 
across all servers.
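
A sketch of such a table, with invented column names (sqlite3 again 
stands in for the database cluster or synced file):

import sqlite3

db = sqlite3.connect("mappings.db")  # stand-in for the cluster/file
db.executescript("""
CREATE TABLE IF NOT EXISTS connection_mappings (
    user_id      TEXT    NOT NULL,
    backend_id   INTEGER NOT NULL,
    frontend_id  INTEGER NOT NULL,           -- proxy that took the client
    agent_hash   TEXT,                       -- NULL until set post-login
    soft_removed INTEGER NOT NULL DEFAULT 0  -- logout marks, not deletes
);
""")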

Support multiple simultaneous mappings per user by way of the agent 
hash, which is useful e.g. when a user runs client apps on several 
devices. In the IMAP proxy, on post-login requests, perhaps update the 
agent hash of the first row that matches the frontend and user ID but 
has no hash yet.
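
One possible reading of that post-login update, as a sketch:

def record_agent_hash(db, user_id, frontend_id, new_hash):
    """Attach the agent hash to the first matching row without one."""
    db.execute(
        "UPDATE connection_mappings SET agent_hash = ? "
        "WHERE rowid = (SELECT rowid FROM connection_mappings "
        "               WHERE user_id = ? AND frontend_id = ? "
        "                 AND agent_hash IS NULL LIMIT 1)",
        (new_hash, user_id, frontend_id))
    db.commit()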

Create a service on each backend that monitors login and logout 
entries. On login, add a row with the relevant user and frontend to the 
mappings table/file. On logout, rather than deleting, mark one matching 
entry as soft-removed, excluding entries with an unknown agent hash.
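
Both hooks sketched against the assumed schema above:

def on_login(db, user_id, backend_id, frontend_id):
    # New session: record which backend/frontend served this user.
    db.execute(
        "INSERT INTO connection_mappings "
        "(user_id, backend_id, frontend_id) VALUES (?, ?, ?)",
        (user_id, backend_id, frontend_id))
    db.commit()

def on_logout(db, user_id, backend_id):
    # Soft-remove one matching entry; rows whose agent hash is still
    # unknown (NULL) are excluded.
    db.execute(
        "UPDATE connection_mappings SET soft_removed = 1 "
        "WHERE rowid = (SELECT rowid FROM connection_mappings "
        "               WHERE user_id = ? AND backend_id = ? "
        "                 AND agent_hash IS NOT NULL AND soft_removed = 0 "
        "               LIMIT 1)",
        (user_id, backend_id))
    db.commit()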

In the load balancing solution, for SMTP/IMAP connections, use perhaps 
a Lua script to check the mappings in the database or file and find 
which backend the user was logged in to. Alongside this, generate a 
user agent hash, perhaps using base64 encoding, to locate the exact 
client connection's backend row in the mappings, where several entries 
might be present, and proxy the incoming request to that backend. If 
the same backend is reached with the same user agent hash, clear the 
soft-removed flag. If there are no mappings, use the normal load 
balancing method; the mappings will then be created automatically by 
the post-login requests.
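
The same lookup logic sketched in Python rather than Lua, to stay 
consistent with the sketches above (what identifies the client and 
feeds the agent hash is an assumption):

import base64
import hashlib

def make_agent_hash(client_id: str) -> str:
    # Base64-encode a digest of whatever identifies the client device.
    return base64.b64encode(
        hashlib.sha256(client_id.encode()).digest()).decode()

def route(db, user_id, client_id):
    """Return the backend to proxy to, or None to fall back to normal
    load balancing (post-login will then create the mapping)."""
    h = make_agent_hash(client_id)
    row = db.execute(
        "SELECT backend_id, soft_removed FROM connection_mappings "
        "WHERE user_id = ? AND agent_hash = ? "
        "ORDER BY soft_removed LIMIT 1", (user_id, h)).fetchone()
    if row is None:
        return None
    backend_id, soft_removed = row
    if soft_removed:
        # Same backend, same user agent hash: clear the soft removal.
        db.execute(
            "UPDATE connection_mappings SET soft_removed = 0 "
            "WHERE user_id = ? AND agent_hash = ?", (user_id, h))
        db.commit()
    return backend_id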

Zakaria.

