[Dovecot] Roadmap to future
v1.1.0 is finally getting closer, so it's time to start planning what happens after it. Below is a list of some of the larger features I'm planning to implement. I'm not yet sure in which order, so I think it's time to ask again: what features would companies be willing to pay for?
Sponsoring Dovecot development gets you:
- listed in Credits on www.dovecot.org
- listed in AUTHORS file
- you can tell me how you want to use the feature and I'll make sure that it's possible (within reasonable limits)
- you can change the order in which features are implemented (more or less)
As you can see below, some features are easier to implement when other features are implemented first. Several of them depend on v2.0 master rewrite, so I'm thinking that maybe it's time to finally finish that first.
I know several companies have requested replication support, but since it kind of depends on several other features, it might be better to leave that for v2.1. But of course sponsoring development of those features that replication depends on gets you the replication support faster. :)
I'm still trying to go to school (and this year I started taking CS classes as well as biotech classes..), so it might take a while for some features to get implemented. I'm anyway hoping for a v1.2 or v2.0 release before next summer.
So, the list..
Implemented, but slightly buggy so not included in v1.1 (yet):
- Mailbox list indexes
- THREAD REFERENCES indexes
- THREAD X-REFERENCES2 algorithm where threads are sorted by their latest message instead of the thread root message
Could be implemented for v1.2:
- Shared mailboxes and ACLs http://www.dovecot.org/list/dovecot/2007-April/021624.html
- Virtual mailboxes: http://www.dovecot.org/list/dovecot/2007-May/022828.html
- Depends on mailbox list indexes
dbox plans, could be implemented for v1.2:
- Support for single instance attachment storage
- Global dbox storage so copying messages from one mailbox to another involves only small metadata updates
- Support some kind of hashed directories for storing message files, instead of storing everything in a single directory.
- Finish support for storing multiple messages in a file
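Single instance attachment storage is usually built on content addressing: the attachment's hash becomes its storage name, so saving the same attachment twice yields only one file. A rough sketch of the idea (class and directory names are made up; this is not dbox's actual layout):

```python
import hashlib
import os


class AttachmentStore:
    """Content-addressed store: identical attachments share one file."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def save(self, data):
        # The SHA-256 of the content is the file name, so saving the
        # same bytes a second time is a no-op returning the same ref.
        ref = hashlib.sha256(data).hexdigest()
        path = os.path.join(self.root, ref)
        if not os.path.exists(path):
            with open(path, "wb") as f:
                f.write(data)
        return ref

    def load(self, ref):
        with open(os.path.join(self.root, ref), "rb") as f:
            return f.read()
```

Messages would then record only the ref, and copying a message between mailboxes never duplicates the attachment bytes.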
v2.0 master/config rewrite
It's mostly working already. One of the larger problems left is how to handle plugin settings. Currently Dovecot just passes everything in the plugin {} section to mail processes without doing any checks. I think plugin settings should be checked just as well as the main settings. Also, plugins should be able to create new sections instead of having to use plain key=value settings for everything.
v2.0 currently runs a Perl script on Dovecot's source tree to find settings from different places and then creates a single all-settings.c for the config process. This obviously doesn't work for out-of-tree plugins, so there needs to be some other way for them to pass their allowed settings. There are two possibilities:
a) Read them from separate files. This is a bit annoying, because it's no longer possible to distribute simple plugin.c files and have them compiled into plugin.so files.
b) Read them from the plugin itself. This would work by loading the plugin and then reading the variable containing the configuration. I'm a bit worried about the security implications though, because a plugin could execute code while it's being loaded. But I guess a new unprivileged process could be created for doing this. Of course admins normally won't run any user-requested plugins, so this might not be a real issue anyway. :)
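One way the config process could check plugin settings is for each plugin to register a schema of its allowed keys, after which unknown or malformed settings are rejected early. A hypothetical sketch (the function names and the quota schema are illustrative, not Dovecot's API):

```python
# Registry of per-plugin setting schemas, filled at plugin load time.
REGISTERED = {}


def settings_register(plugin, schema):
    """schema maps setting name -> type used to parse/validate it."""
    REGISTERED[plugin] = schema


def check_plugin_settings(plugin, pairs):
    """Validate raw key=value pairs from the plugin {} section."""
    schema = REGISTERED.get(plugin)
    if schema is None:
        raise ValueError("unknown plugin: %s" % plugin)
    checked = {}
    for key, value in pairs.items():
        if key not in schema:
            raise ValueError("unknown setting: %s.%s" % (plugin, key))
        checked[key] = schema[key](value)  # e.g. int("1024") -> 1024
    return checked
```

With this, a typo in dovecot.conf's plugin {} section would fail at config parse time instead of being silently ignored.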
Index file optimizations
Since these are not backwards compatible changes, the major version number should be increased whenever these get implemented. So I think these should be combined with v2.0 master/config rewrite. I'd rather not release v3.0 soon after v2.0 just for these..
- dovecot.index.log: Make it take less space! Get rid of the current "extension introduction" records, which are large and written over and over again. Compress UID ranges using methods similar to what Squat uses now. Try all kinds of other methods to reduce space usage.
Write transaction boundaries, so the reader can process entire transactions at a time. This makes some things easier, such as avoiding NFS data cache flushes and implementing replication, and it also allows lockless writes using O_APPEND.
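The UID range compression mentioned above could, in its simplest form, turn a sorted UID list into (start, count) ranges. A small illustrative sketch (the real on-disk encoding would of course be binary and more compact):

```python
def compress_uids(uids):
    """Compress a sorted UID list into (start, count) ranges."""
    ranges = []
    for uid in uids:
        if ranges and uid == ranges[-1][0] + ranges[-1][1]:
            # UID extends the previous run; grow its count.
            ranges[-1] = (ranges[-1][0], ranges[-1][1] + 1)
        else:
            ranges.append((uid, 1))
    return ranges


def expand_uids(ranges):
    """Inverse of compress_uids()."""
    return [start + i for start, count in ranges for i in range(count)]
```

Since mailbox UIDs are mostly contiguous, long runs collapse to a single pair, which is where the space saving comes from.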
- dovecot.index: Instead of keeping a single message record array, split it into two: A compressed list of message UIDs and array containing the rest of the data. This makes the file smaller and keeping UIDs separate allows better CPU cache (and maybe disk I/O) utilization for UID -> sequence lookups. At least this is the theory, I should benchmark this before committing to this design. :)
There could also be a separate expunged UIDs list. UIDs existing in there would be removed from existing UIDs list, but it wouldn't affect calculating offsets to record data in the file. So while currently updating dovecot.index file after expunges requires recreating the whole file, with this method it would be possible to just write 4 bytes to the expunged UIDs list. The space reserved for expunged UIDs list would be pretty small though, so when it gets full the file would be recreated.
- dovecot.index.cache: Better disk I/O and CPU cache utilization by keeping data close together in the file when it's commonly accessed together. For example message sizes and dates for all messages could be close to each other to optimize SORTing by message size/date. If clients use conflicting access patterns, the data could either be duplicated to optimize both patterns, or just left slower for the other pattern.
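The dovecot.index split proposed above - a UID list kept apart from the record data, plus a small expunged-UID list so an expunge only appends 4 bytes instead of rewriting the file - could look roughly like this in-memory model (illustrative only; the real files would be binary):

```python
import bisect


class UidIndex:
    """Sorted UID list kept separate from record data, plus an
    expunged-UID list that doesn't shift record offsets."""

    def __init__(self, uids):
        self.uids = list(uids)   # all UIDs ever written, sorted
        self.expunged = set()

    def expunge(self, uid):
        # The "write 4 bytes" case: record offsets stay valid because
        # the main UID list is untouched.
        self.expunged.add(uid)

    def uid_to_seq(self, uid):
        """UID -> 1-based message sequence, None if not found."""
        if uid in self.expunged:
            return None
        i = bisect.bisect_left(self.uids, uid)
        if i == len(self.uids) or self.uids[i] != uid:
            return None
        # Sequence = position among the non-expunged UIDs.
        alive_before = sum(1 for u in self.uids[:i] if u not in self.expunged)
        return alive_before + 1
```

When the expunged list fills up, the real implementation would recreate the file, merging the expunges back in, as described above.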
Deliver / LMTP server
Currently deliver parses dovecot.conf using its own parser. This has caused all kinds of problems. v2.0's master rewrite helps with this, because deliver can then just ask for the configuration from the config process the same way as other processes do.
Another problem is that people who use multiple user IDs have had to make deliver setuid-root. I really hate this, but currently there's no better way. A better fix would be to use an LMTP server instead. There could be either a separate LMTP client, or deliver could support the LMTP protocol as well. I'm not sure which one is better.
How would LMTP server then get around the multiple user IDs issue? There are two possibilities:
a) Run as root and temporarily set effective UID to the user whose mails we are currently delivering. This has the problem that if users have direct access to their mailboxes and index files, the mailbox handling code has to be bulletproof to avoid regular users gaining root privileges by doing special modifications to index files -- possibly at the same time as the mail is being delivered to exploit some race condition with mmaped index files..
b) Create a child process for each different UID and drop its privileges completely. If all users have different UIDs, this usually means forking a new process for each mail delivery, making the performance much worse than with method a). It would anyway be possible to delay destroying the child processes in case a new message comes in for the same UID. This would probably mostly help only when UIDs are shared by multiple users, such as each domain having its own UID.
I think I'll implement both and let admin choose which method to use.
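Method b)'s per-UID children with delayed destruction can be modeled without real fork()/setuid() calls; this sketch only counts how often a new child would actually be needed (the pool and worker objects are stand-ins for dropped-privilege child processes):

```python
class DeliveryPool:
    """Keeps one delivery worker per UID alive for reuse, so repeated
    deliveries to the same UID don't pay the fork cost every time."""

    def __init__(self):
        self.workers = {}   # uid -> worker kept alive for reuse
        self.forks = 0      # how many children we had to create

    def deliver(self, uid, message):
        worker = self.workers.get(uid)
        if worker is None:
            self.forks += 1        # real code: fork() + setuid(uid)
            worker = []
            self.workers[uid] = worker
        worker.append(message)     # real code: hand message to child
```

As the text says, reuse pays off mostly when one UID covers many users (e.g. one UID per domain): two deliveries to the same UID cost one fork, not two.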
This same problem exists for several other things besides the deliver/LMTP server. For example method a) is already implemented for the expire plugin's expire-tool in v1.1. So this code needs to be somewhat generic so it won't have to be duplicated in multiple places.
All of this pretty much requires the new v2.0 master rewrite. Adding these to v1.x would be ugly.
Proxying
These could be implemented for v1.2.
Log in normally (no proxying) if destination IP is the server itself.
Support for per-namespace proxying:
namespace public {
  prefix = Public/
  location = proxy:public.example.org
}
There are two choices for this: Dummy IMAP proxying or handling it with a mail storage backend. Dummy IMAP proxying would just let the remote server whose mailbox is currently selected handle most of the commands and their replies. I'm just afraid that this will become problematic in future.
For example if the proxying server wants to send the client an event, such as "new message was added to this other mailbox", it would have to be sure that the remote server isn't currently in the middle of sending something. And the only way to be sure would be to parse its input, which in turn would make the proxying more complex. Then there would be problems if the servers support different IMAP extensions. And maybe some extensions will require knowing the state of multiple mailboxes, which will make handling them even more complex if the mailboxes are on different servers.
Implementing this as a mail storage backend would mean that the proxying server parses all the IMAP commands internally and uses Dovecot's mail storage API to process them. The backend would then request data from the remote server when needed and cache it internally to avoid asking for the same data over and over again. Besides caching, this would mean that there are no problems with extensions, since Dovecot handles them the exact same way as for other backends, requiring no additional code.
To get better performance the storage backend would have to be smarter than most other Dovecot backends. For example while the current backends handle searches by reading through all the messages, the proxy backend should just send the search request to the remote server. The same for SORT. And for THREAD, which the current storage API doesn't handle..
And how would proxying server and remote server then communicate? Again two possibilities: Either a standard IMAP protocol, with possibly some optional Dovecot-extensions to improve performance, or a bandwidth-efficient protocol. The main advantage with IMAP protocol is that it could be used for proxying to other servers. Bandwidth-efficient protocol could be based on dovecot.index.log format, making it faster/easier to parse and generate.
Once we're that far with proxying, we're close to having..:
Replication
The plans haven't changed much from: http://www.dovecot.org/list/dovecot/2007-May/022792.html
The roadmap to complete replication support would be basically:
Implement proxying with mail storage API.
Implement asynchronous replication:
- Replication master process collecting changes from other Dovecot processes via UNIX sockets. It would forward these changes to slaves. Note that this requires that only Dovecot is modifying mailboxes, mails must be delivered with Dovecot's deliver!
- If some slaves are down, save the changes to files in case they come back up. In case they don't and we reach some specific max. log file size, also keep track of which mailboxes have changed, and once the slaves do come back, make sure those mailboxes are synced.
- If replication master is down and there have been changes to mailboxes.. Well, I haven't really thought about this. Maybe refuse to do any changes to mailboxes? Maybe have mail processes write to some "mailboxes changed" log file directly? Probably not that big of a deal since replication master just shouldn't be down. :)
- Support for manual resync of all mailboxes, in case something was done outside Dovecot.
- Replication slave processes would read the incoming changes. I think this process should just make sure that the changes are read as fast as possible, so with sudden bursts of activity it's the slave that starts saving the changes to a log, instead of the master.
- Replication mailbox writer processes would receive changes from the main slave process via UNIX sockets. They'd be responsible for actually saving the data to mailboxes. If multiple user IDs are used, this has the same problems as LMTP server.
- Conflict resolution. If mailbox writer process notices that the UID already exists in the mailbox, both the old and the new message must be given new UIDs. This involves telling the replication master about the problem, so both servers can figure out together the new UIDs. I haven't thought this through yet.
- Conflict resolution for message flags/keywords. The protocol would normally send flag changes incrementally, such as "add flag1, remove flag2". Besides that there could be an 8-bit checksum of all the resulting flags/keywords. If the checksum doesn't match, request all the current flags. It would of course be possible to always send the current flags instead of incremental updates, but this would waste space when there are lots of keywords, and multi-master replication will work better when the updates are incremental.
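The 8-bit checksum needs to be order-independent, since two servers may apply the same flag updates in a different order. One hypothetical construction (not anything Dovecot specifies) XORs a small per-flag hash:

```python
def flags_checksum(flags):
    """8-bit checksum over a set of flags/keywords.  XOR makes it
    independent of the order the flags were added in."""
    csum = 0
    for flag in flags:
        h = 0
        for ch in flag.encode():
            h = (h * 31 + ch) & 0xff   # tiny rolling hash, 8 bits
        csum ^= h
    return csum
```

The sender would attach this checksum of the full resulting flag set to each incremental update; on a mismatch the receiver requests the complete flag list, exactly as described above.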
- Synchronous replication:
- When saving messages, don't report success until at least one replication slave has acknowledged that it has also saved the message. Requires adding bidirectional communication with the replication master process. The message could be sent to slaves in smaller parts, so if the message is large, the wait at the end would basically consist of sending "finished saving UID 1234, please ack".
- Synchronous expunges also? Maybe optional. The main reason for this is that it wouldn't be nice to break IMAP state by having expunged messages come back.
- Multi-master replication:
Almost everything is already in place. The main problem here is IMAP UID allocation. UIDs must be allocated incrementally, so each mailbox must have a globally increasing UID counter. The allocation goes like:
- Send the entire message body using a global unique ID to all servers. No need to wait for their acknowledgement.
- Increase the global IMAP UID counter for this mailbox.
- Send global UID => IMAP UID message to all servers.
- Wait for one of them to reply with "OK".
Because messages sent by different servers can end up becoming visible at different times, a server must wait until it has seen all the previous UIDs before notifying clients of a new UID. If the sending server crashed between steps 2 and 3, this never happens. So the replication processes must be able to handle that, and also handle requesting missing UIDs in case they had lost connections to other servers.
I just had the idea of global UID counters instead of global locks, so I haven't yet thought how the counters would be best implemented. It should be easier to implement than global locks though.
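The waiting rule described above - only expose IMAP UID n to clients once all lower UIDs have been seen - can be sketched as a small reorder buffer (an illustration of the ordering logic only, not a protocol proposal):

```python
class Replica:
    """Holds back out-of-order UID announcements until every
    lower IMAP UID has arrived."""

    def __init__(self):
        self.pending = {}   # imap_uid -> guid, seen but not exposed
        self.exposed = []   # (imap_uid, guid) visible to clients
        self.next_uid = 1

    def receive(self, imap_uid, guid):
        self.pending[imap_uid] = guid
        # Drain in order; a gap means an earlier UID is still missing
        # (possibly forever, if the sender crashed mid-allocation -
        # that's the case the text says must be handled separately).
        while self.next_uid in self.pending:
            guid = self.pending.pop(self.next_uid)
            self.exposed.append((self.next_uid, guid))
            self.next_uid += 1
```
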
Non-incremental changes are somewhat problematic and could actually require global locks. The base IMAP protocol supports replacing flags, so if there is no locking and conflicting flag replacement commands are sent to two servers, which one of those commands should be effective? Timestamps would probably be enough for flag changes, but it may not be enough for some IMAP extensions (like CONDSTORE?).
If global locks are required, they would work by passing the lock from one server to another when requested. So as long as only a single server is modifying the mailbox, it would hold the global lock and there would be no need to keep requesting it for each operation.
- Automation:
- Automatically move/replicate users between servers when needed. This will probably be implemented separately long after 4.
The replication processes would be ugly to implement for v1.x master, so this pretty much depends on v2.0 master rewrite.
Hi
My 2p
dbox plans, could be implemented for v1.2:
Interesting to see this as a production-ready product. Could be very interesting if it adds performance and new features.
v2.0 master/config rewrite
It's mostly working already. One of the larger problems left is how to handle plugin settings. Currently Dovecot just passes everything in plugin {} section to mail processes without doing any checks.
Isn't one of the simplest things still to build a "hash" or "dictionary" (pick your term) of all the things in the config file and then let the relevant chunk of code figure out what to do with it?
You need some kind of API to get out the bunch of possible options (eg see Xine/mplayer and how it lists all the default keymappings so that you can easily build a custom set). But after that you should really be able to stick in anything you like and the code just parses it and uses sensible defaults...
If config is shared then it might be nice to break out the config parser into a separate lib so that it's easily shared around - in particular default configs
What if the default configs (and hence config structure) are provided with each plugin in a simple text file? The main app has a way to scoop all these together if someone needs a base template config file?
Index file optimizations
Nice to reduce admin knowhow wherever possible. ie reduce the number of corner cases where files can grow uncontrollably. This is important when converting someone from MS Exchange to dovecot...
Proxying
- These could be implemented to v1.2.
- Log in normally (no proxying) if destination IP is the server itself.
Personally I think the two main use cases here are:
- Set of frontend servers feeding to dedicated backend servers. Simple passthrough case
- Smaller setup has mixed frontend/backend servers (also useful to some extent when upgrading from some old setup with another brand of IMAP server). Basically the frontend server should notice if it's also the final destination and skip the proxying step, otherwise it behaves just like step 1) above
Idea 2) above is problematic in the case of mixed backend server types, eg I was upgrading from Courier to Dovecot. I don't think it's currently within scope to do anything too clever here - there are other proxy servers which can be looked at if necessary for really complex setups. I fixed it by using the CAPABILITIES option and limiting to the min set available on both servers - can't really see a better fix anyway because clients often ask for CAPA before even logging in...
So let's keep this simple and see if anyone shouts for anything more complicated before implementing it... Webmail optimisation is a far more common request than some of the advanced stuff that you were suggesting, I should think?
Adding DNS lookups to the current proxy process would likely help some LDAP users I think?
Replication
I'm really up for this. However, your plans all seem to involve online sync - I need a very low bandwidth dialup sync...
Much more interested in features like
- only syncing certain mailboxes, or prioritising the order at least
- Perhaps only syncing the main MIME structure and delaying the attachment sync (really nice if it could be done cleanly!!)
- Completely multi-master, eg customer walks to remote location, plugs in and works seamlessly - walks back to main office and continues working there. Server figures out how to combine changes later
- Really, really low bandwidth usage
You mention some problems like unique IDs. This is actually pretty simple - just use some GUID type process. All you need is something guaranteed unique on each server; combinations of time, something unique per server and some randomness get you there. Discount anything which is some kind of counter, because it will break eventually.
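The recipe above (time + something unique per server + randomness) is easy to sketch; this is just an illustration of the idea, not a proposed GUID format:

```python
import os
import time


def make_guid(server_id):
    """GUID = microsecond timestamp + per-server ID + random bytes.
    The server_id rules out cross-server collisions; the timestamp
    and random part rule out collisions on one server."""
    return "%016x-%s-%s" % (
        int(time.time() * 1e6),   # microseconds since epoch
        server_id,                # unique per server, e.g. hostname
        os.urandom(8).hex(),      # 64 bits of randomness
    )
```
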
- Support for manual resync of all mailboxes, in case something was done outside Dovecot.
Definitely want this as robust as possible in case of corner cases where some change gets past the replication routine
- Conflict resolution.
It's better that you get a duplicated message than something is lost.
Most usage patterns for most email are: read/delete, or read/file.
Optimise for those
- Synchronous expunges also? Maybe optional. The main reason for this is that it wouldn't be nice to break IMAP state by having expunged messages come back.
Expunges need to be async and tracked. Very likely we will see people delete messages from two servers (because they moved server and didn't see the delete sync across yet).
Also need to watch for clients copying up deleted messages...?
- Multi-master replication:
- Almost everything is already in place. The main problem here is IMAP UID allocation. They must be allocated incrementally, so each mailbox must have a globally increasing UID counter. The usage goes like:
Don't see why? Use some kind of GUID and ensure that there are never any collisions right from step one?
Study Lotus Notes on this. Absolutely fantastic sync. You can create any kind of loops you want, shake it all around bring servers up in any order, sync clients to random servers and it all seems to work pretty well. This is what we really want for this
- Automation:
- Automatically move/replicate users between servers when needed. This will probably be implemented separately long after 4.
Don't see why this is a lower priority? Can I suggest that you *start* with this and go back to do the normal replication afterwards?
The moving between servers will have wider usage I should think? It's also clearly a synchronous operation and can be implemented initially as a completely special case, changed later to use any replication code.
I think this code will initially be external to the main server, and I think a huge amount will probably be learned about the requirements and problems that will occur by implementing this stuff first. It doesn't even need to be that efficient initially...
Actually this same code seems more likely to develop into a more general daemon:
- Adjust filters per user
- Adjust permissions on shared folders
- Other admin operations per mailbox (forced purge, cleanup Trash, etc)
The replication processes would be ugly to implement for v1.x master, so this pretty much depends on v2.0 master rewrite.
OK, but rewrites are bad, incremental changes are always preferred...
Can I also add to your TODO list:
- Lemonade Profile!
- Enhancements to the Expire plugin making it easier to enforce a quota, eg trimming Trash initially, then Sent Items, then old stuff in other folders, etc, etc
In particular, push email is all the rage right now, and not having push features and not having a "Blackberry" plugin is really contributing to the rise of Exchange and other options. Please let's get open IMAP servers back on the map by giving users push-type features. OK, it's not easy to figure out what to do right now, but it's a bit chicken-and-egg - until there is some rudimentary support in IMAP servers, the networks won't react and help, so let's get something in, no matter how rubbish, and see how it develops from there (make it flexible so that people with crazy ideas can extend it...)
Good luck
Ed W
On Sat, 2007-12-08 at 10:56 +0000, Ed W wrote:
dbox plans, could be implemented for v1.2:
Interesting to see this as production ready product. Could be very interesting if it adds performance and new features.
It should be production ready in v1.1 already. Or at least I don't see any bugs in my stress tests. And it is somewhat faster than maildir :) A simple stress test shows:
./imaptest secs=10 seed=0
Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
100% 50% 50% 100% 100% 100% 50% 100% 100% 100% 100%
30% 5%
maildir:
3506 1814 1770 3506 3501 5034 1316 2830 3501 3506 7012
dbox:
4027 2032 2022 4026 4023 5819 1541 3197 4023 4025 8056
Other tests will probably show an even larger difference.
v2.0 master/config rewrite
It's mostly working already. One of the larger problems left is how to handle plugin settings. Currently Dovecot just passes everything in plugin {} section to mail processes without doing any checks.
Isn't one of the simplest things still to build a "hash" or "dictionary" (pick your term) of all the things in the config file and then let the relevant chunk of code figure out what to do with it?
The design is currently that there's a config process that is responsible for reading and parsing the configuration into simple key=value form. Other processes (such as imap) will ask the configuration from the config process via UNIX socket, and get the simple key=value pairs.
Those key=value pairs are how Dovecot v1.0 currently handles the configuration; it just gets them from the environment. To make it even easier I wanted to deserialize them directly into the different settings structures. This works by everyone calling some settings_register() type of function before the settings are actually read.
So the valid settings are verified by the config process. It needs to know about all the valid settings, otherwise errors won't be reported early enough. The key=value pairs can contain unknown keys without anything complaining about it, so for example if imap process gets settings to a plugin it hasn't loaded, those settings just get ignored.
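The deserialization described above - flat key=value pairs filled into a settings struct, with unknown keys silently ignored by the receiving process - might look like this (the struct fields here are illustrative examples, not Dovecot's real settings list):

```python
from dataclasses import dataclass, fields


@dataclass
class ImapSettings:
    # Example fields only; defaults stand in for built-in defaults.
    mail_location: str = ""
    imap_idle_notify_interval: int = 120


def settings_deserialize(cls, pairs):
    """Fill a settings struct from flat key=value pairs, as received
    from the config process over the UNIX socket.  Unknown keys are
    ignored - e.g. settings for a plugin this process hasn't loaded."""
    known = {f.name: f.type for f in fields(cls)}
    values = {}
    for key, value in pairs.items():
        if key in known:
            values[key] = known[key](value)  # str stays, "60" -> 60
    return cls(**values)
```

The config process, having seen every registered schema, is the one that reports errors for truly unknown settings; the per-process deserializer can then afford to skip anything it doesn't recognize.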
If config is shared then it might be nice to break out the config parser into a separate lib so that it's easily shared around - in particular default configs
The key=value deserializer is in lib-settings in the sources. The config process will probably support reading settings from different kinds of sources. By default it will read v1.0-like dovecot.conf, but it could just as well read the configuration from SQL (either built-in or maybe plugin, I'm not sure yet).
What if the default configs (and hence config structure) are provided with each plugin in a simple text file? The main app has a way to scoop all these together if someone needs a base template config file?
I thought about this too, but it makes it more difficult to install new plugins and it's easier to make mistakes. The less things there are to remember the better. :)
Replication
I'm really up for this. However, your plans all seem to involve online sync - I need a very low bandwidth dialup sync...
The plans are mainly about online synchronization, but I think a lot of the same infrastructure can be used for low bandwidth sync as well.
Much more interested in features like
- only syncing certain mailboxes, or prioritising the order at least
This shouldn't be too difficult. And I think some of this could be done automatically too. Like prioritize syncing those mailboxes first that the user is currently accessing.
- Perhaps only syncing the main mime structure and the delayed attachments sync (really nice if could be done cleanly!!)
This would be a bit tricky, although it's somewhat related to dbox's single instance attachment storage. Once that's working it shouldn't be too difficult to delay replicating attachments.
You mention some problems like unique ID's. This is actually pretty simple - just use some GUID type process. All you need is something guaranteed unique on each server, combinations of time, something unique per server and some randomness get you there. Discount anything which is some kind of counter because it will break eventually.
It will have global UIDs which it uses internally. I hadn't thought much yet how those would be generated, but they're not the problem. The problem is IMAP protocol where it's not possible to use them. IMAP requires using a growing 32bit integer as the UID. There's no way around that, so IMAP UID conflicts can happen and they have to be handled somehow.
The IMAP UID conflicts are detected by a message having the same IMAP UID but different global UIDs. So the replication then just needs to make a new global UID -> IMAP UID mapping for both of the messages to fix the problem.
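The remapping could be sketched like this: each server keeps an IMAP UID -> global UID map, and when the same IMAP UID points at different global UIDs on two servers, both messages get fresh IMAP UIDs. This is only an illustration of the detection/remap step; next_uid stands in for whatever new UIDs the servers agree on together:

```python
def resolve_uid_conflict(server_a, server_b, next_uid):
    """server_a/server_b map IMAP UID -> global UID.  Returns fixed
    copies where every conflicting IMAP UID has been reassigned."""
    fixed_a, fixed_b = dict(server_a), dict(server_b)
    for uid in sorted(set(server_a) & set(server_b)):
        if server_a[uid] != server_b[uid]:
            # Same IMAP UID, different messages: both get new UIDs,
            # so neither client ever sees UID contents change.
            fixed_a[next_uid] = fixed_a.pop(uid)
            fixed_b[next_uid + 1] = fixed_b.pop(uid)
            next_uid += 2
    return fixed_a, fixed_b
```
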
- Conflict resolution.
It's better that you get a duplicated message than something is lost.
Yes, that's the plan. Although the duplication still should happen only rarely. :)
- Synchronous expunges also? Maybe optional. The main reason for this is that it wouldn't be nice to break IMAP state by having expunged messages come back.
Expunges need to be async and tracked. Very likely we will see people delete messages from two servers (because they moved server and didn't see the delete sync across yet).
The synchronous operation mode would be optional. You probably wouldn't want to enable it for low bandwidth links anyway. Even for multi-master replication it's not a requirement to have the servers connected to each other. It just prevents the possibility of IMAP UID conflicts.
Also need to watch for clients copying up deleted messages...?
What do you mean by this?
- Automation:
- Automatically move/replicate users between servers when needed. This will probably be implemented separately long after 4.
Don't see why this is a lower priority? Can I suggest that you *start* with this and go back to do the normal replication afterwards? .. Actually this same code seems more likely to develop into a more general demon:
- Adjust filters per user
- Adjust permissions on shared folders
- Other admin operations per mailbox (forced purge, cleanup Trash, etc)
The main reason why I don't want to do anything about this yet is that it would be difficult to make it work with different kinds of configurations. I don't yet want to create some "Dovecot admin" tool that requires you to have set up your filesystem in a specific way or use a specific database for storing user configuration.
Actually I think if such a tool is created it should be a completely separate package from Dovecot. It most likely will expand into a generic mail server administrator tool, similar to PostfixAdmin etc. And since developing that tool doesn't require knowing any Dovecot internals, it could just as well be developed by pretty much anyone. :)
The replication processes would be ugly to implement for v1.x master, so this pretty much depends on v2.0 master rewrite.
OK, but rewrites are bad, incremental changes are always preferred...
I know. It's not that big of a rewrite though. And I am thinking about doing this incrementally. First move over the settings deserializer and after that's known to work replace the master/log/config backend (which can't really be done incrementally in any easy way). The rewrite will then be only for about 5% of Dovecot's code :)
Can I also add to your TODO list:
- Lemonade Profile!
Yes, and other extensions.. I haven't thought about these much yet, but I think most of them won't be too difficult to implement.
- Enhancements to the Expire plugin making it easier to enforce a quota, eg trimming Trash initially, then Sent Items, then old stuff in other folders, etc, etc
Sounds exactly like what the trash plugin does: http://wiki.dovecot.org/Plugins/Trash
On Sat, 2007-12-08 at 14:02 +0200, Timo Sirainen wrote:
You mention some problems like unique ID's. This is actually pretty simple - just use some GUID type process. All you need is something guaranteed unique on each server, combinations of time, something unique per server and some randomness get you there. Discount anything which is some kind of counter because it will break eventually.
It will have global UIDs which it uses internally. I hadn't thought much yet how those would be generated, but they're not the problem. The problem is IMAP protocol where it's not possible to use them. IMAP requires using a growing 32bit integer as the UID. There's no way around that, so IMAP UID conflicts can happen and they have to be handled somehow.
The IMAP UID conflicts are detected by a message having the same IMAP UID but different global UIDs. So the replication then just needs to make a new global UID -> IMAP UID mapping for both of the messages to fix the problem.
Although if you just want to keep the servers synchronized and use completely different clients for them, then there's no need to keep IMAP UIDs synchronized.
I was thinking more about a setup where a client could connect to any of the replicated servers. If you just need to have for example Dovecot on a laptop where the mail client on laptop always connects only to the laptop Dovecot, there's no problem with IMAP UID conflicts.
Timo Sirainen wrote:
On Sat, 2007-12-08 at 10:56 +0000, Ed W wrote:
Can I also add to your TODO list:
- Lemonade Profile!
Yes, and other extensions.. I haven't thought about these much yet, but I think most of them won't be too difficult to implement.
That'd be a really great-to-have feature :)
And as there's now a robust client library that implements all this [1], we'll hopefully see new clients proliferate supporting 'push-like' email [2] for tablets and phones. So having support for all this in IMAP servers will become a must in the not-so-distant future in order to stay on the map :)
[1] http://www.tinymail.org/ [2] http://modest.garage.maemo.org/
Angel Marin http://anmar.eu.org/
On Thu, Dec 06, 2007 at 04:17:22PM +0200, Timo Sirainen wrote:
Deliver / LMTP server
Currently deliver parses dovecot.conf using its own parser. This has caused all kinds of problems. v2.0's master rewrite helps with this, because deliver can then just ask the configuration from config process the same way as other processes are.
Another problem is that people who use multiple user IDs have had to make deliver setuid-root. I really hate this, but currently there's no better way. Better fix for this would be to use LMTP server instead. There could be either a separate LMTP client or deliver could support LMTP protocol as well. I'm not sure which one is better.
I'll weigh in 50 euro for the deliver LMTP interface. It's not much, but others may add to it. :-)
Geert
Hmm. Maybe a feature packed v1.2 would be a good idea after all. It
wouldn't have any huge underlying changes, so most likely it could be
released only a few months after v1.1.
Implemented, but slightly buggy so not included in v1.1 (yet):
- Mailbox list indexes
- THREAD REFERENCES indexes
- THREAD X-REFERENCES2 algorithm where threads are sorted by their
latest message instead of the thread root message
These would just need some bug fixing and testing, so it shouldn't
take long to get them finished. Mailbox list indexes will also allow
automatically sending STATUS replies when non-selected mailboxes
change. Perhaps that shouldn't be done by default, but it allows an
efficient implementation for Lemonade's NOTIFY draft.
Mailbox list indexes would work faster with improved v2.0 transaction
log format though.. Now it's kind of bloated. I should probably
benchmark it again after my recent index handling fixes. Maybe it's
not that bad anymore.
Could be implemented for v1.2:
- Shared mailboxes and ACLs http://www.dovecot.org/list/dovecot/2007-April/021624.html
- Virtual mailboxes: http://www.dovecot.org/list/dovecot/2007-May/022828.html
- Depends on mailbox list indexes
Again shouldn't be a huge job.
Then a bunch of IMAP extensions could be implemented:
- Lemonade: CATENATE, URLAUTH, COMPRESS, WITHIN
- ESEARCH and maybe some other simple to implement (draft) extensions
- ANNOTATE and METADATA drafts? URLAUTH could be built on top of
METADATA functionality. All of these could be built on top of lib-dict.
lib-dict could probably use a couple of more backends. At least a
simple "filesystem" backend for testing, and it could also be good as
URLAUTH backend.
dbox plans, could be implemented for v1.2:
- Support for single instance attachment storage
- Global dbox storage so copying messages from mailbox to another
involves only small metadata updates
- Support some kind of hashed directories for storing message
files, instead of storing everything in a single directory.
- Finish support for storing multiple messages in a file
These could be implemented for v1.2, but they're pretty complex
changes and I'm not too excited about implementing them.. Although
some day I would want to move my own mails from mbox format to
multiple-mails-per-file dbox format. :)
v2.0 master/config rewrite
v2.0 would then have proxying, replication, LMTP server and index
file optimizations.
CONDSTORE extension probably needs to wait for v2.0 as well, since it
would require non-backwards compatible changes to transaction log
files (new "conditional flag/keyword update" records).
If v1.2 happens, v2.0 would probably be released sometime (end of?)
next summer. If v1.2 doesn't happen, v2.0 would probably come around
spring.
participants (4):
- Angel Marin
- Ed W
- Geert Hendrickx
- Timo Sirainen