I was thinking more about thousands of servers, not clients. Each server should contribute to the amount of storage you have. Buying huge storage systems is more expensive. Also it would be nice if you could just keep plugging in more servers to get more storage space, disk I/O and CPU, and the system would automatically reconfigure itself to take advantage of them. I can't really see any of that happening easily with AFS.
Well, me too. But there are interesting (and working) solutions out there, e.g. GlusterFS...
I mention it because you stated wanting to outsource the storage portion. The complexity of whatever database engine you choose versus supporting a network filesystem (like NFS) is a wash, since you're not maintaining either one personally.
I also want something that's cheap and easy to scale. Sure, people who already have NFS/AFS/etc. systems can keep using Dovecot with the filesystem backends, but I don't think it's the cheapest or easiest choice. There's a reason why e.g. Amazon S3 isn't running on top of them.
I think the basic idea behind the initial proposal, which I like very much, is to have a choice between redundancy/scalability on the one hand and ease of running a platform on the other.
In my opinion there isn't one perfect solution that addresses all of the above in the best way; I think that's why there are so many different solutions out there. Anyway, having the indexes centralized in some form of "database" would be a nice solution (and, very importantly, easy to run in the case of SQL!) for not all, but many installations. If the speed penalty and the coding effort aren't too great, it would be worth implementing something like SQL-based index storage, too. And everyone is/would be free to decide which option is best for their platform/environment.
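Just to illustrate what SQL-based index storage could look like in principle — this is a minimal sketch with a hypothetical schema (table and column names are my own invention; Dovecot's real index files are binary and far more detailed), using SQLite only for demonstration:

```python
import sqlite3

# Hypothetical schema: one row per message, keyed by mailbox name and
# IMAP UID. This only illustrates the general idea of keeping index
# data in a SQL database instead of local index files.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE mail_index (
        mailbox TEXT    NOT NULL,
        uid     INTEGER NOT NULL,
        flags   TEXT    NOT NULL DEFAULT '',
        size    INTEGER NOT NULL,
        PRIMARY KEY (mailbox, uid)
    )
""")

# Adding a message and updating its flags become plain SQL statements,
# so locking and replication are delegated to the database server.
conn.execute("INSERT INTO mail_index VALUES ('INBOX', 1, '', 2048)")
conn.execute(
    "UPDATE mail_index SET flags = ? WHERE mailbox = ? AND uid = ?",
    ("\\Seen", "INBOX", 1),
)

(flags,) = conn.execute(
    "SELECT flags FROM mail_index WHERE mailbox = 'INBOX' AND uid = 1"
).fetchone()
print(flags)  # \Seen
```

The nice property, as mentioned above, is operational simplicity: backup, replication and concurrent access are all handled by whatever SQL server the admin already knows how to run.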
Huge installations with more than 50 servers will always be something of a special case and won't be built out of the box. Dovecot can help by offering good alternatives for storing all kinds of lock-dependent data in different ways (files/memory/databases).
Regards, Sebastian