That is why I am using mdbox files of 4MB; I hope that gives me hardly any write amplification. I am also separating ssd and hdd pools by auto-archiving email to the hdd pools; I am using rbd for this. After Luminous I had some issues with cephfs and do not want to store operational stuff on it yet.
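One way to do that ssd/hdd split on the Ceph side is with device-class crush rules and separate pools, roughly like this (pool and image names are made up, not my actual ones):

    # one crush rule per device class, one pool (and rbd image) per tier
    ceph osd crush rule create-replicated rule-ssd default host ssd
    ceph osd crush rule create-replicated rule-hdd default host hdd
    ceph osd pool create mail-ssd 64
    ceph osd pool set mail-ssd crush_rule rule-ssd
    ceph osd pool create mail-hdd 64
    ceph osd pool set mail-hdd crush_rule rule-hdd
    ceph osd pool application enable mail-ssd rbd
    ceph osd pool application enable mail-hdd rbd
    # hdd-backed image for the archived mail
    rbd create mail-hdd/archive-store --size 500G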
I am very interested in that setup, since I am currently planning to reshape my cluster in a similar way (from distribution via the director only, to distribution + HA).
Currently I am running just one instance, hyper-converged on Ceph. I once had an LSI card fail in a Ceph node, and strangely enough it took the second LSI card in that node down with it. So all drives in that Ceph node were down and I could not log in (the OS drive was also down). Yet all VMs on that Ceph node kept running without any issues. Really, really nice.
Could you post a short overview (scheme) and some important configurations of your setup?
I am trying to stick to the defaults as much as possible and only look into tuning when it is required. For now I am able to run with such 'defaults'.
Did you do any performance testing?
Yes, I tried a bit of testing, but did not have too much time to do it properly. Do have a look, though, at how cephfs performs:
test       | storage description                          | mailbox type | msg size | test type | msgs processed | ms/cmd avg
imaptest1  | mail04 vdb                                   | mbox         | 64kb     | append    | 2197           | 800.7
imaptest3  | mail04 lvm vdb                               | mbox         | 64kb     | append    | 2899           | 833.3
imaptest4  | mail04 lvm vdb, added 2nd cpu                | mbox         | 64kb     | append    | 2921           | 826.9
imaptest5  | mail04 cephfs (mtu 1500)                     | mbox         | 64kb     | append    | 4123           | 585.0
imaptest14 | mail04 cephfs nocrc (mtu 8900)               | mbox         | 64kb     | append    | 4329           | 555.6
imaptest10 | mail04 cephfs (mtu 8900)                     | mdbox        | 64kb     | append    | 2754           | 875.2
imaptest16 | mail04 cephfs (mtu 8900) 2 mounts idx store  | mdbox        | 64kb     | append    | 2767           |
imaptest6  | mail04 lvm vdb (4M)                          | mdbox        | 64kb     | append    | 1978           | 1,244.6
imaptest7  | mail04 lvm vdb (16M)                         | mdbox        | 64kb     | append    | 2021           | 1,193.7
imaptest8  | mail04 lvm vdb (4M)                          | mdbox zlib   | 64kb     | append    | 1145           | 2,240.4
imaptest9  | mail04 lvm vdb (4M)                          | mdbox zlib   | 1kb      | append    | 345            | 7,545.8
imaptest11 | mail04 lvm sda                               | mbox         | 64kb     | append    | 4117           | 586.8
imaptest12 | mail04 vda                                   | mbox         | 64kb     | append    | 3365           | 716.2
imaptest13 | mail04 lvm sda dual test                     | mbox         | 64kb     | append    | 2175 (4350)    | 580.8 (1,161.7)
imaptest15 | mail03 lvm sdd                               | mbox         | 64kb     | append    | 20850          | 119.2
imaptest17 | mail04 ceph hdd 3x                           | rbox         | 64kb     | append    | 2519           | 1003
imaptest18 | mail04 ceph ssd 3x                           | mbox         | 64kb     | append    | 31474          |
imaptest19 | mail04 ceph ssd 3x                           | mdbox        | 64kb     | append    | 15706          |
imaptest16 | mail06                                       | rbox         | 64kb     | append    | 2755           | 917
imaptest17 | mail06 (index on rbd ssd)                    | rbox         | 64kb     | append    | 4862           | 492
imaptest18 | mail06 (index on rbd ssd)                    | rbox         | 64kb     | append    | 4803           |
imaptest19 | mail06 (index on rbd ssd)                    | rbox         | 64kb     | append    | 9055           | 272
imaptest20 | mail06 (index on rbd ssd)                    | rbox         | 64kb     | append    | 8731           | 276
imaptest20 | mail06 (index on rbd ssd)                    | rbox         | 62kb-txt | append    | 11315          | 212
imaptest21 | mail06 (index on rbd ssd, compression)       | rbox         | 64kb     | append    | 8298           | 290
imaptest22 | mail06 (index on rbd ssd, compression)       | rbox         | 64kb-txt | append    | 10790          | 223
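For what it is worth, an append-only run like the ones above can be reproduced with imaptest roughly like this (host, credentials and the source mbox file are placeholders, not the exact parameters I used):

    # append-only benchmark against one server, 64kb messages in the source mbox
    imaptest host=127.0.0.1 user=testuser pass=secret \
        mbox=testmsg-64k.mbox append=100,0 logout=0 clients=10 secs=60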
Also, when you say rbd in a clustered context, is that one block device per node?
I have 3x ssd block devices (inbox index, archive index, inbox storage) for the inbox namespace, and 1x hdd block device for the archive namespace storage. That does give you certain iops limits, which you can probably raise by adding block devices or by tuning the vm config.
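To give an idea of how those block devices can map onto the mail locations, a rough sketch (mount points and paths are made up, not my literal dovecot.conf):

    # 4MB mdbox files, as mentioned earlier
    mdbox_rotate_size = 4M

    # inbox namespace: message store and indexes on the ssd-backed rbd mounts
    namespace inbox {
      inbox = yes
      location = mdbox:/srv/rbd-ssd-store-inbox/%u:INDEX=/srv/rbd-ssd-index-inbox/%u
    }

    # archive namespace: store on the hdd-backed rbd, indexes still on ssd
    namespace archive {
      prefix = Archive/
      location = mdbox:/srv/rbd-hdd-store-archive/%u:INDEX=/srv/rbd-ssd-index-archive/%u
    }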
And does the director still spread the accounts over the nodes?
I am not doing anything like that yet. I do want to grow to a larger volume of accounts, but I can segment this growth simply by directing traffic to different servers via the dns configuration.
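In practice that just means pointing different (groups of) domains at different servers in dns, e.g. something like this in the zone files (names made up):

    imap.customer-a.example.  IN  CNAME  mail04.provider.example.
    imap.customer-b.example.  IN  CNAME  mail06.provider.example.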