[Dovecot] RAID1+md concat+XFS as mailstorage

Ed W lists at wildgooses.com
Thu Jun 28 23:35:40 EEST 2012


On 28/06/2012 17:54, Charles Marcus wrote:
> On 2012-06-28 12:20 PM, Ed W <lists at wildgooses.com> wrote:
>> Bad things are going to happen if you lose a complete chunk of your
>> filesystem.  I think the current state of the world is that you should
>> assume that realistically you will be looking to your backups if you
>> lose the wrong 2 disks in a raid1 or raid10 array.
>
> Which is a very good reason to have at least one hot spare in any RAID 
> setup, if not 2.
>
> RAID10 also statistically has a much better chance of surviving a 
> multi drive failure than RAID5 or 6, because it will only die if two 
> drives in the same pair fail, and only then if the second one fails 
> before the hot spare is rebuilt.
>

Actually this turns out to be incorrect... Curious, but there you go!

Search Google for a recent, very helpful exposé on this.  Basically 
RAID10 can sometimes tolerate a multi-drive failure, but on average 
RAID6 appears less likely to trash your data, and under some 
circumstances it also survives the recovery from a single failed disk 
better in practice.
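
As a rough back-of-the-envelope illustration (my own arithmetic, not 
taken from that article): with 2n disks arranged as RAID10 mirror 
pairs, once one disk has died the array is lost only if the *next* 
failure happens to hit its partner, i.e. roughly a 1 in (2n-1) chance, 
whereas RAID6 survives any two whole-disk failures:

  # Back-of-envelope only: whole-disk failures, ignoring latent read
  # errors during the rebuild (which is the bigger issue, see below).
  def raid10_fatal_second_failure(total_disks):
      # Fatal only if the second failure lands on the partner of the
      # already-failed disk.
      return 1.0 / (total_disks - 1)

  for disks in (4, 8, 12):
      print(disks, "disks:", round(raid10_fatal_second_failure(disks), 3))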

The executive summary is something like this: when a RAID5 disk fails, 
the rebuild effectively performs a RAID "scrub", so you tend to 
suddenly notice a bunch of other hidden problems which were lurking, 
and the rebuild fails (this happened to me...).  RAID1 has no better 
bad-block detection than assuming the surviving disk is perfect (so it 
won't spot latent, unscrubbed errors), and again, if you hit a bad 
block during the rebuild you lose the whole mirrored pair.
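
To put some illustrative numbers on that (mine, using a typical quoted 
unrecoverable read error rate of 1 in 10^14 bits): a degraded rebuild 
has to read every remaining sector with no redundancy left to fall 
back on, so the odds of tripping over at least one latent error grow 
quickly with array size.  A crude sketch:

  import math

  def p_hit_latent_error(bytes_read, ure_per_bit=1e-14):
      # Chance of at least one unrecoverable read error while reading
      # 'bytes_read' during a degraded rebuild, assuming independent
      # errors at the quoted URE rate (a very crude model).
      return 1.0 - math.exp(-bytes_read * 8 * ure_per_bit)

  TB = 1e12
  # RAID5 of 4x 2TB: the rebuild reads the 3 surviving disks in full.
  print(round(p_hit_latent_error(3 * 2 * TB), 2))   # ~0.38
  # RAID1 of 2x 2TB: the rebuild reads the one surviving disk.
  print(round(p_hit_latent_error(2 * TB), 2))       # ~0.15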

So the vulnerability is not the first failed disk, but discovering 
subsequent problems during the rebuild.  This certainly correlates 
with my (admittedly limited) experience.  Regular disk array scrubbing 
seems like a mandatory requirement if you want any chance of actually 
repairing a failing RAID1/5 array (but how many people actually do 
it?).
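
For what it's worth, with Linux md kicking off a scrub is just a write 
to sysfs.  A minimal sketch (assumes an array at /dev/md0, root 
privileges and a reasonably recent kernel):

  import time

  MD = "/sys/block/md0/md"

  with open(MD + "/sync_action", "w") as f:
      f.write("check\n")            # start a read-only consistency check

  while True:
      with open(MD + "/sync_action") as f:
          action = f.read().strip()
      if action == "idle":          # check finished (or was cancelled)
          break
      time.sleep(60)

  with open(MD + "/mismatch_cnt") as f:
      print("mismatch_cnt:", f.read().strip())

Debian's mdadm package ships a monthly checkarray cron job which does 
essentially this, but it's worth confirming it is actually enabled.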

Digressing, but it occurs to me that there would be a potentially 
large performance improvement if spinning disks could do a 
read/rewrite cycle with the platter moving only a minimal distance (my 
understanding is that at present this can't happen without a full 
revolution of the disk).  Then you could rewrite parity blocks 
extremely quickly without re-reading a full stripe...
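
For a sense of scale (my own rough numbers, not measured): a small 
parity update today is read old data + old parity, then wait for the 
sectors to come back under the head before writing, and at 7200 RPM a 
single revolution is already more than 8ms:

  # Rotational wait in a RAID5 read-modify-write, assuming a 7200 RPM
  # drive (illustrative figure only).
  rpm = 7200
  ms_per_revolution = 60000.0 / rpm   # ~8.3 ms per full turn
  print(round(ms_per_revolution, 1), "ms before the block comes round again")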

Anyway, it's a challenging problem, and the basic observation is that 
large disk arrays are going to have a moderate tail risk of failure 
whether you use RAID10 or RAID5 (RAID6 gives a decent practical 
improvement in real reliability, but at a cost in write performance).

Cheers

Ed W
