On 4/12/2012 5:58 AM, Ed W wrote:
> The claim by ZFS/BTRFS authors and others is that data silently "bit rots" on its own. The claim is therefore that you can have a RAID1 pair where neither drive reports a hardware failure, but each gives you different data?
You need to read those articles again very carefully. If you don't understand what they mean by "1 in 10^15 bits non-recoverable read error rate" and combined probability, let me know.
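To make the combined-probability point concrete, here is a back-of-envelope sketch. The 1-in-10^15 URE rate is the figure quoted above; the 12 TB drive capacity is an illustrative assumption, not something from this thread.

```python
import math

# Assumed illustrative drive size; the URE rate is the one quoted above.
URE_RATE = 1e-15            # P(unrecoverable read error) per bit read
DRIVE_BITS = 12e12 * 8      # a hypothetical 12 TB drive, in bits

# P(at least one URE) = 1 - (1 - p)^n, closely approximated by
# 1 - e^(-n*p) for tiny p and huge n.
p_any_ure = 1 - math.exp(-DRIVE_BITS * URE_RATE)
print(f"P(>=1 URE reading the whole drive): {p_any_ure:.1%}")  # ~9.2%
```

The point: even with a tiny per-bit error rate, reading an entire modern drive makes hitting at least one URE far from negligible — but that is a *reported* read error, not silent bit rot.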
And this has zero bearing on RAID1. And RAID1 reads don't work the way you describe above. I explained this in some detail recently.
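A minimal sketch (details deliberately simplified, and not any particular driver's code) of why an ordinary RAID1 read cannot catch silent divergence: each read is satisfied from *one* mirror, so the copies are never compared byte-for-byte.

```python
import itertools

# Two mirrors that have silently diverged; neither reports an I/O error.
mirrors = [b"good data", b"bAd data!"]

# Toy round-robin read scheduler (real drivers use smarter policies,
# e.g. nearest-head or least-busy, but the principle is the same).
pick = itertools.cycle(range(len(mirrors)))

def raid1_read():
    """Satisfy the read from whichever single mirror the scheduler picks."""
    return mirrors[next(pick)]

# Consecutive reads can return different data, and no error is ever
# raised -- without block checksums the mismatch simply goes unnoticed.
print(raid1_read())
print(raid1_read())
```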
> I do agree that if one drive reports a read error, then it's quite easy to guess which member of the pair is wrong...
It's been working that way for more than two decades, Ed. :) Note that "RAID1" has that "1" for a reason: it was the first RAID level. It was in production for many, many years before parity RAID hit the market. It is the most well understood of all RAID levels, and the simplest.
-- Stan