As hard disk capacities increase, at what point do two disk mirrored vdevs stop making sense?

nickw_

Cadet
Joined
May 9, 2023
Messages
9
Hello All,

New to the forum, but I have been quietly reading for a while.

I am continuing to learn about ZFS and TrueNAS, and I have a general question about using two-disk mirror vdevs. As hard disk capacities increase, is there a point where two-disk mirrored vdevs stop making sense, in terms of UREs or drive failures occurring during a resilver?

To paint a hypothetical situation, let's say a 12-disk pool made up of 6x mirrored vdevs, used for typical homelab tasks, around 50% full, using pro drives, where someone backs up to the cloud. So even though a backup is available, it's a pain to pull large amounts of data down.

I read many articles* and posts talking about the benefits of different vdev types. But I can't help but wonder: as hard disk capacities grow, will we eventually outgrow 2-disk mirrors being considered good practice, similar to how raid5/z1 vdevs are no longer considered good practice?** I can't help but wonder if the days of 2-disk mirrors are coming to an end.

Again, this is a general question and I'm trying to learn. I've tried to do my homework with the recommended readings (and then some), but I don't see this addressed anywhere. I have also tried searching, but haven't found a solid discussion on the topic.

From what I can find, metadata is already duplicated on each drive, so that's likely safe. But a URE in a file seems like a real concern, as do the long rebuild times as capacities grow: they could start taking 12+ hours and become very taxing on the remaining drive.

Thoughts? Experiences?

Thanks,
Nick


Links:
* https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/
** https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I can't help but wonder if the days of 2-disk mirrors are coming to an end.

No. Unless you mean to ask if 3-disk mirrors will become best practice, in which case, yes. Mirrors inherently have much better I/O than RAIDZ, and 3-way mirrors are better than 2-way mirrors. If you don't need the extra IOPS, then stick to 2-way. But 3-way mirrors also give you another property: if you have a policy that redundancy shall not be compromised, you need to use 3-way.

See, it wasn't as complicated an answer as you might have thought. :smile:
 

nickw_

Cadet
Joined
May 9, 2023
Messages
9
Thanks, I have been reading all about IOPS and the pros and cons of different vdev and pool layouts. This is really a question of drive capacities and two-way mirrors. As the capacities increase, wouldn't the risks during resilver increase as well?

Is there a point when people stop feeling comfortable with 2-way mirrors in my example above? I would think <4TB drives are okay, but what about 8TB or 16TB drives? Just trying to get a feel for what people think, and/or what people's experiences are.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I began using 3 way mirrors with 2TB drives eight years ago. See again the rule I suggested about "redundancy shall not be compromised".
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
EDIT 09/07/2023: A resource is now available that addresses the content of this post (which contains errors) in more depth, and more.

TL;DR: Please don't do 6 vdevs of two-way mirrored 12TB disks.

It has been a few years since I studied this subject so please correct me if you spot errors.

Assuming the probability of any single drive failing is p, the VDEV size is n, and the number of drives that fail simultaneously in that VDEV is X, then:

Pr(X) = C(n, X) * p^X * (1-p)^(n-X)
(see Wikipedia for what a combination C(n, X) is)

Now, assuming that p = 0.03 (3%) and that n = 2 (vdev is a 2-way mirror), we get the following numbers:​

X        0         1         2
Pr(X)    0.9409    0.0582    0.0009

This means we have a 94% chance of no drive failing, a 5.8% chance of exactly one drive failing, and a 0.09% chance of both drives failing.
So the probability of at least one drive failing is 0.0582 + 0.0009 ≈ 0.059, i.e. about 6%.

Now, since we have 6x 2-way mirrored vdevs, the probability of both drives failing in at least one vdev is roughly 0.0009 * 6 ≈ 0.005, i.e. about 0.5%.
Do note that those numbers only account for drive failures, not a single drive failure combined with a URE (which also means data loss).
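For anyone who wants to check the arithmetic, here is a minimal Python sketch of the binomial step above, assuming the same p = 0.03 and a 2-way mirror; the pool-level line uses the exact complement rather than the 6x shortcut.

```python
from math import comb

p = 0.03   # assumed probability of a single drive failing over the period considered
n = 2      # drives per vdev (2-way mirror)

# Binomial pmf: Pr(X drives fail) = C(n, X) * p^X * (1-p)^(n-X)
for x in range(n + 1):
    print(f"Pr(X={x}) = {comb(n, x) * p**x * (1 - p)**(n - x):.4f}")

pr_any_failure = 1 - (1 - p)**n              # at least one drive fails: ~0.0591
pr_vdev_lost   = p**n                        # both drives of one mirror fail: 0.0009
pr_pool_lost   = 1 - (1 - pr_vdev_lost)**6   # any of the 6 vdevs loses both drives: ~0.0054
print(pr_any_failure, pr_vdev_lost, pr_pool_lost)
```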

In order to factor at least a single URE into our numbers, we first need to calculate the probability of a URE happening in a 2-way mirror:

Pr(URE) = 1 - (1 - URE)^bits_read, where bits_read = 4.8e+13 for 12TB drives at 50% used space (please forgive my approximation).
see here for a conversion

Assuming enterprise drives with URE = 1e-15, we get that the probability of a single URE (on a single disk) is Pr(URE) ≈ 0.05, i.e. 5%.

Now we add this number to the probability of at least one drive failing that we previously calculated:

0.0582 + 0.0009 + 0.05 ≈ 0.11, i.e. an 11% chance of experiencing data loss.

The event of both drives failing and the event of experiencing a URE are mutually exclusive in a 2-way mirror vdev: strictly, the formula shouldn't include the 0.0009 value, but given its low impact on the total I left it in.

Since we have 6x 2-way mirrored vdevs, the probability of our (≈ 50% full) pool experiencing data loss is 0.11 * 6 ≈ 0.66, i.e. an awful 66%.
P(LOSS) by drive capacity and number of 2-way mirror vdevs (≈ 50% full, URE = 1e-15):

Vdevs   2TB     4TB     6TB     8TB     10TB    12TB    14TB    16TB
1       6.6%    7.4%    8.2%    9.0%    9.7%    10.5%   11.3%   12.0%
2       13.2%   14.8%   16.4%   17.9%   19.5%   21.0%   22.5%   24.0%
3       19.8%   22.2%   24.6%   26.9%   29.9%   31.5%   33.8%   36.0%
4       26.5%   29.6%   32.8%   35.9%   39.0%   42.0%   45.0%   48.1%
5       33.1%   37.0%   40.9%   44.8%   48.7%   52.5%   56.3%   60.1%
6       39.7%   44.1%   49.1%   53.8%   58.4%   63.0%   67.6%   72.1%
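If anyone wants to regenerate numbers along these lines, here is a rough Python sketch following the same recipe (binomial failure terms plus a URE term, scaled by vdev count). It uses Pr(both fail) = p^2 and a numerically stable URE complement, so the values differ slightly from the table above depending on rounding and approximations.

```python
from math import comb, expm1, log1p

def pr_loss_per_vdev(capacity_tb, fill=0.5, p=0.03, ure=1e-15):
    """Rough per-vdev data-loss estimate for a 2-way mirror, per the recipe above."""
    bits_read = capacity_tb * 1e12 * 8 * fill      # bits actually resilvered
    pr_ure = -expm1(bits_read * log1p(-ure))       # 1 - (1-URE)^bits, computed stably
    pr_one_fail = comb(2, 1) * p * (1 - p)         # exactly one drive fails
    pr_both_fail = p ** 2                          # both drives fail
    return pr_one_fail + pr_both_fail + pr_ure

for tb in (2, 4, 6, 8, 10, 12, 14, 16):
    per_vdev = pr_loss_per_vdev(tb)
    row = "  ".join(f"{min(n * per_vdev, 1.0):6.1%}" for n in range(1, 7))
    print(f"{tb:>2} TB drives: {row}")
```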


With 3-way mirror vdevs the probability of a single URE grows*, but the overall probability of data loss is drastically reduced by the extra redundant drive; if anyone wants to run the numbers, you have everything you need to do so.

*I'm not sure though; maybe ZFS splits the load across the remaining disks, in which case Pr(URE) doesn't change.

EDIT: Spelling correction and insertion of the "Table" Spoiler.​
 
Last edited:

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
As hard disk capacities increase, is there a point where two-disk mirrored vdevs stop making sense, in terms of UREs or drive failures occurring during a resilver?
As far as URE goes, it's the same reasoning, and same maths, as for "RAID5 is dead".
Consumer drives are typically rated for a URE rate of "less than" 1 in 1E14 bits read; enterprise drives may be rated for 1 in 1E15 bits. 12 TB is 12*8*1E12 = 9.6E13 = 0.96E14 bits; compare with "1E14 bits"… Of course, this does not mean that there will necessarily be an URE for a complete reading of a 12 TB drive but the risk is significant.
Contrary to a RAID controller, ZFS need not read a whole drive to resilver, only the actual data, and an URE during the resilver of a degraded, non-redundant vdev would not end the process with a complete pool failure; it would only lose the affected file. Still, the basic expectation with redundancy is that, if you have an array with N degrees of redundancy and you lose exactly N drives (to mechanical failure, or whatever), then you should recover. But with large drives, one essentially loses one degree of redundancy to the risk of an URE: Have a 2-way mirror/RAID5/raidz1 (N=1 in all cases), lose exactly one drive, resilver a large amount of data; an URE occurs, and then you "go and grab that backup" even though you had just the right degree of redundancy to cover for the lost drive.

We are at, or beyond, the capacity point where 2-way mirrors of HDDs are NOT "safe enough" to avoid that. It actually closely followed the point where parity arrays were called unsafe: It is a simple matter of "data to resilver" vs. URE rate. If RAID5 was called "dead" beyond 1 TB, it only was a matter of drives growing a little more before 2-way mirror had to be called unsafe: There's the same capacity, and the same amount of data to resilver, in a 6-wide RAID5 of 1 TB drives as there is in a single 5 TB drive in RAID1. (With ZFS, make that "raidz1" and "mirror", and adjust for actual fill rate; change width at will; same conclusion anyway: Multi-TB drives are too large for comfort.)
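For a rough sense of scale, here is a small sketch of the expected URE count over a resilver read, assuming spec-sheet URE rates and that all of the listed surviving data is read (the helper name is hypothetical):

```python
# Expected number of UREs over a resilver read, straight from the spec-sheet rate.
def expected_ures(tb_read, ure_rate):
    return tb_read * 8e12 * ure_rate   # TB -> bits, times errors per bit

print(expected_ures(5, 1e-14))    # 5 TB read (6-wide RAID5 of 1 TB drives, or one 5 TB RAID1 survivor): ~0.4
print(expected_ures(12, 1e-14))   # one surviving 12 TB consumer-rated drive read in full: ~0.96
print(expected_ures(12, 1e-15))   # same drive with an enterprise 1E-15 rating: ~0.1
```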

With respect to the risk of a further mechanical failure during the resilver, possibly caused by strain from the resilver, one may argue that resilvering a mirror is less stressful than resilvering a raidz# array (a simple, huge sequential read of the surviving drive vs. a mixed read/write workload with parity data). But basically one does not want to get to that point: if you're resilvering with no remaining redundancy, anxiously computing the likelihood of a further drive failure during that process, which would mean a complete pool loss, you are already in trouble.

SSDs are fine, for now, because their URE rate is typically (less than) 1 in 1E17 bits.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I think your analysis is missing a comparison vs. a RAIDZ equivalent for a fairer assessment of the risk of striped mirrors.
In a RAIDZ, the probability of a further drive failure goes up for all the other drives involved, vs. just the sibling drive in striped mirrors, due to two confounding variables:
- Resilvering puts load on all the other n-1 drives in the vdev (which is typically the entire pool) vs. just the sibling in a striped mirror.
- Resilvering time is orders of magnitude longer, putting a lot more load on the rest of the surviving drives for a lot longer.

If RAID5 was called "dead" beyond 1 TB, it only was a matter of drives growing a little more before 2-way mirror had to be called unsafe: There's the same capacity, and the same amount of data to resilver, in a 6-wide RAID5 of 1 TB drives as there is in a single 5 TB drive in RAID1. (With ZFS, make that "raidz1" and "mirror", and adjust for actual fill rate; change width at will; same conclusion anyway: Multi-TB drives are too large for comfort.)
This is simply not true. In a 6-wide RAIDZ1 (I can't speak for RAID5), for each block resilvered, a block has to be read from each of the surviving drives, putting FIVE times the I/O load vs. a 6-drive striped mirror pool where you only need to read a block from the sibling drive. You can clearly observe this from the significant difference in resilver times between the two topologies, even if you don't use the pool at all while degraded. This is also why your degraded RAIDZ pool's performance is slow as a snail when you're resilvering. On the other hand, a 6-drive striped mirror pool should still perform fairly well while resilvering. Putting RAIDZ1 and mirrors in the same category (your last statement in parentheses) is an injustice.

TL;DR:
RAIDZ1 != mirrors. Mirrors are significantly better in both performance and redundancy.
 
Last edited:

nickw_

Cadet
Joined
May 9, 2023
Messages
9
Thank you everyone. This paints a very clear picture and helps grow my knowledge.

Davvo, this is helpful for me really understanding the individual risks. Thank you for taking the time to break it all down, much appreciated!

With SSDs so cheap these days, I would sooner use those (in a second pool) for VMs or situations where IOPS are needed, then just back up the SSD pool to a larger z2/z3 pool of HDDs. I'm still learning and trying to understand, but that's probably what I will do.

Cheers,
Nick
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
A heads-up: I have written a resource that goes in depth on the topics we touched in this thread, correcting a few mistakes in my original post.
 

Richard Kellogg

Dabbler
Joined
Jul 30, 2015
Messages
27
TL;DR: Please don't do 6 vdevs of two-way mirrored 12TB disks.
[...]
Since we have 6x 2-way mirrored vdevs, the probability of our (≈ 50% full) pool experiencing data loss is 0.11 * 6 ≈ 0.66, i.e. an awful 66%.

Something seems odd to me. Let's take a simpler example to explain my position. Take a pool with one vdev consisting of a 2-disk mirror. Let the disks be Seagate Exos 20 TB drives. They have a stated MTBF of 2.5 million hrs, at an annual use rate of 550 TB, and have a 5 year warranty.

So first off: what is the probability of at least one UE over their 5 year life? That would be 2 drives x 5 yr x 8760 hr/yr / 2,500,000 hr/drive = 0.035. In other words, there is only a 1 in 28.5 chance of getting even one UE over their 5 year warrantied life.

So now let's say we have a UE on a drive. The most likely result is that the other drive in the mirror has a good copy of the bad data, and the drive is repaired, requiring just a few writes on the drive that had the UE. But let's take the case where the drive is failed, or where we take a no-tolerance position and replace the now-suspect drive. This will require a total read of our good drive, without any UEs.

What is that probability? Instead of using a 1 in 10^14 bit error rate, I go back to the stated 2,500,000 hour MTBF and a 550 TB yearly rate. Reading 20 TB is 20/550 * 8760 hours = 318.5 hrs.

318.5 hrs / 2,500,000 hrs/failure = 0.00013, or about 1/7800. In other words, even if we have an error that requires disk replacement with a subsequent copy of the entire 20 TB drive, there is only a 1/7800 chance that the copy will fail, resulting in a lost pool.
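For reference, a small Python sketch just reproducing the arithmetic above from the quoted spec figures (MTBF-based rather than URE-based):

```python
MTBF_HOURS = 2_500_000        # quoted spec
WARRANTY_YEARS = 5
RATED_TB_PER_YEAR = 550
DRIVE_TB = 20

# At least one failure event across 2 drives over the warranty period
hours_exposed = 2 * WARRANTY_YEARS * 8760
print(hours_exposed / MTBF_HOURS)            # ~0.035 -> about 1 in 28.5

# Rated-workload hours needed to read the full 20 TB once
hours_to_read = DRIVE_TB / RATED_TB_PER_YEAR * 8760
print(hours_to_read, hours_to_read / MTBF_HOURS)   # ~318.5 h, ~0.00013 -> about 1 in 7800
```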

This seems good enough to me.

Tell me where my analysis is flawed.
 
Last edited by a moderator:

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
The flaw is equating Unrecoverable Read Error with failure. "Failure" is the disk breaking down, mechanically or electrically. An URE is not a "failure" and you can't get the URE rate from the MTBF.

Resilver does not require a total read of the good drive, only of the actual data on it.

Let 'u' be the URE rate.
The probability of not getting an URE on one bit is p(1) = 1 - u.
For 'n' bits read, p(n) = (1-u)^n. Logarithms will help with computing: ln(p(n)) = n*ln(1-u) ≈ -u * n.
So: p(n) ≈ exp(-u*n)
For N bytes: p(N) ≈ exp(-8*u*N).

Say we have 12 TB of data on the drive (60% full) and u=1E-14:
p(12TB) = exp(-8*12*1E12*1E-14) = exp(-0.96) = 0.38
i.e. a 38% chance of a smooth resilver, without having to fetch the backup to restore an individual file where an URE occurred. Do you take your luck with it, or go for 3-way mirrors (or raidz2)?
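A minimal sketch of this approximation, with a hypothetical helper name; the URE rate and amount of data read are the only inputs:

```python
from math import exp

def smooth_resilver_probability(data_tb, ure_rate):
    """P(no URE while reading data_tb terabytes), exponential approximation."""
    return exp(-8 * ure_rate * data_tb * 1e12)

print(smooth_resilver_probability(12, 1e-14))   # ~0.38: consumer-class 1E-14 rating
print(smooth_resilver_probability(12, 1e-15))   # ~0.91: enterprise-class 1E-15 rating
```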
 

Richard Kellogg

Dabbler
Joined
Jul 30, 2015
Messages
27
I take your point. But then you would have the same problem whenever you decide it's time to replace a drive in the pool.

But where does the 10^-14 error rate come from? I don't see it on the manufacturer's data sheet. The UE must come from data that was previously written and now can no longer be read. But aren't these caught when the disks are periodically scrubbed, and corrected by reading the corresponding block on the mirrored disk?

Are you suggesting a nearly full 20 TB disk that was just successfully scrubbed, and now is constantly being read, will come up with a UE every 10^14 bits? That is more than 1 UE expected for each complete read of a 20 TB disk.

Perhaps that is true, but it doesn’t seem right.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
But then you would have the same problem whenever you decide it's time to replace a drive in the pool.
If you wait for a drive to fail before replacing it, then you are better off having more than a single drive of redundancy because, by definition, you will need to resilver with one drive short.

When you mix the strategy of not replacing before failure with single-drive redundancy, the only outcome is to resilver while standing on one foot.

That is why @jgreco talked about policies:
If your policy says that you must always keep some redundancy, you cannot do 2-way mirrors, much less replace only on failure.
Should your policy ask for a best effort to avoid redundancy loss, you can do 2-way mirrors + preventive replacement, with an extra disk drive in an extra bay.
Without such policies, 2-way mirrors + replacing on failure can be considered a calculated risk if you have the corresponding backups and DR in place, like ZFS replication to a different system.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
They have a stated MTBF of 2.5 million hrs
In addition to the (already-noted) error in equating URE with "failure", there's also the fact that MTBF is a near-meaningless spec. It doesn't mean that you can reasonably expect a disk to last 2.5M power-on hours (which would be about 285 years) before it fails, which is what a normal person would assume from that spec. Nor does it mean that they tested a large number of drives (or, indeed, any drives at all) to a lifespan of 2M+ hours. It's more like 1000 drives each ran for 2500 hours without failure. You'd be doing well to get 3% of that 2.5M hours time-in-service.
 

Richard Kellogg

Dabbler
Joined
Jul 30, 2015
Messages
27
In addition to the (already-noted) error in equating URE with "failure", there's also the fact that MTBF is a near-meaningless spec. It doesn't mean that you can reasonably expect a disk to last 2.5M power-on hours (which would be about 285 years) before it fails, which is what a normal person would assume from that spec. Nor does it mean that they tested a large number of drives (or, indeed, any drives at all) to a lifespan of 2M+ hours. It's more like 1000 drives each ran for 2500 hours without failure. You'd be doing well to get 3% of that 2.5M hours time-in-service.
Yes, I understand that, and I used MTBF in the way you described. What I got wrong was that manufacturers apparently don't count an inability to reliably read data that was previously written successfully as a failure.
 

Richard Kellogg

Dabbler
Joined
Jul 30, 2015
Messages
27
If you wait for a drive to fail before replacing it, then you are better off having more than a single drive of redundancy because, by definition, you will need to resilver with one drive short.
[...]
I have been using a 3-way mirror for the past 7 years. So my policy has been for 3 way mirrors.

But I'm questioning the facts here. I don't question the formulas, but I question the 10^-14 bit read error rate. I believe if you wrote a series of 100 GB files and filled up a 20 TB disk, you could reliably read the entire disk contents back several times without an error. Many home theater owners do just that.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
But where does the 10^-14 error rate come from? I don't see it on the manufacturer's data sheet.
[Attached screenshot: drive data sheet excerpt showing the unrecoverable read error rate spec]

They are in the data sheets, often under different names.

Edit: quoting the resource linked in this thread.
I haven't been able to find reliable information about how the OEMs test UREs and what the legal constraints behind this process are: we don't know values such as the variance or standard deviation between the reported values and the actual values, but we are observing huge discrepancies; having a scientific background, this annoys me greatly. [...] What's certain is that the lack of independent and reliable testing (scientific method and a large enough population), in conjunction with perhaps a grey area in regulations, is hurting us consumers.
 
Last edited:

Richard Kellogg

Dabbler
Joined
Jul 30, 2015
Messages
27
I think I understand this better now, and I agree with what you guys have been saying. Thanks to everyone who contributed.

I found this presentation from CERN, which goes a long way toward explaining modern drive error mechanisms: https://indico.cern.ch/event/247864...ts/426734/592321/HEPIX_October_2013_ver_6.pdf

Attached is a slide that shows cumulative errors vs TB read. From this graph, it sure looks like one can expect read errors when reading an additional 10 TB, especially as the drive gets older.
 

Attachments

  • IMG_0443.png

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I take your point. But then you would have the same problem whenever you decide it's time to replace a drive in the pool.
Yes, but only if there's no redundancy in the pool. A raidz2 or 3-way mirror vdev which loses one drive still has redundancy and can resilver successfully even if UREs occur during the read—as long as two drives do not return an URE exactly on the same bit of data.
Essentially, with large drives one degree of redundancy is lost to the risk of an URE. To safely sustain the loss of one drive, a vdev should then have two degrees of redundancy.

But where does 10^-14 error rate come from?
Err… actually from your own post #10.
20 TB Exos drives have a specified rate of 1E-15 (datasheet), which then works out to p(12TB) = 0.908. So a 2-way mirror vdev of 20 TB Exos drives which loses one drive while 60% full would have a 91% chance of resilvering without issue. Better… but there's still a less-than-fully-comfortable 9% chance of having to go for the backup to restore a damaged file.

20 TB Ironwolf and WD Red Pro drives are also rated at u=1E-15. WD Red Plus are rated at u=1E-14, but this line tops at 14 TB "only". (Still, no-one is going to like the calculation for resilvering out of a 14 TB Red Plus drive with 8-10 TB of actual, valuable, data on it.)
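Filling in that calculation as a quick sketch, assuming roughly 8-10 TB of data and the 1E-14 rating (it lands around a 45-53% chance of a clean resilver):

```python
from math import exp

# 14 TB Red Plus class drive (rated 1E-14), 2-way mirror, resilvering 8-10 TB of real data
for data_tb in (8, 9, 10):
    p_clean = exp(-8 * 1e-14 * data_tb * 1e12)
    print(f"{data_tb} TB of data: {p_clean:.0%} chance of a URE-free resilver")
```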

Take-home points: Do not entrust large amounts of valuable data to vdevs/pools with only one degree of redundancy. And ALWAYS HAVE BACKUPS!
 
Last edited:

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Still, no-one is going to like the calculation for resilvering out of a 14 TB Red Plus drive with 8-10 TB of actual, valuable, data on it.
Another thing to point out here: resilvering a RAIDZ is always significantly (possibly orders of magnitude) slower than simple mirrors and puts extra load on more drives in the pool.
 