Comparing vdev and zpool configuration reliability


melp

Explorer
Joined
Apr 4, 2014
Messages
55
I'm currently planning for a 24-drive FreeNAS build and wanted to compare some different vdev configurations. I did some basic math to come up with relative probabilities of data loss and wanted to share these for comments and discussions.

A couple of notes before I get started: my build will be used as a media server, so performance isn't my highest priority and I won't be sticking to the (2^n)+x rule-of-thumb that cyberjock outlined on slide 33 of his excellent guide. I also don't account for the decrease of vdev reliability as the number of drives within that vdev increases; this is something I don't know very much about and was hoping to discuss here.

In addition to that, keep the following in mind (from DrKK, who helped a lot with these calculations):
Let's start with the assumption (try not to laugh, it's serious) that one drive failing in a vdev, or in any part of the zpool, is completely independent of another drive's failure. i.e., the fact that drive #3 just failed will have no bearing on when or if drive #7 fails. Now, you're thinking "obviously"!

But it's not so obvious. Two reasons:
  1. A lot of people tend to populate with drives from the same production run. i.e., made at around the same time (or even, the same exact lot) in the factory. Obviously, QC dynamics indicate that the drives in that particular lot could share characteristics, in terms of failure rates and times to failure. In other words, the very fact that one drive from the same lot failed might indicate that the other drives (if any) from the lot might be more prone to fail. I hope this is clear. We are going to *NOT* factor in this kind of intra-dependence.
  2. There is actually anecdotal evidence that drives, even if not from the same run, somehow may tend to fail in groups. Also, when one drive has failed in ZFS, the act of resilvering or whatever is itself a high-risk, high-intensity operation which puts the drive under considerably more stress than its default run condition. So, the *ACT* of trying to resilver the pool may actually, ironically, increase the likelihood that the pool dies. We are *NOT* going to factor this in either.
Both of the above properties are real, and non-zero---but we'll assume "minor". If we tried to factor them in, we really WOULD need a statistician. Your mileage may vary.

With that in mind, the two configurations that I'm considering are as follows:
  1. 3 vdevs, 8 drives per vdev, each in RAIDZ2
  2. 2 vdevs, 12 drives per vdev, each in RAIDZ3
The quantity of drives used for parity data is equal in these two configurations (six drives in each case), so both give you the same usable storage capacity. This makes for an interesting comparison; as I'll show, one appears to offer more reliability than the other.

Let's assume that all 24 of your drives have a certain probability to fail during a given unit of time, and that you would use the same exact drives for either configuration. This would result in a certain rate of failure (similar to the ones that drive manufacturers pull out of their ass and call "mean time between failures" or "MTBF"). Because we're comparing the relative reliability (as opposed to the absolute reliability, i.e., a period of time), the exact rate doesn't matter as long as we're consistent with the two cases. If we call the rate at which our drives fail r, we can call the probability of failure of a single drive f = 1/r.
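To give a rough sense of scale for f (this is just an illustration under a constant-failure-rate assumption, with made-up numbers, and isn't needed for the comparison):

```python
import math

# Hypothetical example: a drive rated at a 1,000,000-hour MTBF, watched
# over one year of continuous operation.
mtbf_hours = 1_000_000
hours_per_year = 24 * 365

# Under a constant failure rate (exponential) model, the probability that a
# single drive fails within time t is 1 - exp(-t / MTBF). For t much smaller
# than the MTBF this is roughly t / MTBF, which is the "f = 1/r" idea above.
f_exact = 1 - math.exp(-hours_per_year / mtbf_hours)
f_approx = hours_per_year / mtbf_hours

print(f"P(one drive fails within a year): {f_exact:.4f} (approx. {f_approx:.4f})")
```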

A few quick points for those who haven't studied basic probability before (there's a short sketch of these rules right after this list):
  • Multiplication in probability is like an AND operator; if you multiply the probability of event X occurring by the probability of event Y occurring (and X and Y are independent), you get the probability that event X AND event Y will both occur.
  • Addition in probability is like an OR operator; if you add the probability of event X occurring to the probability of event Y occurring (and X and Y can't both happen at the same time), you get the probability that event X OR event Y will occur.
  • 1 - probability(event X occurring) = probability(event X NOT occurring)
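Here's a tiny Python sketch of those three rules with arbitrary example numbers (X and Y are assumed independent for the AND rule and mutually exclusive for the OR rule):

```python
# Arbitrary example probabilities for two events X and Y.
p_x = 0.10  # probability of event X
p_y = 0.20  # probability of event Y

# AND: for independent events, P(X and Y) = P(X) * P(Y)
p_x_and_y = p_x * p_y   # 0.02

# OR: for mutually exclusive events, P(X or Y) = P(X) + P(Y)
p_x_or_y = p_x + p_y    # 0.30

# NOT: P(not X) = 1 - P(X)
p_not_x = 1 - p_x       # 0.90

print(p_x_and_y, p_x_or_y, p_not_x)
```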
We'll start with configuration 1. In this config, we have 3 vdevs, and 3 or more drives in the same vdev must fail for us to have data loss; the loss of any single vdev results in the loss of the entire zpool. We'll start by calculating the probability of losing a single vdev of 8 drives using a binomial distribution (http://en.wikipedia.org/wiki/Binomial_distribution):

f(k;n,p) = (n choose k) * p^k * (1-p)^(n-k)
where​
(n choose k) = n!/(k!(n-k)!)

For configuration 1, we'll have p = f (the probability of a single drive failure), n = 8, k = 3:

(8 choose 3) * f^3 * (1-f)^5
-> 56 * f^3 * (1-f)^5

This has 3 parts to it:

i) (8 choose 3) is saying "how many ways can I have 3 failures in 8 drives?" Using the binomial coefficient, we determine there are 56.​
ii) f^3 is the probability of 3 drive failures​
iii) (1-f)^5 is the probability that the other 5 drives don't fail.​

Taken all together, it's saying:

The probability that...​
...drives 1, 2, and 3 fail, and that 4, 5, 6, 7, and 8 don't fail, -OR-​
...drives 1, 2, and 4 fail, and that 3, 5, 6, 7, and 8 don't fail, -OR-​
...drives 1, 2, and 5 fail, and that 3, 4, 6, 7, and 8 don't fail, -OR-...​
...and so on, 56 times, once for each possible combination of failures. Again, all of this is the probability that we'll lose exactly 3 drives in one vdev. However, this alone doesn't fully account for the probability that we'll lose the vdev, since we can also lose it by having 4 drives fail, or 5, 6, 7, or even all 8. To account for these, we'd have to add 5 more binomial terms, with n=8 and k=4 through 8. With all of these summed up, we'd have the probability that 3 or more drives in a vdev fail. That's a lot of terms. A simpler option makes use of the fact that:

probability(3 or more drives failing) = 1 - probability(2 or fewer drives failing)
Because of a similar trick you'll see in the next step, we'll actually work with probability(2 or fewer drives failing), i.e., the probability that the vdev is still alive (the probability that it's dead is just 1 minus this). We'll still sum several binomial terms (3 of them, to be exact, as opposed to 6 the other way), with n=8 and k=2, 1, 0. This is what it'll look like (we'll call the whole thing A):
A = [(8 choose 2) * f^2 * (1-f)^6] + [(8 choose 1) * f^1 * (1-f)^7] + [(8 choose 0) * f^0 * (1-f)^8]
-> A = [28 * f^2 * (1-f)^6] + [8 * f * (1-f)^7] + [1 * 1 * (1-f)^8]
-> A = [28 * f^2 * (1-f)^6] + [8 * f * (1-f)^7] + [(1-f)^8]

Notice the 3 sets of [] brackets. The first set is the probability that 2 drives in our vdev fail, the second set is the probability that 1 drive fails, and the last is the probability that none of the drives fail. Summing all these up is saying "the probability that two drives fail -OR- one drive fails -OR- zero drives fail".
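If you want to sanity-check A numerically, here's a minimal Python sketch of the same sum (f = 0.05 is just an arbitrary example value for the per-drive failure probability):

```python
from math import comb

def vdev_alive(n_drives, parity, f):
    """Probability that a vdev of n_drives is still alive, i.e. that at most
    `parity` of its drives fail, with each drive failing independently with
    probability f. This is just the sum of binomial terms described above."""
    return sum(comb(n_drives, k) * f**k * (1 - f)**(n_drives - k)
               for k in range(parity + 1))

f = 0.05                  # example per-drive failure probability
A = vdev_alive(8, 2, f)   # 8-drive RAIDZ2 vdev survives 0, 1, or 2 failures
print(f"A = {A:.6f}")
```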

Now we need to account for the fact that we have 3 vdevs, and that if at least one of them fails (2 could fail, or even all 3), we lose the whole zpool. We could do this with another set of binomial terms, this time using p = (1 - A) (the probability that a vdev fails), n = 3, and k = 1, 2, and 3, but an easier option is to use the same trick as above:

probability(at least one vdev fails) = 1 - probability(all 3 vdevs are alive)

We calculated the probability of a single vdev being alive in the previous step, and we'll use that here to calculate C1, the probability of losing our whole zpool in configuration 1:

C1 = 1 - A^3
-> C1 = 1 - ([28 * f^2 * (1-f)^6] + [8 * f * (1-f)^7] + [(1-f)^8])^3

To reiterate, A is the probability that one of our vdevs is healthy, so A^3 is the probability that vdev1 AND vdev2 AND vdev3 are all healthy, and 1 - A^3 is the opposite of that, i.e., at least one vdev has failed (and our whole zpool is lost).
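As a quick numeric example of C1 (again with an arbitrary f = 0.05, and using the expanded form of A directly so the snippet stands on its own):

```python
f = 0.05  # example per-drive failure probability

# A: probability that one 8-drive RAIDZ2 vdev is alive (0, 1, or 2 failures)
A = 28 * f**2 * (1 - f)**6 + 8 * f * (1 - f)**7 + (1 - f)**8

# C1: probability that at least one of the 3 vdevs dies, losing the pool
C1 = 1 - A**3
print(f"A = {A:.6f}, C1 = {C1:.6f}")
```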
Now let's look at configuration 2. In this config, we have 2 vdevs, and 4 (or more) drives in the same vdev must fail for us to have data loss; the loss of either vdev results in the loss of the entire zpool. We'll proceed in the same way as config 1, using the same trick to compute the probability that one vdev is alive, but with p = f (same as before), n = 12, and k = 3, 2, 1, and 0:
B = [(12 choose 3) * f^3 * (1-f)^9] + [(12 choose 2) * f^2 * (1-f)^10] + [(12 choose 1) * f^1 * (1-f)^11] + [(12 choose 0) * f^0 * (1-f)^12]
-> B = [220 * f^3 * (1-f)^9] + [66 * f^2 * (1-f)^10] + [12 * f * (1-f)^11] + [1 * 1 * (1-f)^12]
-> B = [220 * f^3 * (1-f)^9] + [66 * f^2 * (1-f)^10] + [12 * f * (1-f)^11] + [(1-f)^12]
Again, this is the probability that one of our 12-drive vdevs is alive. As above, we'll determine the probability that at least one vdev fails by computing 1 minus the probability that both vdevs are alive, and we'll call this C2:

C2 = 1 - B^2
-> C2 = 1 - ([220 * f^3 * (1-f)^9] + [66 * f^2 * (1-f)^10] + [12 * f * (1-f)^11] + [(1-f)^12])^2
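And the same kind of quick check for configuration 2, comparing C2 against C1 at an example value of f:

```python
f = 0.05  # example per-drive failure probability

# Configuration 1: 3 vdevs of 8 drives in RAIDZ2
A = 28 * f**2 * (1 - f)**6 + 8 * f * (1 - f)**7 + (1 - f)**8   # vdev alive
C1 = 1 - A**3                                                   # pool lost

# Configuration 2: 2 vdevs of 12 drives in RAIDZ3
B = (220 * f**3 * (1 - f)**9 + 66 * f**2 * (1 - f)**10
     + 12 * f * (1 - f)**11 + (1 - f)**12)                      # vdev alive
C2 = 1 - B**2                                                   # pool lost

print(f"C1 = {C1:.6f}")
print(f"C2 = {C2:.6f}")  # smaller is better
```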

At this point, we want to compare them. Let's look at a graph of both C1 and C2 on Wolfram Alpha: https://www.wolframalpha.com/input/?i=graph+%281-%28%5B28*f%5E2*%281-f%29%5E6%5D%2B%5B8*f*%281-f%29%5E7%5D%2B%5B%281-f%29%5E8%5D%29%5E3%29+and+%281-%28%5B220*f%5E3*%281-f%29%5E9%5D%2B%5B66*f%5E2*%281-f%29%5E10%5D%2B%5B12*f*%281-f%29%5E11%5D%2B%5B%281-f%29%5E12%5D%29%5E2%29
Here's a close-up of the part we care about: http://i.imgur.com/rVxLqjq.png (blue = config 1, red = config 2)

Obviously, this graph is showing the probability of a failure (as opposed to the rate of failure), so smaller numbers are better. Based on this, it appears that the probability of losing our whole zpool is always lower when using configuration 2.

You can easily apply the process I've outlined here to any other configurations to compare them, but take the results with a grain of salt. Again, I'm not accounting for the apparent decrease of vdev reliability as the number of drives within that vdev increases, and the possibility that drive failures may not be entirely independent.

Footnote: This has been an iterative process, and you can see some of the steps below. DrKK helped out a lot with getting everything straight and simplified. If you have any questions, comments, corrections, please let me know.
 

melp

Explorer
Joined
Apr 4, 2014
Messages
55
Ok I messed this up a bit (a lot). My GF is a stats PhD student and she said I did it all wrong. The correct formulas for each are (where p = 1/r):
  1. 3 * ((8 choose 3) * p^3 * (1-p)^5)
  2. 2 * ((12 choose 4) * p^4 * (1-p)^8)
This is via the binomial distribution (which I really did learn about in school, but must have blocked out...): http://en.wikipedia.org/wiki/Binomial_distribution

This is a helpful visualization of the relationship: https://www.wolframalpha.com/input/?i=graph+168*p^3*(1-p)^5+and+990*p^4*(1-p)^8
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
To our resident math guy.... @Drkk
 

melp

Explorer
Joined
Apr 4, 2014
Messages
55
Still not convinced I did this right... see: http://www.reddit.com/r/math/comments/229sfi/can_you_do_a_nested_binomial_distribution/

I think I need another binomial distribution for the vdevs, but I'm not sure if that's "allowed". Asking more stats PhD friends for help... stay tuned. Once I get it figured out, I'm gonna set up a JS calculator or something that will generate Wolfram Alpha links so you can compare your own setups without exploding your head.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
I might try to rederive this in a bit. By the way, I have, like you, had relations with a few "stats Ph.D. women" in my day. Believe me when I tell you: It does not follow that you can trust their math, sir.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
That was a shameless troll by me. I kid I kid I kid :) (Well, sort of.)

Getting out my pen and paper now.
 

melp

Explorer
Joined
Apr 4, 2014
Messages
55
Heard that, brother. I'm re-doing the OP now. Hopefully we come up with the same numbers.

edit: OK, OP updated... but I think C1 and C2 might actually be the following (adding in that more than 1 vdev could fail at once; note that here A and B are the vdev failure probabilities, i.e., 1 minus the survival probabilities from the OP):

C1 = [3 * A * (1-A)^2] + [(3 choose 2) * A^2 * (1-A)] + [A^3]​
C2 = [2 * B * (1-B)] + [B^2]​
This wouldn't be too far off what I had, since the probability of two or even three vdevs failing at the same time is slim to nil (unless, of course, you forget to sacrifice a virgin to the data storage gods on the night of the first full moon in harvest season).

edit2: my stats woman says that I can apply the same logic to the individual vdev probabilities (A and B) to account for more drives failing at once, but it wouldn't end up changing the results much.
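A quick numeric check in Python that the "sum over failure combinations" form above agrees with the "1 - (all vdevs alive)" form from the OP; A below is the vdev failure probability, as in the formulas above:

```python
# A = probability that a single vdev fails; any example values will do.
for A in (0.001, 0.01, 0.1, 0.5):
    # Sum over the ways at least one of 3 vdevs can fail: exactly 1, 2, or 3.
    combo_sum = 3 * A * (1 - A)**2 + 3 * A**2 * (1 - A) + A**3
    # Complement form: 1 - P(all 3 vdevs survive)
    complement = 1 - (1 - A)**3
    assert abs(combo_sum - complement) < 1e-12
    print(A, combo_sum, complement)
```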
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
I have completed my analysis. 2x12 in a Z3 is far safer, under certain (reasonable) assumptions. The figures and code are coming.
 

melp

Explorer
Joined
Apr 4, 2014
Messages
55
This is what I came up with, calculating
  1. 1-p(2 or fewer drives fail)
  2. 1-p(3 or fewer drives fail)
and the probability of 1 or more vdevs failing:

A = (1-((28*f^2*(1-f)^6)+(8*f*(1-f)^7)+((1-f)^8)))​
B = (1-((220*f^3*(1-f)^9)+(66*f^2*(1-f)^10)+(12*f*(1-f)^11)+((1-f)^12)))​
C1 =​
(3*(1-((28*f^2*(1-f)^6)+(8*f*(1-f)^7)+((1-f)^8)))*(1-(1-((28*f^2*(1-f)^6)+(8*f*(1-f)^7)+((1-f)^8)))^2))​
+(3*(1-((28*f^2*(1-f)^6)+(8*f*(1-f)^7)+((1-f)^8)))^2*(1-(1-((28*f^2*(1-f)^6)+(8*f*(1-f)^7)+((1-f)^8)))))
+(1-((28*f^2*(1-f)^6)+(8*f*(1-f)^7)+((1-f)^8)))^3​
C2 =​
(2*(1-((220*f^3*(1-f)^9)+(66*f^2*(1-f)^10)+(12*f*(1-f)^11)+((1-f)^12)))*(1-(1-((220*f^3*(1-f)^9)+(66*f^2*(1-f)^10)+(12*f*(1-f)^11)+((1-f)^12)))))​
+ ((1-((220*f^3*(1-f)^9)+(66*f^2*(1-f)^10)+(12*f*(1-f)^11)+((1-f)^12)))^2)​
... a bit of a mess. Wolfram won't take them because the input is too long. I'll run it through R later today.
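For what it's worth, the same evaluation is easy to do in plain Python rather than Wolfram or R; a small sketch with an arbitrary example value of f:

```python
f = 0.05  # example per-drive failure probability

# A, B: probability that a single vdev fails (as defined above)
A = 1 - (28 * f**2 * (1 - f)**6 + 8 * f * (1 - f)**7 + (1 - f)**8)
B = 1 - (220 * f**3 * (1 - f)**9 + 66 * f**2 * (1 - f)**10
         + 12 * f * (1 - f)**11 + (1 - f)**12)

# Probability of losing the pool = probability that 1 or more vdevs fail
C1 = 3 * A * (1 - A)**2 + 3 * A**2 * (1 - A) + A**3   # 3 vdevs (config 1)
C2 = 2 * B * (1 - B) + B**2                            # 2 vdevs (config 2)

print(f"C1 = {C1:.6f}")
print(f"C2 = {C2:.6f}")
```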
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
(Full Disclosure: I have a Ph.D. in math from a good school. Trust me.)

OK, here's the deal.

Let's start with the assumption (try not to laugh, it's serious) that one drive failing in a vdev, or in any part of the zpool, is completely independent of another drive's failure. i.e., the fact that drive #3 just failed will have no bearing on when or if drive #7 fails. Now, you're thinking "obviously"!

But it's not so obvious. Two reasons:

  1. A lot of people tend to populate with drives from the same production run. i.e., made at around the same time (or even, the same exact lot) in the factory. Obviously, QC dynamics indicate that the drives in that particular lot could share characteristics, in terms of failure rates and times to failure. In other words, the very fact that one drive from the same lot failed might indicate that the other drives (if any) from the lot might be more prone to fail. I hope this is clear. We are going to *NOT* factor in this kind of intra-dependence.
  2. There is actually anecdotal evidence that drives, even if not from the same run, somehow may tend to fail in groups. Also, when one drive has failed in ZFS, the act of resilvering or whatever is itself a high-risk, high-intensity operation which puts the drive under considerably more stress than its default run condition. So, the *ACT* of trying to resilver the pool may actually, ironically, increase the likelihood that the pool dies. We are *NOT* going to factor this in either.
Both of the above properties are real, and non-zero---but we'll assume "minor". If we tried to factor them in, we really WOULD need a statistician. Your mileage may vary.

OK, this being the case, let's calculate the 3x8-drive Z2 pool:

Such a vdev survives if 0, 1, or 2 drives die. Therefore, let us (instead of calculating the probability of failure), calculate the (easier) probability of survival. Let's call p = "the probability that one drive fails, independently, in some unit time frame". Then, q = (1-p) is obviously the probability that it survives. Then, we have:
  • 0-failure probability: q^8
  • 1-failure probability: 8 * p * q^7
  • 2-failure probability: 28 * p^2 * q^6.
Thus, adding all those up, and doing some 7th grade arithmetic/algebra, you'll find your probability of having 2, or fewer, failures sums to:
S(p) = q^6 * (21p^2 + 6p + 1) **PER VDEV**. So, if we'd like the probability that a zpool, with 3 such vdevs, would ***FAIL***, then we're going to have:
1.0 - (S(p) ^3).

Similarly, for the Z3 pool (I won't repeat all the math), we find that the probability of survival of a single vdev sums to:
T(p) = q^9 * (165p^3 + 45p^2 + 9p + 1) **PER VDEV**. So, with 2 such vdevs, the FAIL probability is:
1.0 - (T(p) ^ 2).
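If you'd rather let a computer do the 7th grade algebra, here's a quick symbolic check of those factored forms (a sketch using sympy, assuming it's installed):

```python
import sympy as sp

p = sp.symbols('p')
q = 1 - p

# Survival of an 8-drive Z2 vdev: 0, 1, or 2 failures
S_sum = q**8 + 8*p*q**7 + 28*p**2*q**6
S_factored = q**6 * (21*p**2 + 6*p + 1)

# Survival of a 12-drive Z3 vdev: 0, 1, 2, or 3 failures
T_sum = q**12 + 12*p*q**11 + 66*p**2*q**10 + 220*p**3*q**9
T_factored = q**9 * (165*p**3 + 45*p**2 + 9*p + 1)

print(sp.expand(S_sum - S_factored))  # prints 0 if the factored form matches
print(sp.expand(T_sum - T_factored))  # prints 0 if the factored form matches
```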

Now, *****RECALL***** this is just a basic combinatorial argument here. There are ZFS considerations and all kinds of potential interdependence in these failures. BUT, when you work it out, you get the following results, which show that in this naive analysis, 2 vdevs of 12 disks in Z3 gives a far lower zpool loss rate than 3 vdevs of 8 disks in Z2:

=============

p        3x8/Z2            2x12/Z3
0.000000 0.000000000000000 0.000000000000000
0.010000 0.000161791237341 0.000107863733593
0.020000 0.001245854556261 0.000830742261246
0.030000 0.004044125938784 0.002697904450341
0.040000 0.009210613069475 0.006149873663340
0.050000 0.017264337314026 0.011542932392574
0.060000 0.028591914985831 0.019153283613196
0.070000 0.043450973621787 0.029181248950223
0.080000 0.061975160563826 0.041755741158220
0.090000 0.084181140473199 0.056939137412428
0.100000 0.109977687199337 0.074732595534596
0.110000 0.139176747948087 0.095081793144244
0.120000 0.171506188997157 0.117883025992508
0.130000 0.206623817171573 0.142989573045062
0.140000 0.244132204218262 0.170218219317097
0.150000 0.283593815763880 0.199355820508790
0.160000 0.324545955867771 0.230165793982863
0.170000 0.366515075217018 0.262394426725668
0.180000 0.409030048725709 0.295776901083910
0.190000 0.451634099992542 0.330042951969350
0.200000 0.493895129593705 0.364922083796583
0.210000 0.535414286132323 0.400148290786188
0.220000 0.575832698775906 0.435464239722356
0.230000 0.614836364046834 0.470624889247627
0.240000 0.652159245164966 0.505400533885953
0.250000 0.687584697434385 0.539579273900017
0.260000 0.720945376986034 0.572968923597233
0.270000 0.752121822314346 0.605398380669755
0.280000 0.781039918758725 0.636718487523833
0.290000 0.807667466139334 0.666802422315268
0.300000 0.832010070280719 0.695545662598084
0.310000 0.854106571511281 0.722865568185849
0.320000 0.874024208916931 0.748700632119775
0.330000 0.891853699707390 0.773009449655083
0.340000 0.907704390047956 0.795769455051179
0.350000 0.921699608550952 0.816975474824125
0.360000 0.933972327603526 0.836638144137672
0.370000 0.944661211954392 0.854782230317112
0.380000 0.953907109423348 0.871444904210693
0.390000 0.961850015967557 0.886673996432090
0.400000 0.968626527175250 0.900526271522406
0.410000 0.974367770913013 0.913065748889292
0.420000 0.979197801510463 0.924362095121335
0.430000 0.983232424561282 0.934489108033328
0.440000 0.986578413063077 0.943523308655880
0.450000 0.989333070021573 0.951542653412388
0.460000 0.991584089543151 0.958625374986795
0.470000 0.993409667518263 0.964848956924402
0.480000 0.994878813911145 0.970289243861340
0.490000 0.996051821062291 0.975019686471994
0.500000 0.996980845928192 0.979110717773438
0.510000 0.997710568495755 0.982629255338658
0.520000 0.998278893412704 0.985638322244996
0.530000 0.998717666902631 0.988196778212377
0.540000 0.999053386056513 0.990359151353372
0.550000 0.999307882426408 0.992175560244258
0.560000 0.999498966348355 0.993691715609441
0.570000 0.999641022487383 0.994948990763876
0.580000 0.999745550661833 0.995984550050121
0.590000 0.999821649033671 0.996831524807599
0.600000 0.999876439240623 0.997519226889830
0.610000 0.999915435012234 0.998073390369225
0.620000 0.999942857290231 0.998516432807403
0.630000 0.999961899911572 0.998867728291993
0.640000 0.999974950566080 0.999143885320251
0.650000 0.999983772069411 0.999359023519276
0.660000 0.999989649056955 0.999525044108354
0.670000 0.999993505063375 0.999651889909747
0.680000 0.999995994660181 0.999747791581552
0.690000 0.999997574928217 0.999819497564737
0.700000 0.999998560085059 0.999872485993316
0.710000 0.999999162603504 0.999911157502366
0.720000 0.999999523674068 0.999939008476219
0.730000 0.999999735402312 0.999958784804625
0.740000 0.999999856705436 0.999972616656290
0.750000 0.999999924490862 0.999982135137543
0.760000 0.999999961367182 0.999988571981733
0.770000 0.999999980855660 0.999992843616636
0.780000 0.999999990836756 0.999995621088603
0.790000 0.999999995777090 0.999997387390395
0.800000 0.999999998132954 0.999998483752550
0.810000 0.999999999211388 0.999999146424098
0.820000 0.999999999683317 0.999999535396120
0.830000 0.999999999879784 0.999999756419789
0.840000 0.999999999957150 0.999999877547273
0.850000 0.999999999985773 0.999999941287284
0.860000 0.999999999995643 0.999999973324068
0.870000 0.999999999998784 0.999999988605633
0.880000 0.999999999999695 0.999999995469196
0.890000 0.999999999999933 0.999999998343508
0.900000 0.999999999999987 0.999999999451972
0.910000 0.999999999999998 0.999999999839346
0.920000 1.000000000000000 0.999999999959441
0.930000 1.000000000000000 0.999999999991526
0.940000 1.000000000000000 0.999999999998618
0.950000 1.000000000000000 0.999999999999839
0.960000 1.000000000000000 0.999999999999989
0.970000 1.000000000000000 1.000000000000000
0.980000 1.000000000000000 1.000000000000000
0.990000 1.000000000000000 1.000000000000000
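If anyone wants to reproduce or extend the table above, here's a short Python sketch using the S(p) and T(p) closed forms; it prints the same three columns for p from 0.00 to 0.99:

```python
def z2_pool_fail(p):
    """3 vdevs of 8 drives in RAIDZ2: 1 - S(p)^3, with S(p) = q^6 (21p^2 + 6p + 1)."""
    q = 1 - p
    return 1 - (q**6 * (21*p**2 + 6*p + 1))**3

def z3_pool_fail(p):
    """2 vdevs of 12 drives in RAIDZ3: 1 - T(p)^2, with T(p) = q^9 (165p^3 + 45p^2 + 9p + 1)."""
    q = 1 - p
    return 1 - (q**9 * (165*p**3 + 45*p**2 + 9*p + 1))**2

print(f"{'p':<8} {'3x8/Z2':<17} {'2x12/Z3':<17}")
for i in range(100):
    p = i / 100
    print(f"{p:<8.6f} {z2_pool_fail(p):<17.15f} {z3_pool_fail(p):<17.15f}")
```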
 

melp

Explorer
Joined
Apr 4, 2014
Messages
55
Yup, we came up with the same thing, but I like your way of calculating 1 - p(all vdevs surviving) rather than summing up the combinations of failures. Thanks for the reply! I'll draw up some graphs later.

If you had to take a guess, how much of this goes out the window by using vdevs with 12 drives in them (I know this is a no-no)?
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
I suspect this thread will become very popular, and viewed many times in the future. Let's be very clear on what we're looking at.

These numbers are ***IF YOU DO NOTHING*** and ****REPLACE NO DRIVES AS THEY FAIL****. This is the probability with failure rate p per your unit time, that your pool will die.

IN PRACTICE, you *WILL* be monitoring your pool, and REPLACING the drives AS THEY FAIL, so your probability of losing the pool is, presumably, much, much, much, much, lower than these numbers.

There is a lot going on here, but remember the numbers you're looking at. This is the probability, if all failures are independent (probably not true) and if NO DRIVES ARE REPLACED, **EVER**, that your pool will die in the time frame for which the rate p applies. Now, you can extrapolate: any configuration that's worse in this sense will presumably be worse in real-life usage too. And you might be right. I don't know. Talk to Cyberjock.

But what I've posted above is the cold math on how many drives, IF LEFT TO THEIR OWN DEVICES are going to fail, and how likely a pool is to die if unmaintained.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
And furthermore, you know, just sitting here thinking about it: even if all things were equal (which they're not; we plainly see that Z3 offers a substantial improvement in the naive analysis), Z3 would in general be a better idea anyway, given the stress of resilvering.

So the evidence is overwhelming, really. You're much better off, given equal net pool space for two configurations, with Z3 over Z2. From a "likelihood that a minimally maintained pool will die" standpoint.

Of course, it's so interesting, right? The proportion of time a 12-drive VDEV will spend in a degraded state is ****WAY**** higher than it would be for an 8-drive VDEV (there are simply more physical drives, so a higher probability that any one or more of them is on the fritz at any given time). A pool is not at risk unless it's degraded.

So there's a bunch of ways to think about these things that have to be reconciled before you can conclude something really definitive. Pros and cons. ZFS doesn't like having gigantic vdevs necessarily. There's performance issues, yadda yadda yadda.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Also, I use RaidZ1.

Which means I'm a moron.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Of course, I've only got like...3 or 4 drives in the vdev. So that effectively means I have 33% redundancy. So it's "dumb" but perhaps not Dumb.
 