RAIDZ2: max recommended disks per vdev

NicoLambrechts

Dabbler
Joined
Nov 13, 2019
Messages
12
Good day, I have purchased a Supermicro 60-bay storage server with 14 TB Seagate disks.
My requirement is maximum capacity.
In a RAIDZ2 vdev, what would you recommend as the max number of disks per vdev? Can I do something like 20 disks per vdev (18+2)?
Its role will be as a Veeam backup storage repository.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
raidz3 for anything bigger, but generally not more than 15 disks per vdev.
the GUI will not stop you going bigger but the chances of the whole thing dying because too many disks are degraded goes up exponentially [edit: by a debatable amount] with each disk you add to the vdev.
 

anmnz

Patron
Joined
Feb 17, 2018
Messages
286
the chances of the whole thing dying because too many disks are degraded goes up exponentially with each disk you add to the vdev
No it doesn't. The problem with very wide RAIDZ-2/3 vdevs is all about performance, not safety.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
I don't see how more drives that can fail wouldn't increase the chances of too many drives failing. how do you know that that isn't the case?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Multiple simultaneous drive failures is a bit unusual unless you're buying used 50K hour eBay drives.

There is definitely an IOPS density issue when you start widening vdevs too far, especially on huge drives. Historically, there were some implementation issues as well that strongly discouraged going more than maybe 20 drives wide; I don't recall if those ever got cleared up.

You need to bear in mind that the vdev will behave with IOPS levels similar to a single component drive, so if Veeam is fine with a low IOPS target then great. My feeling is that you'll wind up unhappy.
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
RAIDZ3 + a cold spare is the best solution for a Veeam repository: if a disk fails during the Christmas holiday, the issue can wait until you come back to the office.

Theoretically you can go with 20-disk RAIDZ3 vdevs, because Veeam needs IOPS only when using Instant VM Recovery or SureBackup.
When doing backups, the bottleneck is always the source (VMware), not the target (FreeNAS), because Veeam writes data in large chunks.

Nevertheless, if I were you, I would use 4 x 15-disk RAIDZ3 vdevs. That will offer you a good balance between IOPS and capacity, in particular if you add other workloads later on this storage server.
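To make the trade-off above concrete, here is a back-of-the-envelope sketch comparing a few RAIDZ3 layouts for a 60-bay chassis of 14 TB disks. It assumes usable space is simply (width - parity) x disk size per vdev, and that random IOPS scale with the number of vdevs; both are rough approximations that ignore padding, metadata, and recordsize effects.

```python
# Rough comparison of RAIDZ3 layouts for 60 x 14 TB disks.
# Assumption: usable capacity = (width - parity) * disk size per vdev,
# and relative random-IOPS capability ~ number of vdevs.
DISK_TB = 14

def layout(vdevs, width, parity=3):
    data_disks = width - parity
    usable_tb = vdevs * data_disks * DISK_TB
    return usable_tb, vdevs  # vdevs is a proxy for random IOPS

for vdevs, width in [(3, 20), (4, 15), (6, 10)]:
    usable, iops_units = layout(vdevs, width)
    print(f"{vdevs} x {width}-wide RAIDZ3: ~{usable} TB usable, "
          f"{iops_units} vdev(s) of random IOPS")
```

Under these assumptions, 3 x 20-wide gives about 714 TB with 3 vdevs of IOPS, while 4 x 15-wide gives about 672 TB with 4 vdevs of IOPS, which is the balance being recommended.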
 

John Doe

Guru
Joined
Aug 16, 2011
Messages
635
I don't see how more drives that can fail wouldn't increase the chances of too many drives failing. how do you know that that isn't the case?

agree!

The probability of a disk failing increases the more drives you add.
With an infinite number of disks and 3 of them redundant, the probability that 4 or more disks fail at the same time is 100%.

Since we usually don't use an infinite number of disks in daily practice, the probability is lower, but in general the statement is correct.
 

anmnz

Patron
Joined
Feb 17, 2018
Messages
286
I don't see how more drives that can fail wouldn't increase the chances of too many drives failing. how do you know that that isn't the case?
Of course it increases the chance, but what you actually said was
the chances of the whole thing dying because too many disks are degraded goes up exponentially with each disk you add to the vdev
which is absurd. It is the complete opposite of "exponential". It is a small incremental risk -- if the increased risk of disk failure due to adding one drive to a vdev is not negligible, you have other problems! -- and it's absolutely not the reason that very wide vdevs are frowned upon.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
which is absurd. It is the complete opposite of "exponential". It is a small incremental risk

Neither "small incremental" nor "exponential" is correct.

The likelihood of multiple devices dying simultaneously is a fascinating study. People often make the mistake of assuming that the likelihood of a drive dying is statistically independent, and then calculate the likelihood of failure using the normal rules of probability, but that gets you to a result that is significantly happier than what actually happens in the real world.

I'm too lazy to go dig out a discrete mathematics text and see if I can work out the probability formula for statistically independent events on a RAIDZ3 because it's irrelevant. It would be a best case scenario, and the real world isn't. Formula's probably not that hard though.
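For what it's worth, the formula for the statistically independent best case is indeed not hard; here is a sketch. It assumes each disk fails independently with some probability p within a given window (e.g. a resilver), which, as noted above, understates the real-world risk because failures are correlated (shared batches, heat, vibration).

```python
# Best-case probability that a RAIDZ3 vdev loses data, assuming each
# disk fails independently with probability p_fail within some window.
# The vdev is lost if more than `parity` disks fail, so we sum the
# binomial tail P(k failures) for k = parity+1 .. n_disks.
from math import comb

def p_vdev_loss(n_disks, p_fail, parity=3):
    return sum(comb(n_disks, k) * p_fail**k * (1 - p_fail)**(n_disks - k)
               for k in range(parity + 1, n_disks + 1))

for n in (10, 15, 20, 30):
    print(f"{n:2d}-wide RAIDZ3, p=2%: loss probability {p_vdev_loss(n, 0.02):.2e}")
```

Even in this optimistic model the loss probability grows with vdev width (polynomially, not exponentially); correlated failures only make it worse.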
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654
Multiple simultaneous drive failures is a bit unusual unless you're buying used 50K hour eBay drives.

Well, there is one specific situation where multiple drives can fail at nearly the same time: a manufacturing/material flaw. I recall 4 or 5 dead HDDs (out of 40 in the same enclosure) within one week. That is a HUGE ratio. A case was raised with the supplier. We got a brand new enclosure + replacement disks. The "old" one was sent, together with the dead disks, for examination. Later on it was found to be a disk flaw, and all of them were from the same production batch (very similar serial numbers). Two more disks from the same batch were still in the storage, so both got replaced as well, just for safety. I guess they raised a case with the HDD manufacturer. Sadly I don't recall the brand/model; it was 10 years ago, at my previous job.

Anyway, when I am buying more disks at the same time I always ask for different batches. I don't want more than X disks from the same batch, where "X" is the number of parity disks. Yeah, a bit more paranoia than strictly needed, but still...
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Yes, Seagate replaced probably at least a few thousand drives at one point, a predictive fail for a whole batch. It takes a while to replace an enterprise array's worth of drives at one drive a day...
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Back in 2001 or 2002, I had to replace about 20 disks in some small RAID-5 disk arrays. Each array had only 4 of its 8 slots populated. We had started to have failures, all the same brand & model, so we insisted that the vendor replace all of those disks. We had a disk-change party day. Some arrays even had 2 of their 4 disks from the problematic models, so we had to do it 1 disk at a time and hope for the best.

It all worked out okay. But things like that (it was not my first experience with bad batches of drives, nor my last) caused me to be careful at home: buy different brands and models, and at different times, to try to avoid that problem.

You are only paranoid if someone is NOT out to get you.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
You are only paranoid if someone is NOT out to get you.
except in this case the drives are not, in fact, out to get you, since they have no will of their own.
 