15 drives -- Suggestions about vdev configuration please

Joined
Mar 29, 2014
Messages
6
I am setting up a new FreeNAS server for home use and am in the process of buying the components. I just bought a 15-bay rackmount chassis and 8 x Seagate 4TB NAS drives (so far). I have been reading a lot about optimal (and sub-optimal) vdev drive configurations and would like to solicit opinions.

I am thinking either a single z3 vdev across all 15 drives, or two z2 vdevs, i.e. 8 drives and 7 drives. But I also understand that 15 drives in z3 is not an optimal number of drives.
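To put rough numbers on the two options, here is a quick back-of-the-envelope sketch of my own (assuming 4TB per drive and ignoring ZFS metadata, slop space, and TB-vs-TiB differences):

# Back-of-the-envelope usable-capacity comparison of the two layouts.
# Assumptions (mine): 4TB per drive, usable = (disks - parity) per vdev,
# ignoring ZFS metadata, slop space, and TB-vs-TiB differences.

DRIVE_TB = 4

def usable_tb(vdevs):
    """vdevs is a list of (total_disks, parity_disks) tuples."""
    return sum((disks - parity) * DRIVE_TB for disks, parity in vdevs)

layouts = {
    "1 x 15-disk RAIDZ3":       [(15, 3)],
    "2 x RAIDZ2 (8 + 7 disks)": [(8, 2), (7, 2)],
}

for name, vdevs in layouts.items():
    print(f"{name}: ~{usable_tb(vdevs)} TB usable")
# 1 x 15-disk RAIDZ3: ~48 TB usable
# 2 x RAIDZ2 (8 + 7 disks): ~44 TB usable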

Any suggestions are welcome!
-Jan
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
It just depends... how much usable space are you looking for? What type of workload will be served by the setup?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
No, vdevs should never be bigger than 11 disks... ever. I went with an 18-drive vdev and there's a laundry list of reasons, which I won't go into detail on here, why you shouldn't do it. You need to go to 2 vdevs of RAIDZ2. 6-drive vdevs are ideal, or if you had 1 more disk you could do 8-disk vdevs (which is non-optimal).
 

Fox

Explorer
Joined
Mar 22, 2014
Messages
66
cyberjock said:
No, vdevs should never be bigger than 11 disks... ever. I went with an 18-drive vdev and there's a laundry list of reasons, which I won't go into detail on here, why you shouldn't do it. You need to go to 2 vdevs of RAIDZ2. 6-drive vdevs are ideal, or if you had 1 more disk you could do 8-disk vdevs (which is non-optimal).

So a 6-disk RAIDZ2 is better than an 11-disk RAIDZ3? Is it better for performance reasons? I know it is recommended that the number of data disks in a vdev be a power of two (plus the parity drives), but what happens when they are not? I would assume going larger than 11 causes performance problems.

EDIT:

I may have found my own answer, from Wikipedia; is it true?
IOPS performance of a ZFS storage pool can suffer if the ZFS raid is not appropriately configured. This applies to all types of RAID, in one way or another. If the zpool consists of only one group of disks configured as, say, eight disks in raidz2, then the write IOPS performance will be that of a single disk. However, read IOPS will be the sum of eight individual disks. This means, to get high write IOPS performance, the zpool should consist of several vdevs, because one vdev gives the write IOPS of a single disk. However, there are ways to mitigate this IOPS performance problem, for instance add SSDs as ZIL cache, which can boost IOPS into 100,000s.[59] In short, a zpool should consist of several groups of vdevs, each vdev consisting of 8–12 disks. It is not recommended to create a zpool with a single large vdev, say 20 disks, because write IOPS performance will be that of a single disk, which also means that resilver time will be very long (possibly weeks with future large drives).
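If I'm reading that right, the rule of thumb boils down to something like this (a toy illustration with a made-up per-disk IOPS figure, not a benchmark):

# Toy illustration of the rule of thumb in the quote above: random write
# IOPS scale with the number of vdevs, read IOPS with the number of disks.
# SINGLE_DISK_IOPS is an assumed figure for a 7200rpm drive, not a measurement.

SINGLE_DISK_IOPS = 100

def pool_iops(num_vdevs, disks_per_vdev):
    write_iops = num_vdevs * SINGLE_DISK_IOPS             # one vdev ~ one disk for writes
    read_iops = num_vdevs * disks_per_vdev * SINGLE_DISK_IOPS
    return write_iops, read_iops

for num_vdevs, width in [(1, 15), (2, 7)]:
    writes, reads = pool_iops(num_vdevs, width)
    print(f"{num_vdevs} vdev(s) of {width} disks: ~{writes} write IOPS, ~{reads} read IOPS")
# 1 vdev(s) of 15 disks: ~100 write IOPS, ~1500 read IOPS
# 2 vdev(s) of 7 disks: ~200 write IOPS, ~1400 read IOPS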
 
Joined
Mar 29, 2014
Messages
6
Thanks for the responses so far. This post here tells me about the recommended configurations for 4K-sector disks, which are listed below (with a quick sketch of the rule behind them after the list):
RAID-Z: 3, 5, 9, 17, 33 drives
RAID-Z2: 4, 6, 10, 18, 34 drives
RAID-Z3: 5, 7, 11, 19, 35 drives
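The pattern behind those numbers is that the data-disk count should be a power of two, with the parity drives added on top; here is my own quick sketch that regenerates the table (my restatement, not taken from the linked post):

# Regenerate the recommended widths: data disks should be a power of two,
# so total width = 2**n + parity.

PARITY = {"RAID-Z": 1, "RAID-Z2": 2, "RAID-Z3": 3}

for level, parity in PARITY.items():
    widths = [2**n + parity for n in range(1, 6)]   # 2, 4, 8, 16, 32 data disks
    print(f"{level}: {widths}")
# RAID-Z: [3, 5, 9, 17, 33]
# RAID-Z2: [4, 6, 10, 18, 34]
# RAID-Z3: [5, 7, 11, 19, 35]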
Further, my understanding is that with two vdevs in one pool, both vdevs should ideally be of identical size/configuration. So what @cyberjock is saying is correct: the ideal config for me is 6 drives per vdev in a raid-z2 config.

Or 7 drives per vdev in a raid-z3 config, which obviously gives me better redundancy, but also more overhead (waste of space) for the two additional parity drives.

Which raises the question for me -- how much of an issue would a single 15-disk vdev in raid-z3, or 2 x 7 disks in raid-z2, really be? Is it mainly a performance issue?

This blog here says:
For home use, creating larger vdevs is not an issue, even an 18 disk vdev is probably fine, but don't expect any significant random I/O. It is always recommended to use multiple smaller VDEVs to increase random I/O performance (at the cost of capacity lost to parity) as ZFS does stripe I/O-requests across VDEVs. If you are building a home NAS, random I/O is probably not very relevant.

Further, in this other blog here it says:
Try (and not very hard) to keep the number of data disks in a raidz vdev to an even number. This means if its raidz1, the total number of disks in the vdev would be an odd number. If it is raidz2, an even number, and if it is raidz3, an odd number again. Breaking this rule has very little repercussion, however, so you should do so if your pool layout would be nicer by doing so (like to match things up on JBOD's, etc).
and:
For raidz2, do not use less than 6 disks, nor more than 10 disks in each vdev (8 is a typical average). For raidz3, do not use less than 7 disks, nor more than 15 disks in each vdev (13 & 15 are typical average).
So a single 15 disk vdev for raidz3 isn't really all that bad then?
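Just to sanity-check the layouts I'm weighing against that blog's numbers, a quick sketch (my own encoding of the quoted rules of thumb, not official ZFS guidance):

# Check candidate layouts against the blog's rules of thumb quoted above:
# raidz2 vdevs of 6-10 disks, raidz3 vdevs of 7-15 disks, and (ideally)
# an even number of data disks per vdev. My own encoding, not authoritative.

RULES = {  # level: (parity_disks, min_total, max_total)
    "raidz2": (2, 6, 10),
    "raidz3": (3, 7, 15),
}

def check(level, disks):
    parity, low, high = RULES[level]
    notes = []
    if not (low <= disks <= high):
        notes.append(f"outside the suggested {low}-{high} disk range")
    if (disks - parity) % 2:
        notes.append("odd number of data disks")
    return "ok" if not notes else "; ".join(notes)

for level, disks in [("raidz3", 15), ("raidz2", 7), ("raidz2", 6)]:
    print(f"{disks}-disk {level}: {check(level, disks)}")
# 15-disk raidz3: ok
# 7-disk raidz2: odd number of data disks
# 6-disk raidz2: ok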

Lots of confusing and contradictory information out there :)

My main use is storing lots of 720p and 1080p movies, and also acting as a Plex server for a few family members.

Thanks for any further opinions on this matter.

-J
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
No, 11 disks is the maximum recommended by the ZFS designers. I went overboard and I really regret the decision. After seeing the consequences firsthand (which I'm not about to try to discuss right now) I'd never do it again. That's exactly why I gave the advice I gave. Sorry, but some random blogger doesn't constitute a person with actual experience. If you want to trust %randomblogger% over me, feel free. Just expect the "I told ya so" later...

My replacement pool will be 2 vdevs unless hard drives are big enough that I can make it 1 vdev of 11 disks or fewer.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
IIRC the current ZFS best practices say not to use more than 9 disks in a vdev, but that might be my Solaris creeping in again.

In your situation, with 15 drives max and 8 current, I'd say buy four more and make two 6-drive RAIDZ2 vdevs for a total usable space of 8 drives' worth, or 32TB before formatting. The only other option would be RAIDZ1, but given the size of the drives involved I would advise against that.

And as an additional recommendation: buy ECC RAM, or else.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Naa... the widest recommended vdev is 8 data disks + redundancy. So 9-disk RAIDZ1, 10-disk RAIDZ2, 11-disk RAIDZ3.
 
Joined
Mar 29, 2014
Messages
6
Thanks for the additional responses. However, no one has commented on this post here by "sub.mesa" on AnandTech that I previously linked above, which says:
The 'no more than 9 devices in a vdev' advice is outdated and should be nuanced. The problem is that 30 devices in a RAID-Z3 vdev will still have the random I/O performance of a single drive. IOps scale with the number of vdevs, load balanced.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Well, bear in mind that the line about 9 devices is from "best practices" and not "absolute requirements."

I'm sure a vdev won't spontaneously combust if oversized, but it will probably exhibit some less-than-desirable behavior, which cyberjock can probably attest to personally.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
cannondale815,

The sentence you quoted is only the tip of a very big iceberg of problems related to very wide pools. There's a laundry list of reasons NOT to go really wide, hence the recommendation never to go above what I mentioned above. You probably aren't aware of the reasons, and unless you plan to get a deep education in ZFS over the next few months, you won't understand them either. And I bet if you did get that deep-level education you'd realize that very wide vdevs are a bad idea.

So just don't do it. It's not worth it.
 

panz

Guru
Joined
May 24, 2013
Messages
556
Just had some experience with 2 vdevs of 6 disks each (RAIDZ2): it works beautifully.
 
Joined
Mar 29, 2014
Messages
6
Sorry for the late response on my own thread guys. I ended up going with Ubuntu server and decided to use snapraid and mhddfs across my drives. Works great for my purposes.

 

aufalien

Patron
Joined
Jul 25, 2013
Messages
374
Well, as a storage-agnostic person who has messed with many file systems, I think you are missing out on ZFS. It's without question not only the FS of the future, but the FS for here and now. Even in a clustered FS environment, I'd go with ZFS as the underlying FS on all cluster nodes.
 
Joined
Mar 29, 2014
Messages
6
ZFS is no doubt a great FS. But my combination of snapraid / ext4 / mhddfs, at least so far, fully satisfies my needs for hassle-free, expandable storage, one drive (of any size) at a time.

 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
cannondale815 said:
Thanks for the additional responses. However, no one has commented on this post here by "sub.mesa" on AnandTech that I previously linked above, which says:

I don't see where in that post it actually says that super-wide devices are a good idea. It appears to be an explanation of the optimal values for RAIDZ widths, which is based on the powers-of-two issue.

This is a multidimensional problem and it is a disservice to fixate on only one dimension.

We discourage wide vdevs for a number of reasons. For example, one reason that people want massively wide vdevs is that it is more economical to have a 67-device RAIDZ3 made out of 4TB drives to get 256TB than an 88-device pool (8 RAIDZ3 vdevs of 11 x 4TB drives) to get 256TB. That's 21 fewer drives! Better, right? But the problem is that the resilvering time starts to get very long, and since ZFS isn't really designed to work with such massive vdevs, you run into practical constraints and the very real danger that a resilver will not complete before another failure strikes. This all assumes that it is fine to have an IOPS-constrained pool too.
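Spelled out (4TB drives, 256TB of data either way):

# Drive-count arithmetic for the example above: both pools hold 256TB of
# data on 4TB drives; the wide layout just spends fewer drives on parity.

DRIVE_TB = 4
TARGET_TB = 256

data_disks = TARGET_TB // DRIVE_TB      # 64 data disks either way

wide_pool = data_disks + 3              # one 67-drive RAIDZ3 vdev
narrow_pool = 8 * 11                    # eight 11-drive RAIDZ3 vdevs (8 data + 3 parity each)

print(wide_pool, narrow_pool, narrow_pool - wide_pool)   # 67 88 21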

We're basically not going to comment on sub.mesa's post because it doesn't actually introduce anything new to the discussion. It does not say what you think it does: it is not a blessing to use wider vdevs. It is a discussion of the general mathematical principles.
 
Joined
Mar 29, 2014
Messages
6
Thank you for elaborating. You guys have to understand that it's rather confusing, once you start reading, to make the right decisions regarding ZFS pool configurations. There are so many different opinions and so much contradictory advice out there.

 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
We understand that. There are rules and exceptions to rules and all sorts of other fun.
 