Building a new NAS, wondering about vdev configuration.

risho

Dabbler
Joined
May 21, 2016
Messages
18
This is going to be used as a file server/Plex/torrent machine. This is what I'm planning on building around as of right now:

https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U2-2T#Specifications
https://www.amd.com/en/products/cpu/amd-ryzen-9-3950x
Dell H310 HBA
64-128 GB of ECC memory (the board says it only supports 64 GB, but some people on ServeTheHome claim to have run more in there)

This build is still somewhat in the planning stages. If they announce some Threadrippers in the next month or so, I may opt for a build around that instead.

And then 9-12 16 TB IronWolf Pros, depending on the layout.

Now, what I would ideally like to do is set it up as an 11-drive RAIDZ3. That will give me 8 data drives (a power of 2, which matters I guess?) plus 3 parity drives. That gives me the most data with the most parity, all in one vdev. Is there any reason for me not to do this? With a build like this I'm not planning on upgrading for quite a while, so I don't think upgrading the pool will be an issue; by the time it would be, I will probably just be building a new NAS anyway. Is this going to make scrubs impossibly long? Is this going to make resilvering failed drives take so long that other drives will have a higher probability of failing? Is the triple parity across so many large drives going to be overly stressful on my CPU or RAM?

Or will a layout like this be okay? I'm pretty comfortable with this amount of parity. I can't imagine 4 drives failing before I can replace 1. I'm more concerned about the overhead on general resources and performance implications.
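For reference, I'm assuming pool creation for that layout would look something like the sketch below (device names da0 through da10 are placeholders, and I know the FreeNAS GUI is the recommended way to actually do this; this is just to illustrate the geometry):

# one pool, one 11-disk RAIDZ3 vdev (8 data + 3 parity); device names are placeholders
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10

# sanity check: should show a single raidz3-0 vdev containing all 11 disks
zpool status tank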
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
That will give me 8 data drives (a power of 2, which matters I guess?)
It's been a long time since that mattered. Scrub time will mostly depend on how much data you have and how active the pool is while you're scrubbing.
 

risho

Dabbler
Joined
May 21, 2016
Messages
18
Is there any reason not to run a 9-12 drive RAIDZ3 vdev, aside from your risk tolerance? Like, are there non-trivial performance implications of having a Z3 vdev that wide with 16 TB disks?
 

anmnz

Patron
Joined
Feb 17, 2018
Messages
286
I've not run one myself but I'm pretty sure the consensus on the forums would be that although that's quite large for a single vdev, it should be fine. For the use cases that RAIDZ is good for, that is.

If you were to set up a 12-disk RAIDZ-3 vdev you could always give it a thorough test before putting it into service, and have the option of falling back to two 6-disk RAIDZ-2 vdevs (a very conventional setup) if you found that worked better.

Thought I'd mention, receiving files over BitTorrent doesn't play very well with ZFS due to the fragmentation caused by lots of small writes all over the file. This can cause performance to degrade faster than it otherwise would as your pool gets more full. (The "pre-allocation" feature that some clients provide to address this does not work with ZFS due to ZFS's copy-on-write nature.) A good approach is to receive files on a separate scratch pool, which you can destroy-and-recreate if it gets fragmented, and then move them to the main pool once they are complete. (That move will produce a huge sequential write, which plays to RAIDZ's strengths.)
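In case a concrete sketch helps, the scratch-pool workflow could look something like this from the shell (pool, dataset and device names are just examples, on FreeNAS you'd normally create pools via the GUI, and paths assume FreeNAS's /mnt mount convention):

# small separate pool for in-progress torrents (da12 is a placeholder disk)
zpool create scratch da12
zfs create scratch/incoming

# when a download completes, move it to the main pool; across pools this is
# a copy, which lands on the RAIDZ vdev as one big sequential write
mv /mnt/scratch/incoming/finished-file /mnt/tank/media/

# if the scratch pool ever gets badly fragmented, just empty and recreate it
zpool destroy scratch
zpool create scratch da12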
 

risho

Dabbler
Joined
May 21, 2016
Messages
18
Thanks for the information. I'll definitely consider setting up a scratch pool. That makes sense.

My necessary IOPS should be quite low for the spinning metal, as I'll be running all of my VMs and jails and stuff off of a separate set of SSDs. The HDDs should hold almost entirely large media files that will be accessed sequentially. Would increasing the record size (that's basically ZFS's version of block size, right?) to 1 MB or even higher improve performance or help with resilvers?

Also, I'm a bit confused as to how 2 Z2 vdevs vs. 1 Z3 vdev would work. Sorry, I'm a bit stupid. When I use 2 Z2 vdevs, do the vdevs have separate mount points, like tank/mount and tank/mount2? Or are the 2 Z2 vdevs striped? Do the 2 vdevs have to be the same size, or can they be different? And if they do stripe, when I add a 3rd Z2 vdev at a later date, will it have to realign the entire pool so that it stripes properly with the third vdev?
 

anmnz

Patron
Joined
Feb 17, 2018
Messages
286
A pool is called a "pool" because it "pools" lots of storage together and presents it as a single entity. A pool with 2 or more vdevs will still have a single top-level dataset which is mounted at a single point in the filesystem. The pool stripes data across all its vdevs. (Vdevs take responsibility for redundancy to protect against disk failure, not pools.)

In general you want vdevs in the same pool to be of similar kinds -- same number of disks, same RAIDZ level if RAIDZ, etc. -- because if you mix them you'll effectively end up with the worst performance and resilience characteristics of each. But a pool with two otherwise similar vdevs with different sized disks (e.g. a 6-disk RAIDZ-2 of 4TB disks and a 6-disk RAIDZ-2 of 10TB disks), where the differently sized disks don't have wildly different performance characteristics, is totally fine. This would be a normal result of upgrading your pool by adding a vdev of newer, bigger disks.
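A concrete sketch, if that helps (disk names are placeholders; the GUI builds the same structure for you):

# one pool made of two 6-disk RAIDZ-2 vdevs; ZFS stripes writes across both vdevs
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11

# still just one pool with one top-level dataset and one mount point
zfs list tank    # mountpoint is /tank by default, /mnt/tank on FreeNAS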
 

risho

Dabbler
Joined
May 21, 2016
Messages
18
When you add a new vdev, wouldn't that mean you would have to rewrite the entire pool for it to stripe properly across all of them? Or would it just stripe new files? Or am I just confused?

And should I modify the record size or just leave it at the default?
 

anmnz

Patron
Joined
Feb 17, 2018
Messages
286
ZFS tries to keep vdevs roughly balanced, so it will write new data preferentially to the new vdev; existing data isn't rewritten. (It's not like RAID 0.) Much more detail about this can be found by searching the forums.
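Adding a vdev later is a single command, something like the sketch below (placeholder disk names); the existing vdevs and the data on them are untouched:

# add a third RAIDZ-2 vdev to the existing pool; nothing already written is moved
zpool add tank raidz2 da12 da13 da14 da15 da16 da17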

Can't help much with recommendations for record size. Do note, though, that it is set at the dataset level, not the pool level, so it's not something you have to rebuild your pool to change.
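For example, setting it on a hypothetical media dataset would look like this (dataset name is just an example; the change only affects newly written files):

# recordsize is a per-dataset property
zfs create tank/media
zfs set recordsize=1M tank/media
zfs get recordsize tank/media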
 