Performance impacts or downsides of mixing VDEVs in a single pool

Joined: Feb 23, 2020 | Messages: 8
Evening all,

I originally posted under the FreeNAS subreddit but didn't quite get the full answer I was hoping for. Basically, does anyone have experience mixing VDEVs in a single pool in a home environment? I found a handful of older posts asking the same question but never a "this was the end result" summary.

Current setup is a Ryzen 3700X with 32 GB of RAM and a data pool made of a single RAIDZ2 vdev of 8 x 4 TB drives (78% full).
I have 6 x 10 TB drives that I want to add to the existing pool as another RAIDZ2 vdev; however, FreeNAS gives the warning "Adding data vdevs with different numbers of disks is not recommended."
Limited google-fu suggests that adding another vdev of a different composition *should* be safe as far as data integrity goes, but that performance *may* be impacted.
  1. Does anyone have any firsthand experience with this?
  2. If I expand the pool but performance takes a significant impact, can I roll back to a previous snapshot from when there was only 1 vdev (acknowledging that I lose any data that was striped to the new vdev)?
Again, this is a home-use system basically backing up personal files, serving Plex media, and eventually acting as a Steam cache, with the desire to saturate a 10 Gb connection (depending on individual files, etc.).
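For reference, the operation the GUI is warning about boils down to something like this on the command line (pool name "tank" and device names da8–da13 are placeholders; on a real FreeNAS box you'd normally let the GUI do this, and it uses gptid labels rather than raw device names):

```shell
# Check the current topology first
zpool status tank

# Add the six 10 TB disks as a second RAIDZ2 vdev.
# -f overrides the "mismatched replication level" warning.
zpool add -f tank raidz2 da8 da9 da10 da11 da12 da13
```

Worth noting: on FreeNAS of this era a top-level RAIDZ vdev cannot be removed again, so this step is effectively one-way.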

Thanks in advance.
 

Patrick M. Hausen

Hall of Famer
Joined: Nov 25, 2013 | Messages: 7,776
Since "nobody does this," I doubt you will get many answers with real-world numbers. Mixing an 8-wide RAIDZ2 with a 6-wide RAIDZ2 looks harmless to me; performance for transaction-heavy workloads and/or VMs is bad with RAIDZ2 anyway, and as a huge archive/datastore it will probably do fine.

Snapshots work on the dataset or volume level and will not permit you to roll back your pool topology. If you want a chance to remove your new vdev, a pool checkpoint is what you are looking for. You will probably have to use the command line to create one, though. And rolling back means you will have to stop all services, export the pool, roll back, then re-import. And all the data you wrote since the checkpoint will be gone. It's a real roll back in time.
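Roughly, the command-line workflow would look like this (with a pool named "tank" as a stand-in):

```shell
# Take a checkpoint BEFORE adding the new vdev
zpool checkpoint tank

# ...add the vdev, test performance for a while...

# If unhappy: stop all services, then export and rewind.
# Everything written after the checkpoint is lost.
zpool export tank
zpool import --rewind-to-checkpoint tank

# If happy: discard the checkpoint so it stops pinning old blocks
zpool checkpoint -d tank
```

A pool can carry only one checkpoint at a time, and while it exists the pool holds on to all the blocks it references, so don't leave it in place longer than the trial period.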

HTH,
Patrick
 

no_connection

Patron
Joined: Dec 15, 2013 | Messages: 480
Simplest would be to just have two pools, or is there a specific reason why you have to have a single contiguous space?
 
Joined: Feb 23, 2020 | Messages: 8
Thank you for the replies.

Patrick
I'm unfamiliar with pool checkpoints, so I'll have to look into those, though I'd rather avoid that route if possible.

no_connection
It's mostly to keep things simple and have a feasible expansion path down the road (adding a new vdev as required). I want to avoid multiple pools and juggling what data goes where, multiple shares, etc., unless I've completely missed an easy way to present multiple pools as one dataset for SMB shares, the Plex service, and so on. I moved to FreeNAS to take advantage of ZFS data integrity, and because Linux kept exploding in my face every time I used it, regardless of distro. Adding vdevs to an existing pool seemed the closest option to the online expansion that Linux RAID offered, since RAIDZ expansion hasn't materialized.

One thought I had: create a new pool with the 6 x 10 TB drives in a RAIDZ2, move everything to that, then destroy the first pool and add six of the 4 TB drives to the new pool, so at least both vdevs are 6-wide. Thoughts?
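Something like the following is what I have in mind (pool names "tank"/"newpool" and device names are placeholders, and I'd verify the copy before destroying anything):

```shell
# Create the new pool from the six 10 TB drives
zpool create newpool raidz2 da8 da9 da10 da11 da12 da13

# Replicate everything using a recursive snapshot
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F newpool

# Only after verifying the copy: destroy the old pool, then add
# six of the freed 4 TB drives as a second 6-wide RAIDZ2 vdev
zpool destroy tank
zpool add newpool raidz2 da0 da1 da2 da3 da4 da5
</imports>
```

The `zfs send -R` / `zfs recv` pair carries datasets, snapshots, and properties over, which seems safer than a file-level copy.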
 

no_connection

Patron
Joined: Dec 15, 2013 | Messages: 480
Just for the sake of argument: since you are adding an empty vdev to a full one, any notion of "matching" is out the window anyway. By my reasoning, as long as each vdev is healthy enough that you trust its integrity not to lose the pool, and performance is good enough, it should be fine.
I have not built enough systems to be the voice of reason here, though.

Also you are running ECC RAM right?
 
Joined: Feb 23, 2020 | Messages: 8
Each vdev should be happy in that sense; the drives match within each vdev (the 4 TB drives are branded WD Reds, and the 10 TB drives are all shucked WD white labels). And I did finally switch to ECC RAM when I moved from a virtualized setup to bare metal. Side note: FreeNAS virtualized under Windows 10 Hyper-V does surprisingly well in spite of W10's stupidity.
 