Different vdevs in one zpool

Status
Not open for further replies.

Scampicfx

Contributor
Joined
Jul 4, 2016
Messages
125
Dear folks,

I'm still preparing to build my first FreeNAS system. While weighing an all-SSD solution against an HDD-based server, the following questions arose:

1.) - What happens when a zpool consists of multiple vdevs with different numbers of disks, different disk sizes, and even different RAID levels?
2.) - What happens when a zpool consists of multiple vdevs, each containing the same number of disks at the same RAID level, but with different disk sizes?

Example for 1.)
vdev A: 6x 6TB (RAID-Z2)
vdev B: 9x 1TB (RAID-Z3)

Example for 2.)
vdev A: 6x 2 TB (RAID-Z2)
vdev B: 6x 4 TB (RAID-Z2)

I'm especially interested in 2) because I'm thinking of starting a "small" FreeNAS server based on 6x 2 TB SSDs and adding capacity later via additional vdevs with bigger drives, once SSD prices come down. This would mean that I start with vdev A and add vdev B in maybe one or two years.

How is striping performance affected when the vdevs differ in total capacity? How does a file (e.g. a 10 MByte file) get stored on such a zpool/vdev setup (total capacity of vdev A: 4 TB, of vdev B: 8 TB)? How does this file get balanced across the vdevs?

I think this is a common situation, isn't it? For example, the drives of a vdev might get replaced by bigger drives upon disk failure, which could result in vdevs having different total capacities within one zpool.

In very general terms, is it true to say that the more vdevs you have, the higher your throughput will be (because of more striping)? Do files automatically get striped across all vdevs when additional vdevs are added? (Well, I think so, because the loss of one vdev means the total loss of the zpool?)

Thanks so much for your help!
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
So, you are fully able to mix and match vdev types in a pool. As stated in "FreeBSD Mastery: ZFS" and other books on the subject, however, having zpools that are non-homogeneous with respect to their vdevs is not recommended for performance reasons. But it is more or less perfectly safe.


That being said, your example #2 seems like something I would do without hesitation in a home environment.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The suggestion is to use the same width and RAID-Zx protection level for all vdevs in a pool. But any type of vdev will work. (Though FreeNAS is starting to get safety checks to prevent some of the more common problems...)

I also agree with @DrKK that example #2 is something I would do.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I also would do a number 2.

:-/

Anyway, FreeNAS balances transactions across the available vdevs. Yes, if you fill up the smaller vdev then you can't write to it any more, but then again if you remove content... etc.

If you had a vdev which was significantly less performant than the others, then ZFS would have to wait for this slow vdev every time it needed to ensure a sync write had completed, thus slowing all the other vdevs down to the speed of the slowest.

This is why you're advised to have similar vdevs. So mixing SSD and HDD vdevs is not a good idea, but 6x 2TB + 6x 4TB should be fine.
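
To make the balancing concrete, here is a toy Python sketch. It is not the real ZFS allocator (which allocates from metaslabs and applies its own weighting); it only assumes the rough rule of thumb that writes are spread over vdevs in proportion to free space, with made-up capacities matching the example above:

[code]
# Toy model, NOT the real ZFS allocator: spread each write over the
# vdevs in proportion to their remaining free space.

def allocate(write_mb, vdevs):
    """Distribute write_mb across vdevs proportionally to free space."""
    total_free = sum(v["free"] for v in vdevs)
    for v in vdevs:
        share = write_mb * v["free"] / total_free
        v["free"] -= share
        v["used"] = v.get("used", 0) + share

vdevs = [
    {"name": "A", "free": 8_000_000},   # 6x 2TB RAID-Z2, ~8 TB usable, in MB
    {"name": "B", "free": 16_000_000},  # 6x 4TB RAID-Z2, ~16 TB usable, in MB
]
allocate(10, vdevs)  # write a 10 MB file
for v in vdevs:
    print(v["name"], round(v["used"], 2), "MB")  # A ~3.33, B ~6.67
[/code]

With those assumed capacities, a 10 MB file lands roughly one third on vdev A and two thirds on vdev B, so both vdevs fill at about the same relative rate.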
 

Scampicfx

Contributor
Joined
Jul 4, 2016
Messages
125
Hey,

thanks for your replies and the information. That's good to hear! :)

So this means striping is done automatically when adding additional vdevs, and I don't have to configure anything else in the WebGUI in order to use this striping? Just add the additional vdev to the existing zpool and that's it? :)
By the way, this again shows that ZFS RAID is different from other solutions like RAID10. With RAID10 I would only get capacity based on the smallest drive, whereas ZFS allows combining differently sized mirrors (vdevs) in one big zpool and using their full capacity (2x 2TB, 2x 3TB); a quick comparison is sketched below. That's a nice feature! :)
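
A back-of-the-envelope comparison in Python (sizes in TB; this assumes a traditional RAID10 that truncates every member to the smallest drive, which is the usual behaviour):

[code]
# Usable capacity of 2x 2TB + 2x 3TB drives, sizes in TB.
drives = [2, 2, 3, 3]

# Traditional RAID10: every member is truncated to the smallest drive.
raid10 = len(drives) // 2 * min(drives)   # 2 * 2 TB = 4 TB

# ZFS: two independent mirror vdevs striped into one pool.
zfs_pool = min(2, 2) + min(3, 3)          # 2 TB + 3 TB = 5 TB

print(raid10, zfs_pool)  # -> 4 5
[/code]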

However, regarding performance: does ZFS performance increase the more vdevs I add to one zpool (assuming each vdev uses the same drive model and the same number of drives)?

By the way, I found another article; maybe it helps others reading this thread: http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/ (I don't agree with every point in there, but it explains some basic principles).
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Each vdev contributes a number of IOPS to the pool roughly equal to the IOPS of one of its component drives.

Thus a pool made of 8 mirrored pairs will have roughly 8x the IOPS of a single drive.

A pool made of 2 sets of 8-way RAID-Z2 will only have 2x the IOPS of a single drive.

Each vdev type contributes different sequential read/write performance.

A mirror has read/write performance of (n, 1), i.e. read is n times the single-drive performance and write is 1 times the single-drive performance.

An n-way RAID-Z[p] vdev has read/write performance of n-p, where p is the number of parity drives. So an 8-way RAID-Z2 will have read/write performance of 6x the single-drive sequential read/write speed.

Note that drives are roughly twice as fast at the outer sectors as at the inner sectors.

Add up all the vdevs to get the performance.
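
For anyone who wants to plug in their own numbers, here is a rough Python sketch of those rules of thumb. The per-drive figures are assumptions for illustration, not benchmarks:

[code]
# Back-of-the-envelope pool performance from the rules of thumb above.
DRIVE_IOPS = 100      # assumed random IOPS of one HDD
DRIVE_SEQ_MBS = 150   # assumed sequential MB/s of one HDD

def vdev_perf(kind, n, p=0):
    """Return (iops, seq_read, seq_write) for one vdev.

    kind: 'mirror' or 'raidz'; n: total drives; p: parity drives.
    Rules of thumb: each vdev gives ~one drive of IOPS; a mirror reads
    from all n sides but writes like one drive; RAID-Z streams through
    its n-p data drives.
    """
    if kind == "mirror":
        return DRIVE_IOPS, n * DRIVE_SEQ_MBS, 1 * DRIVE_SEQ_MBS
    data = n - p
    return DRIVE_IOPS, data * DRIVE_SEQ_MBS, data * DRIVE_SEQ_MBS

def pool_perf(vdevs):
    """Add up the contributions of all vdevs in the pool."""
    totals = [sum(col) for col in zip(*(vdev_perf(*v) for v in vdevs))]
    return dict(zip(("iops", "read_mbs", "write_mbs"), totals))

print(pool_perf([("mirror", 2)] * 8))    # 8 mirrored pairs -> ~8x IOPS
print(pool_perf([("raidz", 8, 2)] * 2))  # 2x 8-way RAID-Z2 -> ~2x IOPS
[/code]

Swap in your own per-drive numbers and vdev layout to compare configurations.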

Insert performance slowdowns due to fragmentation.

Actual results will vary. Consult your doctor, yadayada.
 