RAIDZ expansion, it's happening ... someday!

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
No idea, really--to even speculate would require a lot more knowledge of ZFS internals than I have.

Yeah, me too :)
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Any idea if there is a significant performance penalty when an array is degraded like that?
I would think that depends on whether ZFS continually calculates parity for the full stripe width of the array, including the missing disk. Perhaps it's smart enough to reduce the stripe width by the one missing drive and still store the second parity. :confused:
People don't give ZFS enough credit for its complex yet elegant design. :eek:
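The reduced-stripe-width idea can be illustrated with a toy single-parity XOR example (purely illustrative; real RAID-Z parity and allocation are far more involved than this): if a stripe is written one column narrower, the parity it stores still reconstructs any single lost block.

```python
from functools import reduce

def xor_parity(blocks):
    # Simple XOR parity across equal-sized blocks (toy model, not ZFS).
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Pretend a 4-disk stripe is written degraded: 3 data columns + parity,
# i.e. the stripe width is reduced by the one missing drive.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data)

# Later lose one data column; rebuild it from the survivors plus parity.
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
```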
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I can't imagine performance being much worse than straight RAIDZ2. Maybe a little bit worse due to less optimized code paths.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Why would you create a single-disk degraded RAID-Z2?
When you could create a 2-disk degraded RAID-Z3!

That brings up an interesting point. Perhaps a new vDev type (RAID-Yx?) that has the ability to add or remove parity drives, so you could go from 1 disk of parity to 2, or 3. Maybe some day (or decade) we deprecate RAID-Zx!

In all seriousness, if the impact were limited or nonexistent, I'd suggest putting a warning on RAID-Z1 and allowing a GUI option to create a degraded RAID-Z2. Then we'd just need to mute any alarm on that specific degradation.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Well, you won't be able to extend degraded vdevs, so that's a big disadvantage.
 

MiG

Dabbler
Joined
Jan 6, 2017
Messages
21
Yes, but as currently planned, you'd need to add them one at a time.
Apart from external phenomena like catastrophic multiple HDD failures during resilvering, can we expect there to be any innate risks associated with the new expansion feature?

I currently have a 6x8TB RAIDZ2 pool that's at 84% and was considering adding a second one... However, expansion would make it interesting to just add 4 disks to the existing one, so now I'm considering holding off until this feature finds its way into a stable release.
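For rough numbers on that scenario, a naive capacity sketch (my own back-of-the-envelope, ignoring padding, metadata, free-space headroom, and the design detail that already-written blocks reportedly keep their original data:parity ratio after expansion):

```python
def raidz_usable_tb(disks, size_tb, parity=2):
    # Naive usable capacity: (disks - parity) * per-disk size.
    # Ignores padding, metadata, and recommended free-space headroom.
    return (disks - parity) * size_tb

before = raidz_usable_tb(6, 8)    # 6x8TB RAIDZ2 today
after = raidz_usable_tb(10, 8)    # same vdev after adding 4 disks
print(before, after)  # 32 64
```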
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
How far is this feature? Is it implemented?
There was supposed to be a demo at the ZFS Dev Summit in October 2018, and it was expected to land in FreeBSD 12. That being said, FreeBSD 12 has been released, but as far as I know, RAIDZ expansion wasn't included. More work needs to be done.
 

raymondbh

Cadet
Joined
Sep 27, 2018
Messages
2
There was supposed to be a demo at the ZFS Dev Summit in October 2018, and it was expected to land in FreeBSD 12. That being said, FreeBSD 12 has been released, but as far as I know, RAIDZ expansion wasn't included. More work needs to be done.
I have not been able to find any way to track the progress of this, does OpenZFS have a system (bug/issue/tracking) so I (and others) can follow the progress?
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
I have not been able to find any way to track the progress of this, does OpenZFS have a system (bug/issue/tracking) so I (and others) can follow the progress?
You can find all of the developer resources at http://open-zfs.org/wiki/Developer_resources

OpenZFS is currently going through a transition, so things might be a little slow at this time. Right now, FreeBSD pulls in changes from Illumos. Delphix is transitioning to Linux-based appliances, which means FreeBSD will soon start tracking ZFS on Linux. With this, ZFS on FreeBSD will see newer features sooner. The nice thing is there will be feature parity between Linux, FreeBSD, macOS and even Windows (yes, it's a thing)!

https://lists.freebsd.org/pipermail/freebsd-current/2018-December/072422.html
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
We'll also be doing a Call for Testers in the next few weeks for https://zfsonfreebsd.github.io/ZoF/ with what's in now, the currently known caveats, and a great big warning that testing needs to be on experimental systems only at this stage.
Is that going to be incorporated into a FreeNAS beta release?
 

dlavigne

Guest
For 12, yes. The CfT will be for very early testing on the 12 nightlies.
 

intertan

Cadet
Joined
Jan 28, 2019
Messages
3
Biggest thing is: will there be a performance hit? Part of the reason I like Unraid is that I can expand when I want.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
There is an edge case when recovering blocks from parity, IIRC during the expansion process. The expansion process itself also takes up IO, obviously, but that's about it.
 

intertan

Cadet
Joined
Jan 28, 2019
Messages
3
There is an edge case when recovering blocks from parity, IIRC during the expansion process. The expansion process itself also takes up IO, obviously, but that's about it.
All Greek to me at the moment. Biggest question: if I currently have 100TB in my system and add another 100, will I have 200TB after expanding, or will I lose some with this feature? How about read/write performance after expanding?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Matthew Ahrens @mahrens1, 23 hours ago:
It will rebalance the data so that it's evenly across all disks in the RAIDZ group, and a big chunk of free space at the end of each disk.
This point is critical... I had totally discounted the idea of using it until I saw this. Now (when it finally arrives in FreeNAS) it will be an option on my list.
Biggest question: if I currently have 100TB in my system and add another 100, will I have 200TB after expanding, or will I lose some with this feature? How about read/write performance after expanding?
Let's say you have 10x10TB drives today, making your 100TB... RAIDZ2 means you lose 2 of those disks to parity, so 80TB... let's not get into padding and keeping 20% free for CoW.

If you add 10x10TB, that will mean a RAIDZ2 with 20 disks: 18 for data, 2 for parity (keep in mind the recommended width of a RAIDZ2 is not more than 12 disks), meaning 180TB.

With the quote above, you may find performance is a little better after the addition due to the rebalance work done by the expand operation, but the performance of RAIDZ2 is the performance of a single VDEV, so effectively as slow as the slowest single disk in it in many scenarios.

Since you asked the question, you probably care about performance, so the recommendation would be not to do that (for many reasons).

If your only concern is capacity (and your data is important enough to keep too, hence using RAIDZ2), you could consider the adrenaline rush of "living on the edge" with a 20-wide RAIDZ2 VDEV (against the recommendation of all the clever people around here).
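The arithmetic above can be sanity-checked in a few lines (same back-of-the-envelope assumptions: no padding, no free-space headroom):

```python
def raidz2_usable_tb(disks, size_tb):
    # Two disks' worth of capacity go to parity in a RAIDZ2 vdev.
    return (disks - 2) * size_tb

print(raidz2_usable_tb(10, 10))  # 80
print(raidz2_usable_tb(20, 10))  # 180

# Parity overhead falls from 2/10 (20%) to 2/20 (10%) after expansion,
# which is why the result is 180TB rather than 2 x 80 = 160TB.
```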
 

Mannekino

Patron
Joined
Nov 14, 2012
Messages
332
Man, even 12 disks in a RAIDZ2 with a single underlying VDev would have me a bit worried. Is that common practice?

I have 4x4TB now in a RAIDZ1 for media. Let's say I would like to expand. Which would be the better option:
  1. Create a new VDev and add it to the pool
  2. Move the data off, create a new RAIDZ2 pool, and move the data back
 

intertan

Cadet
Joined
Jan 28, 2019
Messages
3
If you add 10x10TB, that will mean a RAIDZ2 with 20 disks, meaning 18 for data 2 for parity (keep in mind recommended width of a RAIDZ2 is not to go over 12), meaning 180TB.

I'm coming from the Unraid world. I'm planning a complete new build from the ground up once this gets implemented. I was looking at a 60-drive case and working from there. I don't understand this "not going over 12" talk.
 