RAIDZ expansion, it's happening ... someday!

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Yup, that's what I was talking about. But I'm sure none of it will help anyway, since threads like this one will still continue to show up.
Unfortunately, any of those resources require someone to try to learn how it is supposed to be done instead of just fumbling through it on their own.
When you just fumble through without reading the directions, you also won't avail yourself of any of the other resources that are available.
There are even YouTube videos showing how it is supposed to be done. How much easier can it be? Some people just refuse to ask, read, or even sit back and watch a video.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
Not enough pictures. Some red arrows would be good too ;)
Red arrows and circles take time; screenshots are quick.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466

Philip Robar

Contributor
Joined
Jun 10, 2014
Messages
116
The parity:data ratio in place at the time data was written doesn't change. So, with an n-disk RAIDZp pool, that ratio will be p:(n-p). If you add a disk to that pool, the existing data will keep that same ratio, but newly-written data will be at p:(n-p+1). I don't think my mind has worked its way around the implications of this yet.

It just means that old data will consume the same amount of space as it did before. Say you have a six-wide RAIDZ2 vdev holding data blocks that aren't tiny: each block is cut up into four data chunks plus two parity chunks, for a storage overhead of 33%. After expanding to seven-wide, new writes are cut up into five data chunks plus two parity chunks, for a storage overhead of 29%.
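
If you want to check the arithmetic yourself, the parity overhead of an n-wide RAIDZp vdev is roughly p/n (ignoring padding and small-block effects). A quick back-of-the-envelope sketch in plain sh with bc, nothing ZFS-specific:

    # Parity overhead p/n for a RAIDZ2 vdev (p=2) at various widths.
    # Pure arithmetic; this isn't a number ZFS reports directly.
    p=2
    for n in 6 7 8 9 10; do
        echo "width $n: $(echo "scale=1; $p * 100 / $n" | bc)% parity overhead"
    done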

If reclaiming those few percent matters to someone, it could easily be done by: 1) creating a new dataset in the expanded pool and moving all existing data to it, 2) deleting the old dataset, and 3) renaming the new dataset to the old name. You might have to find some temporary space for your biggest files if you're short on free space, but the only thing that you will likely have to worry about is managing availability: easy for most home servers and doable, if inconvenient, for many business situations.
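
In CLI terms, that shuffle would look something like this (untested sketch; "tank" and "data" are made-up names, so take a snapshot and verify the copy before destroying anything):

    # Rewriting the data lays it out at the new, wider stripe geometry.
    zfs create tank/data-new
    rsync -a /mnt/tank/data/ /mnt/tank/data-new/   # every block gets rewritten at the new width
    zfs destroy -r tank/data                       # only after verifying the copy!
    zfs rename tank/data-new tank/data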
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,176
Yes, that is a possible workaround.
 

joeinbend

Cadet
Joined
Jun 22, 2012
Messages
8
Old thread, just thought I'd give it a bump to see if there are any updates. It was stated earlier that RAID-Z expansion is expected to be available in FreeBSD 12, which is targeted for November 2018; practically around the corner!
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
Old thread, just thought I'd give it a bump to see if there are any updates. It was stated earlier that RAID-Z expansion is expected to be available in FreeBSD 12, which is targeted for November 2018; practically around the corner!
OK, I can sort of give an update.

I was at BSDCan last week, where Matt Ahrens gave a talk on RAIDZ expansion. Hopefully the videos will be released along with the slides soon.
 

joeinbend

Cadet
Joined
Jun 22, 2012
Messages
8
Awesome, thanks for the quick updates! @dlavigne, just wondering if you accidentally pasted in the wrong link? It looks like that one just pertains to Jails. Is there anything that talks about the roadmap and some of those potential caveats of RAID-Z expansion?

Thanks, guys!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,975

Roveer

Dabbler
Joined
Feb 22, 2018
Messages
40
So I'm poking my head in here to see if this applies to my system.

Back when I set up my FreeNAS on a Dell R510 (12-bay), I populated it with eight 3TB drives (because that was what I had) in RAIDZ2. I got something around 7.3TB of usable space, and I've now got around 2.7TB left. I was told at the time that I would not be able to expand without rebuilding, unless I replaced all drives with higher-capacity ones, in which case it would expand. While I'm not in dire need of more space now, I'd like to know if this possible improvement (expansion) will allow me to add 4 more 3TB drives in the future and expand my existing array? This machine is for data duplication, so it would not be the end of the world if I had to rebuild, just a bunch of work and transporting data from another site.

A bit of reading tells me this is possibly in the mix for FreeBSD 12 late this year (2018)? Is that accurate? Would it apply to FreeNAS right away (or at all)?

Just trying to get a handle on my options before capacity (or the lack thereof) becomes a critical decision.

Thanks,

Roveer
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
I'd like to know if this possible improvement (expansion) will allow me to add 4 more 3TB drives in the future and expand my existing array?
Yes, but as currently planned, you'd need to add them one at a time.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Good point. Wonder if you can go from 3 disk z1 to 4 way z2
My understanding is that the data is reflowed, but the existing stripe width of the old data remains the same, and therefore the parity does too (I think that makes sense). This way the operation is just moving data, not recalculating parity across the vdev. Extending should be a relatively painless process, whereas restriping the data and recalculating parity could be a long and slightly riskier process.
 

Roveer

Dabbler
Joined
Feb 22, 2018
Messages
40
So it sounds like it's going to be possible. Now the question is, when will this happen? Is it in fact tied to FreeBSD 12? Finally, does this also apply to the RAIDZ2 array that I've set up? Excuse my ignorance if I'm not using the terminology correctly.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
when will this happen?
Can't help you there; I haven't been following the release cycle that closely.
Finally, does this also apply to the RAIDZ2 array that I've set up?
I don't see why it wouldn't. As I'm understanding this project, it would let you take any m-disk RAIDZn and turn it into an (m+1)-disk RAIDZn. Repeat as necessary to add more disks.
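
If I'm reading the proposal right, the interface reuses zpool attach against the raidz vdev rather than against a single disk. Something like this, one disk at a time (pool, vdev, and device names made up, and the syntax could still change before it ships):

    # Hypothetical, based on the proposed design; not in FreeNAS yet.
    zpool attach tank raidz2-0 da8   # grow the raidz2 vdev by one disk
    # Wait for the expansion to finish, then repeat for the next disk:
    zpool attach tank raidz2-0 da9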
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
Wonder if you can go from 3 disk z1 to 4 way z2
I guess you could achieve the effect of that by creating a degraded pool to begin with. Create a degraded "four-disk" RAIDZ2 with three disks and a sparsefile, offline the sparsefile, and go on your way. Your pool is degraded, but still has one disk's worth of redundancy. At some point in the future, replace the missing sparsefile with a fourth disk, bringing you to two disks' redundancy.
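
From the CLI, that would look something like this (untested sketch; raw device names are made up, and the FreeNAS GUI won't build a pool this way, so it's strictly a roll-your-own exercise):

    # Create a sparse file the size of one member disk (3TB here).
    truncate -s 3T /root/sparsefile
    # Build the "four-disk" RAIDZ2 from three real disks plus the file.
    zpool create tank raidz2 da0 da1 da2 /root/sparsefile
    # Offline the file vdev; the pool is degraded but keeps one disk of redundancy.
    zpool offline tank /root/sparsefile
    rm /root/sparsefile
    # Later, swap a real fourth disk in for the missing member:
    zpool replace tank /root/sparsefile da3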
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
I guess you could achieve the effect of that by creating a degraded pool to begin with. Create a degraded "four-disk" RAIDZ2 with three disks and a sparsefile, offline the sparsefile, and go on your way. Your pool is degraded, but still has one disk's worth of redundancy. At some point in the future, replace the missing sparsefile with a fourth disk, bringing you to two disks' redundancy.

Any idea if there is a significant performance penalty when an array is degraded like that?
 