increasing number of disks in raidz1

Grinas

Contributor
Joined
May 4, 2017
Messages
174
I have a pool of 3 * 3TB drives and stupidly, for some reason, was under the impression I could extend the pool with single drives as and when I needed to.

I have just realised that is not the case (please correct me if I am wrong).

Other than recreating the existing pool as 4 * 3TB, do I have any options besides buying another 3 * 3TB drives and adding a second vdev to the pool?
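For reference, the second-vdev option would look roughly like this. This is only a sketch — the pool name tank and the disk names ada3/ada4/ada5 are placeholders for whatever your system actually uses:

```shell
# Dry run first (-n prints the resulting layout without changing anything)
zpool add -n tank raidz1 ada3 ada4 ada5

# Actually add the second 3-disk raidz1 vdev. WARNING: vdev additions
# are effectively permanent, so double-check the dry-run output first.
zpool add tank raidz1 ada3 ada4 ada5

# Verify the pool now shows two raidz1 vdevs
zpool status tank
```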
 

Gen8 Runner

Contributor
Joined
Aug 5, 2015
Messages
103
I would back up the data, destroy the pool and set up a new one.
And maybe think about using RaidZ2; this won't be your last HDD upgrade. The bigger the drives get, the bigger the crisis if a rebuild fails.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
I have just realised that is not the case (please correct me if I am wrong).
Well, in an alternate universe (or in this one if we have time travel), you could be wrong... RAIDZ expansion is in the works, but don't hold your breath, it's at least a year away.
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
I am replacing a disk in a vdev that has 4TB drives. It will take over 4 days... So yeah, Z2 is likely best.
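For anyone following along, a replacement like this is kicked off with zpool replace; the pool and gptid names below are placeholders, not my actual devices:

```shell
# Take the old disk offline (optional if it has already dropped out)
zpool offline tank gptid/old-disk

# Swap in the new disk physically, then resilver onto it
zpool replace tank gptid/old-disk gptid/new-disk

# Watch resilver progress and the estimated completion time
zpool status -v tank
```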
 

Gen8 Runner

Contributor
Joined
Aug 5, 2015
Messages
103
Well, in an alternate universe (or in this one if we have time travel), you could be wrong... RAIDZ expansion is in the works, but don't hold your breath, it's at least a year away.
Exactly. I am also waiting for this feature and want to upgrade my pool to RaidZ3. But it feels like it will take a century; for that reason, better to choose a higher RaidZ level right at the beginning.

@Scharbag You need four days for one 4TB disk replacement? I am just replacing a 10TB and it took around 18 hours.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
better to choose a higher RaidZ level right at the beginning.
Yes, even with the expansion, you will be stuck with the original RAIDZ level of the pool.
 

Grinas

Contributor
Joined
May 4, 2017
Messages
174
@sretalla @Gen8 Runner thanks for confirming.

I went for RaidZ1 in the first place because RaidZ2 requires 4 drives and I only had three at the time.

It looks like I will have to rebuild the pool as RaidZ2, since my little Dell T20's 290W PSU will likely struggle with 6 drives on top of the ESXi boot drive and 2 * SSDs.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
The T20 has room for four drives physically, right? If you rebuild as 4 x 3TB raidz2 you won't have gained any space. One way to crack this nut is to say "okay, raidz1 x 4 and a backup to Backblaze using the script on the forums; if the raidz1 dies I'll pay restore fees to Backblaze, no biggie".

Another way to crack that nut is to go raidz2 with 4 drives, but not 3TB - but now it gets expensive, as you'll need to buy 4 drives not 1.

Yet another option: Is that motherboard in there mATX or any kind of recognizable standard, rather than a weirdly angled Dell special? You could transplant the whole thing into a Node 804 and have room for drives.
 

Grinas

Contributor
Joined
May 4, 2017
Messages
174
The T20 has room for four drives physically, right?

No, in fact it has 5 * 3.5" bays/mounts and 2 * 2.5" bays/mounts. You could easily fit another 3.5" drive in, but it won't have a mount.

If you rebuild as 4 x 3TB raidz2 you won't have gained any space. One way to crack this nut is to say "okay, raidz1 x 4 and a backup to Backblaze using the script on the forums; if the raidz1 dies I'll pay restore fees to Backblaze, no biggie".
I have unlimited cloud storage. My issue is that my internet speed is really slow, which is why I have the NAS, and I have enough drives lying around that I don't need to copy the data to cloud storage for backup.

Another way to crack that nut is to go raidz2 with 4 drives, but not 3TB - but now it gets expensive, as you'll need to buy 4 drives not 1.
Why not 3TB drives? Buying new drives is out of the question, as it means I am stuck with 4 * 3TB door stops. I already have about 10 smaller-capacity 3.5" drives that are just sitting in a drawer.

Yet another option: Is that motherboard in there mATX or any kind of recognizable standard, rather than a weirdly angled Dell special? You could transplant the whole thing into a Node 804 and have room for drives.

I don't want to purchase a new machine, as this one has been trouble-free, is quiet, cheap to run, and does everything I need, and I have enough door stops already. I was hoping for a quick upgrade of the pool size, as I have my ESXi lab running on this machine as well. I was willing to get extra drives to bring it up to 6, but I don't think the PSU could handle it, and I can't seem to find a bigger PSU for it: it has an 8-pin board power connector, and I am struggling to find a new PSU with an 8-pin connector. I will probably just buy a new PSU, get a 24-pin to 8-pin adapter, and run 2 raidz1 vdevs in the same pool. It seems like the easiest and quickest option.
 

Gen8 Runner

Contributor
Joined
Aug 5, 2015
Messages
103
I have unlimited cloud storage. My issue is that my internet speed is really slow, which is why I have the NAS, and I have enough drives lying around that I don't need to copy the data to cloud storage for backup.

Where do you get unlimited cloud storage, and what do you pay for it? I am comparing services at the moment to find the best backup option for me.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Why not 3TB drives? Buying new drives is out of the question, as it means I am stuck with 4 * 3TB door stops. I already have about 10 smaller-capacity 3.5" drives that are just sitting in a drawer.

Nothing wrong with 3TB drives. If you are at raidz1 3x3TB now and move to raidz2 4x3TB, you have gained redundancy but no space. That was my point. Hence, options.
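The arithmetic behind that point, as a quick sketch (raw capacity only, ignoring ZFS overhead and padding):

```python
def raidz_usable_tb(disks: int, parity: int, size_tb: float) -> float:
    """Rough usable capacity of a raidz vdev: data disks * disk size."""
    return (disks - parity) * size_tb

print(raidz_usable_tb(3, 1, 3))  # raidz1, 3 x 3TB: 6 TB usable
print(raidz_usable_tb(4, 2, 3))  # raidz2, 4 x 3TB: 6 TB usable -- same space
```

Same usable space, one extra drive's worth of redundancy.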
 

Grinas

Contributor
Joined
May 4, 2017
Messages
174
Where do you get unlimited cloud storage, and what do you pay for it? I am comparing services at the moment to find the best backup option for me.

Google Drive business accounts have unlimited storage. You can buy one for a one-off payment of about €10 on eBay. I bought 5 of them on eBay a few years ago and have had no problems with them. There are others out there, like OneDrive, but the one I bought was useless since it did not allow API access, which I need because I use rclone to mount on Linux. Just mount them using rclone and you are good to go. Unfortunately my internet sucks, so I cannot use these on my home network, and I have no other options, which is why I have the NAS.
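If anyone wants to try the same setup, the basic rclone flow looks something like this — the remote name gdrive and the mount point are just examples, use whatever you named the remote in rclone config:

```shell
# One-time interactive setup: create a Google Drive remote called "gdrive"
rclone config

# Mount it in the background with write caching enabled
mkdir -p /mnt/gdrive
rclone mount gdrive: /mnt/gdrive --daemon --vfs-cache-mode writes

# Sanity check: list what's on the remote
rclone lsd gdrive:
```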

Screenshot 2020-04-03 at 17.20.56.png

The ones I have redacted are 2 of my gdrives that I keep permanently mounted on this machine. The others I mount as and when I need the data from them.

Nothing wrong with 3TB drives. If you are at raidz1 3x3TB now and move to raidz2 4x3TB, you have gained redundancy, but no space. That was my point. Hence, options.

Ah, I get you now, thanks for clarifying. When you said not 3TB, I thought there was a limitation or something with those drive sizes.
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
Exactly. I am also waiting for this feature and want to upgrade my pool to RaidZ3. But it feels like it will take a century; for that reason, better to choose a higher RaidZ level right at the beginning.

@Scharbag You need four days for one 4TB disk replacement? I am just replacing a 10TB and it took around 18 hours.
Yes, it is taking FOREVER. It could be that this pool has a zillion snapshots AND the vDev I am replacing a drive in is very full.

Screenshot 2020-04-03 10.20.46.png


One vDev is 6@6TB drives and one is 6@4TB drives. The data seems to be equally distributed between vDevs right now. This pool is OLD. I created it back in January of 2015... Then I replaced 3TB drives with 4TB drives... Then 4TB drives with 6TB drives... LOL. The rabbit hole is deep. Scrubs typically take about 15 hours. I am unsure as to why the replacement is taking so-darn-looooong...

Screenshot 2020-04-03 10.30.03.png


Once this is done, it will free up a 4TB disk that I can put in my backuptank to replace ANOTHER failed 3TB Seagate drive. :)

Screenshot 2020-04-03 10.37.18.png


Some day, I really need to buy some bigger disks and revise my system. My backup pool has 18 disks in 2 vDevs for ~38TB... I am sure I can do that more efficiently with larger spinning rust :)

Anywhoo,
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Exactly. I am also waiting for this feature and want to upgrade my pool to RaidZ3. But it feels like it will take a century; for that reason, better to choose a higher RaidZ level right at the beginning.

The raidz expansion feature, by the way, is about adding a single disk to an existing raidz vdev. This can also be done again, after the first add is complete. It's not about changing raidz level. raidz2 remains raidz2 during expansion, and I haven't seen any serious discussion of converting between raidz levels.
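For the curious, the syntax proposed in the OpenZFS pull request reuses zpool attach, pointed at the raidz vdev rather than at a member disk. Hypothetical example (not available in any release at the time of writing; names are placeholders):

```shell
# Proposed raidz expansion: attach one new disk to an existing
# raidz vdev. "raidz2-0" is the vdev name as shown by zpool status.
zpool attach tank raidz2-0 ada6

# Progress of the expansion/reflow shows up in pool status
zpool status tank
```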
 

Gen8 Runner

Contributor
Joined
Aug 5, 2015
Messages
103
The raidz expansion feature, by the way, is about adding a single disk to an existing raidz vdev. This can also be done again, after the first add is complete. It's not about changing raidz level. raidz2 remains raidz2 during expansion, and I haven't seen any serious discussion of converting between raidz levels.

Oh, that's not nice to hear. I really thought that changing RaidZ levels was planned as a future ZFS feature (at least upgrading, which should be easier than downgrading, for example from RaidZ3 to RaidZ2).
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Oh, that's not nice to hear. I really thought that changing RaidZ levels was planned as a future ZFS feature (at least upgrading, which should be easier than downgrading, for example from RaidZ3 to RaidZ2).

TL;DR: No, never.

That rabbit hole goes a little deeper. What you are referencing is the mythical Block Pointer Rewrite. Note that raidz expansion does its thing without rewriting block pointers.

A quick primer on ZFS reshaping of pools, and why it's not very good at it, is on the podcast at https://www.bsdnow.tv/340, from roughly 14:00 to 24:00. The discussion of BPR is from roughly 22:00. "This would be the last feature ever added to ZFS" - as in, once BPR has been added, no other features would ever be added again. That is tongue in cheek but illustrates the point: It'd complicate things to such an obscene degree that no one would ever want to touch the code ever again.

The indirection used for vdev removal is already Not That Awesome, I can't see someone seriously proposing to add something similar to go from raidz2 to raidz3. No, just no.

ZFS expects people to get their storage layout right from the word go, and leave it at that - within reason, you can certainly add vdevs to a pool after the fact, though imbalancing is a thing, which that podcast also discusses.

DRAID looks like a really neat feature to resolve a lot of the pain points of raidz, by the way. And that's coming Any Year Now, Soon(tm).

This blog post argues that one should do striped mirrors, not raidz. Written from the perspective of multiple vdevs, not the tiny hobbyist 6 to 8 drive setups that a lot of us have, obviously. https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/

And going back in time, you can find a discussion of BPR, and why that's never ever gonna happen, on these forums. https://www.ixsystems.com/community/threads/kickstarter-for-block-pointer-rewrite.21064/
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
TL;DR: No, never.

That rabbit hole goes a little deeper. What you are referencing is the mythical Block Pointer Rewrite. Note that raidz expansion does its thing without rewriting block pointers.

A quick primer on ZFS reshaping of pools, and why it's not very good at it, is on the podcast at https://www.bsdnow.tv/340, from roughly 14:00 to 24:00. The discussion of BPR is from roughly 22:00. "This would be the last feature ever added to ZFS" - as in, once BPR has been added, no other features would ever be added again. That is tongue in cheek but illustrates the point: It'd complicate things to such an obscene degree that no one would ever want to touch the code ever again.

The indirection used for vdev removal is already Not That Awesome, I can't see someone seriously proposing to add something similar to go from raidz2 to raidz3. No, just no.

ZFS expects people to get their storage layout right from the word go, and leave it at that - within reason, you can certainly add vdevs to a pool after the fact, though imbalancing is a thing, which that podcast also discusses.

DRAID looks like a really neat feature to resolve a lot of the pain points of raidz, by the way. And that's coming Any Year Now, Soon(tm).

This blog post argues that one should do striped mirrors, not raidz. Written from the perspective of multiple vdevs, not the tiny hobbyist 6 to 8 drive setups that a lot of us have, obviously. https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/

And going back in time, you can find a discussion of BPR, and why that's never ever gonna happen, on these forums. https://www.ixsystems.com/community/threads/kickstarter-for-block-pointer-rewrite.21064/
Great info. The key point is that ZFS was never designed with hobbyists in mind. I have been bitten by not thinking ahead a few times since FreeNAS 8... Some were tough lessons learned. I never lost data, but I have had to rebuild my system a few times without a backup because I needed to revise layouts. Never liked those times!!!

Home users typically do not need the speed/IOPS, so striped mirrors are unnecessary and just too expensive. So we use RaidZ. RaidZ1 is basically useless with large drives, as illustrated by my 5-day disk replacement. RaidZ2 is typically the best option for home users. If you need more speed to run VMs, you can always run an SSD pool too. So plan ahead accordingly. I find that 6-disk vdevs at RaidZ2 are manageable and offer a reasonable redundancy-to-capacity ratio. But everyone needs to plan their own system accordingly.
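A 6-disk RaidZ2 layout like that is created in one shot, something like this (pool and device names are placeholders for your own):

```shell
# Create a pool with a single 6-disk raidz2 vdev: 4 data + 2 parity
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Confirm size and layout
zpool list tank
zpool status tank
```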

Cheers,
 

Grinas

Contributor
Joined
May 4, 2017
Messages
174
I got a new PSU in the end, and a 2 * 5.25" to 3 * 3.5" enclosure with a fan, so I can add the new vdev to the existing pool.

Thanks for the help.
 