Issue with extending a RAIDZ3 volume with 4 additional HDDs

Status: Not open for further replies.

Kantos

Cadet
Joined
Mar 22, 2017
Messages
7
Dear all,
I'm afraid I'm fairly new to FreeNAS management and configuration, so my questions may reflect it, but we have to start somewhere, right?
My configuration is FreeNAS 11.1-U5 installed on a Supermicro server with two volumes configured.
The volume giving me trouble is a raidz3-0 volume of 7 HDDs of 6TB each (18.3TiB used and 16.8TB available).
This volume is used as an iSCSI target to back up a few VMs on a 10-year retention policy and is currently used nearly to maximum capacity (2TB free).
To extend the storage, we bought 4 HDDs of 4TB each, and my colleague added them to the volume as stripes, which gives img1.
After that, because he was out of ideas on how to go forward, I took over the case and am trying to put the config back in order.
Could you please tell me first if it's possible to safely extend an existing volume of 7 HDDs in RAIDZ3 with 4 HDDs of 4TB?
If it is, is it possible to keep the current configuration or correct it in some way? I mean, I'm not sure stripes are a safe way to add space to a volume, because they are the same as RAID 0, right? Is there a way to remove the striped hard drives without crashing the whole configuration and then configure it correctly, or should I accept that I've lost the 4 HDDs forever and that I need to buy X other HDDs to extend the volume?
Any indication on how to solve this mess would be greatly appreciated :smile:
Thanks in advance for your help.
Best regards,
Laurent
 

Attachments

  • IMG1.JPG
  • IMG2.JPG

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
In order to extend a RAIDZ3 pool of 1 vdev, you need to back up the content, create a new pool/vdev and copy the content back. There is no online extension of RAIDZ in the way that you were trying to do (maybe coming in a few years, but not now).

You are currently in a risky situation with your pool as you have made a multi-vdev pool of 1 RAIDZ3 and 4 striped disks.

If you lose any one of the striped disks, your entire pool is lost.

I suggest you make a backup, rebuild the pool that you wanted to have (one vdev of RAIDZ3) and restore the backup.
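If you end up doing that copy from the shell, a rough sketch with a replicated snapshot would look something like this (the pool names here are placeholders, and the GUI snapshot/replication tasks are the supported way to do this in FreeNAS, so treat it only as an outline):

Code:
# take a recursive snapshot of everything on the old pool
zfs snapshot -r oldpool@migrate

# replicate all datasets/zvols and their properties to the new pool
zfs send -R oldpool@migrate | zfs receive -F newpool

The -R/-F pair carries the zvol you use as the iSCSI target across as well, so the iSCSI extent can then be pointed at the copy on the new pool.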
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Could you please tell me first if it's possible to safely extend an existing volume of 7 HDDs in RAIDZ3 with 4 HDDs of 4TB?
NO NO NO NO NO NO NO NO NO NO NO NO! NO! HELL NO! Not until RAIDZ expansion shows up, and we're not there yet!

@sretalla has stated the important facts above:
You are currently in a risky situation with your pool as you have made a multi-vdev pool of 1 RAIDZ3 and 4 striped disks.

If you lose any one of the striped disks, your entire pool is lost.

I suggest you make a backup, rebuild the pool that you wanted to have (one vdev of RAIDZ3) and restore the backup.

tl;dr - you need to fix this ASAP. At the very least, you need to add mirrors to each of those single drives.
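If anyone does that from the command line, note that turning a single striped disk into a mirror is zpool attach, not zpool add (add is exactly what created the extra stripes in the first place). A rough sketch, with hypothetical pool and disk names:

Code:
# see which disks sit alone as top-level vdevs
zpool status tank

# attach da11 as a mirror of the existing single disk da7
# (repeat for each of the four striped disks)
zpool attach tank da7 da11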
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Your colleague really messed up here by failing to understand how to grow the filesystem, and adding those four drives as stripes.

I suggest you make a backup, rebuild the pool that you wanted to have (one vdev of RAIDZ3) and restore the backup.

This is the correct path forward; however, you may not have the space to back up this backup repository. If you don't, then you need to get some redundancy back via the method proposed here:

At the very least, you need to add mirrors to each of those single drives.

It will definitely not be anywhere near the optimal space utilization though. Even if you have to migrate all of the data to a separate array/SAN/etc. to rebuild the pool, I would strongly suggest doing so.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It will definitely not be anywhere near the optimal space utilization though.
Yeah, it's just an emergency solution that may be easier in some cases. Not ideal, but it'll do in a pinch.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Yeah, it's just an emergency solution that may be easier in some cases. Not ideal, but it'll do in a pinch.
And probably necessary due to the significantly higher chance of failure that the OP currently has.

My proposed path forward, assuming all drives currently used are 4TB, is as follows:
  1. Stop putting data on this pool.
  2. Buy 11x 4TB drives with the fastest shipping possible and get them installed.
  3. This might mean you need another DAE/JBOD shelf, and an external HBA if you're not already using external JBODs.
  4. Add 4x4TB drives as individual mirrors to each of the single striped drives. Make absolutely sure you are performing this step correctly and not extending the pool with more stripes. You will now at least have the ability to lose a drive and not be completely dead.
  5. Create a new pool configured as a 7-drive RAIDZ3.
  6. Copy the data over, whether manually or via zfs send/recv. It should fit, because step #1 was "stop adding more data."
  7. Make sure all of the data is there.
  8. No, really. Make sure it's all there.
  9. Destroy your old RAIDZ3/mirror mess (15 free drives)
  10. Create a new pool from 11 drives, RAIDZ3 (4 free drives)
  11. Copy the data back to the new-new pool
  12. See steps #7 and #8
  13. Destroy the 7-drive RAIDZ3 pool (11 free drives)
  14. Extend your 11-drive RAIDZ3 pool with another 11-drive RAIDZ3 vdev.
Total usable at the end should be roughly 16x4=64TB, with three-drive failure tolerance in each of the two vdevs.
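For reference, the pool-level operations behind steps 5, 10 and 14 would look roughly like the following from the shell; every pool and disk name below is a placeholder, and in FreeNAS you would normally do all of this through the volume manager in the GUI rather than by hand:

Code:
# step 5: temporary pool as a single 7-drive RAIDZ3 vdev
zpool create temppool raidz3 da11 da12 da13 da14 da15 da16 da17

# step 10: final pool as a single 11-drive RAIDZ3 vdev
zpool create newpool raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10

# step 14: extend the final pool with a second 11-drive RAIDZ3 vdev
zpool add newpool raidz3 da11 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21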

During these steps, you might want to consider using a larger block size if you're natively storing Veeam backups. FreeNAS defaults the iSCSI "volblocksize" to 16K, and Veeam backups write in much larger blocks than that. Since I'm assuming you don't want to switch to NFS, you'll have to set the block size at creation of the zvol (it's immutable afterwards).
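If that zvol ends up being created from the shell rather than the GUI, setting a larger block size at creation time would look something like this; the name, size and 128K value are only examples, pick whatever matches the backup workload:

Code:
# sparse zvol with a larger volume block size for big sequential backup writes
zfs create -s -V 10T -o volblocksize=128K newpool/veeam-backups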
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Device removal would be awesome here, but I'm pretty sure it doesn't work if there are RAIDZ vdevs...
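(For anyone finding this later: the feature in question is top-level device removal, i.e. something like zpool remove tank da7 with your own pool and disk names, which would evacuate a single striped disk onto the rest of the pool; as said above, it is not expected to work on a pool that also contains RAIDZ vdevs.)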
 

Kantos

Cadet
Joined
Mar 22, 2017
Messages
7
Thanks for your replies.
Is there any way to remove the 4TB drives that were just added as stripes and are unused? The additional storage added with them hasn't been used in any way and it is not integrated into the RAIDZ3, as you may have noticed in the pictures attached to the thread.
If we manage to remove these 4 unused drives, then we may just add the right number of additional 6TB drives to the current RAIDZ3 storage built on 6TB HDDs without having to redo everything, right?
By the way, if it's possible, how many drives should I add to the current 7-HDD RAIDZ3 volume? I mean, can I just add 4 HDDs or is there a minimum number of HDDs to add?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Is there any way to remove the 4TB drives that were just added as stripes and are unused?
Not with any released version of FreeNAS. It's unclear if this would be possible with 11.2-BETA. Edit: Confirmed this is not possible.
If we manage to remove these 4 unused drives, then we may just add the right number of additional 6TB drives to the current RAIDZ3 storage built on 6TB HDDs without having to redo everything, right?
Not without creating a new RAIDZn vdev. It isn't currently possible to expand an n-disk RAIDZ vdev to an n+1-disk RAIDZ vdev. If you're creating a new RAIDZ3 vdev, the minimum number of disks would be five, but you'd lose three disks' worth of storage to parity.
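To put numbers on it: a 5-wide RAIDZ3 vdev of 6TB disks gives roughly (5 − 3) × 6TB = 12TB of usable space before ZFS overhead, while a 7-wide one gives roughly (7 − 3) × 6TB = 24TB.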
 

Kantos

Cadet
Joined
Mar 22, 2017
Messages
7
OK, thanks for your clear and complete reply. I'll follow the plan proposed by HoneyBadger (transfer everything to another NAS, destroy, rebuild an 11x 6TB HDD RAIDZ3 pool and transfer back to the new RAIDZ).
Thank you very much for all your answers, which allowed me to move forward to a sound solution for this situation :smile:
Best regards
Laurent
 