Help needed to recreate an existing RAIDZ2 pool with extra disks to produce a larger pool

zed_thirteen

Cadet
Joined
Oct 6, 2013
Messages
3
Hi, I have searched for an answer but cannot find anything that matches my particular need - several similar threads, but none that quite fit.

I have a TrueNAS Scale system which currently has a single RAIDZ2 pool (VOLUME_1) consisting of 5x6TB disks which I want to increase to a single RAIDZ2 pool of 8x6TB disks.

Currently the pool has 3.78TB of data

I purchased another 4x6TB drives for the upgrade - 3 to increase the pool from 5 to 8 disks, and an extra one to hold all of the data while the existing 5-disk pool is destroyed and recreated with 8 drives.

I'm a little uncertain of the process. I'm trying to follow: https://www.truenas.com/community/threads/replacing-all-disks-in-a-pool.103008/post-708805 from danb35

I believe I need to:
  • Create a new pool (VOLUME_2) on the single disk (same host)
  • Use Data Protection->Replication Tasks to replicate a recursive snapshot to the new pool
  • Destroy the existing pool (do I need to export?)
  • Add the 3 new disks to the system and create a new RAIDZ2 pool with all 8 disks

One concern I have is that the replication task I created says the Last Snapshot was VOLUME_1@auto-2021-02-0..., which is quite old. My snapshot schedule shows daily at midnight (this may have been created along with the replication task). Should I cancel the replication task, force a fresh snapshot, and then start replication again?
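(I'm guessing a manual one from the shell would just be zfs snapshot -r VOLUME_1@fresh - the snapshot name there is only an example - but I'd rather check before doing anything.)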

Is there a way to validate the data on the new pool before I destroy the old one?
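My rough idea for a check, assuming both pools end up mounted under /mnt as usual, was a checksum dry-run with rsync - but I don't know if that's overkill:
  • rsync -rcn -i /mnt/VOLUME_1/ /mnt/VOLUME_2/ - with -n nothing is actually copied, and any output should mean something differs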

This is where I get a bit woolly. Do I then create a replication task to send a snapshot of VOLUME_2 back to the new, larger VOLUME_1?

Thanks in anticipation
David
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Use Data Protection->Replication Tasks to replicate a recursive snapshot to the new pool
For the sake of knowing what's actually happening, I'd likely do this at the CLI instead:
  • zfs snapshot -r VOLUME_1@migration
  • zfs send -R VOLUME_1@migration | zfs recv VOLUME_2 - you'll probably want to do this in a tmux session, as it can take a lot of time.
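If you're not familiar with tmux, the basic workflow is roughly this (it should already be on SCALE, but check first):
  • tmux new -s migration - start a named session and run the send/recv inside it
  • Ctrl-b, then d - detach and leave it running in the background
  • tmux attach -t migration - reattach later to check on progress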
Then do a cleanup migration to catch any changes to VOLUME_1 while that was running:
  • zfs snapshot -r VOLUME_1@cleanup
  • zfs send -R -i VOLUME_1@migration VOLUME_1@cleanup | zfs recv VOLUME_2
Then make sure the data actually appears on VOLUME_2, because the next step will be destructive. Once you're sure, destroy VOLUME_1. To do that, go to the Storage dashboard and click Export/Disconnect for VOLUME_1. Check the box for "destroy data", uncheck the box for "delete configuration", and check the box for "confirm."
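For that verification (before you do any of the destroying), something like zfs list -r -o name,used VOLUME_1 VOLUME_2 gives a quick side-by-side of the space used on each pool - the figures won't match exactly, but they should be close - and it's worth spot-checking a handful of files over your shares as well. Treat that as a sanity check rather than proof.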

Then create your new pool. Make sure the cleanup snapshot is present on VOLUME_2 (zfs list -t snapshot should list the snapshots; you should see one named VOLUME_2@cleanup). Then send it back with zfs send -R VOLUME_2@cleanup | zfs recv VOLUME_1 (again, you'll want to do this in a tmux session, as it will take some time). Once that's done, your data is back on VOLUME_1.
 

Patrick_3000

Contributor
Joined
Apr 28, 2021
Messages
167
Maybe this goes without saying, but if I were you, I'd back up all the data on the existing pool to a separate computer, the cloud, or both, before doing this, in case anything goes wrong with the process. I realize you're replicating a snapshot of the existing pool to the new pool first (if I'm understanding correctly), so you should be fine, but it would be advisable to have some extra redundancy before starting.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Agreed. If anything goes south on the single-disk VOLUME_2 between the time you destroy VOLUME_1 and the time that all the data is back on it, you're in a bad way.

Another option--though this is getting more advanced--is to create VOLUME_2 as a mirror, and create VOLUME_1 as a degraded RAIDZ2. I have a resource about the latter. That would maintain at least some redundancy on your data the whole time, at the expense of more CLI work.
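Very roughly, the degraded-RAIDZ2 trick looks like this - device names are placeholders and this is the gist rather than a copy-paste recipe; the resource has the real details:
  • truncate -s 6T /root/sparsefile - make a sparse file to stand in for the eighth disk
  • zpool create VOLUME_1 raidz2 sda sdb sdc sdd sde sdf sdg /root/sparsefile
  • zpool offline VOLUME_1 /root/sparsefile && rm /root/sparsefile - the pool now runs as a degraded 8-wide RAIDZ2
  • zpool replace VOLUME_1 /root/sparsefile /dev/sdh - later, once a real disk is freed up (or do the replace from the GUI)
You'd then export the pool and import it through the GUI so the middleware knows about it.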
 

zed_thirteen

Cadet
Joined
Oct 6, 2013
Messages
3
I don't have enough space on any of my other home equipment, so I will buy another 6TB drive - that way the copy is at least mirrored.
 

zed_thirteen

Cadet
Joined
Oct 6, 2013
Messages
3
For the sake of knowing what's actually happening, I'd likely do this at the CLI instead:
  • zfs snapshot -r VOLUME_1@migration
  • zfs send -R VOLUME_1@migration | zfs recv VOLUME_2 - you'll probably want to do this in a tmux session, as it can take a lot of time.
Then do a cleanup migration to catch any changes to VOLUME_1 while that was running:
  • zfs snapshot -r VOLUME_1@cleanup
  • zfs send -R -i VOLUME_1@migration VOLUME_1@cleanup | zfs recv VOLUME_2
Then make sure the data actually appears on VOLUME_2, because the next step will be destructive. Once you're sure, destroy VOLUME_1. To do that, go to the Storage dashboard and click Export/Disconnect for VOLUME_1. Check the box for "destroy data", uncheck the box for "delete configuration", and check the box for "confirm."

Then create your new pool. Make sure the cleanup snapshot is present on VOLUME_2 (zfs list -t snapshot should list the snapshots; you should see one named VOLUME_2@cleanup). Then send it back with zfs send -R VOLUME_2@cleanup | zfs recv VOLUME_1 (again, you'll want to do this in a tmux session, as it will take some time). Once that's done, your data is back on VOLUME_1.
Thank you so much danb35. This was exactly what I needed and worked a treat. Just had to add -F to the zfs recv commands. All back up and running now with no data loss and shares, apps, etc. appear to be working properly too. :smile:
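For anyone finding this thread later: the receive back ended up looking something like zfs send -R VOLUME_2@cleanup | zfs recv -F VOLUME_1 - the -F was needed because the freshly created pool already has a root dataset that the stream has to overwrite.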
 