Storage migration strategy from Raidz1 to Raidz2

zeusthabomb

Cadet
Joined
Aug 1, 2022
Messages
9
Hello everyone,

This topic has probably been discussed so many times that you're tired of it, but there's so much information out there that I'm a bit lost.

I have an HP MicroServer Gen8 running TrueNAS SCALE, with 4x1TB HDDs in RAIDZ1 that I used for testing and playing around. I have now bought 4x4TB IronWolf CMR disks and want to build a new pool with RAIDZ2.

As you know, the server only has 4 bays. Would it be possible to offline 2 of the old RAIDZ1 HDDs, replace them with 2 of the IronWolfs, create a RAIDZ2 pool on those, and migrate the data? And later remove the old HDDs and extend the RAIDZ2 pool?

How should I approach this? The data isn't important, so I could just delete the pool and start over, but I want to learn how I would do this migration, to understand ZFS a bit better and for future reference.

Thanks !
 

Alecmascot

Guru
Joined
Mar 18, 2014
Messages
1,177
You cannot convert from RaidZ1 to RaidZ2 without a pool destroy/rebuild.
 

zeusthabomb

Cadet
Joined
Aug 1, 2022
Messages
9
You cannot convert from RaidZ1 to RaidZ2 without a pool destroy/rebuild.
Exactly, that's why I wanted to remove 2 disks from the RAIDZ1, leave it in a degraded state, add 2 new disks, create a RAIDZ2 on those, and migrate the data. But I don't know if that's even possible, or whether I could extend it later.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
You can't create a 2-wide raidz2, and you can't (yet) enlarge a raidzN vdev after the fact.
You can't offline two drives in a 4-wide raidz1 either: it can only lose one drive.

Best go straight for backup-destroy-create-restore.

If you have no other system to replicate across the network, no external enclosure to attach additional drives (even over USB…) and no further drive/NAS to hold backups, the last remaining possibility would be (a rough command sketch follows the list):
1. Export the raidz1 pool and take out all drives.
2. Plug in new drives, create 4-wide raidz2 and then export the new pool.
3. Plug in 3 drives of the raidz1 pool and import it as degraded.
4. Plug in one of the new drives (from the newly created raidz2), wipe it and make it a single drive pool.
5. Replicate from the (degraded) raidz1 to the single drive.
6. Export the raidz1 pool and take out its drives.
7. Plug in the remaining 3 drives from the raidz2 pool and import as degraded.
8. Replicate from the single drive to the degraded raidz2.
9. Wipe the single drive and use it to replace the missing drive in the raidz2 pool.
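
For anyone who wants the raw CLI view of steps 4–9, here is a minimal sketch. All pool names (tank, tank2, temp) and device names (/dev/sdX) are placeholders of mine, the GUID of the missing drive comes from `zpool status`, and you'd adapt everything to your own layout; treat it as strictly at-your-own-risk:

```sh
# Step 4: wipe one of the new drives and turn it into a single-drive pool
zpool labelclear -f /dev/sde          # destroys any existing label!
zpool create temp /dev/sde

# Step 5: replicate a recursive snapshot from the degraded raidz1
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F temp/backup

# Steps 6-7: swap the drive sets, then import the (now degraded) raidz2
zpool export tank
zpool import tank2

# Step 8: replicate back into the raidz2
zfs send -R temp/backup@migrate | zfs recv -F tank2/restore

# Step 9: free the single drive and use it to heal the raidz2
zpool destroy temp
zpool replace tank2 <GUID-of-missing-drive> /dev/sde
```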
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
would it be possible to offline 2 of the old RAIDZ1 HDDs, replace them with 2 of the IronWolfs, create a RAIDZ2 pool on those, and migrate the data?
You can't offline two disks from RAIDZ1, and you can't create RAIDZ2 with two disks either*.

but I want to learn how I would do this migration, to understand ZFS a bit better and for future reference.
"destroy and rebuild the pool" is the only answer here, but the sticky point is migrating the data, and that's complicated by the fact that you have no spare bays in your system. A possible way to do it (though very much at your own risk) would be:
  • Offline one disk from the RAIDZ1 pool and remove it
  • Install one of the 4TB disks and create a new pool (call it pool2) on it
  • Copy all data to the new disk
  • Export/detach the RAIDZ1 pool and remove its remaining disks
  • Install the remaining 4TB disks and create a degraded RAIDZ2 pool (call it pool3) on them (link below)
  • Copy everything from pool2 to pool3
  • Resilver the disk from pool2 into pool3
*OK, you can, but you're pretty much playing with fire. This guide explains how, though it's written for CORE (actually, it's written for FreeNAS, but...), so some of the commands may be different. But here it is, with the caution to use at your own risk:
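
The core trick in guides like that one, in rough outline, is to stand a sparse file in for the missing disk and offline it before any data ever lands on it. A minimal sketch of that idea for SCALE, with placeholder device names and drive size (and note the GUI won't know about a pool created this way until you import it there):

```sh
# Create a sparse file at least as large as the real disks (placeholder path/size)
truncate -s 4T /root/sparsefile

# Build the 4-wide RAIDZ2 from three real disks plus the sparse file
zpool create pool3 raidz2 /dev/sdb /dev/sdc /dev/sdd /root/sparsefile

# Immediately offline the file vdev so nothing is written to it,
# leaving the pool DEGRADED but usable
zpool offline pool3 /root/sparsefile
rm /root/sparsefile

# Later, once the last real disk is free:
#   zpool replace pool3 /root/sparsefile /dev/sde
```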
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@Etorix & @danb35 - You both forgot the golden rule:

Here be dragons!

Either of those plans can work. But if you need someone to spell out the details, then in general you probably don't yet have the skill to pull it off without data loss. Just saying / warning.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You both forgot the golden rule:
Well, if you follow my link, you get that warning:
[screenshot of the warning from the linked guide]
 

zeusthabomb

Cadet
Joined
Aug 1, 2022
Messages
9
@Etorix & @danb35 - You both forgot the golden rule:

Here be dragons!

Either of those plans can work. But if you need someone to spell out the details, then in general you probably don't yet have the skill to pull it off without data loss. Just saying / warning.
That might be true; however, the data isn't important and I want to learn. We all have to start somewhere.
 

zeusthabomb

Cadet
Joined
Aug 1, 2022
Messages
9

How to migrate data from one pool to another


Warning
This can cause data loss; always take a backup first!

  1. Export the existing pool and take out all drives.
  2. Plug in the new drives, create the new pool, then export the new pool and remove all drives.
    1. You might need to rescan the controllers to see the new disks: `echo "- - -" > /sys/class/scsi_host/host*/scan`.
  3. Plug in 3 drives from the old pool and import it as degraded.
    1. You might need to rescan the controllers to see the disks: `echo "- - -" > /sys/class/scsi_host/host*/scan`.
  4. Plug in 1 drive from the new pool, wipe it, and create a single-drive pool.
    1. You might need to rescan the controllers to see the disk: `echo "- - -" > /sys/class/scsi_host/host*/scan`.
  5. Replicate from the degraded pool into the single drive with rsync (e.g. `rsync -avzh --progress /mnt/old_pool /mnt/temporary_pool`); a zfs send / receive alternative is sketched after this list.
  6. Export the old pool and take out its drives.
  7. Plug in the remaining drives from the new pool and import it as degraded.
  8. Replicate from the single drive to the new degraded pool, ideally straight into the final datasets.
  9. Export the single-drive pool, wipe it, and use it to replace the missing drive in the newly created pool.
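
For what it's worth, zfs send / receive would also work for steps 5 and 8 and, unlike rsync, preserves datasets, snapshots, and properties. A minimal sketch, reusing the example pool names from the list above:

```sh
# Step 5 equivalent: snapshot the degraded pool and replicate it
zfs snapshot -r old_pool@move
zfs send -R old_pool@move | zfs recv -F temporary_pool/backup

# Step 8 equivalent: replicate back into the new pool
zfs send -R temporary_pool/backup@move | zfs recv -F new_pool/data
```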

Thanks @Etorix and @danb35, it worked like a charm.
My media was all there; as soon as I exported the NFS share, the Plex server had everything ready.
 

rmccullough

Patron
Joined
May 17, 2018
Messages
269
I plan to do something similar when I upgrade my disks. I have a 12-bay case, with 9 x 2TB drives in a single vdev and a single pool (RAIDZ2).

I plan to upgrade to 6 disks (somewhere between 8-14TB each, depending on what I can find for a good deal) in the next year. I was thinking what I would need to do is detach the current pool1, insert the 6 disks for pool2, and create that vdev & pool in RAIDZ2. Then remove 1 or 2 of those disks, put in 7 or 8 of the pool1 disks, and import it. Then copy the data from pool1 > pool2.

I know this isn't the best practice.

Which would be "less" risky? Running pool1 with minimum disks, or pool2 with minimum disks? I was thinking pool1 since I will only be reading from it and not writing to it. Is this true? Am I doing something ridiculously stupid?
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
So you have 14TB of usable space in that pool...

If you go with 14TB drives, then you can do a basic mirror with 2x 14TB using your currently free bays.
Once that mirror is created, you can zfs send / receive your existing data to that mirror (sketched below).
You then remove the old drives and the old pool.

From here, you can :
--Add more mirrors to your new pool. No need to move data anymore, but your vdevs will end up unbalanced and fragmented
or
--Create a new pool with 6 drives in any layout you wish (RAID-Z2 or mirrors would be best) and migrate your data back to that one. You then keep the previous 2x 14TB as spare drives (always good to have spares...)
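
A minimal sketch of the mirror-and-replicate step, assuming placeholder device and pool names:

```sh
# Create the temporary 2x14TB mirror in the free bays
zpool create newpool mirror /dev/sdj /dev/sdk

# Replicate everything from the old pool, snapshots and properties included
zfs snapshot -r pool1@move
zfs send -R pool1@move | zfs recv -F newpool/data

# Retire the old pool once the data is verified
zpool export pool1
```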

Also, never forget about backups and be sure to have a good backup strategy in place. Any single server, TrueNAS or otherwise, is still a single point of failure.
 

rmccullough

Patron
Joined
May 17, 2018
Messages
269
I know this is an old post, but I did my upgrade a couple of weeks ago. Went w/ 7 x 14TB. I used 1 of the drives to back up the data off my existing pool. I was foolish and didn't use zfs send / receive, and instead just rsync'd the data.

After that, I pulled the 9 old drives. At this point, I think my server rebooted; I am not sure why. I went back upstairs, moved the remaining 6 new drives into the drive caddies from the 9 old drives, and stuck them in the server chassis. I created the new pool w/ the 6 drives and started rsyncing the data from the single drive to the new pool.

Any idea why the server rebooted? I pulled the 9 drives one after the other, and it seemed to happen when I did that. I don't recall hitting the power button.

It all seemed to go pretty smoothly. I had to switch over some jobs, shares, jail mounts, and the system dataset to the new pool, but I was able to start my jails and everything seems to be working. The upgrade was actually a lot easier than I would have thought.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Any idea why the server rebooted?
Even a fluctuation in the power line can do that, so such a reboot may be 100% unrelated to TrueNAS... There are just too many things that can reboot a server; a single reboot is something to be expected under almost any conditions.

I pulled 9 drives one after the other. Seemed to happen when I did that.
Are you sure that everything in your setup was hot-plug capable? The bay, the caddy, the drive... all of them must be hot-plug capable. If any of them is not, you can end up with something like that.
 

rmccullough

Patron
Joined
May 17, 2018
Messages
269
I have a UPS and dual PSUs, so I assumed it wouldn't be a power issue.

I guess I assumed it was all hot-pluggable, but possibly not. Historically I haven't had problems removing/adding/replacing drives while the system is running, but I'm not sure.
 