How to migrate ZFS raidz1 to mirror in-place?

Is it possible to migrate from a 4-drive raidz1 to a 2-drive mirror, in-place, in a 4-bay server?

Assuming backups have been done and the 4-drive raidz1 has been scrubbed, I would expect to be able to do something like this:
  1. Remove one of the 4 raidz1 drives. Pool now shows as degraded. Vulnerable, but with a backup.
  2. Add a new, much larger drive.
  3. Somehow create a mirror consisting of the new drive and the degraded 3-drive raidz1. Wait for the resilvering to finish copying data to the new drive.
  4. Remove the original 3 raidz1 drives.
  5. Add the second much larger drive and mirror the one that is already there.
If it is impossible to do 3-5 above, is there a built-in ZFS function to properly copy all data from the degraded pool to a new one?
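For reference, steps 1 and 2 on their own look simple enough on the command line (a rough sketch with hypothetical pool and device names; the pool stays online, just without redundancy):
Code:
# Take one raidz1 member offline so its bay can be freed up for the new, larger drive
zpool offline PoolA ada3
zpool status PoolA    # the pool should now report itself as DEGRADED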

Many thanks for helping a newbie. I realise this cannot be done using the FreeNAS GUI, and I am afraid I am not yet experienced enough with the interplay of the zpool and zfs commands.
 
Ah, I think 3-5 might be impossible. I just read in man zpool:
Virtual devices cannot be nested, so a mirror or raidz virtual device can only contain files or disks. Mirrors of mirrors (or other combinations) are not allowed.

If so, what is the recommended route to solving this? Ideally, the new pool could end up with the same name as the original one.
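As for keeping the original name, my understanding is that a pool can be renamed when it is imported, so once the old pool is gone the new one could take over its name (a sketch with the same hypothetical pool names, command line only):
Code:
# After the original PoolA no longer exists
zpool export PoolB
zpool import PoolB PoolA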
 

Ericloewe

Server Wrangler
Moderator
As it is impossible to do 3-5 above, is there a built-in zfs function to properly copy all data from the degraded pool to a new one?
Of course, it's half the reason ZFS is so awesome. zfs send and zfs recv.
Just recursively snapshot your entire pool and send that snapshot to the new pool.
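Something along these lines, with made-up pool and snapshot names (the receive-side naming flags are discussed further down the thread):
Code:
# Recursive snapshot of everything in the source pool
zfs snapshot -r PoolA@migrate
# Send the whole snapshot hierarchy to the new pool in one stream
zfs send -R PoolA@migrate | zfs receive -v PoolB/migrated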
 
Of course, it's half the reason ZFS is so awesome. zfs send and zfs recv.
Just recursively snapshot your entire pool and send that snapshot to the new pool.
Indeed, I have been trying that for the last half an hour, using something along the lines of zfs send -R PoolA@migrate11aug17 | zfs receive -ev PoolB

It will take a while before I see whether everything has moved as it should have. The only niggle so far is that it creates the original root-level dataset inside the top-level dataset of the destination pool, rather than in its root, so I end up one level deeper than needed. I suppose my syntax is incorrect.
 

Ericloewe

Server Wrangler
Moderator
Oh, right, that happens. No, your syntax is not the problem; it's an unfortunate side-effect of how ZFS is implemented.

The workaround is to replicate the individual datasets by hand to get them to show up at the correct level.
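Roughly like this, with hypothetical dataset names (reusing the snapshot from the earlier post):
Code:
# One send/receive per top-level dataset, so each lands directly under the pool root
zfs send -R PoolA/data@migrate11aug17 | zfs receive -v PoolB/data
zfs send -R PoolA/media@migrate11aug17 | zfs receive -v PoolB/media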
 

Ericloewe

Server Wrangler
Moderator
I'd never realized that was possible, but, sure enough, the FreeBSD ZFS manual mentions it pretty clearly.
 

PhilipS

Contributor
The only niggle so far is that it creates the original root-level dataset inside the top-level dataset of the destination pool, rather than in its root.

I've never tested replicating to the root of a pool, but is this possibly caused by the -e option on the zfs receive, since that option appends the last element of the source name to the target (giving PoolB/PoolA@migrate11aug17)?
 
I've never tested replicating to the root of a pool, but is this possibly caused by the -e option on the zfs receive, since that option appends the last element of the source name to the target (giving PoolB/PoolA@migrate11aug17)?
Possibly. I considered the -d and -e options in order to deal with recursively nested snapshots; I followed advice from a blog, to be honest. Not a major issue, as I can zfs rename things to move them up, but I am looking forward to understanding the syntax better next time. This form of replication for a file system is useful, and impressive.
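The rename step is simple enough, at least (with a hypothetical child dataset name):
Code:
# Move a dataset up one level, out of the extra PoolB/PoolA layer
zfs rename PoolB/PoolA/data PoolB/data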
 

Stux

MVP
If your new mirror drive is large enough... replicate from a degraded Z1 to a single disk pool.

Then add a disk to the single disk pool to form a mirror. Add the other two disks as a mirror.
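Roughly like this, with hypothetical device names (and ignoring any FreeNAS-specific partitioning the GUI would normally do for you):
Code:
zpool create NewPool da4            # single-disk pool on the first large drive
# ...replicate from the degraded raidz1 into NewPool here...
zpool attach NewPool da4 da5        # attach the second large drive to form a two-way mirror
zpool add NewPool mirror da6 da7    # optionally add two more disks as a second mirror vdev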
 
If your new mirror drive is large enough... replicate from a degraded Z1 to a single disk pool.

Then add a disk to the single disk pool to form a mirror. Add the other two disks as a mirror.
May I ask what you mean by replicate: is it something different from the piped zfs send to zfs receive that I have used? If there is a better way, I'd be glad to learn it, as I am sure this need will arise again. Thanks.
 

Stux

MVP
ZFS send/receive is called replicating :)
 

Arwen

MVP
Oh, right, that happens. No, your syntax is not the problem; it's an unfortunate side-effect of how ZFS is implemented.

The workaround is to replicate the individual datasets by hand to get them to show up at the correct level.
Uh, I have done two full root-pool backups (though on Linux) to alternate media; a third is in progress. They had the correct levels and were bootable. These are the commands I use:
Code:
# Destination pool on the SDXC card partition (legacy mountpoints, lz4, POSIX ACLs)
zpool create -o ashift=12 -o comment="SDXC root pool" \
  -O mountpoint=legacy -O compression=lz4 -O aclinherit=passthrough \
  -O acltype=posixacl sdxcpool sdd3

# Recursive snapshot of the whole root pool, then replicate it with properties (-p)
zfs snapshot -r rpool@temp

zfs send -Rpv rpool@temp | zfs receive -dFu sdxcpool

# Remove the temporary snapshots on both pools
zfs destroy -rv rpool@temp
zfs destroy -rv sdxcpool@temp

Poking around, I think it's the -d option to zfs receive that corrects the FS levels, while -e does something different.
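If I am reading the man page right, the name mapping works out like this (using my pool names from above, plus the PoolA/PoolB example from earlier in the thread):
Code:
# zfs receive -d drops only the leading pool name from each sent dataset name:
#   rpool          -> sdxcpool
#   rpool/home     -> sdxcpool/home
#   rpool/var/log  -> sdxcpool/var/log
# zfs receive -e keeps only the last element, which is why PoolA landed at PoolB/PoolA.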

PS: To those wondering why I copied my root pools to SDXC media: all 3 of my Linux computers have built-in SDXC slots, so these are my backup boot devices. They allow me to fix my main root pool without hunting for other bootable media with ZFS on Linux. In the past they were EXT4, but with ZFS I get verification of the data, so I can tell when my SD cards start failing.
 