Recovering jails after migrating to new pool

teawrecks

Cadet
Joined
May 7, 2018
Messages
5
Hi, I recently set up a new pool using new drives on a separate machine, and used 'zfs send' over ssh to migrate the 3 datasets (iocage, jails, share) that were contained in my old pool to the new one.
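
For reference, the migration was roughly the following for each dataset ('oldpool', 'newhost', and the snapshot name are stand-ins; the exact flags may have differed):

Code:
# on the old machine: take a recursive snapshot, then send the whole tree
zfs snapshot -r oldpool/iocage@migrate
zfs send -R oldpool/iocage@migrate | ssh root@newhost zfs recv -F Nydus/iocage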

The new system recognizes all my jails, but on every plugin jail the mount points were mangled and the jail wouldn't boot.
Example:
[Attachment: mountpoints.PNG — screenshot of the jail's mount point list]

My one manually created jail wasn't affected: its mount points looked fine and it boots fine.

I figured it would be as simple as removing the extra entries so that only the mount points I want (the last two in the picture) were left (though the second-to-last one is still wrong; it has that [14, ' prefix). After removing the bogus entries, the jails still wouldn't boot. From what I found on the forums, those entries are hidden mount points into the 11.2-RELEASE base image that the plugins need in order to function.
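
For anyone following along: each jail's mount points live in a per-jail fstab under the iocage dataset. Assuming the pool is mounted under /mnt as usual on FreeNAS, this is where I was removing entries (if I'm reading the iocage docs right, 'fstab -e' opens the same file in an editor):

Code:
# show the mount point entries for the plex jail
cat /mnt/Nydus/iocage/jails/plex/fstab

# or edit them through iocage itself
iocage fstab -e plex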

As an example, my plex jail wasn't booting, so I installed a fresh plex jail. This one booted fine, but obviously didn't have any of my data. Looking at the dataset hierarchy, I noticed that the two jails had roots like:
Code:
Nydus/iocage/jails/plex/root    - 4GB
Nydus/iocage/jails/plex_2/root  - 500MB

So I thought, "if I just clone the root from my old plex jail into the new plex jail, everything will boot fine." I created a snapshot of the old root, then ran (note that zfs dataset names take no leading slash):

Code:
zfs snapshot Nydus/iocage/jails/plex/root@manual-plex
zfs destroy Nydus/iocage/jails/plex_2/root
zfs clone Nydus/iocage/jails/plex/root@manual-plex Nydus/iocage/jails/plex_2/root

plex_2 then booted fine, and all my data was back!

Here's where I messed up: I figured, "great, now that it's cloned I can delete the old jail, rename the new one, and I'm back in business." That's not how it works. In the Jails menu I deleted "plex", which also destroyed the snapshot of plex's root, and since plex_2's root was a clone of that snapshot, plex_2 went down with it. So now my data for that jail is gone.
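
In hindsight, ZFS can show that blast radius before anything is destroyed: -n does a dry run, -v prints what would go, and -R includes dependent clones like plex_2's root.

Code:
# where did plex_2's root come from?
zfs get origin Nydus/iocage/jails/plex_2/root

# dry-run the destroy to list every dependent dataset it would take along
zfs destroy -nvR Nydus/iocage/jails/plex/root@manual-plex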

1) It's not the end of the world; I still have my original drives and could get the data back (a re-send along the lines of the sketch below is my fallback), but if anyone knows a quick way to undelete that jail/snapshot, that'd be great.
2) Any suggestions on how I can get these jails up and running again? I was almost there with the clone of the root, but I don't want a bunch of dud jails sitting around just to keep their snapshots alive.
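
For question 1, my fallback plan is to attach the old drives, import the old pool under an alternate root, and re-send just the lost jail dataset (again, 'oldpool' and the snapshot name are placeholders):

Code:
# import the old pool under an alternate root so its mountpoints don't collide
zpool import -R /mnt/old oldpool

# snapshot just the lost jail dataset and send it into the new pool
zfs snapshot -r oldpool/iocage/jails/plex@rescue
zfs send -R oldpool/iocage/jails/plex@rescue | zfs recv Nydus/iocage/jails/plex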

Thanks
 

teawrecks

Cadet
Joined
May 7, 2018
Messages
5
Ran 'zpool history', and yeah, it looks like iocage went through like the mob and killed everyone related to plex before killing plex itself:

Code:
2020-02-22.15:24:36 zfs snapshot Nydus/iocage/jails/plex/root@manual-plex
2020-02-22.15:26:18 zfs destroy Nydus/iocage/jails/plex_2/root
2020-02-22.15:27:29 zfs clone Nydus/iocage/jails/plex/root@manual-plex Nydus/iocage/jails/plex_2/root
2020-02-22.15:31:26 <iocage> zfs destroy manual-plex
2020-02-22.15:31:26 <iocage> zfs destroy Nydus/iocage/jails/plex_2/root
2020-02-22.15:31:27 <iocage> zfs destroy Nydus/iocage/jails/plex_2
2020-02-22.15:31:27 <iocage> zfs destroy Nydus/iocage/jails/plex/root@manual-plex
2020-02-22.15:31:27 <iocage> zfs destroy plex
2020-02-22.15:31:30 <iocage> zfs destroy Nydus/iocage/jails/plex/root
2020-02-22.15:31:39 <iocage> zfs destroy Nydus/iocage/jails/plex
 

teawrecks

Cadet
Joined
May 7, 2018
Messages
5
For anyone who is curious, the command I should have run after cloning was 'zfs promote Nydus/iocage/jails/plex_2/root'. Promotion inverts the parent-child dependency between the original dataset and the clone: the origin snapshot moves over to the promoted clone, so the old jail can be safely deleted without touching the new one.
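
Put together, the sequence that would have worked on my datasets:

Code:
# snapshot the old jail's root and clone it over the new jail's root
zfs snapshot Nydus/iocage/jails/plex/root@manual-plex
zfs destroy Nydus/iocage/jails/plex_2/root
zfs clone Nydus/iocage/jails/plex/root@manual-plex Nydus/iocage/jails/plex_2/root

# make the clone independent: the origin snapshot now belongs to plex_2's root
zfs promote Nydus/iocage/jails/plex_2/root

# now deleting the old plex jail no longer takes plex_2 with it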

Still not sure why the mount points were messed up in the first place, and still not sure whether I can recover the lost plex dataset, though it doesn't seem likely.
 