Is it simple to move my plugin jail to another pool?

CraftyClown

Patron
Joined
Apr 5, 2014
Messages
214
Hi all,

I have some sector errors popping up on one of my drives and I thought this might be a good time to upgrade my pools.

My existing setup is three separate pools over 6 disks, all mirrored.

What I'm planning to change to is a single RAIDZ2 made up of 4 x 4TB WD Reds.

To do this, I will need to wipe clean one of my existing mirrors, but that mirror also contains my jails. I'm wondering how easy it will be to move my jails temporarily? I had considered moving them to a Windows share whilst I swap the drives out, but I have a bad feeling that will totally mess up the permissions.

Any advice greatly appreciated :)


EDIT:

I found the following instructions on how best to move the jails:

Assumptions:
  • The pool you are transferring the jails from is main_pool
  • The destination pool is ssd_pool
  • The jail root (Jails->Configuration) is /mnt/main_pool/jails
  • The new jail root will be /mnt/ssd_pool/jails
Steps:
  1. Turn off all plugins (Plugins->Installed)
  2. Stop all jails (Jails->View Jails)
  3. Run these commands via CLI:[PANEL]zfs snapshot -r main_pool/jails@relocate
    zfs send -R main_pool/jails@relocate | zfs receive -v ssd_pool/jails
    zfs get -rH -o name -s received mountpoint ssd_pool/jails | xargs -I {} sh -c "zfs set mountpoint=/{} {}; zfs mount {};"[/PANEL]
  4. Change the Jail Root to /mnt/ssd_pool/jails (Jails->Configuration)
  5. Start jails/plugins
  6. Check that everything works and destroy the original jails dataset (main_pool/jails)
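
For step 6, the cleanup would presumably look something like this (a sketch with the same assumed pool names - double-check the new copy before destroying anything):

Code:
# confirm everything made it across to the new pool:
zfs list -r ssd_pool/jails
# then remove the old copy, child datasets and snapshots included:
zfs destroy -r main_pool/jails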

As I need to use the drives from the original pool as part of the new pool, should I do the above process twice? Once to move the jails to a temp location and then again once the raidz2 pool has been built?
 
Last edited:

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
As I need to use the drives from the original pool as part of the new pool, should I do the above process twice? Once to move the jails to a temp location and then again once the raidz2 pool has been built?
I can't vouch for those instructions, but if you need to reuse drives from your old pool in your new pool, you have no option but to move the data twice. Specifically, you'll have to pause after the zfs send | zfs receive in step 3 while you shuffle the drives around and create a new pool, then repeat a suitably modified version of the zfs send | zfs receive before continuing.
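
In outline it might look like this, assuming a spare single-disk pool called temp_pool and a final pool called tank (placeholder names, and an untested sketch of the same commands from step 3):

Code:
# hop 1: old pool -> temporary pool
zfs snapshot -r main_pool/jails@relocate
zfs send -R main_pool/jails@relocate | zfs receive -v temp_pool/jails

# ...destroy the old pool and build the new raidz2 pool "tank"...

# hop 2: temporary pool -> new pool
zfs snapshot -r temp_pool/jails@relocate2
zfs send -R temp_pool/jails@relocate2 | zfs receive -v tank/jails

After each hop you would repeat the mountpoint fix-up from step 3 and point the Jail Root at the jails' current location before starting them.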
 

CraftyClown

Patron
Joined
Apr 5, 2014
Messages
214
I can't vouch for those instructions, but if you need to reuse drives from your old pool in your new pool, you have no option but to move the data twice. Specifically, you'll have to pause after the zfs send | zfs receive in step 3 while you shuffle the drives around and create a new pool, then repeat a suitably modified version of the zfs send | zfs receive before continuing.

Just as an update and for anyone that comes across this at a later date, the instructions above worked like a dream.

I have moved my jails onto a spare drive, meaning I can now remove the 4TB drives from the old volume, build a new RAIDZ2 volume with them, and then move the jails back once finished.

Happy days :D
 

bernieTB

Dabbler
Joined
Mar 6, 2016
Messages
19
I can only confirm CraftyClown's post - this worked perfectly for me as well. I moved my jails from one mirrored volume to a new SSD mirrored volume and had no issues whatsoever!
 

RichTJ99

Patron
Joined
Sep 12, 2013
Messages
384
Hi,

I followed these instructions & it all seems to be good.

I get what the steps do:

1. Snapshot the original data source
2. Send the data to the new pool
3. zfs get -rH -o name -s received mountpoint Pool/Jails1 | xargs -I {} sh -c "zfs set mountpoint=/{} {}; zfs mount {};"[/PANEL]


What does this (item #3) do? It didn't work for me - I did change the path, but it gave an error about the mount point.

However, I did change all the paths and disconnected the other drives, and my jails are working, so I guess it's good?


Thanks,
Rich
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
Did you run step 3 with or without "[/PANEL]" attached to the end?
 

RichTJ99

Patron
Joined
Sep 12, 2013
Messages
384
I ran it exactly as pasted above.

I did disconnect the other pool, though, and I guess it seems OK. Or at least it works.
 

tenjuna

Dabbler
Joined
May 5, 2016
Messages
24
I just ran this and it worked like a charm (without the references to the PANEL tag, though; I assume those were BB code that didn't get parsed).
 

Vincèn

Patron
Joined
Oct 16, 2014
Messages
265
Check that everything works and destroy the original jails dataset (main_pool/jails)
I followed your tutorial for moving my jails to a separate storage volume and so far it works great - thanks for that ;)

Just one thing, to be sure I'm not making a terrible mistake: when you say destroy the original jails dataset, I should go to Storage -> View Volumes, select each of the old jail datasets (including the system one, named .warden) and choose Destroy Dataset for each, until there are none left in my old volume, right?

Thanks
 

Vincèn

Patron
Joined
Oct 16, 2014
Messages
265
Just one thing, to be sure I'm not making a terrible mistake: when you say destroy the original jails dataset, I should go to Storage -> View Volumes, select each of the old jail datasets (including the system one, named .warden) and choose Destroy Dataset for each, until there are none left in my old volume, right?
Can anyone let me know if I'm correct or not?

Thanks
 

Noctris

Contributor
Joined
Jul 3, 2013
Messages
163
I know I'm necroing this thread, but I just wanted to add some info I came across while doing this.

I was going from four pools to one, and in the process had to move the jails, destroy the four pools and create a new one, then move the jails back. I then moved them again to organise some data better. Somewhere along the line I may have missed a command, but either way I could not install new jails. After some looking around, I found that all the mountpoints had been updated EXCEPT those of the warden template datasets. I searched around here and found an easy fix for the problem:

Log in to the shell and check the mountpoints:

Code:
zfs list -o origin,name,used,avail,refer,mountpoint


(append | grep jail if you have many datasets)

For me this gave the following:

Code:
-                                                        tank/zjails                                                    14.8G  12.0T  20.8M  /mnt/tank/zjails
-                                                        tank/zjails/.warden-template-Freebsd-Standard-x64              2.84G  12.0T  2.84G  /mnt/tank/jails/.warden-template-Freebsd-Standard-x64
-                                                        tank/zjails/.warden-template-Ubuntu 13.04 - 64 Bit              654M  12.0T   654M  /mnt/tank/zjails/.warden-template-Ubuntu 13.04 - 64 Bit
-                                                        tank/zjails/.warden-template-VirtualBox-4.3.12                  850M  12.0T   850M  /mnt/tank/zjails/.warden-template-VirtualBox-4.3.12
-                                                        tank/zjails/.warden-template-pluginjail                         618M  12.0T   618M  /mnt/tank/jails/.warden-template-pluginjail
-                                                        tank/zjails/.warden-template-pluginjail--x64                    594M  12.0T   594M  /mnt/tank/zjails/.warden-template-pluginjail--x64
-                                                        tank/zjails/.warden-template-pluginjail--x64-20150624225527     594M  12.0T   594M  /mnt/tank/jails/.warden-template-pluginjail--x64-20150624225527
-                                                        tank/zjails/.warden-template-pluginjail-9.3-x64                 594M  12.0T   594M  /mnt/tank/jails/.warden-template-pluginjail-9.3-x64
tank/zjails/.warden-template-Freebsd-Standard-x64@clean  tank/zjails/FreeBSD_Server                                     5.55G  12.0T  6.79G  /mnt/tank/zjails/FreeBSD_Server
tank/zjails/.warden-template-pluginjail--x64@clean       tank/zjails/couchpotato_1                                       485M  12.0T  1.05G  /mnt/tank/zjails/couchpotato_1
tank/zjails/.warden-template-pluginjail@clean            tank/zjails/nzbhydra_1                                          228M  12.0T   845M  /mnt/tank/zjails/nzbhydra_1
tank/zjails/.warden-template-pluginjail--x64@clean       tank/zjails/sabnzbd_1                                           491M  12.0T  1.06G  /mnt/tank/zjails/sabnzbd_1
tank/zjails/.warden-template-pluginjail--x64@clean       tank/zjails/sonarr_1                                           1.03G  12.0T  1.61G  /mnt/tank/zjails/sonarr_1
tank/zjails/.warden-template-pluginjail--x64@clean       tank/zjails/transmission_1                                      323M  12.0T   915M  /mnt/tank/zjails/transmission_1



Here you can see that the mountpoint was not updated correctly for some of the warden templates - they still point at the old /mnt/tank/jails path (which is weird: why did it update the rest but not them?).

You can easily fix this by doing:

Example:
Code:
# the leading / is required, but no /mnt prefix (see the note below):
zfs set mountpoint=/tank/zjails/.warden-template-pluginjail tank/zjails/.warden-template-pluginjail


Note: MIND THE / at the beginning of the mountpoint. zfs will tell you it needs an absolute path (so no bare pool/dataset name), but if you give it the full on-disk path (/mnt/tank/zjails/... in my case) it will add an extra /mnt, resulting in a path like /mnt/mnt/tank/zjails/... This is because FreeNAS imports pools with altroot=/mnt, which gets prepended to every mountpoint. Here is what my listing looked like after making that mistake:

Code:
-                                                        tank/zjails                                                    14.9G  12.0T   135M  /mnt/tank/zjails
-                                                        tank/zjails/.warden-template-Freebsd-Standard-x64              2.84G  12.0T  2.84G  /mnt/mnt/tank/zjails/.warden-template-Freebsd-Standard-x64
-                                                        tank/zjails/.warden-template-Ubuntu 13.04 - 64 Bit              654M  12.0T   654M  /mnt/mnt/tank/zjails/.warden-template-Ubuntu 13.04 - 64 Bit
-                                                        tank/zjails/.warden-template-VirtualBox-4.3.12                  850M  12.0T   850M  /mnt/tank/zjails/.warden-template-VirtualBox-4.3.12
-                                                        tank/zjails/.warden-template-pluginjail                         618M  12.0T   618M  /mnt/mnt/tank/zjails/.warden-template-pluginjail
-                                                        tank/zjails/.warden-template-pluginjail--x64                    594M  12.0T   594M  /mnt/mnt/tank/zjails/.warden-template-pluginjail--x64
-                                                        tank/zjails/.warden-template-pluginjail--x64-20150624225527     594M  12.0T   594M  /mnt/mnt/tank/zjails/.warden-template-pluginjail--x64-20150624225527
-                                                        tank/zjails/.warden-template-pluginjail-9.3-x64                 594M  12.0T   594M  /mnt/mnt/tank/zjails/.warden-template-pluginjail-9.3-x64
tank/zjails/.warden-template-Freebsd-Standard-x64@clean  tank/zjails/FreeBSD_Server                                     5.55G  12.0T  6.79G  /mnt/tank/zjails/FreeBSD_Server
tank/zjails/.warden-template-pluginjail--x64@clean       tank/zjails/couchpotato_1                                       485M  12.0T  1.05G  /mnt/tank/zjails/couchpotato_1
tank/zjails/.warden-template-pluginjail@clean            tank/zjails/nzbhydra_1                                          228M  12.0T   845M  /mnt/tank/zjails/nzbhydra_1
tank/zjails/.warden-template-pluginjail--x64@clean       tank/zjails/sabnzbd_1                                           491M  12.0T  1.06G  /mnt/tank/zjails/sabnzbd_1
tank/zjails/.warden-template-pluginjail--x64@clean       tank/zjails/sonarr_1                                           1.03G  12.0T  1.61G  /mnt/tank/zjails/sonarr_1
tank/zjails/.warden-template-pluginjail--x64@clean       tank/zjails/transmission_1                                      323M  12.0T   915M  /mnt/tank/zjails/transmission_1



The zfs command also does rather bad parsing of the command line: I accidentally put a double space between the mountpoint and the dataset name, which resulted in a mountpoint of just /mnt (the part zfs adds by itself).

After updating the warden templates, all is well again and I can install jails once more.
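
If several templates need fixing, a loop saves some typing. This is just a sketch, assuming pool tank and jail root tank/zjails as above:

Code:
# list every dataset under the jail root, keep only the warden templates,
# and point each mountpoint at /<dataset name> (the altroot supplies /mnt):
zfs list -rH -o name tank/zjails | grep '\.warden-template' | xargs -I {} zfs set mountpoint=/{} {}
# mount anything that is still unmounted:
zfs mount -a

Because xargs -I treats each input line as a single unit, template names containing spaces (like the Ubuntu one above) still come through as one argument.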
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Just as an update and for anyone that comes across this at a later date, the instructions above worked like a dream.

I have moved my jails onto a spare drive, meaning I can now remove the 4TB drives from the old volume, build a new RAIDZ2 volume with them, and then move the jails back once finished.

Happy days :D
I am preparing for a reconfiguration of my storage pool and your post appears to be exactly the information I am looking for. Thanks for the info.

Edit:
I have a question about it, and I hope that you or someone else can answer. Does this allow you to move the jails to a smaller vdev? I have the jails on a pool of mirrored 1TB spinning disks and I want to move them to SSD, but it will have less total capacity. It will be much faster and still have more storage than the jails are using now. I feel sure it will work; I just wanted to ask before I jump into it.

Thanks in advance.
 
Last edited:

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I am preparing for a reconfiguration of my storage pool and your post appears to be exactly the information I am looking for. Thanks for the info.

I have a question about it, and I hope that you or someone else can answer. Does this allow you to move the jails to a smaller zvol? I have the jails on a pool of mirrored 1TB spinning disks and I want to move them to SSD, but it will have less total capacity. It will be much faster and still have more storage than the jails are using now. I feel sure it will work; I just wanted to ask before I jump into it.

Thanks in advance.
1. You can't move jails to a zvol. You can only have them on your pool, in a dataset.
2. An SSD will not be noticeably faster, and it's usually a waste.

Yes, you can move them. You need to snapshot the jails dataset, then zfs send | receive that snapshot to the new SSD pool. Lastly, set the jail root path in the jail configuration in the GUI.
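
If the destination pool is smaller than the source, it's worth comparing sizes first. A quick check, assuming the jails live in main_pool/jails and the SSD pool is ssd_pool (adjust the names):

Code:
# space used by the jails dataset and all of its children:
zfs list -o name,used,avail -r main_pool/jails
# size and free space of the destination pool:
zpool list ssd_pool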

Sent from my Nexus 5X using Tapatalk
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
1. You can't move jails to a zvol. You can only have them on your pool, in a dataset.
2. An SSD will not be noticeably faster, and it's usually a waste.

Yes, you can move them. You need to snapshot the jails dataset, then zfs send | receive that snapshot to the new SSD pool. Lastly, set the jail root path in the jail configuration in the GUI.

Sent from my Nexus 5X using Tapatalk

Corrected: a pool can be made of one or more vdevs, and the pool contains a dataset that can hold the jails. I just left the intermediate steps out, because the point is that I have a larger-capacity vdev (made of disks) hosting the jails now and I want to move its contents to a smaller-capacity vdev (made from SSDs). I thought it would be faster because of the higher speed of SSDs vs HDDs. Why wouldn't it be faster?
 
Last edited:

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
No, a pool is made of one or more vdevs.

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
No, a pool is made of one or more vdevs.
I see where my confusion came in. Thanks for reminding me, vdev vs. zvol
Everywhere I said zvol was supposed to be vdev.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Corrected: a pool can be made of one or more vdevs, and the pool contains a dataset that can hold the jails. I just left the intermediate steps out, because the point is that I have a larger-capacity vdev (made of disks) hosting the jails now and I want to move its contents to a smaller-capacity vdev (made from SSDs). I thought it would be faster because of the higher speed of SSDs vs HDDs. Why wouldn't it be faster?
An SSD will be faster than an HDD, but how do you measure that speed bump, and will it be noticeable? If you can show numbers for a workflow in the jail that are better on SSD than on HDD, I would love to see it. In most cases it doesn't really help.

Sent from my Nexus 5X using Tapatalk
 

Makaveli6103

Contributor
Joined
Mar 18, 2012
Messages
104
I just followed these instructions to move my jails to a larger SSD. Under Storage -> Snapshots, all of the snapshots I made to copy over to the new SSD are listed. Is it OK to delete them?
 

khierallah

Cadet
Joined
Jun 1, 2018
Messages
1
Steps:
  1. Turn off all plugins (Plugins->Installed)
  2. Stop all jails (Jails->View Jails)
  3. Run these commands via CLI:[PANEL]zfs snapshot -r main_pool/jails@relocate
    zfs send -R main_pool/jails@relocate | zfs receive -v ssd_pool/jails
    zfs get -rH -o name -s received mountpoint ssd_pool/jails | xargs -I {} sh -c "zfs set mountpoint=/{} {}; zfs mount {};"[/PANEL]
  4. Change the Jail Root to /mnt/ssd_pool/jails (Jails->Configuration)
  5. Start jails/plugins
  6. Check that everything works and destroy the original jails dataset (main_pool/jails)

Thanks for the advice!
Could you please explain step 3 in more detail?
Thanks.
 

jea001

Dabbler
Joined
May 16, 2017
Messages
10
@khierallah,

I was able to move my jails successfully last night using these commands; it should look something like this:
zfs snapshot -r main_pool/jails@relocate
zfs send -R main_pool/jails@relocate | zfs receive -v ssd_pool/jails
zfs get -rH -o name -s received mountpoint ssd_pool/jails | xargs -I {} sh -c "zfs set mountpoint=/{} {}; zfs mount {};"

You should replace main_pool with the name of the pool where your jails dataset is currently located, and ssd_pool with the name of the pool to which you want to move it.

I am not an expert at working with the ZFS filesystem or at using the CLI to adjust ZFS properties, but here is what I've gathered from the man page for the zfs command:

The first command (zfs snapshot) creates a snapshot of the jails dataset and all of its sub-datasets.
The second command (zfs send) sends that snapshot from the current jails location to the new one.
The third and final command (zfs get) finds every dataset whose mountpoint was set by the receive, points each one at the matching path under the new pool, and mounts it.

I hope this helps.
 