Multiple jailroots?

Status
Not open for further replies.

mostlygeek

Cadet
Joined
Nov 21, 2017
Messages
9
I'm currently running 9.10.2-U6 and looking to upgrade to 11.1. After the upgrade, can I use iocage to create new jails in a different pool and have them work alongside the old warden ones?

I don't mind doing it on the CLI and maintaining them manually. I'm just not sure this would work and thought I'd ask before trying it out. What I would like to do is incrementally migrate services running on my current jails to iocage-based ones without disrupting services.

Thanks
Ben
 

mostlygeek

Cadet
Joined
Nov 21, 2017
Messages
9
Upgraded and it seems to work. The old warden-based jails boot and AFAICT are working normally. *whew*
  • iocage activate ssd-disk ... worked
  • iocage fetch ... created the new ZFS datasets under ssd-disk/iocage/* and fetched 11.1-RELEASE
  • iocage create -r 11.1-RELEASE -n testjail ... worked, and the jail can be seen under ssd-disk/iocage/jails/testjail
  • iocage start testjail ... worked
  • jls shows <testjail> as well as my warden-based jails, and jexec <jid> tcsh got me a shell in the jail :)
So overall, the upgrade worked and my previous jails appear to be fine.
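In case it helps anyone following along, the sequence above collected for copy-paste (pool name ssd-disk and jail name testjail are from my setup; <jid> is whatever jail ID jls reports; exact fetch flags can differ between iocage versions, and plain `iocage fetch` will prompt for a release):

```sh
iocage activate ssd-disk              # point iocage's default datasets at the SSD pool
iocage fetch -r 11.1-RELEASE          # download and extract the release
iocage create -r 11.1-RELEASE -n testjail
iocage start testjail
jls                                   # lists both iocage and warden jails
jexec <jid> tcsh                      # get a shell in the new jail by its jail ID
```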

What I'm trying to figure out now is how to remove the default iocage datasets from my main storage pool: datapool

Code:
> zfs list | grep iocage
datapool/iocage                             1.05M  4.97T   192K  /mnt/iocage
datapool/iocage/download                     176K  4.97T   176K  /mnt/iocage/download
datapool/iocage/images                       176K  4.97T   176K  /mnt/iocage/images
datapool/iocage/jails                        176K  4.97T   176K  /mnt/iocage/jails
datapool/iocage/log                          176K  4.97T   176K  /mnt/iocage/log
datapool/iocage/releases                     176K  4.97T   176K  /mnt/iocage/releases
ssd-disk/iocage                             1.19G   171G   100K  /mnt/iocage
ssd-disk/iocage/download                     260M   171G    88K  /mnt/iocage/download
ssd-disk/iocage/download/11.1-RELEASE        260M   171G   260M  /mnt/iocage/download/11.1-RELEASE
ssd-disk/iocage/images                        88K   171G    88K  /mnt/iocage/images
ssd-disk/iocage/jails                        728K   171G    88K  /mnt/iocage/jails
ssd-disk/iocage/jails/testjail               640K   171G    92K  /mnt/iocage/jails/testjail
ssd-disk/iocage/jails/testjail/root          548K   171G   961M  /mnt/iocage/jails/testjail/root
ssd-disk/iocage/log                           92K   171G    92K  /mnt/iocage/log
ssd-disk/iocage/releases                     961M   171G    88K  /mnt/iocage/releases
ssd-disk/iocage/releases/11.1-RELEASE        961M   171G    88K  /mnt/iocage/releases/11.1-RELEASE
ssd-disk/iocage/releases/11.1-RELEASE/root   961M   171G   961M  /mnt/iocage/releases/11.1-RELEASE/root
ssd-disk/iocage/templates                     88K   171G    88K  /mnt/iocage/templates


Using `zfs unmount datapool/iocage/log` fails because the device is busy. Force-unmounting it breaks all sorts of things, including the GUI. Something is still using the dataset; I just haven't found out what yet.
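One way to hunt for what's holding the dataset busy (a general FreeBSD technique, not specific to FreeNAS): fstat can report processes with files open on a mounted filesystem. The path below is the mountpoint of the busy dataset in my listing.

```sh
# List processes holding files open on the filesystem mounted at
# /mnt/iocage/log; these are what make 'zfs unmount' report "device busy".
fstat -f /mnt/iocage/log

# If lsof is installed from ports, it gives similar information:
# lsof +D /mnt/iocage/log
```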
 
Joined
Sep 4, 2015
Messages
1
I am running into this on 11.1-U4

Code:
My2TBMirror/iocage                            1.21G   441G  3.25M  /mnt/iocage
My2TBMirror/iocage/download                    260M   441G    88K  /mnt/iocage/download
My2TBMirror/iocage/download/11.1-RELEASE       260M   441G   260M  /mnt/iocage/download/11.1-RELEASE
My2TBMirror/iocage/images                       88K   441G    88K  /mnt/iocage/images
My2TBMirror/iocage/jails                        88K   441G    88K  /mnt/iocage/jails
My2TBMirror/iocage/log                          88K   441G    88K  /mnt/iocage/log
My2TBMirror/iocage/releases                    973M   441G    88K  /mnt/iocage/releases
My2TBMirror/iocage/releases/11.1-RELEASE       973M   441G    88K  /mnt/iocage/releases/11.1-RELEASE
My2TBMirror/iocage/releases/11.1-RELEASE/root  973M   441G   973M  /mnt/iocage/releases/11.1-RELEASE/root
My2TBMirror/iocage/templates                    88K   441G    88K  /mnt/iocage/templates
My4x3TB/iocage                                1.41G  4.86T   145K  /mnt/iocage
My4x3TB/iocage/download                        260M  4.86T   128K  /mnt/iocage/download
My4x3TB/iocage/download/11.1-RELEASE           260M  4.86T   260M  /mnt/iocage/download/11.1-RELEASE
My4x3TB/iocage/images                          128K  4.86T   128K  /mnt/iocage/images
My4x3TB/iocage/jails                          15.5M  4.86T   128K  /mnt/iocage/jails
My4x3TB/iocage/jails/plexjail                 15.3M  4.86T   134K  /mnt/iocage/jails/plexjail
My4x3TB/iocage/jails/plexjail/root            15.2M  4.86T  1.13G  /mnt/iocage/jails/plexjail/root
My4x3TB/iocage/log                             134K  4.86T   134K  /mnt/iocage/log
My4x3TB/iocage/releases                       1.14G  4.86T   128K  /mnt/iocage/releases
My4x3TB/iocage/releases/11.1-RELEASE          1.14G  4.86T   128K  /mnt/iocage/releases/11.1-RELEASE
My4x3TB/iocage/releases/11.1-RELEASE/root     1.14G  4.86T  1.13G  /mnt/iocage/releases/11.1-RELEASE/root
My4x3TB/iocage/templates                       128K  4.86T   128K  /mnt/iocage/templates
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
What do you mean, "running into this"? There's no problem in this thread. If you set up iocage on two pools by accident, just delete the datasets for the one you don't want. If that's not your problem, please start a new thread and explain it.
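For the deletion, something like the following should do it, assuming nothing you need lives under datapool/iocage (the pool name is from the earlier post; substitute your own). The -n flag dry-runs the destroy so you can check what would go before committing:

```sh
# Dry run: -n shows what would be destroyed, -v lists each dataset
zfs destroy -rnv datapool/iocage

# If the list looks right, destroy the whole tree for real
zfs destroy -r datapool/iocage
```

If the destroy fails with "dataset is busy", the same open-file problem discussed above applies, and you'd need to find and stop whatever is using the mountpoint first.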
 