
Back up Warden-based FreeNAS jails to gzip and restore


Nov 30, 2016
This post is about backing up Warden-based FreeNAS jails to gzip files that can be transferred off of the FreeNAS server, so that all ZFS pools can be destroyed and recreated, and then restoring the jails into the new pool. I'm posting this because I had real trouble finding a working solution even though I saw the question asked a fair number of times; I was eventually able to piece together a solution through trial and error.

When I originally built the FreeNAS server about 3.5 years ago I chose to only populate 5 of the 8 available drive bays for cost reasons; with RAIDZ1 and 5 x 4TB drives I didn't require additional storage at the time. In the subsequent years I have consumed about 80% of the available storage. I had 2 spare 4TB drives, so if I bought 1 additional drive I could populate all 3 empty drive bays and add significant storage at a much lower cost than replacing the 5 existing drives with larger ones. A common recommendation in this scenario appears to be adding a new vdev with the 3 new drives; however, this would double the number of drives consumed by parity data, an undesirable outcome from my perspective. Instead I decided to back up the pool data, destroy the existing pool, add the 3 additional drives and build a new 8-drive RAIDZ1 pool. This would increase my storage capacity from about 16TB to 26TB, at a cost of only about $150 for 1 new drive.

My issue then became that I did not have any spare SATA ports (beyond the 8 to be used in the new pool), the form factor of the case does not allow PCIe cards to be added, and both of my USB ports were consumed by a pair of thumb drives holding the FreeNAS OS in a mirrored pair. All this meant I could not create a temporary pool to back up the data and jails; instead I would need to transfer the data to a drive on my desktop computer. For the pool data this was not an issue, but backing up jails outside of a local pool was more difficult. Below is how I solved this.

I am running FreeNAS 11.1-U7 with 3 standard and 1 plugin-type jail, all managed with Warden (not iocage). Most of the solutions I found were for iocage, but I did not want to deal with that migration at this point, and I suspect there may be others in my situation. The below works for Warden-managed jails.

To back up the jails I used a script provided by m0nkey_.

The script worked perfectly well, but in case someone wants to avoid the script and enter the commands directly, here is what the script is doing. These commands are for the following setup: pool = SHARED1, jails dataset = JAILS, jail name = TRANSMISSION_1. For the sake of clarity I've written these values in all caps to make it clear where others will need to substitute their own values.

zfs snapshot SHARED1/JAILS/TRANSMISSION_1@backup-20200728
zfs send SHARED1/JAILS/TRANSMISSION_1@backup-20200728 | gzip > TRANSMISSION_1-20200728.gz
zfs destroy SHARED1/JAILS/TRANSMISSION_1@backup-20200728
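If you have several jails, the three commands above can be wrapped in a small shell loop. This is only a sketch under my naming assumptions: the pool/dataset path (SHARED1/JAILS) and the jail names other than TRANSMISSION_1 are placeholders, and the DRYRUN flag makes the script print the zfs commands instead of running them, so you can sanity-check everything before doing it for real.

```shell
#!/bin/sh
# Snapshot, dump to gzip, and clean up each Warden jail dataset.
# SHARED1/JAILS and the jail names below are from my setup; substitute your own.
POOL_JAILS="SHARED1/JAILS"
DATE=$(date +%Y%m%d)
DRYRUN=1   # set to 0 to actually run the zfs commands

run() {
    # Print the command in dry-run mode, execute it otherwise.
    if [ "$DRYRUN" -eq 1 ]; then echo "$@"; else "$@"; fi
}

for JAIL in TRANSMISSION_1 POSTFIX_1 MARIADB_1; do
    SNAP="${POOL_JAILS}/${JAIL}@backup-${DATE}"
    run zfs snapshot "$SNAP"
    # zfs send writes the stream to stdout, so the pipe can't go through run()
    if [ "$DRYRUN" -eq 1 ]; then
        echo "zfs send $SNAP | gzip > ${JAIL}-${DATE}.gz"
    else
        zfs send "$SNAP" | gzip > "${JAIL}-${DATE}.gz"
    fi
    run zfs destroy "$SNAP"
done
```

With DRYRUN=1 this just prints the snapshot, send and destroy commands for each jail; flip it to 0 once the printed commands match your pool layout.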

The result is a file called TRANSMISSION_1-20200728.gz containing the contents of the TRANSMISSION_1 jail, which I backed up off of the FreeNAS server along with all of the other pool data. I then destroyed the existing pool, added the new drives, built a new 8-drive pool and copied all of the data, including the jail backups, into the new pool.

I could not get the command in the above-referenced thread intended to restore the jails into the new pool to work (it errored with "gzip: no such file or directory"). I was unable to understand or solve this issue with the command in that format, but I avoided the error by sending the command in a different format which I found in a number of threads about copying jails from one pool to another.

The next issue was that although the datasets from the old jails were restored, they were not recognised as jails by FreeNAS. I ultimately solved this as follows:
  1. Create standard jails in the webgui using the same name as each of the jails to be restored, then stop the jails (under Jails in the webgui)
  2. Change the root path of the jails to a different temporary location (under Jails->Configuration in the webgui), then reboot. (The jails should no longer be visible under Jails in the webgui)
  3. Under Storage in the webgui, destroy the datasets of the standard jails created in step 1
  4. Restore the jails using the below commands. The jails will be restored to the location of the jails created in step 1 and destroyed in step 3.

    As above, the code below is for 1 jail with the following setup: pool = SHARED1, jails dataset = JAILS, jail name = TRANSMISSION_1 (replace these values with your own and run for each of your jails)

    gzcat TRANSMISSION_1-20200728.gz | zfs receive -v SHARED1/JAILS/TRANSMISSION_1

    Once all of your jails are restored, run this command once (replace my values with your own):

    zfs get -rH -o name -s received mountpoint SHARED1/JAILS | xargs -I {} sh -c "zfs set mountpoint=/{} {}; zfs mount {};"

  5. Change the jail root back to the original jails location (where you restored the jails) (under Jails->Configuration in webgui)
  6. Restart FreeNAS
  7. Jails should now be visible in the webgui. Start the jails and test
This worked for both standard and plugin-type jails, though as I created all standard jails in step 1, all jails are now labelled in the webgui as standard jails. This appears to have no impact on their function.
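The commands in step 4 can likewise be scripted across several jails. Again, this is only a sketch under the same naming assumptions (pool SHARED1, dataset JAILS, backup files named JAILNAME-20200728.gz; the jail names other than TRANSMISSION_1 are placeholders); the DRYRUN flag prints each command rather than executing it, since a zfs receive into the wrong dataset is hard to undo.

```shell
#!/bin/sh
# Restore each gzipped ZFS stream into the jails dataset (step 4 above),
# then fix the mountpoints once at the end.
# SHARED1/JAILS, the date and the jail names are from my setup; substitute your own.
POOL_JAILS="SHARED1/JAILS"
BACKUP_DATE="20200728"
DRYRUN=1   # set to 0 to actually run the restore

for JAIL in TRANSMISSION_1 POSTFIX_1 MARIADB_1; do
    CMD="gzcat ${JAIL}-${BACKUP_DATE}.gz | zfs receive -v ${POOL_JAILS}/${JAIL}"
    if [ "$DRYRUN" -eq 1 ]; then
        echo "$CMD"
    else
        gzcat "${JAIL}-${BACKUP_DATE}.gz" | zfs receive -v "${POOL_JAILS}/${JAIL}"
    fi
done

# Once all jails are received, reset each dataset's mountpoint and mount it,
# exactly as the one-off command in step 4 does.
FIXUP="zfs get -rH -o name -s received mountpoint ${POOL_JAILS} | xargs -I {} sh -c 'zfs set mountpoint=/{} {}; zfs mount {};'"
if [ "$DRYRUN" -eq 1 ]; then
    echo "$FIXUP"
else
    zfs get -rH -o name -s received mountpoint "$POOL_JAILS" | xargs -I {} sh -c "zfs set mountpoint=/{} {}; zfs mount {};"
fi
```

The `zfs get -s received` filter lists only datasets whose mountpoint property came in with the received stream, which is why the fixup loop touches exactly the restored jails and nothing else in the pool.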

My 4 jails are set up as follows; this worked fine for my setup and I expect, but cannot guarantee, that it will work for your jails:

Jail 1 (standard jail) contains > Postfix
Jail 2 (standard jail) contains > MariaDB
Jail 3 (plugin jail, but now labelled a standard jail in the webgui) contains > Transmission, with OpenVPN set up for VPN and a firewall set up to block internet connections outside of the VPN. This jail is also monitored by a script running on FreeNAS (outside of the jail) that automatically restarts the jail when the VPN goes down
Jail 4 (standard jail) contains > Sonarr, Radarr, Jackett