How to properly back up (replicate) iocage jails for disaster recovery?

guermantes

Patron
Joined
Sep 27, 2017
Messages
213
I have searched the Oracle ZFS docs and the forum, but I am still unsure how to properly back up my iocage jails to a different pool so as to have a backup if I mess up or otherwise get my jails destroyed.

Naively, I began with a recursive snapshot task of the entire iocage/ tree and tried to replicate that to my secondary pool where I collect locally stored backups (not optimal, but I also have disks offsite). Only the top iocage dataset was transferred, but nothing of substance relating to my jails (which are nested further down in the iocage directory tree). As per this thread, it seems that was to be expected, as replication does not include child datasets. What a bummer!

So what are my options? Could I simply snapshot and replicate, e.g., the tank/iocage/jails/MYJAILNAME-1/root and tank/iocage/jails/MYJAILNAME-2/root datasets? Is that how it should be done? Or are they dependent on files in other datasets in the iocage/ hierarchy? Do I also need to separately replicate all the datasets in the iocage/ tree so as to be able to clone back the iocage/ tree in its entirety?
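For concreteness, here is the kind of manual sequence I imagine would cover the whole tree (pool names are placeholders, and I may well have this wrong, which is exactly why I'm asking):
Code:
# purely illustrative: recursive snapshot of the whole iocage tree,
# then a replication stream that includes all child datasets (-R)
zfs snapshot -r tank/iocage@manual-backup
zfs send -R tank/iocage@manual-backup | zfs recv -u backuppool/iocage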

Now, someone may say "replication task", but I have tried that and am not succeeding in setting up "localhost replication". I have tried twice and lost several hours, but I still can't understand how to set it up, even after looking at threads discussing localhost replication here on the forums. (I get an "ECDSA key error".)
 

dlavigne

Guest
That is a very old thread and a bug that was fixed over 4 years ago.

Did you get any further with this? If not, are you running 11.2-U3 (as in your signature) or are you up-to-date with 11.2-U5?
 

guermantes

Patron
Joined
Sep 27, 2017
Messages
213
That is a very old thread and a bug that was fixed over 4 years ago.

Did you get any further with this? If not, are you running 11.2-U3 (as in your signature) or are you up-to-date with 11.2-U5?
No, I haven't got any further, regrettably. I tried incorporating the "root" dataset for one of my iocage jails in my nightly bash/cron script that replicates snapshots of my important datasets locally to a different pool, but after a successful initial replication on night 1, on night 2 it failed during the incremental replication of that dataset with an error none of the other datasets in the script have ever thrown (warning: cannot send 'TANK/iocage/jails/jailwebdev/root@auto-20190907.0200-3m': signal received). Since then I have had work to do, so I haven't been able to investigate.

I'm still on 11.2-U3. Perhaps U5, with the fix "Do not crash replication script on misconfigured SSH connection", will allow me to go down the replication route instead? Will try to find time to look at it during the coming weekend.
 

0x4161726f6e

Dabbler
Joined
Jul 3, 2016
Messages
19
I have a recursive replication setup for work that hasn't given me any trouble, but it isn't trying to replicate jailed datasets. My guess would be that your issue is related to that, though I could be misremembering what I read about jailed datasets.
 

guermantes

Patron
Joined
Sep 27, 2017
Messages
213
From the log:
cannot receive incremental stream: destination GOLD/sendrecv/iocage/jails/jailwebdev/root has been modified since most recent snapshot.

So the destination has been modified? I would expect this if atime were on, but it is not, for any of the datasets in the hierarchy.

Does it have to do with the fact that the jail is running when the snapshot is taken?
 

0x4161726f6e

Dabbler
Joined
Jul 3, 2016
Messages
19
OK, so the problem is not related to the dataset being jailed.

You are reading that message from the log correctly: the destination dataset has been modified. This would not be caused by the jail running when the snapshot is taken.

Are you sharing (NFS or CIFS/SMB) the destination dataset? Even if the destination dataset is read-only, a sharing process can lock the dataset and cause the error you are seeing. The readonly property in ZFS (on a dataset/zvol, not a file or folder) prevents data from changing, but in my experience it won't stop ZFS replication or process locks.
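If nothing obvious is touching it, one thing you could try (just a guess on my part; the snapshot names below are placeholders) is keeping the copy read-only and adding -F to the receive, so it rolls the destination back to the last common snapshot before applying the increment:
Code:
zfs set readonly=on GOLD/sendrecv
zfs send -i TANK/iocage/jails/jailwebdev/root@auto-previous \
    TANK/iocage/jails/jailwebdev/root@auto-current \
    | zfs recv -F GOLD/sendrecv/iocage/jails/jailwebdev/root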
 

guermantes

Patron
Joined
Sep 27, 2017
Messages
213
No. None of the datasets in pool GOLD are being shared. I am completely flummoxed by this.
 

0x4161726f6e

Dabbler
Joined
Jul 3, 2016
Messages
19
I wonder if a difference in dataset version could be an issue. Compare the output of the following on the source and the destination:
Code:
zfs get all <dataset>

I'm thinking you may need to write a script to get the outcome you're looking for. If you use the built-in replication tool, it will flatten the dataset hierarchy.
 

guermantes

Patron
Joined
Sep 27, 2017
Messages
213
The versions seem to be the same (5).

source:
Code:
root@freenas:~ # zfs get all TANK/iocage/jails/jailwebdev/root
NAME                               PROPERTY                 VALUE                                              SOURCE
TANK/iocage/jails/jailwebdev/root  type                     filesystem                                         -
TANK/iocage/jails/jailwebdev/root  creation                 Fri Apr  5 20:08 2019                              -
TANK/iocage/jails/jailwebdev/root  used                     633M                                               -
TANK/iocage/jails/jailwebdev/root  available                5.46T                                              -
TANK/iocage/jails/jailwebdev/root  referenced               978M                                               -
TANK/iocage/jails/jailwebdev/root  compressratio            1.89x                                              -
TANK/iocage/jails/jailwebdev/root  mounted                  yes                                                -
TANK/iocage/jails/jailwebdev/root  origin                   TANK/iocage/releases/11.2-RELEASE/root@jailwebdev  -
TANK/iocage/jails/jailwebdev/root  quota                    none                                               default
TANK/iocage/jails/jailwebdev/root  reservation              none                                               default
TANK/iocage/jails/jailwebdev/root  recordsize               128K                                               default
TANK/iocage/jails/jailwebdev/root  mountpoint               /mnt/TANK/iocage/jails/jailwebdev/root             default
TANK/iocage/jails/jailwebdev/root  sharenfs                 off                                                default
TANK/iocage/jails/jailwebdev/root  checksum                 on                                                 default
TANK/iocage/jails/jailwebdev/root  compression              lz4                                                inherited from TANK/iocage/jails
TANK/iocage/jails/jailwebdev/root  atime                    off                                                inherited from TANK
TANK/iocage/jails/jailwebdev/root  devices                  on                                                 default
TANK/iocage/jails/jailwebdev/root  exec                     on                                                 default
TANK/iocage/jails/jailwebdev/root  setuid                   on                                                 default
TANK/iocage/jails/jailwebdev/root  readonly                 off                                                default
TANK/iocage/jails/jailwebdev/root  jailed                   off                                                default
TANK/iocage/jails/jailwebdev/root  snapdir                  hidden                                             default
TANK/iocage/jails/jailwebdev/root  aclmode                  passthrough                                        inherited from TANK/iocage/jails
TANK/iocage/jails/jailwebdev/root  aclinherit               passthrough                                        inherited from TANK/iocage/jails
TANK/iocage/jails/jailwebdev/root  canmount                 on                                                 default
TANK/iocage/jails/jailwebdev/root  xattr                    off                                                temporary
TANK/iocage/jails/jailwebdev/root  copies                   1                                                  default
TANK/iocage/jails/jailwebdev/root  version                  5                                                  -
TANK/iocage/jails/jailwebdev/root  utf8only                 off                                                -
TANK/iocage/jails/jailwebdev/root  normalization            none                                               -
TANK/iocage/jails/jailwebdev/root  casesensitivity          sensitive                                          -
TANK/iocage/jails/jailwebdev/root  vscan                    off                                                default
TANK/iocage/jails/jailwebdev/root  nbmand                   off                                                default
TANK/iocage/jails/jailwebdev/root  sharesmb                 off                                                default
TANK/iocage/jails/jailwebdev/root  refquota                 none                                               default
TANK/iocage/jails/jailwebdev/root  refreservation           none                                               default
TANK/iocage/jails/jailwebdev/root  primarycache             all                                                default
TANK/iocage/jails/jailwebdev/root  secondarycache           all                                                default
TANK/iocage/jails/jailwebdev/root  usedbysnapshots          101M                                               -
TANK/iocage/jails/jailwebdev/root  usedbydataset            531M                                               -
TANK/iocage/jails/jailwebdev/root  usedbychildren           0                                                  -
TANK/iocage/jails/jailwebdev/root  usedbyrefreservation     0                                                  -
TANK/iocage/jails/jailwebdev/root  logbias                  latency                                            default
TANK/iocage/jails/jailwebdev/root  dedup                    off                                                default
TANK/iocage/jails/jailwebdev/root  mlslabel                                                                    -
TANK/iocage/jails/jailwebdev/root  sync                     standard                                           default
TANK/iocage/jails/jailwebdev/root  refcompressratio         1.88x                                              -
TANK/iocage/jails/jailwebdev/root  written                  8.76M                                              -
TANK/iocage/jails/jailwebdev/root  logicalused              848M                                               -
TANK/iocage/jails/jailwebdev/root  logicalreferenced        1.38G                                              -
TANK/iocage/jails/jailwebdev/root  volmode                  default                                            default
TANK/iocage/jails/jailwebdev/root  filesystem_limit         none                                               default
TANK/iocage/jails/jailwebdev/root  snapshot_limit           none                                               default
TANK/iocage/jails/jailwebdev/root  filesystem_count         none                                               default
TANK/iocage/jails/jailwebdev/root  snapshot_count           none                                               default
TANK/iocage/jails/jailwebdev/root  redundant_metadata       all                                                default
TANK/iocage/jails/jailwebdev/root  org.freebsd.ioc:active   yes                                                inherited from TANK
TANK/iocage/jails/jailwebdev/root  org.freenas:description                                                     inherited from TANK


destination:
Code:
root@freenas:~ # zfs get all GOLD/sendrecv/iocage/jails/jailwebdev/root
NAME                                        PROPERTY                 VALUE                                            SOURCE
GOLD/sendrecv/iocage/jails/jailwebdev/root  type                     filesystem                                       -
GOLD/sendrecv/iocage/jails/jailwebdev/root  creation                 Sat Sep 14  3:30 2019                            -
GOLD/sendrecv/iocage/jails/jailwebdev/root  used                     804M                                             -
GOLD/sendrecv/iocage/jails/jailwebdev/root  available                4.23T                                            -
GOLD/sendrecv/iocage/jails/jailwebdev/root  referenced               804M                                             -
GOLD/sendrecv/iocage/jails/jailwebdev/root  compressratio            1.88x                                            -
GOLD/sendrecv/iocage/jails/jailwebdev/root  mounted                  yes                                              -
GOLD/sendrecv/iocage/jails/jailwebdev/root  quota                    none                                             default
GOLD/sendrecv/iocage/jails/jailwebdev/root  reservation              none                                             default
GOLD/sendrecv/iocage/jails/jailwebdev/root  recordsize               128K                                             default
GOLD/sendrecv/iocage/jails/jailwebdev/root  mountpoint               /mnt/GOLD/sendrecv/iocage/jails/jailwebdev/root  default
GOLD/sendrecv/iocage/jails/jailwebdev/root  sharenfs                 off                                              default
GOLD/sendrecv/iocage/jails/jailwebdev/root  checksum                 on                                               default
GOLD/sendrecv/iocage/jails/jailwebdev/root  compression              lz4                                              inherited from GOLD
GOLD/sendrecv/iocage/jails/jailwebdev/root  atime                    off                                              inherited from GOLD/sendrecv
GOLD/sendrecv/iocage/jails/jailwebdev/root  devices                  on                                               default
GOLD/sendrecv/iocage/jails/jailwebdev/root  exec                     on                                               default
GOLD/sendrecv/iocage/jails/jailwebdev/root  setuid                   on                                               default
GOLD/sendrecv/iocage/jails/jailwebdev/root  readonly                 off                                              default
GOLD/sendrecv/iocage/jails/jailwebdev/root  jailed                   off                                              default
GOLD/sendrecv/iocage/jails/jailwebdev/root  snapdir                  hidden                                           default
GOLD/sendrecv/iocage/jails/jailwebdev/root  aclmode                  passthrough                                      inherited from GOLD/sendrecv/iocage/jails/jailwebdev
GOLD/sendrecv/iocage/jails/jailwebdev/root  aclinherit               passthrough                                      inherited from GOLD
GOLD/sendrecv/iocage/jails/jailwebdev/root  canmount                 on                                               default
GOLD/sendrecv/iocage/jails/jailwebdev/root  xattr                    off                                              temporary
GOLD/sendrecv/iocage/jails/jailwebdev/root  copies                   1                                                inherited from GOLD/sendrecv/iocage/jails/jailwebdev
GOLD/sendrecv/iocage/jails/jailwebdev/root  version                  5                                                -
GOLD/sendrecv/iocage/jails/jailwebdev/root  utf8only                 off                                              -
GOLD/sendrecv/iocage/jails/jailwebdev/root  normalization            none                                             -
GOLD/sendrecv/iocage/jails/jailwebdev/root  casesensitivity          sensitive                                        -
GOLD/sendrecv/iocage/jails/jailwebdev/root  vscan                    off                                              default
GOLD/sendrecv/iocage/jails/jailwebdev/root  nbmand                   off                                              default
GOLD/sendrecv/iocage/jails/jailwebdev/root  sharesmb                 off                                              default
GOLD/sendrecv/iocage/jails/jailwebdev/root  refquota                 none                                             default
GOLD/sendrecv/iocage/jails/jailwebdev/root  refreservation           none                                             default
GOLD/sendrecv/iocage/jails/jailwebdev/root  primarycache             all                                              default
GOLD/sendrecv/iocage/jails/jailwebdev/root  secondarycache           all                                              default
GOLD/sendrecv/iocage/jails/jailwebdev/root  usedbysnapshots          152K                                             -
GOLD/sendrecv/iocage/jails/jailwebdev/root  usedbydataset            804M                                             -
GOLD/sendrecv/iocage/jails/jailwebdev/root  usedbychildren           0                                                -
GOLD/sendrecv/iocage/jails/jailwebdev/root  usedbyrefreservation     0                                                -
GOLD/sendrecv/iocage/jails/jailwebdev/root  logbias                  latency                                          default
GOLD/sendrecv/iocage/jails/jailwebdev/root  dedup                    off                                              default
GOLD/sendrecv/iocage/jails/jailwebdev/root  mlslabel                                                                  -
GOLD/sendrecv/iocage/jails/jailwebdev/root  sync                     standard                                         default
GOLD/sendrecv/iocage/jails/jailwebdev/root  refcompressratio         1.88x                                            -
GOLD/sendrecv/iocage/jails/jailwebdev/root  written                  152K                                             -
GOLD/sendrecv/iocage/jails/jailwebdev/root  logicalused              1.38G                                            -
GOLD/sendrecv/iocage/jails/jailwebdev/root  logicalreferenced        1.38G                                            -
GOLD/sendrecv/iocage/jails/jailwebdev/root  volmode                  default                                          default
GOLD/sendrecv/iocage/jails/jailwebdev/root  filesystem_limit         none                                             default
GOLD/sendrecv/iocage/jails/jailwebdev/root  snapshot_limit           none                                             default
GOLD/sendrecv/iocage/jails/jailwebdev/root  filesystem_count         none                                             default
GOLD/sendrecv/iocage/jails/jailwebdev/root  snapshot_count           none                                             default
GOLD/sendrecv/iocage/jails/jailwebdev/root  redundant_metadata       all                                              default
GOLD/sendrecv/iocage/jails/jailwebdev/root  org.freenas:description                                                   inherited from GOLD/sendrecv


I am thinking maybe I should just call it quits trying to back up my jails like this, because it occurs to me that the jails themselves are pretty static. jailwebdev has its code in a separate dataset which is mounted into the jail and is already being snapshotted successfully as its "proper" dataset, and for the nextcloud jail I suppose I could cron-backup the database instead of the entire jail (something like the sketch below) and send the dump to a location where it will be backed up.
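Roughly what I have in mind for nextcloud, run from the host's cron (the database name, credentials and paths are guesses on my part, assuming the plugin uses MariaDB):
Code:
# credentials assumed to come from a .my.cnf inside the jail; db name is a guess
# iocage exec runs mysqldump inside the jail, while the redirect lands on the host
iocage exec nextcloud /usr/local/bin/mysqldump --single-transaction nextcloud > /mnt/TANK/backup/nextcloud-db.sql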
 

guermantes

Patron
Joined
Sep 27, 2017
Messages
213
I'm thinking you may need to write a script to get the outcome you're looking for. If you use the built-in replication tool, it will flatten the dataset hierarchy.

I am actually using a homebrewed script already. Super elementary, I'm sure, since it's pretty much my first script longer than ten lines.

I am posting it below, but I am in no way asking anyone to review it unless they take pleasure in doing so.

Code:
#!/bin/bash

# scp sendrecv.sh root@10.0.0.50:/mnt/TANK/sysadmin/sendrecv/
# This script performs incremental zfs send/recv within the same FreeNAS system, for those who do not
# have the luxury of a second system and thus cannot set up standard replication tasks. The script is
# intended to be run by cron and supports incremental send/recv, as well as an initial full send/recv
# the first time the script is run on a particular dataset, in order to kick off the incremental chain.
# To start anew, delete the file 'previousNNsnapshot' in the assets folder and empty the corresponding
# dataset on GOLD.
# PREREQUISITE: snapshots to be sent must be created automatically in order to be accessible to the script.
# The second grep filters out snapshots that only live a few hours and are therefore prone to
# disappearing before the next send, which would break the incremental chain.


# Backup function (incremental)
backup() {
    zfs list -t snapshot -o name | grep $snapshotPrefix$dataset@auto | grep '[^h]$' > $assetPath$dataset"snapshots"
    currentSnap=$( tail -n 1 $assetPath$dataset"snapshots" )
    previousSnap=$( head -n 1 $assetPath"previous"$dataset"snapshot" )
    if [ "$currentSnap" = "$previousSnap" ]
    then
        echo $(date '+%F %T') "WARNING:" $dataset"@auto: previous and current snapshots are the same. Nothing to do, aborting..." >> $logFile
    else
        zfs send -i $previousSnap $currentSnap | zfs recv $destinationPath$dataset 2> $assetPath$dataset"-zfserror"
        if [ $? -eq 0 ]
        then
            echo $currentSnap > $assetPath"previous"$dataset"snapshot"
            echo $(date '+%F %T') "SUCCESS:" $dataset"@auto: zfs send/recv successful." >> $logFile
        else
            echo $(date '+%F %T') "FAILURE:" $dataset"@auto: zfs send/recv failed while sending or receiving." >> $logFile
            cat $assetPath$dataset"-zfserror" >> $logFile
        fi
        rm $assetPath$dataset"-zfserror"
    fi
}


# Send initial snapshot, executes if script has not run before for this dataset
initial_send() {
    zfs list -t snapshot -o name | grep $snapshotPrefix$dataset@auto | grep '[^h]$' > $assetPath$dataset"snapshots"
    currentSnap=$( tail -n 1 $assetPath$dataset"snapshots" )
    zfs send $currentSnap | zfs recv -F $destinationPath$dataset 2> $assetPath$dataset"-zfserror"
    if [ $? -eq 0 ]
    then
        echo $currentSnap > $assetPath"previous"$dataset"snapshot"
        echo $(date '+%F %T') "SUCCESS:" $dataset"@auto: initial send successful." >> $logFile
    else
        echo $(date '+%F %T') "FAILURE:" $dataset"@auto: initial send failed." >> $logFile
        cat $assetPath$dataset"-zfserror" >> $logFile
    fi
    rm $assetPath$dataset"-zfserror"
}


# Checks whether script has run before on dataset in question and chooses appropriate action
execute_script() {
    if [ -a $assetPath"previous"$dataset"snapshot" ]
    then
        backup
    else
        initial_send
    fi
}



# Backup function (incremental)
backup_jail() {
    zfs list -t snapshot -o name | grep $snapshotPrefix$dataset$datasetsuffix@auto | grep '[^h]$' > $assetPath$dataset"snapshots"
    currentSnap=$( tail -n 1 $assetPath$dataset"snapshots" )
    previousSnap=$( head -n 1 $assetPath"previous"$dataset"snapshot" )
    if [ "$currentSnap" = "$previousSnap" ]
    then
        echo $(date '+%F %T') "WARNING:" $dataset"@auto: previous and current snapshots are the same. Nothing to do, aborting..." >> $logFile
    else
        zfs send -i $previousSnap $currentSnap | zfs recv $destinationPathJail$dataset$datasetsuffix 2> $assetPath$dataset"-zfserror"
        if [ $? -eq 0 ]
        then
            echo $currentSnap > $assetPath"previous"$dataset"snapshot"
            echo $(date '+%F %T') "SUCCESS:" $dataset"@auto: zfs send/recv successful." >> $logFile
        else
            echo $(date '+%F %T') "FAILURE:" $dataset"@auto: zfs send/recv failed while sending or receiving." >> $logFile
            cat $assetPath$dataset"-zfserror" >> $logFile
        fi
        rm $assetPath$dataset"-zfserror"
    fi
}


# Send initial snapshot, executes if script has not run before for this dataset
initial_send_jail() {
    zfs list -t snapshot -o name | grep $snapshotPrefix$dataset$datasetsuffix@auto | grep '[^h]$' > $assetPath$dataset"snapshots"
    currentSnap=$( tail -n 1 $assetPath$dataset"snapshots" )
    zfs send $currentSnap | zfs recv -F $destinationPathJail$dataset$datasetsuffix 2> $assetPath$dataset"-zfserror"
    if [ $? -eq 0 ]
    then
        echo $currentSnap > $assetPath"previous"$dataset"snapshot"
        echo $(date '+%F %T') "SUCCESS:" $dataset"@auto: initial send successful." >> $logFile
    else
        echo $(date '+%F %T') "FAILURE:" $dataset"@auto: initial send failed." >> $logFile
        cat $assetPath$dataset"-zfserror" >> $logFile
    fi
    rm $assetPath$dataset"-zfserror"
}


# Checks whether script has run before on dataset in question and chooses appropriate action
jail_execute_script() {
    if [ -a $assetPath"previous"$dataset"snapshot" ]
    then
        backup_jail
    else
        initial_send_jail
    fi
}



# Constants
assetPath="/mnt/TANK/sysadmin/sendrecv/assets/" # WARNING: changing this directory will break incremental replication, since previousfile will not be found in the new directory and as such a new sending with -F flag will occur the next time the script is run.
logFile="/mnt/TANK/sysadmin/sendrecv/sendrecv.log"
destinationPath="GOLD/sendrecv/" # dataset has to exist and be empty, and must be created with atime off, otherwise incremental is prone to break
destinationPathJail="GOLD/sendrecv/iocage/jails/" # dataset has to exist and be empty, and must be created with atime off, otherwise incremental is prone to break


# Later on we want to know what day it is...
today=$( LC_TIME=en_GB.UTF-8 date +%A )

# Task 1
dataset="dev" # name of dataset that the snapshot to be sent refers to
snapshotPrefix="TANK/" # varies depending on whether snapshotted datasets are nested or not and REQUIRES TRAILING SLASH
execute_script
echo "DEV done!"

# Task 2
dataset="hibou"
snapshotPrefix="TANK/"
execute_script
echo "HIBOU done!"

# Task 3
dataset="home"
snapshotPrefix="TANK/"
execute_script
echo "HOME done!"

# Task 4
dataset="foto"
snapshotPrefix="TANK/"
execute_script
echo "FOTO done!"

# Task 5
dataset="nibelheim"
snapshotPrefix="TANK/"
execute_script
echo "NIBELHEIM done!"

# Task 6
dataset="sysadmin"
snapshotPrefix="TANK/"
execute_script
echo "SYSADMIN done!"

# Task 7


# Task 8
dataset="backup"
snapshotPrefix="TANK/"
execute_script
echo "BACKUP done!"
        
# Task 9 only runs on Sundays since Musica snapshots are weekly and I don't want to clutter the log with warnings during the week
if [ $today = "Sunday" ]
then
        dataset="musica"
        snapshotPrefix="TANK/bibliotek/"
        execute_script
        echo "MUSICA done!"
fi

# Task 10 only runs on Sundays since Video snapshots are weekly and I don't want to clutter the log with warnings during the week
if [ $today = "Sunday" ]
then
        dataset="bluray-musik"
        snapshotPrefix="TANK/bibliotek/video/"
        execute_script
        echo "VIDEO BLURAY-MUSIK done!"
fi

# Task 11 only runs on Sundays since Video snapshots are weekly and I don't want to clutter the log with warnings during the week
if [ $today = "Sunday" ]
then
        dataset="annat"
        snapshotPrefix="TANK/bibliotek/video/"
        execute_script
        echo "VIDEO ANNAT done!"
fi

# Jails
# Task 12
dataset="jailwebdev" # name of dataset that the snapshot to be sent refers to
snapshotPrefix="TANK/iocage/jails/" # REQUIRES TRAILING SLASH
datasetsuffix="/root"
jail_execute_script
echo "JAILWEBDEV done!"

# Task 13
dataset="nextcloud"
snapshotPrefix="TANK/iocage/jails/"
datasetsuffix="/root"
jail_execute_script
echo "NEXTCLOUD done!"
 

2twisty

Contributor
Joined
Mar 18, 2020
Messages
145
Back from the dead:

Running TrueNAS Core 12-release

If I include iocage in my periodic snapshots and try to replicate it, I get an error stating that the iocage snapshot does not exist. If I exclude pool/iocage from both the periodic snapshot and the replication task, it works fine.

So, how can I automatically back up and replicate iocage? Right now I am not running any plugins or jails of any kind, but if I decide to in the future, I want them backed up so that I can do a complete restore if I must.
 

Alecmascot

Guru
Joined
Mar 18, 2014
Messages
1,175
I have a separate snapshot task for iocage and a separate replication task.
It all works fine.
 

2twisty

Contributor
Joined
Mar 18, 2020
Messages
145
I'll try that. Strange that I can't just replicate the entire pool all at once. I'll post back with the results.
 

2twisty

Contributor
Joined
Mar 18, 2020
Messages
145
Yup, that worked just fine. So for people searching in the future: if you want to replicate your entire pool, you must exclude iocage from both the snapshot task and the replication task, then create a separate snapshot task and replication task for iocage.

Seems kinda dumb, and I would encourage the devs to look into/fix this issue.
 

feleven

Dabbler
Joined
Feb 17, 2014
Messages
39
My apologies for resurrecting this aging thread.

I've been using rsync to back up my "TANK" pool to a "BKUP" pool on the same machine, which includes a plex jail. Previously, I hadn't fully grasped what an iocage jail is (a mounted folder), so I was mystified to see that the backup contained both the original plex media files AND the jailed plex media files - that is, the backup contained two copies of the media files, a HUGE waste of time, space, and system resources.

So, recent forum post readings helped me understand that while iocage/jails/plex is active, rsync sees the mounted jail as a second set of media files and backs them up a second time. Obviously I need to revamp my backup procedure.

I also learned that "iocage export plex" (for example) creates a backup of the plex jail. So, in theory, I could use "rsync --exclude 'plex' " to exclude the plex jail, then use "iocage export plex" to create a "backup" of the jail separately.
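Concretely, the combination I am picturing looks something like this (the exclude path and options are my guesses, not a tested recipe):
Code:
# copy the pool but leave out the plex jail, then export the jail separately
rsync -a --delete --exclude 'iocage/jails/plex' /mnt/TANK/ /mnt/BKUP/
iocage export plex   # as I understand it, this drops a zip under iocage/images/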

Or I could do what many others do, use snapshots to make backups easier and smaller, while allowing a simple method of rolling back if necessary.

My questions:
When I execute "iocage export plex" to back up the jail, what exactly is stored in the exported archive? I ask this so I know which folder(s) to --exclude in rsync. If the jail is named "plex", and the Mount Points are

SOURCE=/mnt/TANK/PlexMedia
DESTINATION=/mnt/TANK/iocage/jails/plex/root/media

when I enter "iocage export plex," at which folder does the export start: iocage/jails/plex, or iocage/jails/plex/root?

My assumption is that "iocage export plex" would start at folder iocage/jails/plex, including all files and folders under that, i.e. the jail config and fstab configuration files and the root folder containing what appears to be all the Plex Media Server application folders and files. I assume the plex/root/media folder would be backed up but empty ('cause the jail's off).

If I move to a snapshot backup process, I believe I have to stop the jail first, then take the system snapshot, then snapshot iocage, then restart the jail. Do I have this right? What about the child datasets under iocage - does a snapshot of iocage also capture all of them? I assume it does.
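In other words, is the sequence essentially the following? (Just my guess at it, jail and dataset names from my setup.)
Code:
iocage stop plex
zfs snapshot -r TANK/iocage@manual-clean   # -r should cover the child datasets too
iocage start plex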

Finally, I have some older snapshots lingering around that have become stale dated. I can delete all but the oldest, a snapshot of dataset named "TANK/iocage/releases/12.2-RELEASE/root". When I try to delete it, I see an error message

[EFAULT] Cannot destroy TANK/iocage/releases/12.2-RELEASE/root@plex: snapshot has dependent clones

I sort of understand what the error means, but can I remove this last snapshot to start with a clean slate, then create a brand new one reflecting the current state of the machine?

I THOUGHT I understood FreeNAS/TrueNAS, after years of use. But, boy, I'm getting an education now!
 

Alecmascot

Guru
Joined
Mar 18, 2014
Messages
1,175
Do a forum search for "dependent clones" and continue your education!
 

feleven

Dabbler
Joined
Feb 17, 2014
Messages
39
Yeah, that went well. :frown:

I did the search and read through what I could find about deleting the recalcitrant TANK/iocage/releases/12.2-RELEASE/root@plex snapshot. The CLI process that was suggested worked just fine - I just hit the last Enter, and a minute later the snapshot was gone, no warnings or any other responses from TrueNAS.

Unfortunately, deleting that snapshot also deleted the actual /mnt/TANK/iocage/jails/plex/root folder. Basically, that plex jail root folder contained the PlexMediaServer app files installed in the jail. I wasn't aware that deleting a snapshot could utterly zorch the source files - that tidbit might have been good to know.
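In hindsight, I gather the jail's root dataset was a clone of that release snapshot (its origin property pointed at it), so destroying the snapshot together with its dependents took the jail root with it. If I'm reading the man pages right, something like this would have shown the dependency and detached the clone first (noted here only as a reminder to myself, not something I've tested):
Code:
# list datasets with their origin to see what depends on the snapshot
zfs list -t all -o name,origin -r TANK/iocage
# promote the clone so it no longer depends on the release snapshot
zfs promote TANK/iocage/jails/plex/root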

Fortunately, after a brief panic, I realized my rsync backup to the BKUP pool from a few days ago was still available. So recovery of the deleted files to the TANK pool was relatively painless, although it sure took a while. A test drive after restoration confirmed all was back to normal and running as expected. Disaster averted.

So now I know that not all snapshots are created equal, and that more caution needs to be taken when "cleaning things up". I think I'll do more research of my own before acting on random suggestions. And perhaps I'll just stick to my trusty rsync backup procedure until I've run a bunch of tests on snapshots-as-backup on dummy jails. I'd rather waste some disk space (which I have plenty of in both pools) than risk inadvertently deleting any more critical files.
 

bkw777

Cadet
Joined
Aug 7, 2022
Messages
7
Replicating the entire iocage as a single task is doable, and all from the gui.

It's just utterly and completely opaque, non-obvious, and not facilitated. It's like the idea never occurred that anyone would ever want to do it, which boggles my mind...

Anyway... the secret was to create the destination top-level dataset, empty and unmounted; then create a recursive snapshot of the source manually, but clear out the snapshot name and select the "auto-..." naming scheme; then create the scheduled snapshot task; then the replication task. Then you can use the "run now" button on the replication task.

One source of confusion: when you use the GUI in basic mode to create the replication job, it will take that initial snapshot once automatically behind the scenes without saying much about it, but only that first time. It doesn't do it in response to the "run now" button. So if you edit the job to use any other source, or use advanced mode instead of basic when creating a new job, it doesn't run the initial snapshot. And the scheduled snapshot page doesn't have a "run now" button; it just says "pending" and will run whenever the next schedule comes around, and that's it.

So to make "run now" work, you have to create the snapshot that the replication wants to read.
And the next non-obvious part is that the snapshot NAMES need to match what the replication job is looking for. So when you use the snapshot page to manually make a snapshot, by default it will use a name "manual-...", and the replication job will ignore that and say "no snapshots available...". So what you do is blank out that name field and leave it empty, then on the field below that you select the "auto-..." naming scheme from a pulldown list (that only has that one item in it). That will create a manual snapshot on the spot, but with names that are the same as the sheduled snapshots.

Once those exist, and the destination dataset exists, empty, not mounted, then the "run now" button on the replication task works.

If the destination dataset already exists with stuff in it from prior messing around, you have to clear it all out and finally get it unmounted first.
It may take several manual steps, because there may be datasets within datasets and half of them may be "busy" just from being mounted or not empty. By "destination dataset" I do not mean "tank" or your equivalent (mine happens to be named "v1" where most examples seem to have "tank"); I mean some dataset within that, like tank/iocage or tank/iocage/jails. Start at Storage -> Pools -> tank -> <destination>, expand all datasets within there and try to delete them. If any can't be deleted, use a root shell to unmount ("umount /mnt/tank/iocage/jails/junk"), then try to delete from the GUI again. Repeat until you have tank/iocage and nothing in it. Use the shell to unmount the top-level empty destination: "umount /mnt/tank/iocage".

Or if the destination pool is already empty and a destination dataset doesn't exist yet, just create it, and it will be empty and unmounted.

I will show the explicit example from my own case, but yours will be different.
Where most examples have "tank", my main (big, slow) pool happens to be named "v1". This is where I want my destination to go.
My source is my "iocage" dataset from a pool named "ssd".
I have a pair of SSDs set up as an unsafe-but-fast raid0 pool named "ssd" for jails.
Since this pool is unsafe, I want to back up my entire iocage from ssd/iocage to v1/iocage.

So remember, in the following:
ssd = source pool
ssd/iocage = source dataset
v1 = destination pool
v1/iocage = destination dataset

Assuming the simpler case where you just create a new destination dataset on the destination pool and don't have to worry about umounting or deleting stuff...

1 - Storage -> Pools -> v1 -> v1 -> (3-dots) -> Add Dataset -> "iocage"

2 - Storage -> Snapshots -> Add
Dataset: ssd/iocage
Name: blank this out, erase the pre-loaded "manual-2022-..."
Naming Schema: auto-%Y-%m-%d_%H-%M
[x] Recursive

When you hit Submit on this, the snapshots will be created instantly, so they will already be available for the next steps to use.
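(For what it's worth, that manual snapshot amounts to the GUI doing the equivalent of something like the following shell command with the same naming schema; pool and dataset names are from my setup. I did mine through the GUI.)
Code:
zfs snapshot -r ssd/iocage@auto-$(date +%Y-%m-%d_%H-%M)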

The next step is to make a periodic snapshot task exactly like the manual one above:
3 - Tasks -> Periodic Snapshot Tasks -> Add
Dataset: ssd/iocage
[x] Recursive
Set whatever schedule and retention policy you want, just leave the naming schema alone so the default "auto-..." matches the other steps.

4 - Tasks -> Replication Tasks -> Add -> Advanced
Name: "ssd/iocage -> v1/iocage" (free form, can be anything)
Transport: LOCAL
Source: ssd/iocage
Destination: v1/iocage
[x] Run Automatically
Save

Now the "run now" button should work.

In my case I also selected the "almost full filesystem" and the "synchronize destination snapshots" options, but be careful: only do that if you are sure the source and destination are correct, because it will DELETE DATA from the destination. Don't cry to me if your tank goes poof.

Finally, the "run now" button only ever sends the source snapshots that already exist, it doesn't update the source snapshots. When you re-run "run now" no data changes on the destination unless the snapshots have been re-run. The scheduled replication updates the destination data only because the sheduled snapshot tasks has updated the source snapshots. If you want to do "update my backup now" you'll have to do a manual snapshot per step 2 above, and then "run now" on the replication.
 