Jails missing but not... how do I get them back?

ninjai

Explorer
Joined
Apr 6, 2015
Messages
98
I recently began doing snapshot replication to a secondary local pool and it broke my jails.

Code:
[@freenas ~]$ ls /mnt
FreeNAS-Backup  iocage          md_size         Storage
[@freenas ~]$ ls /mnt/iocage/jails
[@freenas ~]$ mount | grep Storage/iocage
Storage/iocage on /mnt/iocage (zfs, local, nfsv4acls)
Storage/iocage/download on /mnt/iocage/download (zfs, local, nfsv4acls)
Storage/iocage/download/11.1-RELEASE on /mnt/iocage/download/11.1-RELEASE (zfs, local, nfsv4acls)
Storage/iocage/download/11.2-RELEASE on /mnt/iocage/download/11.2-RELEASE (zfs, local, nfsv4acls)
Storage/iocage/images on /mnt/iocage/images (zfs, local, nfsv4acls)
Storage/iocage/jails on /mnt/iocage/jails (zfs, local, nfsv4acls)
Storage/iocage/jails/backup-server on /mnt/iocage/jails/backup-server (zfs, local, nfsv4acls)
Storage/iocage/jails/backup-server/root on /mnt/iocage/jails/backup-server/root (zfs, local, nfsv4acls)
............


This is correct and how it's supposed to be. But randomly, this morning the pool had switched by itself to my secondary pool, so all the lines above were showing FreeNAS-Backup/iocage, etc.

I rebooted and it did nothing. I went to "Jails" in the GUI and activated FreeNAS-Backup as the jail pool, then activated Storage again. My jails re-appeared, but they all say "CORRUPT", like this:

Code:
iocage list
unifi_controller_1 is missing it's configuration, please destroy this jail and recreate it.
wiki is missing it's configuration, please destroy this jail and recreate it.
backup-server is missing it's configuration, please destroy this jail and recreate it.
nextcloud is missing it's configuration, please destroy this jail and recreate it.
dns_1 is missing it's configuration, please destroy this jail and recreate it.
pms is missing it's configuration, please destroy this jail and recreate it.

+-----+--------------------+---------+---------+-----+
| JID |        NAME        |  STATE  | RELEASE | IP4 |
+=====+====================+=========+=========+=====+
| -   | backup-server      | CORRUPT | N/A     | N/A |
+-----+--------------------+---------+---------+-----+
| -   | dns_1              | CORRUPT | N/A     | N/A |
+-----+--------------------+---------+---------+-----+
| -   | nextcloud          | CORRUPT | N/A     | N/A |
+-----+--------------------+---------+---------+-----+
| -   | pms                | CORRUPT | N/A     | N/A |
+-----+--------------------+---------+---------+-----+
| -   | unifi_controller_1 | CORRUPT | N/A     | N/A |
+-----+--------------------+---------+---------+-----+
| -   | wiki               | CORRUPT | N/A     | N/A |
+-----+--------------------+---------+---------+-----+


If I use the GUI to browse to jails and look in the iocage dataset, I see the data there:
[screenshot: GUI file browser showing the jail data present under the iocage dataset]


I have no idea how iocage is mounted at the low level, or how to even see if my jails are still there. I'm not sure how this happened either. Any guidance on troubleshooting?

EDIT:
With zfs list I can even see the old data. Something is funky about the mounting process, and I have no idea what it is or how to troubleshoot it, since the iocage jails seem to be mysteriously mounted on /mnt/iocage instead of where the dataset shows inside the GUI: Storage/iocage.

Code:
[steve@freenas /mnt/iocage]$ zfs list | grep jails
FreeNAS-Backup/iocage/jails                                  304K  3.43T    88K  /mnt/iocage/jails
FreeNAS-Backup/iocage/jails/pms                              216K  3.43T    96K  /mnt/iocage/jails/pms
Storage/iocage/jails                                        36.0G  2.40T   208K  /mnt/iocage/jails
Storage/iocage/jails/backup-server                          1.09G  2.40T   208K  /mnt/iocage/jails/backup-server
Storage/iocage/jails/backup-server/root                     1.09G  2.40T  2.42G  /mnt/iocage/jails/backup-server/root
Storage/iocage/jails/dns_1                                   204M  2.40T   184K  /mnt/iocage/jails/dns_1
Storage/iocage/jails/dns_1/root                              203M  2.40T  1.52G  /mnt/iocage/jails/dns_1/root
Storage/iocage/jails/nextcloud                              1.70G  2.40T   192K  /mnt/iocage/jails/nextcloud
Storage/iocage/jails/nextcloud/root                         1.70G  2.40T  2.92G  /mnt/iocage/jails/nextcloud/root
Storage/iocage/jails/pms                                    3.82G  2.40T   192K  /mnt/iocage/jails/pms
Storage/iocage/jails/pms/root                               3.82G  2.40T  4.94G  /mnt/iocage/jails/pms/root
Storage/iocage/jails/transmission_1                         5.59G  2.40T   192K  /mnt/iocage/jails/transmission_1
Storage/iocage/jails/unifi_controller_1                     7.96G  2.40T   184K  /mnt/iocage/jails/unifi_controller_1
Storage/iocage/jails/unifi_controller_1/root                7.96G  2.40T  7.48G  /mnt/iocage/jails/unifi_controller_1/root
Storage/iocage/jails/wiki                                   5.13G  2.40T   184K  /mnt/iocage/jails/wiki
Storage/iocage/jails/wiki/root                              5.13G  2.40T  4.70G  /mnt/iocage/jails/wiki/root
 

ninjai

Explorer
Joined
Apr 6, 2015
Messages
98
I have an update. Since switching which pool iocage was activated on didn't help, I decided to shut down the NAS and unplug my external hard drive (the FreeNAS-Backup pool), and voila... upon boot-up my jails are no longer corrupt and the data is there.

Did this happen because I replicated my iocage dataset there? If so, how else am I supposed to back up iocage jails?
 
Joined
Jul 10, 2016
Messages
521
A dataset is not the same as a mountpoint. After replication, both the original dataset Storage/iocage and the replicated dataset FreeNAS-Backup/iocage are fighting over the same mountpoint, /mnt/iocage/, and one mounts on top of the other, hence the mess.
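You can see the collision directly; a quick sketch using the pool names from this thread:

Code:
# Both trees claim the same mountpoint; "source" shows where it was set:
zfs get -o name,value,source mountpoint Storage/iocage FreeNAS-Backup/iocage
# And check which dataset is actually mounted there right now:
mount | grep /mnt/iocage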

You can prevent the backup dataset from mounting automatically by using the -u option, e.g. zfs receive -u, and then setting its mountpoint to none or /FreeNAS-Backup/iocage (FreeNAS adds the /mnt prefix automatically) to prevent future mishaps. Refer to zfs(8) and the ZFS administration guide.
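Roughly like this, for example (a sketch only; the snapshot name @manual-backup is made up for illustration, and the receive assumes FreeNAS-Backup/iocage doesn't exist yet):

Code:
# Take a recursive snapshot of the source tree (name is illustrative):
zfs snapshot -r Storage/iocage@manual-backup
# Replicate it, but don't mount the received copy (-u):
zfs send -R Storage/iocage@manual-backup | zfs receive -u FreeNAS-Backup/iocage
# Keep the copy from ever claiming /mnt/iocage:
zfs set mountpoint=none FreeNAS-Backup/iocage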

Note that earlier versions of iocage mounted the iocage dataset at /mnt/iocage, whereas later versions changed that to /mnt/<pool>/iocage/.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You can also use zfs set canmount=noauto pool/dataset for all the datasets from iocage down on your backup disk.

You would set it back to canmount=on in the case of recovery. You would also have to set those datasets to be writable, as they are marked read-only by the replication job (which is probably why the system thinks the jails are corrupt when the backup copy has been winning the mounting war).
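A rough sketch of both steps (pool name from this thread; note that canmount is not an inherited property, so it has to be set on each dataset individually):

Code:
# Disable auto-mounting on every dataset in the backup copy:
zfs list -rH -o name FreeNAS-Backup/iocage | xargs -n1 zfs set canmount=noauto

# Recovery: allow mounting again and make the copies writable
# (readonly, unlike canmount, is inherited by children):
zfs list -rH -o name FreeNAS-Backup/iocage | xargs -n1 zfs set canmount=on
zfs set readonly=off FreeNAS-Backup/iocage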
 

ninjai

Explorer
Joined
Apr 6, 2015
Messages
98
Thanks guys,

Is there any issue with me changing the mountpoint to /mnt/pool/iocage? I really don't understand the need to have it in /mnt/iocage. Also, is there any way for me to set it to not automatically mount from the GUI?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Is there any issue with me changing the mountpoint to /mnt/pool/iocage? I really don't understand the need to have it in /mnt/iocage. Also, is there any way for me to set it to not automatically mount from the GUI?
You would be playing with fire there... the system is set to look for things in that location, so if you move it around, broken things are bound to start appearing.

I imagine the reason for that location is so that the middleware can always look in a fixed mount location for jails, rather than having to first look up the jails dataset and its mount location... I guess both ways work; one just has extra steps, so they selected the simpler one.

There is no GUI setting to modify auto-mounting.
 

ninjai

Explorer
Joined
Apr 6, 2015
Messages
98
Thanks sretalla. So if I use the option you suggested, canmount=noauto, does that not mean that I can't replicate the jails to my backup pool because it isn't mounted? Or will it just copy to /mnt/FreeNAS-Backup/iocage/ and simply not mount it, so there's no mounting war?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
does that not mean that I can't replicate the jails to my backup pool because it isn't mounted?
No.
will it just copy to /mnt/FreeNAS-Backup/iocage/ and simply not mount it, so there's no mounting war?
Yes.

Since the replication addresses the dataset directly by its pool path (i.e. FreeNAS-Backup/iocage), mnt doesn't come into the equation for replication. The problem only appears when zfs mount evaluates the datasets and sees that something wants to mount at /mnt/iocage, since multiple datasets have that mountpoint set.

If only one of them has canmount=on, then that one will mount... ones with canmount=noauto won't, but can still be manually mounted (presumably elsewhere with an altroot, or with the primary offline).
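For example, a sketch of mounting the copy by hand during a recovery (the paths here are illustrative):

Code:
# Repoint the copy away from the contested /mnt/iocage path, then mount it:
zfs set mountpoint=/mnt/FreeNAS-Backup/iocage FreeNAS-Backup/iocage
zfs mount FreeNAS-Backup/iocage
# Or, with the backup pool exported, import it under an alternate root:
zpool import -R /mnt/recovery FreeNAS-Backup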
 

ninjai

Explorer
Joined
Apr 6, 2015
Messages
98
My last question is how I set this on datasets that do not yet exist on that pool, since they haven't been replicated yet.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
how I set this on datasets that do not yet exist on that pool, since they haven't been replicated yet.
Well, you don't. The mount conflict doesn't usually happen until the first reboot, so you just have to make sure to do it before that, but clearly after the datasets exist.
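So the timing looks something like this (a sketch, reusing the loop from earlier in the thread):

Code:
# After the first replication finishes, the datasets exist:
zfs list -r FreeNAS-Backup/iocage
# ...so lock them down before the next reboot:
zfs list -rH -o name FreeNAS-Backup/iocage | xargs -n1 zfs set canmount=noauto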
 