Replication "up to date" but empty folders

Dunuin

Contributor
Joined
Mar 7, 2013
Messages
110
Hi,

I have one SSD pool "/mnt/SSDpool" and one HDD pool "/mnt/HDDpool" and wanted to duplicate the whole SSD pool to the HDD pool as a backup.
I created a snapshot task with "recursive" enabled...
task.png

...and a replication task with "recursive" enabled too, and localhost as the remote host...
replication_task.png

...and snapshots are created...
snapshots.png

...and the replication runs and finishes with "up to date", but some datasets on the HDD are empty (like "HDDpool/SSD-Snapshots/SSDpool_backup/FAMP/nextcloud/data") even though there are files in the corresponding dataset on the SSD (like "SSDpool/FAMP/nextcloud/data"):
ssd_datset.png


In the console I see these messages:
Jan 19 03:00:08 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/iocage/jails/Backup/root) failed: No such file or directory
Jan 19 03:00:08 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/iocage/jails/ComiXed/root) failed: No such file or directory
Jan 19 03:00:08 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/iocage/jails/Emby/root) failed: No such file or directory
Jan 19 03:00:08 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/iocage/jails/FAMP/root) failed: No such file or directory
Jan 19 03:00:18 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/FAMP/nextcloud/config) failed: No such file or directory
Jan 19 03:00:18 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/FAMP/nextcloud/data) failed: No such file or directory
Jan 19 03:00:18 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/SSD-Share/HighSec/Documents) failed: No such file or directory
Jan 19 03:00:18 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/SSD-Share/HighSec/StuffX_SSD) failed: No such file or directory
Jan 19 03:00:18 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/iocage/jails/Torrent/root) failed: No such file or directory
Jan 19 03:00:18 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/iocage/jails/digiKam/root) failed: No such file or directory
Jan 19 03:00:18 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/iocage/jails/emby/root) failed: No such file or directory
Jan 19 03:00:18 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/iocage/jails/jDownloader/root) failed: No such file or directory
Jan 19 03:00:18 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/iocage/jails/mineos/root) failed: No such file or directory
Jan 19 03:00:18 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/iocage/releases/11.2-RELEASE/root) failed: No such file or directory
Jan 19 03:00:18 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/iocage/releases/11.3-RELEASE/root) failed: No such file or directory
Jan 19 03:00:28 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/SSD-Share/LowSec/Roms_SSD) failed: No such file or directory
Jan 19 03:00:28 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/iocage/download/11.2-RELEASE) failed: No such file or directory
Jan 19 03:00:38 BM-Homeserver collectd[4229]: statfs(/mnt/HDDpool/SSD-Snapshots/SSDpool_backup/Emby/cache) failed: No such file or directory
All those folders are empty on the HDD, but they shouldn't be.

1.) Why are those folders empty?

I found an old thread where a disabled "recursive" option on one side resulted in empty folders, but I enabled recursion on both sides.

2.) Is there a way to validate if the replication is complete and both datasets are the same?

3.) Is it normal that the replication creates snapshots of the "target" datasets?
I only created a task to snapshot "SSDpool", but that job also creates snapshots of the replicated backup datasets on the HDDpool.

I am using FreeNAS-11.2-U7.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You need to pay attention when replicating the iocage dataset locally, as its mountpoint is always /mnt/iocage and the replica will mount in conflict with the original... particularly after a reboot, where it's a crapshoot which one will mount. You can set the canmount property of the replica to noauto to work around this.
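A minimal sketch of that workaround from the shell, using the replica path that appears in your log messages (adjust to your layout):

# stop the replicated iocage dataset from auto-mounting over /mnt/iocage at boot
zfs set canmount=noauto HDDpool/SSD-Snapshots/SSDpool_backup/iocage
# verify the property took effect
zfs get canmount HDDpool/SSD-Snapshots/SSDpool_backup/iocage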

You can confirm replication is complete by comparing the size of the replicated datasets to the source (maybe not 100% the same size, as variations can occur for a few reasons, but it should be within a few percentage points).
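For example, something like this, assuming the pool names from your post:

# compare space accounting of source and replica side by side;
# small differences are normal, large ones point at missing data
zfs list -r -o name,used,referenced SSDpool
zfs list -r -o name,used,referenced HDDpool/SSD-Snapshots/SSDpool_backup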

Replication jobs use snapshots to replicate, so it's normal (and also enforced, as you can't set up replication for a dataset that isn't already set up for snapshots in the first place).
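You can see those snapshots on the target yourself, e.g. (target path assumed from your post):

# snapshots are carried over as part of the replication stream
zfs list -t snapshot -r HDDpool/SSD-Snapshots/SSDpool_backup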

Look under your root filesystem (/) to see if the data is mounted there, as failures to mount in the set location often result in a root mount.
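A quick way to check that, using the dataset names from your post:

# show where each replicated dataset should mount and whether it actually did
zfs get -r mountpoint,mounted HDDpool/SSD-Snapshots
# list everything ZFS currently has mounted; watch for paths outside /mnt
zfs mount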
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
@Dunuin did you ever figure this out? I'm seeing the exact same thing with the same statfs error, though that error is on a remote replication target.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Normally, the remote pool/datasets are supposed to be read-only.
Often, when a new dataset is freshly created on the remote pool during replication, the remote system may need to be restarted, or the pool detached and then imported, for FreeNAS to be able to mount the replicated datasets.
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
All datasets are read-only. Tried rebooting and exporting/importing the pool.

The dataset I'm looking for shows up in zfs list:
BackupPool/Mike/Files/Backup 276G 9.86T 231G /mnt/BackupPool/Mike/Files/Backup
but when I type "mount" it's missing from the list.

Also unable to browse to it via a terminal session.

If I export the pool and import it again via terminal, I get this:
cannot mount '/BackupPool/Mike/Files/Backup': failed to create mountpoint

Both servers running 11.3-3.1
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
MikeyG said:
> All datasets are read only. Tried rebooting and exporting/importing pool. [...]
> If I export the pool and import it again via terminal, I get this:
> cannot mount '/BackupPool/Mike/Files/Backup': failed to create mountpoint
The error is expected and the behavior is confirmed.
There are two possible ways to go about it:

1) The pool is set as read-only, so it needs to be made writable:

Check the status with:
zfs get readonly BackupPool

Change state with:
zfs set readonly=off BackupPool

2) Datasets are read-only:

Check the status with:

zfs get readonly BackupPool/dataset

Change state with:
zfs set readonly=off BackupPool/dataset

Then detach the pool and attach it again, or simply restart the system. I find restarting a cleaner and safer way to proceed, as I don't have to answer the question about destroying the content of the pool, and if you have an encrypted pool, you don't have to deal with the encryption keys.
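If you do the detach/attach from the terminal instead of the GUI, a rough sketch (pool name taken from the posts above; FreeNAS imports pools under the /mnt altroot, hence the -R flag):

# export (detach) the pool, then re-import it under /mnt so the
# now-writable datasets are mounted again
zpool export BackupPool
zpool import -R /mnt BackupPool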

Once the datasets have been mounted, you can make the pool read-only again, and unless you create new datasets via replication, subsequent replications will not require you to make changes to the readonly status.

This should be enough:

zfs set readonly=on BackupPool
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
Thanks for your input @Apollo. Somehow I managed to fix this, though I'm not sure exactly which combination of steps did it. Tried exporting/importing and restarting with read only both on and off; that didn't do it. However, with read only off, moving the dataset elsewhere, manually creating an identically named dataset in its place, deleting it, exporting via terminal, importing, exporting via GUI, then importing seems to have fixed it.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
You shouldn't have to mess around with snapshots and datasets as you did. I suspect a higher-level dataset wasn't set to be writable.
For example, your remote pool is as follows:

BackupPool/A/B/C

Then you replicate, and the new dataset structure is as follows:
BackupPool/A/B/C/D/E

If datasets A, B and C were never mounted because BackupPool was set to readonly=on, the pool is still able to receive snapshots.
But if you then make the new datasets D and E writable without changing the state of A, B and C, you will not be able to mount D and E until A, B and C have been made writable and mounted at least once.

I hope it makes sense.
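To make that concrete with the hypothetical A/B/C tree above (a sketch; substitute your real dataset names):

# check readonly and mount state down the whole branch; an unmounted,
# read-only parent blocks its children from mounting
zfs get -r readonly,mounted BackupPool/A
# make the branch writable (children inherit it), then mount everything
zfs set readonly=off BackupPool/A
zfs mount -a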
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
Yes, what you're saying makes total sense. In this case, there were multiple child datasets on the same level as the one in question, all of which were mounting correctly and were not set readonly. So while I had issues with BackupPool/Mike/Files/Backup, BackupPool/Mike/Files/Documents was fine, with BackupPool/Mike/Files set readonly=off.

As I can't verify exactly how it was set before I started messing with it, it's possible that the parent dataset was set readonly=on, which was messing things up. If this happens again in the future, though, I will pay closer attention to the readonly settings above the dataset in question.
 