Empty or non-existent mountpoint after ZFS replication

asmodeus

Explorer
Joined
Jul 26, 2016
Messages
70
Hi,

I've replicated a few jails via a replication task from a 9.10-U6 box to another box, which was then upgraded to 11.2-U3. The replication task still runs without errors after the upgrade, but something weird is going on with the mountpoints. Let's take the dataset plexmediaserver_1 as an example:

- It's listed as a dataset:
root@vault:/ # zfs list|grep plex
biggertank/jails/plexmediaserver_1 5.99G 26.2T 3.90G /mnt/biggertank/jails/plexmediaserver_1


- It is mounted by zfs:
root@vault:/mnt/biggertank/jails # zfs mount | grep plex
biggertank/jails/plexmediaserver_1 /mnt/biggertank/jails/plexmediaserver_1


- mount confirms:
root@vault:/ # mount|grep plex
biggertank/jails/plexmediaserver_1 on /mnt/biggertank/jails/plexmediaserver_1 (zfs, local, nfsv4acls)


- However, the mountpoint does not exist at all:
root@vault:/ # ls -lad /mnt/biggertank/jails/plexmediaserver_1
ls: /mnt/biggertank/jails/plexmediaserver_1: No such file or directory


- Unmounting and remounting fixes this, but only until the next replication runs:
zfs umount biggertank/jails/plexmediaserver_1 && zfs mount biggertank/jails/plexmediaserver_1

Any ideas what's going on? I want to make sure I have a working backup before upgrading the second box to 11.2...
 

dlavigne

Guest
Were you able to resolve this?

Also, it is recommended to do a phased update from that old of a version (e.g. to 11.0, then 11.1, then 11.2). You'll also want to make sure you have a backup of your config (test it at each phase before moving to the next one) and be prepared to replace the boot device if needed (especially if it is USB), as applying that many updates is intensive on the boot device.
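If you also want a copy from the shell, on these versions the config database should live at /data/freenas-v1.db (verify that path on your box before relying on it):

cp /data/freenas-v1.db /mnt/biggertank/freenas-config-backup.db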
 

asmodeus

Explorer
Joined
Jul 26, 2016
Messages
70
Were you able to resolve this?
Unfortunately not. I can fix the situation by deleting the empty directories and then doing the umount/mount dance again, but it breaks again after the next replication...

On phased upgrading: I already upgraded, but I have not upgraded my pool. Can I go back to a 9.10 install, import the 9.10 pre-upgrade configuration, and then do phased upgrades?
 

dlavigne

Guest
Yes, that should work. Just be careful not to apply a config newer than the version you go back to.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
@asmodeus, not having a mounted dataset is usually not an issue. Most likely your pool or datasets have readonly status, causing ZFS to not mount the dataset. This will result in the inability to list the content of the dataset itself.
Most likely, the replication is causing the dataset to be set as readonly.
If you have a conflicting dataset and folder name (same name), it would be best to do a zfs rename of the unmountable dataset. That would clear things up, if that is the issue of course.
 

asmodeus

Explorer
Joined
Jul 26, 2016
Messages
70
@asmodeus, not having a mounted dataset is usually not an issue. Most likely your pool or datasets have readonly status, causing ZFS to not mount the dataset. This will result in the inability to list the content of the dataset itself.
Most likely, the replication is causing the dataset to be set as readonly.
If you have a conflicting dataset and folder name (same name), it would be best to do a zfs rename of the unmountable dataset. That would clear things up, if that is the issue of course.

That's a good idea. I looked at zfs get all | grep readonly and none of my datasets have readonly set.
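For reference, this is roughly the check I ran; the recursive form below is my own variation, so adjust the pool name as needed:

zfs get -r -o name,value readonly biggertank | grep -v off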
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Do you have any snapshots for the corresponding dataset?
 

asmodeus

Explorer
Joined
Jul 26, 2016
Messages
70
There are lots of snapshots for these datasets, yes. They are being replicated over from the 9.10 box.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Having snapshots for the corresponding dataset is very good. It proves the data is there.
I still believe your pool is set as readonly, preventing the dataset from being mounted.
If you suspect the dataset is conflicting with a folder name, then you can rename the dataset via the CLI to something unique and see if it mounts after a reboot.
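For example, something along these lines; the new name is just a hypothetical placeholder:

zfs rename biggertank/jails/plexmediaserver_1 biggertank/jails/pms1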
 

asmodeus

Explorer
Joined
Jul 26, 2016
Messages
70
Yes, I'm quite confident the data is there.
What is the authoritative source on whether the pool is readonly? zfs get all | grep readonly | grep -v off says it is not readonly, and the UI agrees with it.
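One wrinkle: readonly also exists as a pool-level property, set at import time and separate from the dataset property. Assuming I'm reading the man pages right, both can be checked like this:

zpool get readonly biggertank
zfs get readonly biggertank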
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
You can run the following:

zfs set readonly=off biggertank

Reboot and see if this works.
As you reboot, try to see what happens when the pool gets mounted.
If this is not successful, the last thing that comes to mind may be related to the path name length and the length of the file names in the dataset.
I have had such an issue in the past where the entire path + filename would prevent mounting of the dataset.

If this is the case, rename your dataset in such a way that it has a very short path.
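To see the actual error without a full reboot, you could also unmount the dataset and try mounting everything by hand, watching for complaints. A rough sketch:

zfs umount biggertank/jails/plexmediaserver_1
zfs mount -a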
 

asmodeus

Explorer
Joined
Jul 26, 2016
Messages
70
I redid the setup with a backup config, going from 9.10 to 11.1, then 11.2. I also rebooted after zfs set readonly=off biggertank. Unfortunately the problem persists. I'm really pulling my hair out with this one. If the path length were an issue, would this not also prevent manually mounting and interacting with the dataset? For the six-character difference, I might try renaming the pool from 'biggertank' to 'tank'.
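My understanding is that the rename is done by exporting the pool and re-importing it under a new name; on FreeNAS it is probably safer to detach and re-import through the UI so the middleware stays in sync, but the raw CLI sequence would be something like:

zpool export biggertank
zpool import biggertank tank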
 

asmodeus

Explorer
Joined
Jul 26, 2016
Messages
70
Renamed the pool to tank; that might have fixed it. Waiting to confirm after a few more replication tasks have run.
 

asmodeus

Explorer
Joined
Jul 26, 2016
Messages
70
Renaming the pool was straightforward, but unfortunately it did not solve the problem after all. Sorry about the confusion! I just checked again with a few more snapshots replicated, and sure enough the mount points are empty again. ¯\_(ツ)_/¯
 

asmodeus

Explorer
Joined
Jul 26, 2016
Messages
70
I'm also below the max mount point path length. Per zfs list | awk '/tank/ { print $5 }' | wc -L, my mountpoints are at 58 characters max; the documented limit is 88 bytes.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
I'm also below the max mount point path length. Per zfs list | awk '/tank/ { print $5 }' | wc -L, my mountpoints are at 58 characters max; the documented limit is 88 bytes.
From recollection, the mountpoint path is only one part of it.
Once the dataset is mounted, each folder you create in the dataset and all the files within add to the maximum path name.
I had a dataset used to back up the hard drive of my Windows PC. There was one program I had recently installed which had an extensive folder structure and included long filenames. Once those files had been replicated, the dataset would become unmountable.

See my earlier post for this exact issue:

https://www.ixsystems.com/community/threads/cannot-mount-file-name-too-long.38470/
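If you want to check for that, something like this should surface the longest full paths under the dataset while it is mounted (untested sketch, adjust the path):

find /mnt/tank/jails/plexmediaserver_1 | awk '{ print length($0), $0 }' | sort -rn | head -5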
 

asmodeus

Explorer
Joined
Jul 26, 2016
Messages
70
This has just become even more annoying, as it now affects a previously unaffected dataset (my user's home), preventing remote SSH logins. I will file a bug.
 

AbsolutIggy

Dabbler
Joined
Feb 29, 2020
Messages
31
What happened with this issue, @asmodeus? I'm experiencing the same problem.

I don't quite understand the readonly issue though - it seems all my datasets are set to readonly. All of them are replicated from another system; I have not created any on the affected box. However, only one of the datasets gets unmounted when replication starts.

It could be caused by long file names; there are very long paths somewhere in that dataset. But the problem described in that other post occurs when mounting datasets - mine are getting unmounted...
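For what it's worth, this is the rough check I run after each replication to spot which datasets dropped off; the pool name is assumed, adjust to yours:

zfs list -H -o name,mounted,mountpoint -r tank | awk '$2 == "no"'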
 