replication errors cannot mount X failed to create mountpoint: Read-only file system

Tha_Reaper

Dabbler
Joined
Dec 2, 2023
Messages
10
I run a daily replication task of my apps pool to my storage pool. Everything went great for a couple of weeks, but now I get several errors without having changed any settings:

Code:
cannot mount '/mnt/Klemmers/appbackup2/releases/homarr/volumes/pvc-447bab69-9a9a-40a9-9291-5b65200c9cf3': failed to create mountpoint: Read-only file system
cannot mount '/mnt/Klemmers/appbackup2/releases/homarr/volumes/pvc-4b9908af-153a-4edc-8d07-ac5d73c09035': failed to create mountpoint: Read-only file system
cannot mount '/mnt/Klemmers/appbackup2/releases/homarr/volumes/pvc-e1cef352-bbaa-4547-b312-ffe5f928f1bf': failed to create mountpoint: Read-only file system
cannot mount '/mnt/Klemmers/appbackup2/releases/radarr/volumes/pvc-bfe773f1-bc89-49b6-afa4-c1b28fefe22f': failed to create mountpoint: Read-only file system
cannot mount '/mnt/Klemmers/appbackup2/releases/sonarr/volumes/pvc-3c7f4e01-d9d9-43f1-b999-737f6dac908f': failed to create mountpoint: Read-only file system
cannot mount '/mnt/Klemmers/appbackup2/releases/jellyfin/volumes/pvc-7c190c8f-d3cf-401f-828f-12b3f8ccc1ab': failed to create mountpoint: Read-only file system
cannot mount '/mnt/Klemmers/appbackup2/releases/jellyfin/volumes/pvc-90808052-f7c0-4e22-b42a-8553a8514691': failed to create mountpoint: Read-only file system
cannot mount '/mnt/Klemmers/appbackup2/releases/prowlarr/volumes/pvc-2311f692-6783-46ba-b8a1-6e8a9f89faeb': failed to create mountpoint: Read-only file system
cannot mount '/mnt/Klemmers/appbackup2/releases/jellyseerr/volumes/pvc-695216d9-9079-4ec0-8804-9702e8b64043': failed to create mountpoint: Read-only file system
cannot mount '/mnt/Klemmers/appbackup2/releases/qbittorrent/volumes/pvc-182bb1b5-8b91-49e3-9a51-190abd067b33': failed to create mountpoint: Read-only file system
cannot mount '/mnt/Klemmers/appbackup2/releases/flaresolverr/volumes/pvc-98f1fffa-4ab4-42a7-8722-54b443a1573b': failed to create mountpoint: Read-only file system
cannot mount '/mnt/Klemmers/appbackup2/releases/audiobookshelf/volumes/pvc-f0f24c6a-745d-4cf5-86d0-8fc1d529a2a6': failed to create mountpoint: Read-only file system


I tried creating a new dataset to send the replication to, but it immediately gives the same errors. I also tried stopping all the apps and then replicating, but that doesn't work either.
I'm a bit worried that my replication is incomplete, so that if my apps SSD decides to crash, I may lose everything.

I run TrueNAS-SCALE-23.10.2
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
As stated in your logs, you are dealing with a pool or dataset that is set as read-only, which prevents the creation of a mount point for the dataset. This doesn't mean your backup is corrupted in any way.
See if the main pool (Klemmers) is set as read-only,
or any of the descendant datasets: appbackup2/releases/homarr/volumes/...
Once you find which one is set as read-only, you can change its state using the GUI or the CLI, and that should take care of your issue.
 

Tha_Reaper

Dabbler
Joined
Dec 2, 2023
Messages
10
As stated in your logs, you are dealing with a pool or dataset that is set as read-only, which prevents the creation of a mount point for the dataset. This doesn't mean your backup is corrupted in any way.
See if the main pool (Klemmers) is set as read-only,
or any of the descendant datasets: appbackup2/releases/homarr/volumes/...
Once you find which one is set as read-only, you can change its state using the GUI or the CLI, and that should take care of your issue.
Thank you for the reply. I don't understand, as I just created the dataset with all write permissions, and the replication literally just created this folder, yet complains that it's read-only.
When I browse in the GUI to the affected folders in the datasets view, I am greeted with the following error:

[ENOENT] Path /mnt/Klemmers/appbackup2/releases/homarr/volumes/pvc-4b9908af-153a-4edc-8d07-ac5d73c09035 not found

and I can't change the permissions there.
And when I go to the folder in the CLI I get this:

Code:
admin@truenas[/mnt/Klemmers/appbackup2/releases/homarr]$ ls -l
total 14
drwxr-xr-x 26 root root 26 Feb 27 05:04 charts
drwxr-xr-x  3 root root  3 Nov  4 17:21 volumes


Those permissions look correct to me, or am I mistaken?
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Permissions are one thing and the dataset's read-only setting is another.
The read-only mountpoint issue is ZFS-driven, so you won't see it with an "ls" command.
I am not a SCALE user, so I am not familiar with the environment.
However, you should be able to tell if a pool or dataset is read-only by running the following command:
zfs get -r readonly Name_of_pool
See the related documentation:
https://openzfs.github.io/openzfs-docs/man/master/8/zfs-get.8.html

Either way, if the pool or any descendant dataset has the readonly property set, the command will return a VALUE of "on".
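For reference, a minimal sketch of what to look for in that output. The dataset names below are mocked for illustration; on a live system you would pipe `zfs get -r readonly` straight into the filter:

```shell
# Keep only the datasets whose readonly VALUE is "on"
# (zfs get prints columns: NAME  PROPERTY  VALUE  SOURCE).
filter_readonly() {
  awk '$2 == "readonly" && $3 == "on" { print $1 }'
}

# Demonstrated on mocked `zfs get -r readonly Klemmers` output;
# a real run would be:  zfs get -r readonly Klemmers | filter_readonly
filter_readonly <<'EOF'
NAME                          PROPERTY  VALUE  SOURCE
Klemmers                      readonly  off    default
Klemmers/appbackup2           readonly  on     local
Klemmers/appbackup2/releases  readonly  on     inherited
EOF
# → prints Klemmers/appbackup2 and Klemmers/appbackup2/releases
```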
 

Tha_Reaper

Dabbler
Joined
Dec 2, 2023
Messages
10
That explains a lot. All the PVC datasets that were created were read-only, and the problematic ones were set to temporary on top of that.
For the life of me I could not remove the readonly property from the command line. I tried 'sudo zfs set readonly=off Klemmers/appbackup2/releases/sonarr/volumes/pvc-3c7f4e01-d9d9-43f1-b999-737f6dac908f', and the same for every dataset that gave me errors, but that didn't change a thing.
I could, however, fix it by editing the replication task to ignore the "destination dataset read-only policy" and creating a new destination dataset with the read-only property set to OFF. As that dataset will not be mounted, I didn't really see an issue with this.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
'sudo zfs set readonly=off Klemmers/appbackup2/releases/sonarr/volumes/pvc-3c7f4e01-d9d9-43f1-b999-737f6dac908f'
As far as I understand it, you don't need to set readonly=off.
The issue usually comes from the process not being able to update the file system for the dataset that needs to be mounted.

Do the following for each of the datasets:

zfs get readonly Klemmers
zfs get readonly Klemmers/appbackup2
zfs get readonly Klemmers/appbackup2/releases
zfs get readonly Klemmers/appbackup2/releases/sonarr
zfs get readonly Klemmers/appbackup2/releases/sonarr/volumes
If any one of them returns a VALUE of "on", you will need to change it.
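Those per-level checks can also be generated instead of typed by hand. A small helper (hypothetical, not a TrueNAS command) that walks from a dataset up to the pool:

```shell
# Print a dataset and each of its ancestors, one per line,
# so `zfs get readonly` can be run at every level.
ancestors() {
  ds="$1"
  while [ -n "$ds" ]; do
    echo "$ds"
    case "$ds" in
      */*) ds="${ds%/*}" ;;   # strip the last path component
      *)   ds="" ;;           # reached the pool itself; stop
    esac
  done
}

# On a live system:
#   for d in $(ancestors Klemmers/appbackup2/releases/sonarr/volumes); do
#     zfs get -H -o name,value readonly "$d"
#   done
ancestors Klemmers/appbackup2/releases/sonarr/volumes
```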
 

invar

Dabbler
Joined
Jan 23, 2021
Messages
36
I'm having exactly the same problem as the OP. I had a replication task to replicate my entire pool (source: TrueNAS SCALE, one pool only, which also contains ix-applications for the apps; target: a locally connected TrueNAS CORE server) and everything was working fine until recently... I'm not sure when it happened. But now I get the same errors as the OP.

I found this article:

It shows that my replication task should be configured to exclude certain child datasets, so I reconfigured my task to exclude the following: docker, k3s, catalog, catalogs. That reduced the number of errors, but just like the OP, I still get errors for the "releases" subdirectories. Namely, here's what I get:

Code:
cannot mount '/mnt/KTCloud2/backup_of_ix-applications/releases/docker/volumes/pvc-a9f4ad01-af49-42ac-98dc-00501c50a7b2': failed to create mountpoint: Read-only file system
cannot mount '/mnt/KTCloud2/backup_of_ix-applications/releases/ktmariadb/volumes/pvc-38ee8015-0070-4f4b-93f9-128c4568c952': failed to create mountpoint: Read-only file system
cannot mount '/mnt/KTCloud2/backup_of_ix-applications/releases/ktmariadb/volumes/pvc-ada70191-1dfc-4ed3-b32c-1bfb0d3ecb76': failed to create mountpoint: Read-only file system
cannot mount '/mnt/KTCloud2/backup_of_ix-applications/releases/dockerregistry/volumes/pvc-ed3a3087-fd89-4642-a864-a2784f0b1d10': failed to create mountpoint: Read-only file system


Which obviously pertains to PVCs.

I even tried altering the replication task to target a local dataset instead (eliminating the TrueNAS CORE server from the equation) and the same errors come up.

There's something in that article stating this:

Developer notes

PVC mountpoints on replication

In some/all cases, PVC mountpoints are not correctly set to legacy after replication. HeavyScript has added scripting to fix this issue; however, it does not seem to be a priority for iX Systems to fix upstream. To fix this issue manually, run:
Code:
zfs set mountpoint=legacy "$(zfs list -t filesystem -r "$(cli -c 'app kubernetes config' | grep -E "pool\s\|" | awk -F '|' '{print $3}' | tr -d " \t\n\r")" -o name -H | grep "volumes/pvc")"


I wonder if I should do that... or if it applies.
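For what it's worth, here is how I read that one-liner (my interpretation; the mocked table row below is simplified, and the real `cli` output may be formatted differently):

```shell
# The quoted command has three parts:
#  1. cli -c 'app kubernetes config' | grep/awk/tr  -> name of the apps pool
#  2. zfs list -t filesystem -r <pool> -o name -H | grep "volumes/pvc"
#                                                  -> every PVC dataset in it
#  3. zfs set mountpoint=legacy <datasets>  -> stop ZFS path-mounting them

# Part 1's extraction, demonstrated on a mocked config-table row
# (assuming the pool row looks roughly like "| pool | tank |"):
pool=$(printf '| pool | tank |\n' \
  | grep -E "pool\s\|" | awk -F '|' '{print $3}' | tr -d " \t\n\r")
echo "$pool"   # the pool name with all surrounding whitespace stripped
```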
 

Dave41

Dabbler
Joined
Sep 20, 2022
Messages
17
I am having the same issue with a replication task that used to work until a recent TrueNAS upgrade to 23.10.2. My destination pool's Read-only Policy had always been set to "SET" (the other choices are REQUIRE and IGNORE). The offending error that has cropped up since the upgrade is:
"cannot mount '/mnt/Odd_Month/ix-applications/k3s/kubelet': failed to create mountpoint: Read-only file system"

Has the upgrade changed something? Any suggestions would be appreciated.
 

invar

Dabbler
Joined
Jan 23, 2021
Messages
36
I am having the same issue with a replication task that used to work until a recent TrueNAS upgrade to 23.10.2. My destination pool's Read-only Policy had always been set to "SET" (the other choices are REQUIRE and IGNORE). The offending error that has cropped up since the upgrade is:
"cannot mount '/mnt/Odd_Month/ix-applications/k3s/kubelet': failed to create mountpoint: Read-only file system"

Has the upgrade changed something? Any suggestions would be appreciated.
You probably have your replication task set up like I did, without any exclusions. Follow the link I posted above regarding how to back things up... Apparently, simply replicating the entire pool, or even the entire ix-applications dataset, is NOT the way to do it.
 

invar

Dabbler
Joined
Jan 23, 2021
Messages
36
Okay, I dove into this some more. The developer notes above definitely have something to do with it:

Developer notes

PVC mountpoints on replication

In some/all cases, PVC mountpoints are not correctly set to legacy after replication. HeavyScript has added scripting to fix this issue; however, it does not seem to be a priority for iX Systems to fix upstream. To fix this issue manually, run:
Code:
zfs set mountpoint=legacy "$(zfs list -t filesystem -r "$(cli -c 'app kubernetes config' | grep -E "pool\s\|" | awk -F '|' '{print $3}' | tr -d " \t\n\r")" -o name -H | grep "volumes/pvc")"


After a somewhat failed replication with the errors I noted above, I went into my *target* system's shell. On the *target* system, the command above wouldn't run properly because the target is TrueNAS CORE and there is no Kubernetes app pool there. Heck, I'm not even sure that command would run properly even on a TrueNAS SCALE system...

Instead, I ran:

Code:
 zfs list -t filesystem | grep "volumes/pvc" | awk '{print $1}'

KTCloud/backup_of_ktcloud2/ix-applications/releases/docker/volumes/pvc-a9f4ad01-af49-42ac-98dc-00501c50a7b2
KTCloud/backup_of_ktcloud2/ix-applications/releases/dockerregistry/volumes/pvc-ed3a3087-fd89-4642-a864-a2784f0b1d10
KTCloud/backup_of_ktcloud2/ix-applications/releases/ktmariadb/volumes/pvc-38ee8015-0070-4f4b-93f9-128c4568c952
KTCloud/backup_of_ktcloud2/ix-applications/releases/ktmariadb/volumes/pvc-ada70191-1dfc-4ed3-b32c-1bfb0d3ecb76

And one by one, I set the target datasets to a legacy mountpoint:
Code:
zfs set mountpoint=legacy KTCloud/backup_of_ktcloud2/ix-applications/releases/docker/volumes/pvc-a9f4ad01-af49-42ac-98dc-00501c50a7b2
zfs set mountpoint=legacy KTCloud/backup_of_ktcloud2/ix-applications/releases/dockerregistry/volumes/pvc-ed3a3087-fd89-4642-a864-a2784f0b1d10
zfs set mountpoint=legacy KTCloud/backup_of_ktcloud2/ix-applications/releases/ktmariadb/volumes/pvc-38ee8015-0070-4f4b-93f9-128c4568c952
zfs set mountpoint=legacy KTCloud/backup_of_ktcloud2/ix-applications/releases/ktmariadb/volumes/pvc-ada70191-1dfc-4ed3-b32c-1bfb0d3ecb76

I reran the replication task and it finally completed correctly.
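In case it helps anyone following along, here is a quick way to confirm nothing was missed (the awk filter is a sketch; a real run would feed it `zfs list -H` output, which is tab-separated):

```shell
# List any PVC dataset that still has a path-style mountpoint.
# On a live system:
#   zfs list -t filesystem -r KTCloud -o name,mountpoint -H \
#     | grep "volumes/pvc" | awk -F '\t' '$2 != "legacy" { print $1 }'
# An empty result means every PVC dataset is now legacy-mounted.

# The filter, demonstrated on mocked tab-separated `zfs list -H` output:
printf 'a/volumes/pvc-1\tlegacy\nb/volumes/pvc-2\t/mnt/b\n' \
  | awk -F '\t' '$2 != "legacy" { print $1 }'
# → b/volumes/pvc-2 (the one still needing the fix)
```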
 

Dave41

Dabbler
Joined
Sep 20, 2022
Messages
17
Maybe we should be filing a bug report. There seems to be a history, going back to at least Bluefin, of iX Systems lacking interest in resolving replication issues related to app containers; maybe that is why so many users are choosing Proxmox or other solutions.
 