SOLVED Cannot override `canmount` property in Replication

yottabit · Contributor · Joined Apr 15, 2012 · Messages: 192
I have a non-root user on a remote system, with all the necessary ZFS permissions delegated on my dataset (create, destroy, etc.).

My user cannot mount datasets on the remote system, and that's fine.

The problem is that when I use the TrueNAS replication task, the task fails because it always tries to mount on the remote system. (The dataset is in fact replicated successfully.)

I specify `canmount=off` in the Properties Override dialog, but it's ignored.

Doing a `zfs get canmount` on the dataset at the remote still shows `canmount=on`.

Is this a known bug? Is it fixed in the nightlies?

Edit: hopefully it's a known bug that the Properties Override and Properties Exclude dialogs don't display correctly when opening a task for edit (they always show blank), but I think perhaps I need to be granted the `canmount` ZFS permission on the far end to set this property. Trying that now.

Edit 2: yep, needed the `canmount` property assigned to my user on the remote. Success!
 

yottabit · Contributor · Joined Apr 15, 2012 · Messages: 192
In the end, this continued to cause more and more problems, since the remote was running an earlier version of OpenZFS and I didn't have root access. Instead, I piped each dataset via `zfs send` through pixz and into a compressed file, then used scp to copy that file to the remote server for safekeeping. If I hadn't had enough space on a temporary drive, I could have piped the compressed stream directly over SSH into a remote file instead of using scp.
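For anyone wanting to do the same, the pipeline might look roughly like this (dataset names, snapshot names, and paths are all placeholders, not the actual ones used):

```shell
# Snapshot and serialize the dataset, compressing with pixz
# (tank/mydata and the snapshot name are illustrative).
zfs snapshot -r tank/mydata@backup1
zfs send -R tank/mydata@backup1 | pixz > /tmp/mydata@backup1.zfs.xz

# Copy the compressed stream file to the remote for safekeeping
scp /tmp/mydata@backup1.zfs.xz user@remote:/backups/

# Or, with no local scratch space, pipe straight over SSH into a file
zfs send -R tank/mydata@backup1 | pixz | \
  ssh user@remote 'cat > /backups/mydata@backup1.zfs.xz'

# Later, restore by decompressing and receiving back on the primary
ssh user@remote 'cat /backups/mydata@backup1.zfs.xz' | \
  pixz -d | zfs receive tank/mydata-restored
```

Because the remote only ever stores an opaque compressed file, no ZFS feature compatibility or delegated permissions are needed on the far end at all.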

First I tried rsync. While it worked fine when the whole-file flag was specified, it was dreadfully slow resuming with the delta-transfer algorithm enabled (the default for remote operation) on these very large files (typically 500 GB to 1.2 TB each). My connection is 500M/500M and the remote is 1G/1G, and we were able to achieve 400+ Mbps transfers. So as long as the connection is decently stable, plain scp was 4-6x faster than rsync with delta transfer, even though the whole file had to be re-sent if the connection was interrupted.
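The two rsync modes compared above look like this (the file path is illustrative):

```shell
# Default remote behavior: the delta-transfer algorithm checksums
# blocks on both ends to resume/update -- painfully slow on single
# files in the 500 GB - 1 TB range.
rsync --partial --progress user@remote:/backups/mydata@backup1.zfs.xz /restore/

# --whole-file (-W) skips the delta algorithm and just streams the
# file, much faster on a fast link, but an interrupted transfer
# can't take advantage of delta-based resuming.
rsync --whole-file --progress user@remote:/backups/mydata@backup1.zfs.xz /restore/
```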
 

sretalla · Powered by Neutrality · Moderator · Joined Jan 1, 2016 · Messages: 9,703
Could it be that you're using a value that's not valid? I know that it can be noauto and on.... not sure if off is a valid option.
 

yottabit · Contributor · Joined Apr 15, 2012 · Messages: 192
> Could it be that you're using a value that's not valid? I know that it can be noauto and on.... not sure if off is a valid option.
Sure, but the problem was three-fold.

First, there is no way to grant "all" ZFS permissions to a user, and even when enumerating every property individually there are some that cannot be granted to a regular user.
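Since there is no "all" keyword, the delegation on the remote ends up being an enumerated list along these lines (user, permission set, and dataset are placeholders, a sketch rather than the exact command used):

```shell
# Delegate replication-related permissions one by one to a non-root
# user on the receiving system; property names like canmount can be
# delegated as permissions too.
zfs allow backupuser create,destroy,mount,receive,canmount,mountpoint,compression,recordsize tank/backups

# Show what has been delegated on the dataset
zfs allow tank/backups
```

Even then, some properties simply cannot be delegated to a regular user, which is what ultimately made the send-to-file approach more practical.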

Second, I wanted to preserve the properties of the existing datasets as much as possible, which means not excluding properties just because they're not supported by the remote.

Third, I didn't want to haphazardly change properties on the receiver, because then I'd have to track all of those changes and set the properties back again after receiving the data onto the primary host. If it were a dozen datasets I would deal with it, but I have a few dozen, plus the incredibly nested ix-applications dataset... I could have scripted it, but it was a lot easier just sending into a file and receiving back from a file, especially since I didn't need to access any of the data while in the intermediate state.
 