Replication from TrueNAS SCALE to CORE, Best Practices?

yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
tl;dr: when replicating datasets between hosts of different ZFS versions, and wishing to preserve the entire state of the dataset, is it better to just send into a file rather than a remote dataset?

I have been trying for days now to replicate two dozen datasets from TrueNAS SCALE (pool upgraded to OpenZFS 2.1) to TrueNAS CORE (older pool, OpenZFS 2.0).

I am using the "full filesystem" option in replication, which should preserve the dataset properties. I have hit several problems that forced me to restart the replication (it may actually be resuming; hard to tell yet):
  1. On the remote system I'm not receiving as root, so the root user there has to `zfs allow` the properties I need on my root dataset. The problem is that I don't know ahead of time which properties will be required, so I don't know which ones to ask him to add until the replication fails (which appears to happen at the very end of sending a dataset, since the properties don't seem to be set on the remote until then).
  2. There is no `all` shorthand for delegating properties, and simply adding every property to the list doesn't work either: the command returns an error that some property is not allowed for that type of dataset, without telling you which property caused the problem. (If I had root, I would just run a shell loop over an array of all properties so that everything that could be set would be, but he isn't going to write a shell script himself. I guess I could write it for him...)
  3. As a non-root user I don't have permission to mount datasets on his system (nor do I need to), so I have been using the property override `canmount=off` to work around this; but I have to re-enter it every time I edit the replication form, because the form doesn't preserve what was entered previously.
  4. Some OpenZFS 2.1 properties are not supported on OpenZFS 2.0, so I have to add those to the excluded properties, and I hit the same form problem as #3 above.
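The one-property-at-a-time loop mentioned in #2 could be handed to the remote admin as a generated list of commands instead of a script he has to run blind. A minimal sketch, assuming a receiving user `backupuser` and root dataset `tank/recv` (both placeholders, as is the abbreviated property list): it prints one `zfs allow` per property, so an unsupported property only fails its own line and the error message identifies it.

```shell
#!/bin/sh
# Hypothetical sketch: emit one `zfs allow` command per property so the
# remote root user can paste them individually. DEST_USER, DS, and the
# property list are placeholders; in practice the full property list can
# come from `zfs get -H -o property all tank/recv` on the receiving host.
DEST_USER="backupuser"   # assumed receiving (non-root) user
DS="tank/recv"           # assumed receiving root dataset

PROPS="compression atime recordsize canmount mountpoint quota"

for prop in $PROPS; do
    # One delegation per property: a rejected property fails alone,
    # and the failing command names it.
    line="zfs allow $DEST_USER $prop $DS"
    echo "$line"
done

# Plus the base permissions needed to receive streams at all:
echo "zfs allow $DEST_USER receive,create,mount $DS"
```

Pasting the output one line at a time (or with `|| true` appended to each) means the properties a given dataset type rejects are simply skipped rather than aborting the whole delegation.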
ZFS forked long ago and keeps diverging: Oracle vs. FreeBSD (and maybe illumos?) vs. OpenZFS, and even different versions of the same branch. How does everyone handle this when replicating to remote systems? Is there a best-practices guide?

I have sent several TB of data so far, and these datasets still show up in a pending/resumable state on the remote, but not yet in the regular dataset list. Each time I fix a property exclude and start the replication again, it seems to start uploading another dataset. I am hoping it eventually finishes all the datasets that failed last time due to lack of permission to set a property, or an unsupported property.

But I am starting to wonder if I should just send these datasets into a remote file instead. I think that should preserve all properties, right? And I wouldn't have to mess with `canmount`, restoring its state later when I pull all the data back. Downsides to this approach:
  1. Not supported in the TrueNAS UI
  2. Would have to manually script the send of each dataset into a remote file
  3. Not able to resume a failed transfer (minor; the connection has been 100% stable so far)
  4. ... something I'm missing?
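The send-into-a-file approach from #2 could be scripted along these lines. This sketch only prints the pipelines rather than running them (the snapshot name, remote host, paths, and dataset list are all placeholders); the relevant point is that `zfs send -R` embeds all properties and snapshots in the replication stream itself, so nothing needs `zfs allow` on the receiving side:

```shell
#!/bin/sh
# Hypothetical sketch: print a send-to-file pipeline per dataset.
# SNAP, REMOTE, DIR, and the dataset list are placeholders.
SNAP="migrate"              # assumed snapshot taken for the move
REMOTE="admin@core-box"     # assumed remote CORE host
DIR="/mnt/pool/streams"     # assumed destination directory on the remote

for ds in tank/photos tank/music; do
    # Flatten the dataset path into a safe filename: tank/photos -> tank_photos
    f=$(echo "$ds" | tr '/' '_')
    # -R sends a full replication stream: all snapshots and all properties.
    cmd="zfs send -R ${ds}@${SNAP} | ssh ${REMOTE} 'cat > ${DIR}/${f}.zstream'"
    echo "$cmd"
done
```

Pulling the data back later would be the reverse pipeline, e.g. `ssh admin@core-box "cat /mnt/pool/streams/tank_photos.zstream" | zfs receive tank/photos`. The trade-off is exactly downside #3: a stream written to a plain file can't use the resumable-receive token mechanism, so an interrupted transfer starts over.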
The idea is to send all my datasets away, refactor my vdevs and pool, and then pull all the datasets back. I am increasingly doubtful that I will be able to pull everything back as-is without spending considerable time at the end fixing up all the missing and/or reset properties locally.

How does everyone else handle situations like this?