FreeNAS & Sanoid - Having Difficulty

Joined
Mar 27, 2019
Messages
2
I am trying to get Sanoid/Syncoid to work with FreeNAS. I seem to be running into trouble at every step.

I have a Proxmox server. I'm running Sanoid directly on the host OS to take and manage snapshots. I am trying to send these snapshots, using Syncoid, to my FreeNAS box, which I will be using for backup. The FreeNAS box is an HP MicroServer Gen8 with 12 GB of RAM, installed fresh with 11.2 a few weeks ago.

I want to also run Sanoid on FreeNAS (to manage snapshots). So I made a jail, and installed everything.

ZFS in the Jail

The first big issue I had was figuring out how to give the jail access to the ZFS pool. Simply adding a Mount Point from the host doesn't work. Instead, I had to:
  • Check allow_mount in "Jail Properties"
  • Check allow_mount_zfs in "Jail Properties"
  • Set enforce_statfs to 0 in "Jail Properties"
  • Check jail_zfs in "Custom Properties"
  • Enter a dataset I created into jail_zfs_dataset on the "Custom Properties" page
  • (I also set jail_zfs_mountpoint, but nothing gets mounted where I specified inside the jail; the dataset ends up mounted at /mnt/pool_name/dataset_name instead.)
After figuring out all that, I could see the zpool in the jail.
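For anyone scripting this instead of clicking through the UI, the same settings map to iocage properties. This is only a sketch from the FreeNAS shell: the jail name sanoid_jail and dataset backup_array/sanoid_data are placeholders, and iocage may expect the dataset path in a slightly different form on your version.

```shell
# Placeholders: sanoid_jail and backup_array/sanoid_data -- substitute your own.
iocage set allow_mount=1 sanoid_jail
iocage set allow_mount_zfs=1 sanoid_jail
iocage set enforce_statfs=0 sanoid_jail
iocage set jail_zfs=1 sanoid_jail
iocage set jail_zfs_dataset=backup_array/sanoid_data sanoid_jail
iocage restart sanoid_jail
```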

Sending Snapshots

Next, I tried to use Syncoid (a fancy wrapper for zfs send/receive). I'm running it on the Proxmox host, sending to the FreeNAS box. I want to copy everything from my primary storage pool (on Proxmox) to this dataset on FreeNAS. After playing with some options, I came up with this command:

Code:
# /opt/sanoid/syncoid --recursive primary_array root@192.168.1.101:backup_array/primary_array_backup


And it worked! For a bit. A large amount of data was transferred, but eventually it started skipping snapshots with the error:

Code:
cannot mount 'backup_pool/storage_pool_backup/dir1/dir2/dir3/dir4': File name too long


The actual command that was run (edited out irrelevant info like SSH port & key):

Code:
# /sbin/zfs send -i 'primary_array/dataset@auto-20170530.0500-1y' 'primary_array/dataset-auto-20170530.0500-1y-clone'@'auto-20180109.0500-1y' | /usr/bin/pv -s 4096 | /usr/bin/lzop  | /usr/bin/mbuffer  -q -s 128k -m 16M 2>/dev/null | /usr/bin/ssh root@192.168.1.101 ' /usr/bin/mbuffer  -q -s 128k -m 16M 2>/dev/null | /usr/bin/lzop -dfc |  /sbin/zfs receive -s -F '"'"'backup_array/primary_array_backup/dataset-auto-20170530.0500-1y-clone


And cleaned up even more (removing the compression, the /usr/bin prefixes, mbuffer, and pv):

Code:
# zfs send -i 'primary_array/dataset@auto-20170530.0500-1y' 'primary_array/dataset-auto-20170530.0500-1y-clone'@'auto-20180109.0500-1y' | ssh root@192.168.1.101 zfs receive -s -F 'backup_array/primary_array_backup/dataset-auto-20170530.0500-1y-clone'
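Not a confirmed fix, but one thing that might be worth testing: zfs receive has a -u flag that skips mounting the received filesystem entirely, which would sidestep any mount-point problem while still landing the data on disk. A sketch against the same cleaned-up command (same hosts and dataset names as above):

```shell
# Sketch: same incremental send, but receive with -u so nothing gets mounted.
# -u = do not mount the received filesystem (standard zfs receive flag).
zfs send -i 'primary_array/dataset@auto-20170530.0500-1y' \
    'primary_array/dataset-auto-20170530.0500-1y-clone@auto-20180109.0500-1y' \
  | ssh root@192.168.1.101 \
      zfs receive -u -s -F 'backup_array/primary_array_backup/dataset-auto-20170530.0500-1y-clone'
```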


Questions

Does anybody have any ideas as to what the problem is? Any insight is appreciated.

There seems to be a file name limit somewhere?? (On a device/file system made for storage?) I'm not using particularly long names; the full path in the actual error measures only 57 characters.
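One hedged guess at where such a limit could come from: FreeBSD's statfs structure caps mount point names at MNAMELEN bytes (88 on the FreeBSD 11.x base that FreeNAS 11.2 is built on), and when receiving inside a jail, the dataset's mountpoint is prefixed with the jail's root path as seen from the host. A quick sketch of the arithmetic, with entirely made-up placeholder paths, showing how the combined path can exceed 88 characters even though each individual name is short:

```shell
# All paths here are made-up placeholders to illustrate the arithmetic.
jail_root="/mnt/backup_array/iocage/jails/sanoid/root"             # host path of the jail root (assumption)
ds_mount="/backup_array/primary_array_backup/dir1/dir2/dir3/dir4"  # dataset mountpoint inside the jail

full="${jail_root}${ds_mount}"   # what the host's mount table would have to record
len=${#full}
echo "$full -> $len characters"

# FreeBSD 11.x caps mountpoint names at MNAMELEN = 88 bytes in struct statfs;
# mount(2) fails with ENAMETOOLONG ("File name too long") beyond that.
if [ "$len" -gt 88 ]; then
    echo "exceeds MNAMELEN (88): mount would fail"
fi
```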

I know I'm using a third-party tool, and that introduces variables, but I don't think these issues are caused by the tool itself.

I can provide additional info if I left something out. Thanks again.
 
I did not figure this out.

Many of the zfs send/receives fail with the "File name too long" error. My best guess is that running zfs receive from inside a jail causes an issue, possibly with mount points, but I haven't been able to narrow it down further.

Glad to try things out or give more info.
 