I am tidying up my system in order to migrate to a different pool, and to speed things up I am doing manual replication over SSH.
I have noticed recently that I am getting some snapshot-related errors when attempting replication via the CLI.
What I have done in the past, and it used to work like a charm, is to run normal periodic replications of the entire volume recursively, as well as more granular recursive snapshots of individual datasets within the same volume.
For entire-volume replication over the CLI, I would create a manual recursive snapshot of the volume and then run the CLI command to replicate it recursively to another volume, either locally or over the network.
Now I am experiencing a major issue with replication.
The command I would run is as follows:
zfs send -vvR WD-RAIDZ2@manual-20170109 | ssh -i /data/ssh/replication root@192.168.1.116 zfs receive -vv Seagate-16TB-unencrypted/WD-RAIDZ2-copy
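For completeness, the two-step workflow described above (recursive snapshot first, then a recursive send over SSH) can be sketched as a small script. The pool, dataset, key path, and host names are taken from this post; treat this as a sketch and adjust them for your environment:

```shell
#!/bin/sh
# Sketch of the manual replication workflow described above.
# All names (pool, snapshot label, SSH key path, target host and dataset)
# come from the post itself; adjust them before running.

SNAP="WD-RAIDZ2@manual-20170109"

# 1. Take a recursive snapshot of the whole volume.
zfs snapshot -r "$SNAP"

# 2. Send the full recursive replication stream (-R) over SSH and
#    receive it into the target dataset; -v prints verbose progress
#    on both the sending and receiving side.
zfs send -vR "$SNAP" | \
    ssh -i /data/ssh/replication root@192.168.1.116 \
    zfs receive -v Seagate-16TB-unencrypted/WD-RAIDZ2-copy
```

Note that `-R` replicates all descendant datasets and their snapshots up to the named snapshot, which is why the state of the automatic snapshots on both sides matters.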
I haven't run this command recently, not for at least six months (automatic replication was handled by FreeNAS), and I hadn't encountered this issue before. I have also been relocating datasets within the volume and never had such issues in the past. I am not ruling out user error, though.
I am just wondering whether the latest FreeNAS 9.10 updates have changed how snapshots and replications are handled.
The peculiar error message is as follows:
zfs send -vvR WD-RAIDZ2@manual-20170109 | ssh -i /data/ssh/replication root@192.168.1.116 zfs receive -vv Seagate-16TB-unencrypted/WD-RAIDZ2-copy
skipping snapshot WD-RAIDZ2@auto-20170109.0553-6m because it was created after the destination snapshot (manual-20170109)
The manual recursive snapshot was created three hours after the @auto-20170109.0553-6m automatic snapshot.
Is this a bug?
If I generate another recursive snapshot and perform the replication with the newest snapshot, I get the following error:
Assertion failed: (ilen <= SPA_MAXBLOCKSIZE), file /freenas-9.10-releng/_BE/os/cddl/lib/libzfs/../../../cddl/contrib/opensolaris/lib/libzfs/common/libzfs_sendrecv.c, line 2090.
Abort (core dumped)