Replication task to localhost creating a new dataset inside the destination dataset

Status
Not open for further replies.

southerner

Dabbler
Joined
Feb 4, 2017
Messages
11
Hello all,
I have a recursive snapshot task set up on vol2/pht that runs every day between 10:00-10:05, and a replication task set up to send the data every day at 10:30 to vol3/pht, using localhost as the remote machine. My problem is that once the task completes, the data's path is vol3/pht/pht. I have a new dataset with the exact same data as pht. The settings of the task are as follows:
volume/dataset: vol2/pht
remote ZFS: vol3/pht
Recursively replicate child datasets: checked
Delete stale snapshots: checked
...
Am I misusing the replication task?

My need is for a snapshot task to send ("replicate") the data from vol2/pht to vol3/pht as new data is written to vol2/pht daily.
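
From what I can tell, the nesting matches plain zfs receive -d naming: the source path minus the pool name is appended to whatever dataset you receive into. A rough sketch of the equivalent commands; I don't know exactly what FreeNAS runs under the hood, and the snapshot name here is made up:

Code:
# -d appends the source path minus the pool name to the target,
# so receiving vol2/pht into vol3/pht lands at vol3/pht/pht:
zfs send -R vol2/pht@auto-daily | zfs receive -dF vol3/pht

# Receiving into the pool itself would land the data at vol3/pht:
zfs send -R vol2/pht@auto-daily | zfs receive -dF vol3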

I have tried to find a script that would use ZFS send/receive to send incremental streams daily, but had no success. There was a thread about syncoid, but that Perl script is nuts.
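
In case anyone searches for this later, below is the sort of minimal daily incremental script I was after. The dataset names are from my setup; the snapshot naming and the script itself are an untested sketch:

Code:
#!/bin/sh
# Minimal daily incremental send/receive sketch (untested).
SRC=vol2/pht
DST=vol3/pht
TODAY=$(date +%Y%m%d)
YESTERDAY=$(date -v-1d +%Y%m%d)   # FreeBSD date syntax

# Take today's recursive snapshot.
zfs snapshot -r "${SRC}@${TODAY}"

# Seed once with: zfs send -R "${SRC}@${TODAY}" | zfs receive "${DST}"
# Afterwards, send only the changes since yesterday's snapshot:
zfs send -R -i "${SRC}@${YESTERDAY}" "${SRC}@${TODAY}" | zfs receive -F "${DST}"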
 

southerner

Dabbler
Joined
Feb 4, 2017
Messages
11
Replication fails without a volume/dataset as the destination if the source is a volume/dataset. I was able to send the entire pool vol2 to vol3, which included all its datasets; I could not send just vol2/pht to vol3.
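
For reference, the whole-pool workaround should be roughly equivalent to the following; the snapshot name is made up:

Code:
# Rough equivalent of replicating the whole pool (untested sketch):
zfs snapshot -r vol2@migrate
zfs send -R vol2@migrate | zfs receive -dF vol3
# -d maps vol2/pht -> vol3/pht, vol2/other -> vol3/other, etc.
# Careful: -F rolls the destination back, so existing data on vol3 is at risk.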


Off topic: I wonder why a ZFS send/recv incremental task has not been created in FreeNAS.
 

JustinOtherBobo

Dabbler
Joined
Aug 21, 2018
Messages
26
Replication fails without a volume/dataset as the destination if the source is a volume/dataset. I was able to send the entire pool vol2 to vol3, which included all its datasets; I could not send just vol2/pht to vol3.

I am currently using the dataset structure, snapshot, and replication setup below on 11.1-U6, and replication is taking place.

The datasets at LOC1 look as follows when looking at the STORAGE tab:

Code:
pool-loc1
    pool-loc1
        jails
        loc1
            pc1


Tasks are set up as follows:
*snapshot task: pool-loc1/loc1 recursive
*replication task: pool-loc1/loc1 recursive to destination: pool-loc3

In LOC2, the STORAGE tab looks as follows:
Code:
pool-loc2
    pool-loc2
        jails
        loc2
            pc1

And tasks are set up as follows:
*snapshot task: pool-loc2/loc2 recursive
*replication task: pool-loc2/loc2 recursive to destination: pool-loc3

The end result when looking at LOC3, the receiving location:

Code:
pool-loc3
    pool-loc3
        jails
        loc1
            pc1
        loc2
            pc1
        loc3
            some-non-replicated-datasets
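
A quick way to confirm the received layout from the shell (paths as above):

Code:
# List everything under the receiving pool along with mountpoints:
zfs list -r -o name,used,mountpoint pool-loc3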


However, as I've mentioned in another post, when I clone a snapshot (say pc2@auto-blah-blah-today) and share it out of pool-loc3 to, for example, perform a file recovery at the remote location, the mountpoint disappears as soon as a replication receive event starts.

The share still operates and the cloned dataset is still visible under the STORAGE tab.
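
One thing I may try is pinning the clone's mountpoint at creation time, in case the receive-time remount is what drops it. A sketch with a hypothetical clone name and path; I haven't verified this survives a receive:

Code:
# Clone the snapshot with an explicit mountpoint (clone name/path hypothetical):
zfs clone -o mountpoint=/mnt/pool-loc3/recover-pc2 \
    pool-loc3/loc1/pc2@auto-blah-blah-today pool-loc3/recover-pc2
# Check whether the clone stays mounted across a receive:
zfs get mounted,mountpoint pool-loc3/recover-pc2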

And as a "new" development, I can confirm that, without EVER having shared ANYTHING out of the COMPLETE replication, I now have a replication where:

Code:
pool-loc3
    jails
    loc1
        pc1   <-- all files visible at the remote pool's mountpoint
        pc2   <-- all files visible at the remote pool's mountpoint
        pc3   <-- no files are visible at the remote pool's mountpoint

For pc3, nothing is visible, not even the .zfs folder; it is as if the whole thing doesn't exist. Yet the replication shows as up to date, and no data is currently being transmitted.

The snapshots ARE visible in the UI Storage/Snapshot listing.
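
If anyone wants to dig in, the first thing I'd check is whether pc3 was received but never mounted; a hedged diagnostic:

Code:
# Check mount state and properties of the invisible dataset:
zfs get -r mounted,canmount,mountpoint pool-loc3/loc1/pc3
# If mounted=no, mounting it by hand may make the files reappear:
zfs mount pool-loc3/loc1/pc3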
 