Replication snapshot does not show up on remote FreeNAS server

Status
Not open for further replies.

freenastier

Dabbler
Joined
Feb 9, 2017
Messages
20
Roughly one year ago I built a new FreeNAS machine because I wanted a remote backup of my data. I managed to replicate my data from the old FreeNAS server to the new one. Then the old FreeNAS server moved to a friend's place. Unfortunately I never got remote replication to work as a backup mechanism, so we fell back to the inferior method of copying files manually between the two machines.

Now I have upgraded both machines to FreeNAS 11.1-U1 and tried again to get replication working.
I created an automatic snapshot task and a replication task. This seems to work, because I can see the dataset from the snapshot ending up on the remote FreeNAS server. In the web GUI I can see that the file size is shown correctly; however, when I SSH into the machine, the contents of the dataset appear to be empty.

What am I doing wrong? Is there a log file where I can see relevant logging of what is going wrong? The files in /var/log do not help me any further.

tl;dr:
I have two FreeNAS machines in different locations. The idea is to synchronize the datasets between these two machines. I would like to use automated replication, but it does not seem to work properly: I cannot see the files on the other machine after a snapshot has been sent there (the web GUI shows the correct file size for the snapshot, but I cannot see the files when looking in the /mnt/PoolName/DataSet directory).

What am I doing wrong? Did I misinterpret the snapshot replication principle? Do I perhaps have to mount the snapshots somewhere in order to see them? If so, what happens if I mount them as writable and change some files on the remote side?
 

Alecmascot

Guru
Joined
Mar 18, 2014
Messages
1,177
I recall this came up recently.
I am a little vague on the details, but the empty dataset was caused by the settings in use: something to do with the timing and expiry of the snapshots, and whether they were recursive.
I have searched but I cannot find it.
 

PhilipS

Contributor
Joined
May 10, 2016
Messages
179
I have a personal note about FreeNAS replication: When replicating a child dataset and the parent dataset does not exist on the receiving host, the parent dataset is created with the readonly property set. This prevents the creation of a mount point for the child dataset.

Not sure if this happens on the current version, but I suspect your dataset is just not mounting.

You can run zfs get mountpoint Pool/Dataset to see where the mount point is supposed to be. If the mount point doesn't exist, verify that the dataset the mount point lives in is not read-only: zfs get readonly Pool/Dataset

You may be able to turn the read only flag off, create the mount point folder, mount the dataset then set read only on again:

Code:
zfs set readonly=off Pool/ParentDataset
mkdir /mnt/Pool/ParentDataset/ChildDataset  # create the missing mount point
zfs mount -a                                # mount any ZFS datasets that are not yet mounted
zfs set readonly=on Pool/ParentDataset
 

freenastier

Dabbler
Joined
Feb 9, 2017
Messages
20
When I execute the zfs get mountpoint Pool/Dataset command (where Pool=SsdVolume and Dataset=RepExperiment) I receive the following output:

Code:
NAME                     PROPERTY    VALUE                         SOURCE
SsdVolume/RepExperiment  mountpoint  /mnt/SsdVolume/RepExperiment  default


It appears that the mount point is in the right place, exactly where I have been looking. There is, however, no content at the given mount point.
Executing the zfs get readonly Pool/Dataset command (where Pool=SsdVolume and Dataset=RepExperiment) I receive the following output:

Code:
NAME                     PROPERTY  VALUE   SOURCE
SsdVolume/RepExperiment  readonly  on      local


The RepExperiment dataset appears to be read-only. Is this supposed to be read-only or not?

When I run the df -h command I can also see that no space is used at the mount point on the drive. In the FreeNAS web GUI I can likewise see that the replicated snapshot takes 0% and 0 MB on the destination, whereas the clone that I promoted shows a few hundred MB in use.
Unfortunately the files still do not show up on disk.
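A quick way to see whether ZFS even considers the dataset mounted, and whether read-only flags could be blocking the mount, is to query the relevant properties on the receiving machine. This is a sketch using standard ZFS commands with the dataset names from this thread; adapt the names to your own pool:

```shell
# On the receiving host: is the dataset mounted, is it allowed to mount,
# and is it (or its parent) read-only?
zfs get mounted,canmount,readonly SsdVolume/RepExperiment
zfs get readonly SsdVolume          # the parent (pool root) dataset
df -h /mnt/SsdVolume/RepExperiment  # shows which filesystem backs the path
```

If `mounted` is `off` while the mountpoint directory exists, the dataset simply was never mounted after being received.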

Update 1:
I noticed that the replication to the remote machine does not work anymore. I receive emails from my own machine saying that the replication fails, but they do not contain any clue as to what caused the failure:
Code:
Replication MyPool/RepExperiment -> 192.168.0.6:SsdVolume/RepExperiment failed: Failed: MyPool/RepExperiment (auto-20180408.1543-2m)

To clarify the local IP address: I have the remote machine temporarily plugged into my home network to debug the problem. I really want to sort this out before I start using both systems again as intended.

After receiving several emails with the information stated above, I promoted the snapshot and removed the clone. Then I suddenly received the following mail:
Code:
Replication MyPool/RepExperiment -> 192.168.0.6:SsdVolume/RepExperiment failed: cannot receive aclmode property on SsdVolume/RepExperiment/RepExperiment: permission denied
cannot receive quota property on SsdVolume/RepExperiment/RepExperiment: permission denied
cannot receive refreservation property on SsdVolume/RepExperiment/RepExperiment: permission denied
cannot receive atime property on SsdVolume/RepExperiment/RepExperiment: permission denied
cannot receive dedup property on SsdVolume/RepExperiment/RepExperiment: permission denied
cannot receive reservation property on SsdVolume/RepExperiment/RepExperiment: permission denied
cannot receive refquota property on SsdVolume/RepExperiment/RepExperiment: permission denied
cannot mount '/mnt/SsdVolume/RepExperiment/RepExperiment': failed to create mountpoint
This is the first time I have received these errors by email. The strange thing is that I have not received an email since. According to the web GUI the snapshot has been updated on the remote machine, because I can see that the size has increased.
The content is still not visible on the remote machine at the given location. I do notice, however, that the mount point appears to get an additional /RepExperiment subdirectory after the Pool/Dataset name?

Update 2:
I think I might be on to something, because I found this in the /var/log/debug.log:

Code:
Apr  8 19:33:18 freenas /autorepl.py: [tools.autorepl:291] Checking dataset MyPool/RepExperiment
Apr  8 19:33:19 freenas /autorepl.py: [tools.autorepl:131] Sending zfs snapshot: /sbin/zfs send -V -p -i MyPool/RepExperiment@auto-20180408.1928-2m MyPool/RepExperiment@auto-20180408.1933-2m | /usr/local/bin/lz4c | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -l repluser -p 22 192.168.0.6 "/usr/bin/env lz4c -d | /sbin/zfs receive -F -d 'SsdVolume/RepExperiment' && echo Succeeded"
Apr  8 19:33:19 freenas /autorepl.py: [tools.autorepl:150] Replication result: cannot receive aclmode property on SsdVolume/RepExperiment/RepExperiment: permission denied
cannot receive quota property on SsdVolume/RepExperiment/RepExperiment: permission denied
cannot receive refreservation property on SsdVolume/RepExperiment/RepExperiment: permission denied
cannot receive atime property on SsdVolume/RepExperiment/RepExperiment: permission denied
cannot receive dedup property on SsdVolume/RepExperiment/RepExperiment: permission denied
cannot receive reservation property on SsdVolume/RepExperiment/RepExperiment: permission denied
cannot receive refquota property on SsdVolume/RepExperiment/RepExperiment: permission denied
cannot mount 'SsdVolume/RepExperiment/RepExperiment': Insufficient privileges
Apr  8 19:33:20 freenas /autorepl.py: [ws4py:360] Closing message received (1000) 'b'''
Apr  8 19:33:20 freenas /autorepl.py: [tools.autorepl:625] Autosnap replication finished
Apr  8 19:33:21 freenas /alert.py: [ws4py:360] Closing message received (1000) 'b'''
Apr  8 19:33:21 freenas /alert.py: [ws4py:360] Closing message received (1000) 'b'''
Apr  8 19:34:18 freenas /autorepl.py: [tools.autorepl:221] Autosnap replication started
Apr  8 19:34:18 freenas /autorepl.py: [tools.autorepl:222] temp log file: /tmp/repl-75548


Apparently I do not have the right privileges for the dedicated replication user?
 
Last edited:

PhilipS

Contributor
Joined
May 10, 2016
Messages
179
Your target for replication should be just SsdVolume, not SsdVolume/RepExperiment, to avoid the double-directory issue.
I would disable the replication, then delete the RepExperiment dataset on your target. Create a new RepExperiment dataset on the target, then re-enable replication with the replication target set to SsdVolume and see how that goes.

Here is how I have one of mine configured, if that helps.

https://imgur.com/RGi18qa
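For what it's worth, the doubled RepExperiment/RepExperiment path follows from how zfs receive -d names the destination: it drops the pool name from the sent dataset and appends the remainder to the receive target. The name construction can be sketched with plain shell string handling (no actual replication involved):

```shell
# `zfs receive -d <target>` drops the source's pool name and appends the
# rest of the dataset path to <target>.
src="MyPool/RepExperiment"

# Target set to the child dataset: the path doubles up.
target="SsdVolume/RepExperiment"
echo "${target}/${src#*/}"   # SsdVolume/RepExperiment/RepExperiment

# Target set to the pool only: the dataset lands where expected.
target="SsdVolume"
echo "${target}/${src#*/}"   # SsdVolume/RepExperiment
```

So with the target set to the pool, the received dataset maps onto the existing RepExperiment dataset instead of creating a child underneath it.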
 

freenastier

Dabbler
Joined
Feb 9, 2017
Messages
20
Your target for replication should be just SsdVolume, not SsdVolume/RepExperiment, to avoid the double-directory issue.
I would disable the replication, then delete the RepExperiment dataset on your target. Create a new RepExperiment dataset on the target, then re-enable replication with the replication target set to SsdVolume and see how that goes.

Here is how I have one of mine configured, if that helps.

https://imgur.com/RGi18qa
Thank you for pointing me to the problem, PhilipS. I did what you suggested and it worked!

I removed the snapshot and replication tasks. Then I deleted the user on the remote machine. Finally I created the user again on the remote machine and created a new snapshot task and replication task. Now it appears to work flawlessly.

It turns out that I had mistakenly created a 'double directory'. In the FreeNAS web GUI I had the 'SsdVolume' volume, which contained an 'SsdVolume' dataset. I figured I had to name both, but that was not the case.
Apparently a volume always comes with a root dataset named after the volume by default? I got confused by this volume/dataset distinction.
 