Hello,
I have two FreeNAS test boxes, in this case FNSAN1 and FNSAN2.
FNSAN1: approx. 500 GB of data on a 5 TB zvol.
FNSAN2: a single 1.5 TB pool that is pretty much empty.
I have set up a daily automatic snapshot task on FNSAN1.
There is a replication task configured on the source, pointing to the remote server FNSAN2.
I should note that I'm aware of the difference in pool size between the servers, but I expected replication to transfer only the used space (which does seem to be the case).
The replication starts as scheduled and takes 1.5-2 hours to reach an ALMOST complete state, i.e. at 99-100% I get an alert which tells me:
Replication iData1/lun1 -> 10.147.5.146:FNSAN2_iData1 failed: Failed: iData1/lun1 (auto-20190705.1200-7d)
I ran the following shell command:
Code:
root@FNSAN1[~]# cat /var/log/debug.log | grep "auto-20190705.1200-7d"
Jul 5 13:45:06 FNSAN1 /autorepl.py: [tools.autorepl:131] Sending zfs snapshot: /sbin/zfs send -V iData1/lun1@auto-20190705.1200-7d | /usr/local/bin/pigz | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 10.147.5.146 "/usr/bin/env pigz -d | /sbin/zfs receive -F -d 'FNSAN1_iData1' && echo Succeeded"
root@FNSAN1[~]#
Which looks positive to me; however, in the Replication Tasks window the job shows as:
Failed: iData1/lun1 (auto-20190705.1200-7d)
I've done quite a bit of searching but am not sure I'm using the right commands to try to debug this issue.
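In case it helps, this is the kind of comparison I've been attempting: listing the snapshots on each box and diffing the two lists to see which ones never arrived. The dataset names are from my setup above, and the two sample listings below are hypothetical stand-ins for real `zfs list` output, just to show the comparison:

```shell
# On FNSAN1:  zfs list -H -t snapshot -o name -r iData1/lun1    > src.txt
# On FNSAN2:  zfs list -H -t snapshot -o name -r FNSAN2_iData1  > dst.txt
# (Hypothetical sample listings stand in here so the comparison can be shown.)
printf 'iData1/lun1@auto-20190704.1200-7d\niData1/lun1@auto-20190705.1200-7d\n' | sort > src.txt
printf 'iData1/lun1@auto-20190704.1200-7d\n' | sort > dst.txt
# comm -23 prints lines only in the first (sorted) file:
# snapshots present on the source but missing on the destination.
comm -23 src.txt dst.txt
```

With the sample data above, this prints the `auto-20190705.1200-7d` snapshot as missing on the destination, which matches the failure I'm seeing.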
On the remote server, underneath my empty pool, I can now see the newly replicated LUN, so something's working!
I tried removing all the snapshots and replication tasks in the GUI and setting them up from scratch, but the same issue occurred: the job seems to tick over nicely without issue, then fails right at the last second.
Many thanks
Neil