jerryjharrison · Explorer · Joined Jan 15, 2014 · Messages: 99
I am using two newly built FreeNAS servers running FreeNAS-9.3-STABLE-201508250051, which is the latest version as of this post.
I have set up replication, but the replication task's status screen on PUSH shows the following error: Failed: zpool1/system (auto-20150827.0701-2w)
I can send the snapshot from PUSH via the command line with no issues using:
zfs send zpool1/system@auto-20150827.0701-2w | ssh -p 45 -i /data/ssh/replication 10.0.1.5 zfs receive -F zpool1/replication/system@auto-20150827.0701-2w
/var/log/auth.log on PULL shows from the automated replication attempt:
Aug 27 09:10:02 backnas sshd[7617]: Accepted publickey for root from 10.0.1.3 port 19192 ssh2: RSA 0c:b8:26:20:ab:e5:60:cc:b6:42:d5:8c:8d:e6:5f:de
Aug 27 09:10:02 backnas sshd[7617]: Received disconnect from 10.0.1.3: 11: disconnected by user
Aug 27 09:10:02 backnas sshd[7622]: Accepted publickey for root from 10.0.1.3 port 13578 ssh2: RSA 0c:b8:26:20:ab:e5:60:cc:b6:42:d5:8c:8d:e6:5f:de
Aug 27 09:10:02 backnas sshd[7622]: Received disconnect from 10.0.1.3: 11: disconnected by user
/var/log/messages on PUSH reflects:
Aug 27 09:10:02 PrimeNAS autorepl.py: [common.pipesubr:71] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o Connect$
Now for the strange part: if I create a dataset named "system" inside the "replication" dataset on PULL and check the initialize box on the replication task, it succeeds once. As soon as the next snapshot is created, it goes back to failing. After that one successful transfer, I end up with the dataset "zpool1/replication/system/system" on PULL, even though the remote target is set to "zpool1/replication".
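For what it's worth, the doubled path would be consistent with one assumption: that the replication task first appends the source dataset's name to the configured remote target, and then receives with zfs receive -d, which itself appends the source path below the pool. This is only a sketch of that path composition in plain shell string handling (no actual zfs calls; the dataset names are the ones above, and the task behavior is an assumption, not confirmed FreeNAS code):

```shell
#!/bin/sh
# Assumption: the task builds its receive target as <configured target>/<source name>,
# then `zfs receive -d` appends the source path below the pool onto that target.
src="zpool1/system"                     # source dataset on PUSH
configured="zpool1/replication"         # remote target configured in the task
task_target="$configured/${src##*/}"    # hypothetical: task appends source name -> zpool1/replication/system
dest="$task_target/${src#*/}"           # -d appends path below the source pool -> .../system/system
echo "$dest"
```

Under that assumption the composed destination comes out as zpool1/replication/system/system, matching what I see on PULL.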
Any ideas on where to go from here?