Replication Issue


stefanb

Hi,

I have a question about replication and a possible problem:

A snapshot was taken; it uses 77 MB of space.
The replication task succeeded.
The snapshot exists on the 2nd system, but its used space is 0 MB!?
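
This is roughly how I compare the space accounting on both sides from the CLI (just a sketch; the dataset names raid/projekte and replica/projekte are the ones from the logs below):
Code:
# Sketch only - list the snapshots with their space accounting on push...
zfs list -H -r -t snapshot -o name,used,referenced,written raid/projekte

# ...and the same on pull, over SSH with the replication key:
/usr/bin/ssh -i /data/ssh/replication 192.168.16.241 \
    "zfs list -H -r -t snapshot -o name,used,referenced,written replica/projekte"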

The logs don't tell me much - what's going on there?
Code:
Jan 21 16:43:29 filer autorepl.py: [common.pipesubr:58] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 192.168.16.241 "zfs list -Hr -o name -t snapshot -d 1 replica/projekte | tail -n 1 | cut -d@ -f2"
Jan 21 16:43:30 filer autorepl.py: [common.pipesubr:72] Executing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 192.168.16.241 "/sbin/zfs inherit freenas:state replica/projekte@auto-20150121.0800-3d"
Jan 21 16:43:30 filer autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs inherit freenas:state raid/projekte@auto-20150120.1855-3d
Jan 21 16:43:30 filer autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs release -r freenas:repl raid/projekte@auto-20150120.1855-3d
Jan 21 16:43:31 filer autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs set freenas:state=LATEST raid/projekte@auto-20150121.0800-3d
Jan 21 16:43:31 filer autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs hold -r freenas:repl raid/projekte@auto-20150121.0800-3d
Jan 21 16:43:31 filer common.pipesubr: cannot hold snapshot 'raid/projekte@auto-20150121.0800-3d': tag already exists on this dataset
Jan 21 16:43:45 filer autorepl.py: [common.pipesubr:58] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 192.168.16.241 "zfs list -Hr -o name -t snapshot -d 1 replica/projekte | tail -n 1 | cut -d@ -f2"
Jan 21 16:43:45 filer autorepl.py: [common.pipesubr:72] Executing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 192.168.16.241 "/sbin/zfs inherit freenas:state replica/projekte@auto-20150121.1200-3d"
Jan 21 16:43:45 filer autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs inherit freenas:state raid/projekte@auto-20150121.0800-3d
Jan 21 16:43:46 filer autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs release -r freenas:repl raid/projekte@auto-20150121.0800-3d
Jan 21 16:43:46 filer autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs set freenas:state=LATEST raid/projekte@auto-20150121.1200-3d
Jan 21 16:43:46 filer autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs hold -r freenas:repl raid/projekte@auto-20150121.1200-3d
Jan 21 16:43:47 filer common.pipesubr: cannot hold snapshot 'raid/projekte@auto-20150121.1200-3d': tag already exists on this dataset
Jan 21 16:43:58 filer autorepl.py: [common.pipesubr:58] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 192.168.16.241 "zfs list -Hr -o name -t snapshot -d 1 replica/projekte | tail -n 1 | cut -d@ -f2"
Jan 21 16:43:58 filer autorepl.py: [common.pipesubr:72] Executing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 192.168.16.241 "/sbin/zfs inherit freenas:state replica/projekte@auto-20150121.1600-3d"
Jan 21 16:43:58 filer autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs inherit freenas:state raid/projekte@auto-20150121.1200-3d
Jan 21 16:43:59 filer autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs release -r freenas:repl raid/projekte@auto-20150121.1200-3d
Jan 21 16:43:59 filer autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs set freenas:state=LATEST raid/projekte@auto-20150121.1600-3d
Jan 21 16:44:00 filer autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs hold -r freenas:repl raid/projekte@auto-20150121.1600-3d
Jan 21 16:44:00 filer common.pipesubr: cannot hold snapshot 'raid/projekte@auto-20150121.1600-3d': tag already exists on this dataset
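
The "cannot hold snapshot ... tag already exists" lines made me look at the holds themselves. This is only a sketch of how the existing holds can be listed and, if ever needed, released by hand (snapshot name taken from the last log line above):
Code:
# Sketch only - show the user holds on the newest snapshot.
zfs holds -r raid/projekte@auto-20150121.1600-3d

# If a stale freenas:repl hold should ever have to be removed manually:
# zfs release -r freenas:repl raid/projekte@auto-20150121.1600-3d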


S.
 

stefanb

Hi,

Replication is working now.
Because I only need the snapshots for replication, I did the following steps:
Deleted all snapshots on push (see the CLI sketch after this list)
Deleted all snapshots on pull
Deleted all snapshot tasks
Deleted all replication tasks
Created new snapshot tasks on push
Waited for the first snapshot to be taken
Created a new replication task with "Initialize remote side for once. (May cause data loss on remote side!)"
After the replication started (it sends all the data to pull - 5 TB) and while it was still running, I disabled the task, because the transfer takes longer than the snapshot interval
After the replication finished, I enabled the task again
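
For the two "delete all snapshots" steps, this is roughly what I mean on the CLI (a sketch only; it assumes the pool/dataset names from this thread and that no other snapshots need to be kept):
Code:
# Sketch only - destroy every snapshot of the dataset on push.
# Any freenas:repl holds have to be released first, otherwise destroy fails.
zfs list -H -r -t snapshot -o name raid/projekte | xargs -n 1 zfs destroy

# Same on pull, over SSH with the replication key:
/usr/bin/ssh -i /data/ssh/replication 192.168.16.241 \
    "zfs list -H -r -t snapshot -o name replica/projekte | xargs -n 1 zfs destroy"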

It has now been working for 4 days.

Since I don't actually need the snapshots, only the replication, I'm debating with myself whether to switch to rsync (a rough sketch of what that might look like is below).
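
For reference, an rsync job for the same data could look roughly like this (a sketch; the /mnt paths are assumptions, and rsync is not ZFS-aware, so there would be no snapshots on the other side):
Code:
# Sketch only - one-way sync of the dataset contents over SSH.
# The mountpoints /mnt/raid/projekte and /mnt/replica/projekte are assumptions.
rsync -a --delete -e "ssh -i /data/ssh/replication" \
    /mnt/raid/projekte/ 192.168.16.241:/mnt/replica/projekte/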

S.
 