Hello everybody,
I have two FreeNAS boxes running version FreeNAS-9.2.0-RELEASE-x64 (ab098f4).
Here is my pool and dataset config (both boxes have same volumes and datasets configured):
# zpool list
NAME      SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
zfsVol1  21.8T   463G  21.3T   2%  1.00x  ONLINE  /mnt
# zfs list
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
zfsVol1                                    6.19T  4.18T   244K  /mnt/zfsVol1
zfsVol1/jails                              1.29G  4.18T   482K  /mnt/zfsVol1/jails
zfsVol1/jails/.warden-template-pluginjail   805M  4.18T   805M  /mnt/zfsVol1/jails/.warden-template-pluginjail
zfsVol1/jails/bacula-sd_1                   271M  4.18T  1.05G  /mnt/zfsVol1/jails/bacula-sd_1
zfsVol1/jails/btsync_1                      247M  4.18T  1.03G  /mnt/zfsVol1/jails/btsync_1
zfsVol1/zfsDataset1                        2.31G  4.18T  2.08G  /mnt/zfsVol1/zfsDataset1
zfsVol1/zfsDataset2                         238K  4.18T   238K  /mnt/zfsVol1/zfsDataset2
zfsVol1/zfsVolume1                         2.06T  6.03T   220G  -
zfsVol1/zfsVolume2                         2.06T  6.24T  84.1M  -
zfsVol1/zfsVolume3                         2.06T  6.24T  55.6M  -
I have created periodic snapshot tasks for zfsVol1/zfsDataset1 and zfsVol1/zfsDataset2, and replication of those datasets works.
But I also have zvols, which I export as iSCSI targets used by a VMware ESX server.
I tried to replicate zfsVol1/zfsVolume2 the same way; periodic snapshots are configured for it.
The replication task, however, fails with the following in /var/log/messages:
Apr 9 16:04:01 x48svr61xfn1 autorepl.py: [common.pipesubr:71] Executing: (/sbin/zfs send -V -R zfsVol1/zfsVolume2@auto-20140409.1527-2w | /bin/dd obs=1m | /bin/dd obs=1m | /usr/bin/ssh -c arcfour256,arcfour128,blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -q -l root -p 22 192.168.200.204 "/sbin/zfs receive -F -d zfsVol1 && echo Succeeded.") > /tmp/repl-39921 2>&1
Apr 9 16:04:05 x48svr61xfn1 autorepl.py: [common.pipesubr:57] Popen()ing: /usr/bin/ssh -c arcfour256,arcfour128,blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -q -l root -p 22 192.168.200.204 "zfs list -Hr -o name -t snapshot -d 1 zfsVol1/zfsVolume2 | tail -n 1 | cut -d@ -f2"
Apr 9 16:04:05 x48svr61xfn1 autorepl.py: [tools.autorepl:332] Replication of zfsVol1/zfsVolume2@auto-20140409.1527-2w failed with 119620+609 records in 58+1 records out 61415564 bytes transferred in 2.483186 secs (24732567 bytes/sec) 119952+1 records in 58+1 records out 61415564 bytes transferred in 2.485535 secs (24709192 bytes/sec) cannot receive new filesystem stream: dataset is busy
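To isolate the error from the FreeNAS middleware, one could run a manual equivalent of the failing pipeline from the source box (host, key path, and snapshot name are taken from the log above; adjust as needed). This is only a diagnostic sketch, not a replacement for the replication task:

```shell
#!/bin/sh
# Manual replication test, run as root on the source box.
# Sends the same recursive snapshot the autorepl task tried to send.
SNAP="zfsVol1/zfsVolume2@auto-20140409.1527-2w"
DEST="192.168.200.204"

# Send the stream and watch the receiving side's verbose output directly,
# instead of losing it behind the dd/ssh pipeline from the log.
zfs send -R "$SNAP" | \
  ssh -i /data/ssh/replication root@"$DEST" \
    "zfs receive -v -F -d zfsVol1"
```

If this fails with the same "dataset is busy" message, the problem is on the receiving pool rather than in the replication task configuration.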
"dataset is busy" seems crucial to me, but what it means? Should target volume be unmounted when doing replication. Or maybe my pools configuration is wrong for such scenario (both iSCSI zvol and data-sets on same volume)?
What I do wrong? Is such iSCIS/zvol replication possible at all? Any step-by-step scenario available as for dataset replication?
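For what it's worth, these commands, run on the destination box, might show what keeps the target busy, for example a previously received copy of the zvol that is already exported as an iSCSI extent or whose snapshots carry ZFS holds (the dataset name assumes the destination mirrors the source layout):

```shell
#!/bin/sh
# Run on the destination box (192.168.200.204).

# List any existing copy of the zvol and all its snapshots:
zfs list -r -t all zfsVol1/zfsVolume2

# Check for holds that would prevent 'zfs receive -F' from
# rolling back or destroying the existing dataset:
zfs list -H -o name -t snapshot -r zfsVol1/zfsVolume2 | \
  xargs -n1 zfs holds
```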