ZFS send to external backup drive?

Status
Not open for further replies.

rm-r

Contributor
Joined
Jan 7, 2013
Messages
166
Hi,

I have found a few threads on backups, so I just want to check my thinking as to the process for backing up my main pool to an external drive (which I keep off-site) – it's my home kit, not business or mission-critical.

So, my main pool is made up of 4 x 2TB disks in 2 x mirrors – a total of 3.8(ish) TB usable, with around 10 datasets within.

I have a 4TB single-HDD external USB enclosure.

I want to send my main pool to this disk, then take it away to keep off-site, and do this every week or so.

My proposed process:

  1. Attach the USB drive to the FreeNAS machine
  2. Create a single pool from the entire disk
  3. Use “zfs send” from my local pool to the USB pool
  4. Click “detach” via the FreeNAS web GUI for the USB pool
  5. Unplug the USB drive and take it off-site

Commands:

I have found this post about moving jails

http://forums.freenas.org/threads/relocate-jails-to-ssd-helping-hdd-sleep.16955/#post-89106

In there, the awesome Dusan shows the commands as:

Code:
zfs snapshot -r main_pool/jails@relocate
zfs send -R main_pool/jails@relocate | zfs receive -v ssd_pool/jails
zfs get -rH -o name -s received mountpoint ssd_pool/jails | xargs -I {} sh -c "zfs set mountpoint=/{} {}; zfs mount {};"


So, adapting this to do the entire pool (step 3 above), mine would be:

Code:
zfs snapshot -r main_pool/
zfs send -R main_pool/ | zfs receive -v USB_pool/


right?

Other considerations:
  • When I attach the USB drive again (the next week) will it mount/attach automatically, or should I use "auto import"?
  • Restoring in the event of catastrophe
    • Code:
      zfs send -R USB_pool/ | zfs receive -v NEWmain_pool/
      ?
    • Will the underlying datasets be created on the NEW main_pool or do I need to create them prior to starting the recovery?
Many thanks as always - I tried to do my research first rather than just ask an open, one-line question :)
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
So, adapting this to do the entire pool (step 3 above), mine would be:
Code:
zfs snapshot -r main_pool/
zfs send -R main_pool/ | zfs receive -v USB_pool/

right?
You should read the documentation on zfs send & receive :): http://www.freebsd.org/cgi/man.cgi?query=zfs
You are missing the snapshot names in the commands -- the part after the @ sign.
You also need to add the -F switch to zfs receive so that it doesn't complain that the target filesystem already exists. However, even if you fix these problems it would still only work the first time. The second time it would complain that the destination already contains a snapshot. Of course you could destroy it, but that's far from optimal -- you would need to copy the entire pool every time. What you are looking for is incremental replication.

This should work:
Initial copy:
[panel]zfs snapshot -r main_pool@backup # create a snapshot
zfs send -R main_pool@backup | zfs receive -vF USB_pool # transfer it over[/panel]
Now the USB_pool contains a replica of your main_pool as it appeared at the moment when you took the snapshot.

To update it later you can do this:
[panel]zfs rename -r main_pool@backup main_pool@previous_backup # rename the "old" snapshot
zfs snapshot -r main_pool@backup # take a new snapshot
zfs send -Ri main_pool@previous_backup main_pool@backup | zfs receive -v USB_pool # incremental replication
zfs destroy -r main_pool@previous_backup # get rid of the previous snapshot[/panel]
This will only transfer the blocks that changed since the last replication. Of course, if you already have some snapshot schedule you can use those snapshots. You also do not need to rename the snapshots, but the example I gave you works nicely in a script as the names of the snapshots are always the same.
zfs send -i snapshot1 snapshot2 will send the difference between those two snapshots, or you can use zfs send -I snapshot1 snapshot2 to also send all intermediary snapshots. In both cases (-i and -I) the destination pool contains a complete replica of the main_pool at the time you took the @backup snapshot. However, if you, for example, take daily snapshots and only do the replication weekly, with -I you will also be able to go back to any of the daily snapshots when you do a restore.
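For example, if you keep scheduled daily snapshots, the weekly update with -I instead of -i (same snapshot names as above) would simply carry any intermediate snapshots across as well:

Code:
zfs send -RI main_pool@previous_backup main_pool@backup | zfs receive -v USB_pool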
In 9.2.1 you won't even need to keep the previous snapshot, as it will include a new ZFS feature -- zfs bookmarks. The incremental replication currently (9.2.0) needs at least two snapshots. With bookmarks you can take a snapshot, transfer it, bookmark it and destroy it. You can later use the bookmark as a source reference for incremental replication -- you take a new snapshot and ask zfs send to create a stream containing the difference between the bookmark and the current snapshot.
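To illustrate what that will look like (a rough sketch for a single dataset -- note that zfs bookmark is not recursive, so a fully recursive setup would need a bookmark per child dataset):

Code:
zfs snapshot main_pool@backup
zfs send main_pool@backup | zfs receive -vF USB_pool
zfs bookmark main_pool@backup main_pool#backup   # keep only a lightweight bookmark
zfs destroy main_pool@backup
# next time:
zfs snapshot main_pool@backup2
zfs send -i main_pool#backup main_pool@backup2 | zfs receive -v USB_pool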
When I attach the USB drive again (the next week) will it mount/attach automatically, or should I use "auto import"?
Yes, you need to auto import.
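If you prefer the CLI, the rough equivalent would be to export the pool before unplugging and import it again after reattaching (-R /mnt keeps the mountpoints under /mnt), but the GUI auto-import is the FreeNAS-friendly way as it keeps the configuration in sync:

Code:
zpool export USB_pool            # before unplugging the drive
zpool import -R /mnt USB_pool    # after plugging it back in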
Restoring in the event of catastrophe
Code:
zfs send -R USB_pool/ | zfs receive -v NEWmain_pool/
?
Almost: zfs send -R USB_pool@backup | zfs receive -vF NEWmain_pool
Will the underlying datasets be created on the NEW main_pool or do I need to create them prior to starting the recovery?
Yes, the -R option will recursively create all child datasets. The same happens when you take the initial backup -- you start with an empty pool and after the first replication it will contain a copy of all your datasets. If you later add a new dataset, the incremental recursive replication will also create it on the destination.
 

rm-r

Contributor
Joined
Jan 7, 2013
Messages
166
Thank you so much for such a full response; I appreciate your time in doing that - I'll read it fully tonight at home to digest it.
 

rm-r

Contributor
Joined
Jan 7, 2013
Messages
166
Hi,

So I got some time to play and made a VM - I think there is a -r missing from the first line...?

Code:
zfs snapshot main_pool@backup # create a snapshot


should be

Code:
zfs snapshot -r main_pool@backup # create a snapshot


It seems to work better - I get no creation errors - if I don't use the -r I get complaints about the underlying datasets.

With -r it looks like this :) Am I right?

Code:
[root@freenas] ~# zpool list
NAME    SIZE  ALLOC  FREE    CAP  DEDUP  HEALTH  ALTROOT
inter  13.9G  647M  13.2G    4%  1.00x  ONLINE  /mnt
test1  252G    1M  252G    0%  1.00x  ONLINE  /mnt
usb1  16.1G  624K  16.1G    0%  1.00x  ONLINE  /mnt


Code:
[root@freenas] ~# zfs snapshot -r inter@backupwithr2
[root@freenas] ~# zfs send -R inter@backupwithr2 | zfs receive -vF usb1
receiving full stream of inter@backup4usb into usb1@backup4usb
received 51.0KB stream in 1 seconds (51.0KB/sec)
receiving incremental stream of inter@backup into usb1@backup
received 312B stream in 1 seconds (312B/sec)
receiving incremental stream of inter@backup25th into usb1@backup25th
received 312B stream in 1 seconds (312B/sec)
receiving incremental stream of inter@backup878 into usb1@backup878
received 312B stream in 1 seconds (312B/sec)
receiving incremental stream of inter@backupwithr into usb1@backupwithr
received 312B stream in 1 seconds (312B/sec)
receiving incremental stream of inter@backupwithr2 into usb1@backupwithr2
received 312B stream in 1 seconds (312B/sec)
receiving full stream of inter/set1@backup878 into usb1/set1@backup878
received 235MB stream in 1 seconds (235MB/sec)
receiving incremental stream of inter/set1@backupwithr into usb1/set1@backupwithr
received 312B stream in 1 seconds (312B/sec)
receiving incremental stream of inter/set1@backupwithr2 into usb1/set1@backupwithr2
received 312B stream in 1 seconds (312B/sec)
receiving full stream of inter/jails@backup878 into usb1/jails@backup878
received 47.9KB stream in 1 seconds (47.9KB/sec)
receiving incremental stream of inter/jails@backupwithr into usb1/jails@backupwithr
received 312B stream in 1 seconds (312B/sec)
receiving incremental stream of inter/jails@backupwithr2 into usb1/jails@backupwithr2
received 312B stream in 1 seconds (312B/sec)
receiving full stream of inter/jails/sb@backup878 into usb1/jails/sb@backup878
received 498KB stream in 1 seconds (498KB/sec)
receiving incremental stream of inter/jails/sb@backupwithr into usb1/jails/sb@backupwithr
received 312B stream in 1 seconds (312B/sec)
receiving incremental stream of inter/jails/sb@backupwithr2 into usb1/jails/sb@backupwithr2
received 312B stream in 1 seconds (312B/sec)
receiving full stream of inter/set2@backup878 into usb1/set2@backup878
received 409MB stream in 2 seconds (205MB/sec)
receiving incremental stream of inter/set2@backupwithr into usb1/set2@backupwithr
received 312B stream in 1 seconds (312B/sec)
receiving incremental stream of inter/set2@backupwithr2 into usb1/set2@backupwithr2
received 312B stream in 1 seconds (312B/sec)
[root@freenas] ~#
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Yes, you are right, thanks. I'm sorry, I somehow forgot that switch. I edited the original post for the benefit of future readers.
 

rm-r

Contributor
Joined
Jan 7, 2013
Messages
166
No, thank you - it sounds lame, but this was a bit of a scary thing for me; I'm used to GUI drag-and-drop and NTFS/ext recovery tools as a backup - but with your help, and using a VM to copy back and forth, I just restored!

Thank you so much again - you have made a great difference!

Just to give future readers an idea of valid results:

Code:
[root@freenas] ~# zfs send -R usb1@backupwithr2 | zfs receive -vF newinternal
receiving full stream of usb1@backup4usb into newinternal@backup4usb
received 51.0KB stream in 1 seconds (51.0KB/sec)
receiving incremental stream of usb1@backup into newinternal@backup
received 624B stream in 1 seconds (624B/sec)
receiving incremental stream of usb1@backup25th into newinternal@backup25th
received 624B stream in 1 seconds (624B/sec)
receiving incremental stream of usb1@backup878 into newinternal@backup878
received 624B stream in 1 seconds (624B/sec)
receiving incremental stream of usb1@backupwithr into newinternal@backupwithr
received 624B stream in 1 seconds (624B/sec)
receiving incremental stream of usb1@backupwithr2 into newinternal@backupwithr2
received 624B stream in 1 seconds (624B/sec)
receiving full stream of usb1/set1@backup878 into newinternal/set1@backup878
received 235MB stream in 2 seconds (117MB/sec)
receiving incremental stream of usb1/set1@backupwithr into newinternal/set1@backupwithr
received 624B stream in 1 seconds (624B/sec)
receiving incremental stream of usb1/set1@backupwithr2 into newinternal/set1@backupwithr2
received 624B stream in 1 seconds (624B/sec)
receiving full stream of usb1/set2@backup878 into newinternal/set2@backup878
received 409MB stream in 2 seconds (205MB/sec)
receiving incremental stream of usb1/set2@backupwithr into newinternal/set2@backupwithr
received 624B stream in 1 seconds (624B/sec)
receiving incremental stream of usb1/set2@backupwithr2 into newinternal/set2@backupwithr2
received 624B stream in 1 seconds (624B/sec)
receiving full stream of usb1/jails@backup878 into newinternal/jails@backup878
received 47.9KB stream in 1 seconds (47.9KB/sec)
receiving incremental stream of usb1/jails@backupwithr into newinternal/jails@backupwithr
received 624B stream in 1 seconds (624B/sec)
receiving incremental stream of usb1/jails@backupwithr2 into newinternal/jails@backupwithr2
received 624B stream in 1 seconds (624B/sec)
receiving full stream of usb1/jails/sb@backup878 into newinternal/jails/sb@backup878
received 498KB stream in 1 seconds (498KB/sec)
receiving incremental stream of usb1/jails/sb@backupwithr into newinternal/jails/sb@backupwithr
received 624B stream in 1 seconds (624B/sec)
receiving incremental stream of usb1/jails/sb@backupwithr2 into newinternal/jails/sb@backupwithr2
received 624B stream in 1 seconds (624B/sec)


And my datasets look like this in the GUI now, after the restore:

restore.PNG
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
You can also use the GUI to do the snapshots/replication. However, the functionality provided by the GUI is more geared to recurrent tasks than to ad-hoc backups. My backup strategy includes two external drives. One is always connected to the system and the FreeNAS "GUI" snapshot/replication functionality backs up the important datasets every night. Every month I rotate the drives. However, if you intend to connect the drive only to do the backups, then the CLI is the way to go. With GUI replication you would have your logs spammed by failed replications whenever the drive is disconnected.
Anyway, thumbs up to actually trying/practicing the whole process including the restore :).
 

panz

Guru
Joined
May 24, 2013
Messages
556
May we suggest that the developers implement a feature for "one-shot" replications? :)
 

panz

Guru
Joined
May 24, 2013
Messages
556
Anyway, thumbs up to actually trying/practicing the whole process including the restore :).

I use this command line:

Code:
zfs send -R -i tank/music@001 tank/music@002 | zfs recv -Fduv backup


Explanation of the switches used on the receiving side:

-F forces a rollback to the most recent snapshot;
-d maintains my naming scheme;
-u makes sure that the associated file systems do not get mounted.

Then I do:

Code:
zfs mount -o ro backup/music
zfs set readonly=on backup/music


Is this all right?
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Is this all right?
If it does what you need then it's OK :). -d is probably not necessary, and -F may also not be necessary as you mount your backups read-only -- but there is also another side effect of -F that you may find useful (when receiving an incremental stream it will destroy all snapshots & datasets on the destination that do not exist on the source).
I think you can set a zfs property even on an unmounted dataset. So, you could first set the readonly property and then mount the dataset without -o ro.
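In other words, something like:

Code:
zfs set readonly=on backup/music
zfs mount backup/music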
 

panz

Guru
Joined
May 24, 2013
Messages
556
I think you can set a zfs property even on an unmounted dataset. So, you could first set the readonly property and then mount the dataset without -o ro.

I think I need to add -v to the (sending or receiving?) side to monitor the send / receive operation.

I'm not sure how to plan my backup strategy, so I'd like to discuss it with you.

I've just purchased a 24-bay case. I'm going to populate the first 6 bays with 3TB WD Red drives in a RAIDZ2 configuration. This is going to allow about 10TB of storage (from which I need to subtract a further 20%, if I'm understanding ZFS correctly).

Then, I plan to back up this pool (name this "main_pool") to a "backup_pool" of 6 HDDs (configured as above = 6 WD 3TB drives in RAIDZ2) that I'm going to insert once a month.

So, my "backup day" will be:

0) do a manual recursive snapshot of "main_pool" every 30 days;

1) the same day as step 0, scrub the "main_pool" and check its smartctl tests (I'm studying how to do this from an SSH session; rough CLI equivalents are sketched after this list); if all OK then:

2) shut down the server;

3) insert the 6 trays containing the drives of the "backup_pool";

4) import & decrypt "backup_pool";

5) scrub "backup_pool";

6) from the CLI do zfs send / recv with incremental -I or -i switch (as above);

7) export "backup_pool";

8) shut down the server and pull the 6 backup disks.
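Rough CLI equivalents I have in mind for the scrub/SMART/replication steps (just a sketch -- /dev/ada0 and the snapshot names are placeholders, and I'd unlock/import the encrypted backup_pool via the GUI):

Code:
zfs snapshot -r main_pool@monthly        # step 0
zpool scrub main_pool                    # step 1
zpool status main_pool                   # check the scrub result when it finishes
smartctl -t long /dev/ada0               # start a long SMART self-test (one per disk)
smartctl -a /dev/ada0                    # review SMART attributes and test results
zpool scrub backup_pool                  # step 5, after import/decrypt
zfs send -RI main_pool@last_month main_pool@monthly | zfs recv -Fduv backup_pool   # step 6
zpool export backup_pool                 # step 7, before pulling the disks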
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Wow, that's a lot of manual labor for a backup! But your steps are accurate for what you are wanting to do.
 

Robert Smith

Patron
Joined
May 4, 2014
Messages
270
Unless I missed another thread somewhere, I think this is the best one with a ‘how-to’ on using ZFS replication for backup to a locally attached pool.

Great instructions by Dusan, I just want to offer some add-ons.

First of all, the incremental backups, as suggested, skip any intermediate snapshots, which is probably what most people would want, while the initial zfs send sends all the snapshots already present, not just @backup.

To avoid that (in case you have a great deal of snapshots already on the source pool, which you do not care to transfer to the backup pool), you can use the following initial copy command instead:

Code:
zfs send -Ri main_pool main_pool@backup | zfs receive -vF USB_pool


After you are done transferring, you can delete snapshots that you no longer want on both pools; just keep in mind that for subsequent incremental replications, you want the last previously replicated snapshot (the starting point of the incremental replication) to still exist on both pools.
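A quick way to double-check that before pruning (using the pool names from earlier in the thread):

Code:
zfs list -H -o name -t snapshot -r main_pool | grep @backup
zfs list -H -o name -t snapshot -r USB_pool | grep @backup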

Second, unless you are running these commands at the physical box, you should use persistent sessions; otherwise it may become hard to figure out what is going on if you get disconnected.

On FreeNAS, for persistent sessions, you can use tmux:

Before running remote console commands just run

Code:
tmux


And if you get disconnected, to get back to the remote session, login to the console again and run

Code:
tmux attach


This should pop you right back to the console session you left.

HTH
 

TAC

Contributor
Joined
Feb 16, 2014
Messages
152
I have a 3-drive RAIDZ1 volume (Data) in my FreeNAS box and a 6-drive RAIDZ2 volume (Data1). I want to upgrade my 'Data' volume, which includes all my data and jails, to RAIDZ2.

From the above it looks like I can back up (send) my whole Data volume to my Data1 volume, upgrade my Data volume to RAIDZ2, and then restore everything from the Data1 volume back to Data.

Since this would be like restoring from a catastrophic failure, I would think my jails and everything would work as before.
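Something like this, I'm guessing (Data1/Data-copy is just a temporary holding dataset name; -u avoids mounting the copy on top of the live datasets):

Code:
zfs snapshot -r Data@migrate
zfs send -R Data@migrate | zfs receive -vuF Data1/Data-copy
# destroy Data and recreate it as RAIDZ2 via the GUI, then:
zfs send -R Data1/Data-copy@migrate | zfs receive -vF Data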
 

rm-r

Contributor
Joined
Jan 7, 2013
Messages
166
Righto - here is the first cut - just set your specific datasets at the top and away you go.

If the backup drive does not already have the dataset, uncomment the initial replication section in the script, run it, then comment it back out again.

The script assumes both source and destination are on the same host at the moment - I'll keep updating the GitHub repo as I go.

https://github.com/J-C-B/freenas_and_iohyve/blob/master/freenas_backup.sh

Cheers guys

Code:
#!/bin/bash

## Credits
# initial script idea is from here http://stephanvaningen.net/node/14
# reverse SSH idea is from here https://forums.freenas.org/index.php?threads/best-way-to-access-freenas-without-port-forwards.25454/
# Standing on the shoulders of giants

## Tips
# use "tmux" before running the script manually - then if you disconnect the script will keep going
# if disconnected use "tmux attach" to get back to your session

## Splunk
# This script is designed to output key value pairs, with a unique ID to logger
# this is so it will work with the FreeNAS app for Splunk I also maintain - https://splunkbase.splunk.com/app/2940/

#################################################################
######### set your volume and dataset to backup below ###########
#################################################################
################# Example below will backup #####################
################## FROM datastore/downloads #####################
#################### TO usb/jails ###########################
#################################################################

# Enter source zvol below
srczvol=datastore

#Enter target zvol below
targetzvol=usb

#Enter dataset to backup below
dataset=jails

# Set remote host here - you need to have done ssh key exchange between hosts for prompt-less execution
# see here for details on this

# Reverse SSH example - see script here for receive side reverse ssh script to help
remotehost="ssh -p 9000 root@localhost"

# Local network example
# remotehost="ssh root@192.168.2.2"


#################################################################
#################################################################
################ Do not edit below here #########################
#################################################################
#################################################################

#Create unique id for tracking
runid=$(date +"%y%m%d%H%M%S")


#################################################################   
#################################################################
################ Check target up ############################
#################################################################
#################################################################
IP=localhost                                                                                                                                                     
PORT=9000

nc -vz $IP $PORT 2>/dev/null 1>/dev/null                                                                                                                         
if [ "$?" = 1 ]                                                                                                                                                 
then                                                                                                                                                             
  echo "Remote Host dead $? $IP $PORT $runid" | logger                                                                                                                                     
else   


#################################################################
############ uncomment to initialise the backup pool ############
################### not needed after first run ##################
####################  so comment back out! ######################
#################################################################
#echo "srcset="$srczvol/$dataset", state="start_initial_Replication", runid="$runid"" | logger
#zfs snapshot -r $srczvol/$dataset@remote-snap000 | logger
#zfs send -R $srczvol/$dataset@remote-snap000 | $remotehost zfs receive -Fduv $targetzvol | logger
#echo "srcset="$srczvol/$dataset", state="finish_initial_Replication", runid="$runid"" | logger


#################################################################
############## Prep and rotate Section ##########################
#################################################################
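# NOTE: until remote-snap001..003 exist (i.e. on the first few runs after the
# initial replication) the destroy/rename commands below will print
# "dataset does not exist" style errors -- these are harmless, there is simply
# nothing to rotate yet.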
echo "srcset="$srczvol/$dataset", tgtset="$remotehost-$targetzvol/$dataset", state="Preparations_start", runid="$runid"" | logger
echo "srcset="$srczvol/$dataset", state="delete_oldest_snapshot", runid="$runid"" | logger
zfs destroy -r $srczvol/$dataset@remote-snap003 | logger
echo "tgtset="$remotehost-$targetzvol/$dataset", state="delete_oldest_snapshot", runid="$runid"" | logger
$remotehost zfs destroy -r $targetzvol/$dataset@remote-snap003 | logger
echo "srcset="$srczvol/$dataset", state="Rotate_snapshots", runid="$runid"" | logger
zfs rename -r $srczvol/$dataset@remote-snap002 remote-snap003 | logger
zfs rename -r $srczvol/$dataset@remote-snap001 remote-snap002 | logger
zfs rename -r $srczvol/$dataset@remote-snap000 remote-snap001 | logger
echo "tgtset="$remotehost-$targetzvol/$dataset", state="Rotate_snapshots", runid="$runid"" | logger
$remotehost zfs rename -r $targetzvol/$dataset@remote-snap002 remote-snap003 | logger
$remotehost zfs rename -r $targetzvol/$dataset@remote-snap001 remote-snap002 | logger
$remotehost zfs rename -r $targetzvol/$dataset@remote-snap000 remote-snap001 | logger
echo "srcset="$srczvol/$dataset", state="Create_newest_snapshot", runid="$runid"" | logger
zfs snapshot -r $srczvol/$dataset@remote-snap000 | logger
echo "srcset="$srczvol/$dataset", tgtset="$remotehost-$targetzvol/$dataset", state="Preparations_finished", runid="$runid"" | logger

#################################################################
############### Send - Receive Section ##########################
#################################################################

echo "srcset="$srczvol/$dataset", tgtset="$remotehost-$targetzvol/$dataset", state="Start_send", runid="$runid"" | logger
zfs send -Ri remote-snap001 $srczvol/$dataset@remote-snap000 | $remotehost zfs receive -Fduv $targetzvol | logger
echo "srcset="$srczvol/$dataset", tgtset="$remotehost-$targetzvol/$dataset", state="Finished_send", runid="$runid"" | logger

zfs list -H -o name -t snapshot | grep $dataset@remote
$remotehost zfs list -H -o name -t snapshot | grep $dataset@remote

zfs list -H -o name -t snapshot | grep $dataset |  logger
$remotehost zfs list -H -o name -t snapshot | grep $dataset | logger

fi

 

Rabuyik

Cadet
Joined
Nov 16, 2016
Messages
7
Do you happen to have this backup script without the SSH remote part?

I think many people, including me, only need a local backup script, with or without hot-swapping the HDDs...

Many thanks!



 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,176
So just replace the ssh part after the pipe with a zfs recv with equivalent structure.
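For example, the initial and incremental replication lines from the script would become something like this for a local target pool (the other $remotehost-prefixed lines would likewise just drop the prefix):

Code:
zfs send -R $srczvol/$dataset@remote-snap000 | zfs receive -Fduv $targetzvol
zfs send -Ri remote-snap001 $srczvol/$dataset@remote-snap000 | zfs receive -Fduv $targetzvol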
 