how to use zfs send | zfs receive to back up my current pools to a 10TB transfer disk?

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,458
The GUI does allow using "localhost" as the remote system to replicate to. But I've never tried it.
I have, and have found it very flaky. But even beyond that, it's far more complicated than it needs to be. Even when you're replicating to localhost, it's still going through ssh, so there's overhead from the network stack, from encryption/decryption, and from compression/decompression, plus the complication of SSH keys and fingerprints to deal with. None of that is of any use at all for something that isn't leaving your box.
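For a one-off local copy you can skip all of that and just pipe zfs send straight into zfs receive on the same box, something along these lines (pool, dataset and snapshot names here are only placeholders):

Code:
# placeholder names - substitute your own pool, dataset and snapshot
zfs snapshot -r tank/mydata@manual-backup
zfs send -R tank/mydata@manual-backup | zfs recv -F backup10tb/mydata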
It would be a nice feature to have a "one time" replication option to a specific target,
Indeed.
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
Some concrete examples that might help other newbies too https://forums.freenas.org/index.ph...-backup-removable-local-backup-restore.60312/

I'm gonna give it another try now that all my disks are running stable :)

PS: syncoid discussion (not resolved) can be found here https://forums.freenas.org/index.ph...to-script-replication-of-zfs-snapshots.60313/
PS: once replication has started, the disks keep running and working like crazy while no new snapshots are being generated? :( https://forums.freenas.org/index.php?threads/im-wondering-what-my-hdd-is-doing.61116/
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
Looks like the first zfs step worked.

Code:
zfs send -v -R Pipi@auto-20180128.1957-2w | zfs recv -F VOLU10TB/Pipi


That ran for a long time; the hard disks and CPUs worked hard and no errors were reported. The total disk space in use on VOLU10TB (a single disk) did not increase. I suspect this is one of the advantages of ZFS when copying your data to that volume twice (once with rsync, once with zfs send).
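Side note for anyone who wants to verify the space usage themselves (I think these are the right commands, corrections welcome): zfs list can show how much the received dataset and its snapshots actually consume.

Code:
zfs list -o space -r VOLU10TB
zfs list -t snapshot -r VOLU10TB/Pipi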

BUT... There is a but... Restoring the data (testing before destroying) is failing...

I have my new 5 disks now as their own pool for the time being. It's called HPBURNTEST.

I gave the following command:

Code:
zfs send -v -R VOLU10TB/Pipi@auto-20180128.1957-2w | zfs recv -F HPBURNTEST


It starts for a moment, before I get:

Code:
cannot receive new filesystem stream: destination 'HPBURNTEST' does not exist
warning: cannot send 'VOLU10TB/Pipi@auto-20180128.1757-2w': signal received
TIME SENT SNAPSHOT
warning: cannot send 'VOLU10TB/Pipi@auto-20180128.1757-2w': Broken pipe
cannot send 'VOLU10TB/Pipi': I/O error


I'll be investigating a bit myself... Tomorrow: maybe sleeping is enough to see the light again :) If you have a flashlight, don't be afraid to use it here ;p
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
I think you have to send it to a dataset, maybe: zfs send -v -R VOLU10TB/Pipi@auto-20180128.1957-2w | zfs recv -F HPBURNTEST/Pipi

Just a guess from memory.
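If memory serves, zfs receive also has a dry-run flag (-n), so you could check where the stream would land before actually writing anything, roughly like this:

Code:
zfs send -R VOLU10TB/Pipi@auto-20180128.1957-2w | zfs recv -n -v HPBURNTEST/Pipi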
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
Why is everything so fracking complicated??? :(
I want to remove all snapshots. The GUI ain't working. I need to remove all snapshots recursively (-R). HOW? :(

Code:
cannot destroy 'Pipi/jails/.warden-template-pluginjail@clean': snapshot has dependent clones
use '-R' to destroy the following datasets:
Pipi/jails/bacula-sd_1
Pipi/jails/xdm_1
Pipi/jails/owncloud_1
Pipi/jails/plexmediaserver_1
Pipi/jails/emby_1
Pipi/jails/subsonic_1
Pipi/jails/firefly_


Post-edit: I'll give this a look https://serverfault.com/questions/849966/zfs-delete-snapshots-with-interdependencies-and-clones

Post-edit 2: Ok, got it - finally :(
zfs destroy -R Pipi/jails/firefly_1 etc etc
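For anyone else hitting that error: as far as I can tell, you can list which clones depend on a snapshot before resorting to -R (names below are taken from the error above):

Code:
zfs get clones Pipi/jails/.warden-template-pluginjail@clean
zfs list -t snapshot -o name,clones -r Pipi/jails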

Now I can make some snapshots again so I can use zfs send & receive... SIGH!

Post-edit 3 + FOLLOW UP Q: https://forums.freenas.org/index.php?threads/recursive-snapshots.39420/ It's ok to enable recursive, right? All these 'explanations' make so little sense to me :(
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,924

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
Thank you, I missed the last link given there:

This clears up some confusion for me :)
The "Recursive" option means that a snapshot will also be created for every child dataset.

You need to understand the difference between a "FOLDER" and a "DIRECTORY".

A "FOLDER" is a ZFS dataset, created with the "zfs create" command. A "DIRECTORY" is a filesystem object, created with the "mkdir" command.

If your FOLDER contains a DIRECTORY, the snapshot will contain it. If your FOLDER contains a child FOLDER, the snapshot will not contain it.

If your FOLDER has a child FOLDER and you create a recursive snapshot, ZFS creates two snapshots with the same name: one per FOLDER.
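A small illustration of that last point, with made-up dataset names so I don't forget it:

Code:
# tank/parent and tank/parent/child are made-up datasets ("FOLDERs")
zfs snapshot -r tank/parent@demo
zfs list -t snapshot -r tank/parent
# shows tank/parent@demo and tank/parent/child@demo: one snapshot per dataset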

Thank you!
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
I think you have to send it to a dataset, maybe: zfs send -v -R VOLU10TB/Pipi@auto-20180128.1957-2w | zfs recv -F HPBURNTEST/Pipi

Just a guess from memory.
Aaaaah, finally :) Thank you! For the record: after wiping my HPBURNTEST pool and re-creating it, the restore now seems to work :) ♥ And a note to self: if "HPBURNTEST/Pipi" doesn't exist, it will be created. If it does exist, I suppose it'll be updated/overwritten (check this)?
And a reminder, because I keep forgetting it: zfs list -t all ;-p
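A guess at the answer to my own note-to-self (please correct me): with -F the destination gets rolled back and overwritten to match the stream, and to update an existing copy later you'd send an incremental between two snapshots, something like the following (the newer snapshot name here is made up):

Code:
# auto-20180204.1957-2w is a made-up newer snapshot; the older one must exist on both sides
zfs send -R -i auto-20180128.1957-2w VOLU10TB/Pipi@auto-20180204.1957-2w | zfs recv -F HPBURNTEST/Pipi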
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,450
I just found out about this thread, but for reference and completeness: I would be cautious about the process the OP went through, especially having to destroy the iocage jail snapshots on the source.

As stated by the OP, if all that was needed was to replicate the entire volume, then the command should have been as follows:

Code:
zfs send -v -R VOLU10TB@auto-20180128.1957-2w | zfs recv HPBURNTEST


I would not have used "auto-20180128.1957-2w" as the snapshot for the replication; instead, I would have taken a manual recursive snapshot of the volume via the GUI and used the default "manual-date..." designation.
The "manual-date..." snapshots never get destroyed automatically by FreeNAS; that is not the case for the "auto-date..." ones.

If the intent was to back up the volume into a dataset residing on the destination, then that dataset has to be created manually on the destination first.
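For example, a rough sketch ("backups" and the manual snapshot name are just placeholders):

Code:
zfs create HPBURNTEST/backups
zfs send -v -R VOLU10TB/Pipi@manual-20180129 | zfs recv -F HPBURNTEST/backups/Pipi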

What the OP did was, in fact, a recursive backup through replication of the "Pipi" dataset, and not of the entire volume.

One note: since iocage has been implemented, replication of the iocage jails dataset does not seem to be possible, as the iocage dataset may not be a true dataset; it may only appear in the GUI when iocage is activated.
 

kreg

Dabbler
Joined
Jul 7, 2019
Messages
11
Can zfs send, or any methods discussed here, copy a pool that is not currently imported / mounted? I have a possibly corrupted pool. The system kernel panics when I attempt to import it. I'd like to make copies of this pool onto fresh disks before messing with it further. I have also built a new system with a Xeon E3-1230 V6 and 32GB ECC memory. Hopefully it can become the new home for my data if I am able to recover it.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
Can zfs send, or any methods discussed here, copy a pool that is not currently imported
No. What you can try is to import the pool read-only and see if that works so you can get your data out. A number of failure modes still allow for read-only access.
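From the command line that would look roughly like this (the pool name is a placeholder; -R /mnt keeps it mounted under the usual FreeNAS path):

Code:
zpool import -o readonly=on -R /mnt mypool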
 