Best Methods/Practices for Pool Backup while retaining Snapshots between backups.

Status
Not open for further replies.

mike360x1

Contributor
Joined
Dec 25, 2012
Messages
107
Hello,

I'm wondering what the best practices are for backing up data. More specifically, how can one back up data while still retaining the snapshots?

My reasoning is that I may have to restore a backup from a specific snapshot in the future, for example if the backup turns out to have been unknowingly infected by a virus long ago that has only just been discovered. In that case, only having the snapshot created at the time of the backup will not be enough.

My backup plan is to have my irreplaceable data backed up to Amazon AWS S3 (checked daily), and then a local backup of all data (to an archival 8 TB HDD) run bi-monthly.

I have heard about rsync and ZFS push/pull, but I currently don't know how to achieve what I'm looking for with either of these methods.

So, it comes down to two questions:

1. What would be the best solution for backing up pool data to a single hard drive while still satisfying the above condition?

2. If that is not possible, what is the best solution for backing up to a single HDD regardless of snapshots?

I would be grateful for your help.

Sincerely,

Michael L.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Rsync won't back up snapshots, but you can look up scripts that will keep incremental versions.

ZFS replication will do exactly what you are looking for. In FreeNAS, you configure the source and destination (either locally or another FreeNAS across the network) and it will copy the dataset, snapshots and all.

1. What would be best solution for backing up pool data to a single hard drive while still being able to satisfy the above condition?
Replication. Just create a new pool with the archive drive in it, and point your replication at that (when configuring replication, use 127.0.0.1 as the destination)
 

mike360x1

Contributor
Joined
Dec 25, 2012
Messages
107
Rsync won't back up snapshots, but you can look up scripts that will keep incremental versions.

ZFS replication will do exactly what you are looking for. In FreeNAS, you configure the source and destination (either locally or another FreeNAS across the network) and it will copy the dataset, snapshots and all.


Replication. Just create a new pool with the archive drive in it, and point your replication at that (when configuring replication, use 127.0.0.1 as the destination)

I see. I have checked out ZFS push/pull before. My understanding is that you use the zfs send/recv commands to send a dataset at a certain snapshot, and it's done like the following.

Code:
zfs send datapool/docs@today | zfs recv backuppool/backup


What I don't see is an option to recursively transfer all the snapshots of a dataset and its child datasets.
I'm guessing I just leave out @(snapshotname) and add a -r option at the end? Something along those lines?
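For what it's worth, on the CLI the recursion comes from a flag on zfs send itself, not from omitting the snapshot name. A minimal sketch, assuming illustrative names (datapool/docs, backuppool/backup, and @today are examples, not from this setup):

```shell
# Take a recursive snapshot of the dataset and all of its children
zfs snapshot -r datapool/docs@today

# -R replicates the dataset, its child datasets, and all of their
# snapshots; -F lets the receiving side roll back to match the stream
zfs send -R datapool/docs@today | zfs recv -F backuppool/backup
```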
 

snaptec

Guru
Joined
Nov 30, 2015
Messages
502
You can set a checkbox for recursive in the gui


Sent from iPhone using Tapatalk
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,478
As @snaptec said, you have to check the box in the GUI. Unless you have a specific reason not to, you should be using the GUI to set up periodic snapshots and replication tasks.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Doing this from the CLI is going to take a ton of learning and research. It's a lot easier to use the GUI. Just make sure you have recursive snapshots taken on your root pool and child datasets configured for replication.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I'm normally pro the CLI, but seriously, the GUI periodic replication does a LOT more than just send/receive. If you can use it, you should.
 

mike360x1

Contributor
Joined
Dec 25, 2012
Messages
107
Okay, fair enough. I'm not sure how to use the CLI to recursively send over all my snapshots anyway. :)

However, the reason I didn't initially think of using the ZFS replication GUI was that it required a remote location, so I thought it was only for remote transfers, not local ones.

Would I leave the remote host blank this time? I wouldn't want to accidentally copy my entire dataset to the wrong location and find out a few hours after it has finished copying. (If only there were a dry-run function in the GUI.)

What I want to do is transfer
Default/Personal
to
BackupSet/Personal

Do I assume that ZFS replication transfers the whole directory (including the dataset folder) to the new location?
I.e., Default/Personal will become BackupSet/Personal/Personal on the backup pool?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
You use "localhost" as the destination IP.

This might not be quite as fast as a true local copy, but it should be fast enough.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Would I leave the remote host blank this time?
No, use 127.0.0.1 (or localhost):
Just create a new pool with the archive drive in it, and point your replication at that (when configuring replication, use 127.0.0.1 as the destination)
Do I assume that zfs replication transfers the whole directory (including the dataset folder) to the new location?
The contents of the source dataset will go into the destination dataset.

What I want to do is transfer
Default/Personal
to
BackupSet/Personal

Do I assume that ZFS replication transfers the whole directory (including the dataset folder) to the new location?
I.e., Default/Personal will become BackupSet/Personal/Personal on the backup pool?
Your "IE" is correct. If you would rather only see BackupSet/Personal, then make your destination "BackupSet" and the replication will create the "Personal" dataset and copy everything into it.
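To make the naming behaviour concrete, here is a sketch of the two layouts (dataset names taken from the thread; the command assumes replication has already run):

```shell
# Destination set to "BackupSet/Personal" nests a child dataset:
#   BackupSet/Personal/Personal
#
# Destination set to "BackupSet" gives the flat layout instead:
#   BackupSet/Personal
#
# Either way, the resulting layout can be verified with:
zfs list -r BackupSet
```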
 

mike360x1

Contributor
Joined
Dec 25, 2012
Messages
107
Thank you for your reply,

I have setup the replication task following your suggestion.

I'm now getting the error:
Code:
Failed: Permission denied (publickey,password)


What am I doing wrong?

To obtain the SSH key, I used the SSH key scan button.
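That error usually means the destination side doesn't trust the replication key. One way to narrow it down is to test the SSH leg by hand; the key path below is the usual location on legacy FreeNAS, which is an assumption worth verifying on your version, and the public half has to be in the destination user's SSH public key field (for a loopback replication, that's root on the same box):

```shell
# Try the same connection the replication task makes; if this still
# asks for a password, the public key is not installed for the
# destination user (key path is an assumed legacy FreeNAS default)
ssh -i /data/ssh/replication -p 22 root@127.0.0.1 echo ok
```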
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I just refresh the storage tab and compare sizes. Or watch the reporting graphs. Not great but it works.
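If you prefer the CLI for the same rough check, comparing space usage on the source and destination datasets gives a similar progress estimate (dataset names are the ones from this thread):

```shell
# Compare space used on source vs. destination while replication runs
zfs list -o name,used,refer Default/Personal BackupSet/Personal
```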
 

mike360x1

Contributor
Joined
Dec 25, 2012
Messages
107
Is there a way to have FreeNAS copy just the contents of the dataset rather than the entire dataset itself (like you mentioned before)? Maybe by adding a forward slash at the end of the destination path?

I'm just looking to make backup/restore work exactly the way I intended.

But I suppose I could just create a dataset in the pool and use that... I guess it's not a huge deal if it can't be done.


EDIT: Off topic, but what happens to the BackupSet's snapshots if I disconnect the drive from the machine for storage and later reconnect it? What will happen to the snapshots that have expired in the meantime? Will they be deleted as soon as the ZFS pool is mounted?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Is there a way to have FreeNAS copy just the contents of the dataset rather than the entire dataset itself (like you mentioned before)? Maybe by adding a forward slash at the end of the destination path?
Are you talking about the naming of backupset/personal/personal, or the sub-contents of the personal dataset? If you don't want the double-"personal" directories, just use BackupSet as the destination, and it will put Personal there as a sub-dataset, as I mentioned previously:
If you would rather only see BackupSet/Personal, then make your destination "BackupSet" and the replication will create the "Personal" dataset and copy everything into it.
Will they be deleted as soon as the ZFS pool is mounted?
I believe expired ones get deleted when replication runs (unless you have the "remove stale snapshots" option disabled). Remember that replication syncs snapshots.
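A sketch of how to inspect this after reattaching the backup pool (pool and snapshot names are illustrative; nothing deletes snapshots merely because the pool was imported):

```shell
# List every snapshot on the reattached backup pool
zfs list -t snapshot -r BackupSet

# Expired snapshots remain until something destroys them, e.g. the
# replication job's stale-snapshot cleanup, or manually:
#   zfs destroy BackupSet/Personal@auto-20170101.0000-2w   # name illustrative
```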
 

mike360x1

Contributor
Joined
Dec 25, 2012
Messages
107
Are you talking about the naming of backupset/personal/personal, or the sub-contents of the personal dataset? If you don't want the double-"personal" directories, just use BackupSet as the destination, and it will put Personal there as a sub-dataset, as I mentioned previously:

I mean, is there any way of copying the contents of the dataset without copying its respective "folder" over? But I suppose that wouldn't make sense, now that I think about it, because the snapshots would then have nothing to reference.

I.e.:
Code:
Default/Personal <----- contents of
to
Backupset/Documents <---- destination (without creating the double directory: Backupset/Documents/Personal)
 