Most appropriate way to duplicate dataset from one pool to another on a daily basis

Status
Not open for further replies.

SwisherSweet

Contributor
Joined
May 13, 2017
Messages
139
I have a pool made up of several datasets. Some of the datasets are zvols accessible via iSCSI.

I understand that snapshots can be sent to another pool; however, I do not know how to use the scheduled snapshots I am taking of my "primary" pool as backups. The snapshot names are not static (e.g., auto-20170607.1636-1y). So my questions are:

1. Are snapshots the most appropriate means of replicating data from one pool (in my case "primary") to a backup pool on the same server?
2. If I use snapshots to send/recv datasets from my primary pool to my backup pool, will my backup pool include the snapshots too?
3. Assuming cron is the most appropriate way to schedule the replication, how can I write a command that will take the last snapshot that was automatically generated by FreeNAS?

Also, I assume there is nothing special about replicating zvols using snapshots. Please correct me if I am wrong.

Appreciate your help.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
In a word: yes.
1. Yes.
2. Yes.
3. No; cron isn't needed. Replication is built into the GUI, in the "Replication Tasks" section under Storage. Simply tell it your source/destination dataset and use localhost/127.0.0.1; it should just work, and it will trigger with each auto-snapshot. (If you ever want to script it by hand instead, see the sketch below.)

A zvol snapshot looks the same as a dataset snapshot and replicates like a dataset.
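
For the command-line route, a rough sketch is below. This is just an illustration of the idea, not what the FreeNAS replication task runs internally; "primary/mydata" and "backup/mydata" are placeholder dataset names.

Code:
#!/bin/sh
# Sketch only: send the newest auto-snapshot of primary/mydata to the backup pool.
SRC=primary/mydata
DST=backup/mydata

# Newest snapshot of the source dataset (sorted by creation time, newest last).
LATEST=$(zfs list -H -t snapshot -o name -s creation -d 1 "$SRC" | tail -n 1)

# Full send the first time; once both pools share a common snapshot you would
# add -i <previous-snapshot> to make this incremental.
zfs send "$LATEST" | zfs recv -F "$DST"

Dropping something like that into a daily cron job would cover question 3, but the built-in replication task is the easier route.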
 

SwisherSweet

Contributor
Joined
May 13, 2017
Messages
139
Thanks for the reply. After I knew what I was searching for, I found this post:

https://forums.freenas.org/index.php?threads/no-ecdsa-host-key-is-known-for.21521/#post-128909

This gave me step-by-step directions on getting this to work and it worked perfectly for me. :)

Thanks again!
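
For anyone following the same steps, one quick sanity check is to confirm the snapshots actually arrived on the backup pool. Something like this (with "backup" as a placeholder pool name) lists them oldest to newest:

Code:
# List every snapshot now present under the backup pool, sorted by creation time.
zfs list -H -t snapshot -o name,creation -s creation -r backup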
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Awesome, glad I could help.

You still have a single machine as a single point of failure: anything that can corrupt the primary pool (bad RAM, bad config, a security hole, etc.) can very likely corrupt the second pool too. So if you particularly care about what's in the pool, you might want to consider moving the secondary pool to a different machine. That said, this is still a bit better than just one pool.
 

SwisherSweet

Contributor
Joined
May 13, 2017
Messages
139
Thanks. I plan to keep critical data offsite, but my primary reason for the local backup pool is to cover a simultaneous failure of multiple drives, or me doing something stupid to the primary pool of data.

If I had bad RAM (I'm using ECC), I don't think having a remote copy of the potentially corrupt data would matter.

What do you mean by bad config or security hole?
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
"bad config" = "I do something stupid to the primary pool"

A security hole means exactly that: some flaw that allows somebody to do something malicious.

I was mostly pointing out the two-pools-in-one-system setup as a side note. If you have a plan for offsite backups, then it shouldn't matter much that you are replicating to the same machine.
 