Restore Pools & Datasets from FreeNAS GUI

vafk

Contributor
Joined
Jun 22, 2017
Messages
132
Today I had a special task: changing the RAID-Z design on my main FreeNAS from RAID-Z1 to RAID-Z2.

I had my main system with pool nas1 and several datasets replicated to backup1 and backup2. Now when I wanted to get my datasets back, I did not find a one-button click to restore. Instead I had to do it the other way around, i.e. create snapshots of all datasets on the backup and a replication task from the backup to the main system.

I don't know if this is the right way to do it, and I hope it is not as cumbersome as it seems to get the data back. I read many posts where users ask about copying one pool into another, and all the answers I see say to use the command line with zfs send | zfs receive. I hope I am wrong that the current FreeNAS has to be used in a mix of GUI and command line when it comes to restoring one's data.

Appreciate if someone can point me to the right source where it is explained. Thanks!
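For reference, from what I've read, the zfs send | zfs receive everyone mentions seems to boil down to something like this. This is only a sketch of my understanding, untested, and the pool, dataset, and host names are placeholders, not my real setup:

```shell
# Hedged sketch of a manual restore from a backup box.
# "backup1" (pool on the backup system), "nas1" (rebuilt pool on the
# main system) and "mainnas" (hostname) are placeholders.

# On the backup system: freeze the replica's current state.
zfs snapshot -r backup1/nas1@restore

# Stream the replica (with child datasets and their snapshots, via -R)
# back to the rebuilt pool on the main system; -F lets recv roll back
# or overwrite the destination.
zfs send -R backup1/nas1@restore | ssh mainnas zfs recv -F nas1
```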
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Replication is running the zfs send | recv in the background together with the requisite snapshots.

There is no distinction between "restoring data" and "copying new data" as far as FreeNAS is concerned, so you shouldn't expect a specific GUI function to appear for that.

In your case, you used the GUI to set up a replication task (usually a repetitive action) for the relatively unique event of "restoring" the replicated data to the "source", which most folks find is better done at the command line due to the one-off nature of the event. To each his/her own. What you did is valid (although you're going to want to remove that replication job now).
 

vafk

Contributor
Joined
Jun 22, 2017
Messages
132
@sretalla

Thank you for the clarification. I am starting to understand. However, it makes me uncomfortable to use the GUI for backup (yes, I know this is not a "true" backup, but I hope you understand) and the CLI for restore. "Backup" runs on a regular basis (daily etc.) while restore happens only on demand (hopefully never), so using the CLI once in a lifetime needs a lot of concentration.

Now this is what I faced this morning after the system replicated through the night: half of the datasets were OK and the other half were broken. When I "ls" the /mnt/nas1 directory, it showed me only the working datasets. Midnight Commander would show me the broken datasets in red with a "?" in front of them, i.e. "Pictures" was "?Pictures". Never mind, I thought; I deleted the datasets, deleted the tasks, recreated everything and replicated again.

I have a second problem: part of the working datasets are read only. I followed this thread


checking with "zfs get readonly" and fixing with "zfs set readonly=off [dataset]"

Maybe this is not a problem but a misunderstanding on my side. After pools or datasets are copied to another location, are they marked as readonly by default? If so, must I then use the CLI with "zfs set readonly=off [dataset]" to be able to access them, or is there something to select in the GUI when replicating to indicate that this is a copy and access to it is permitted (without having to adjust the destination later with CLI commands)?
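For what it's worth, the check and fix from that thread look like this on my box (the dataset name is just an example, substitute your own):

```shell
# Example dataset name only; a freshly replicated dataset
# typically shows readonly=on.
zfs get readonly nas1/Pictures

# Make it writable again:
zfs set readonly=off nas1/Pictures

# Verify the property is now off:
zfs get readonly nas1/Pictures
```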

I still do not understand what went wrong & what I did wrong, but at this stage I must state that I badly need a clear step-by-step tutorial on how to
1) copy dataset@FreeNAS1 to dataset@FreeNAS2
2) after destroying dataset@FreeNAS1, copy dataset@FreeNAS2 back to dataset@FreeNAS1
3) While keeping all permissions & ACL as they were in the original dataset before these steps were done.

One more question: In my pool "nas1" there are several datasets, so for each dataset I created a snapshot task and a replication task. With 15 datasets that comes to 30 tasks. Is it possible to have only one snapshot task for pool "nas1" (that will include all datasets) and also one replication task to copy "nas1" to, let's say, "backup1"? I have not tried this because I was afraid I could mess up my replica on backup1...
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Maybe this is not a problem but a misunderstanding on my side. After pools or datasets are copied to another location, are they marked as readonly by default? If so, must I then use the CLI with "zfs set readonly=off [dataset]" to be able to access them, or is there something to select in the GUI when replicating to indicate that this is a copy and access to it is permitted (without having to adjust the destination later with CLI commands)?
Replication does this on purpose so you don't mess with your replica.

Doing the job of setting it to be writable is intended to be a human intervention.

3) While keeping all permissions & ACL as they were in the original dataset before these steps were done.
Since zfs send/recv is block-level, ACLs/permissions are not altered.

I still do not understand what went wrong & what I did wrong but in this stage must state that I need badly a clear step-by-step tutorial, how to
1) copy dataset@FreeNAS1 to dataset@FreeNAS2
2) after destroy dataset@FreeNAS1 copy dataset@FreeNAS2 to dataset@FreeNAS1
At the source:
zfs snapshot -r pool/dataset@snapshotName
zfs send -R pool/dataset@snapshotName | pv | ssh destinationIP zfs recv pool/dataset

the pv in the middle will give you a progress monitor on the task.

When you're done and happy everything is there:
zfs destroy -r pool/dataset (!!! careful, your snapshots disappear with this too, so no going back)

Then at the destination:
zfs snapshot -r pool/dataset@snapshotName
zfs send -R pool/dataset@snapshotName | pv | ssh sourceIP zfs recv pool/dataset

Since we're doing this manually, no need to set anything back from read-only.

Is it possible to have only one snapshot task for pool "nas1" (that will include all datasets")
That's the "-r" in zfs snapshot -r ...
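To illustrate (pool names and the snapshot name here are examples only):

```shell
# One recursive snapshot covers the pool-level dataset and every child:
zfs snapshot -r nas1@manual-2020-06-01

# "zfs list -t snapshot" would then show the same snapshot name on
# nas1, nas1/Pictures, nas1/DV, and so on.

# Likewise, a single send with -R replicates the whole tree in one go
# ("backupbox" is a placeholder hostname, "backup1" the target pool):
zfs send -R nas1@manual-2020-06-01 | ssh backupbox zfs recv -F backup1/nas1
```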
 

vafk

Contributor
Joined
Jun 22, 2017
Messages
132
@sretalla

I appreciate your help :smile:

Now I did for the dataset "DV" on pool "backup1"

1) zfs snapshot -r backup1/DV@snapshotDV
2) zfs send backup1/DV@snapshotDV | pv | zfs recv nas1/DV

and because it is local, it replicates at approx. 200 MiB/s.

I begin to see my mistake: when I tried to do it yesterday (I did not mention this in this post), I tried to use zfs send | receive on the dataset itself and not on a snapshot. Does that mean that copying datasets across pools only works after a snapshot is taken, which is then sent | received (and not the dataset itself)?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Does that mean, that to copy datasets across pools it only works after a snapshot is taken which then is sent | received (and not the dataset itself)?
Since the live filesystem is subject to change at any time, using a block-level tool to send it would be dangerous: already-sent blocks could change before the remaining blocks in that transaction are sent, so file changes made during the send could be only partially committed to the target, or even overwritten with older data. Sending snapshots, which don't change once taken, is a much safer way.
 

vafk

Contributor
Joined
Jun 22, 2017
Messages
132
Today I learned a lot :smile: Stay well & many thanks!!!
 