[SOLVED] Cloning a Volume with zvol

Status
Not open for further replies.

Grinas

Contributor
Joined
May 4, 2017
Messages
174
Hey,

I have an SSD that all my jails and VMs run from. It's a volume with a single drive, so if it ever fails I'll have a lot of work to do to restore it.

Just wondering: what's the best way to clone it so I can have a backup copy if it ever goes?
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Add a second drive to it as a mirror and you shouldn't have to worry about it any longer. That's kind of the point of using FreeNAS/ZFS. As long as you set your pool up correctly and have proper monitoring set up this should never be an issue.
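
For what it's worth, converting a single-disk pool to a mirror is one command from the shell (a sketch only; ssd is an example pool name and ada0/ada1 are example device names, so check yours with zpool status first):

# attach a second disk to the existing single-disk vdev, turning it into a mirror
zpool attach ssd ada0 ada1

ZFS then resilvers the existing data onto the new disk and the pool carries on as a mirror.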
 

Grinas

Contributor
Joined
May 4, 2017
Messages
174
Add a second drive to it as a mirror and you shouldn't have to worry about it any longer. That's kind of the point of using FreeNAS/ZFS. As long as you set your pool up correctly and have proper monitoring set up this should never be an issue.

Yes, I know I should at least mirror the drive, but I don't have enough SATA connections on my server, and I can't afford another SSD to mirror this drive. I'm also already at my PSU's output limit, so if I were to add a SATA card for another drive, I would also need to upgrade my PSU.

That's why I'm looking to create a backup that I can keep on an array that actually has fault tolerance.

There is no data on the SSD that can't be replaced, but having a backup would save me a few days of configuration if it ever did fail.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Sounds like a replication job is what you want (given your hardware constraint preventing the use of a mirror).

Set periodic snapshots on your SSD pool/datasets and use a replication task to send those snapshots to one of your spinning disk pools (you can specify a dataset that doesn't exist, like tank/SSDBackups, as the destination; it will be created).
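
Under the hood, that's roughly equivalent to this (just a sketch to show the idea; ssd is an example name for your SSD pool and the snapshot name is made up):

# take a recursive snapshot of the whole SSD pool
zfs snapshot -r ssd@manual-backup
# send the full tree to a backup dataset on the spinning pool
zfs send -R ssd@manual-backup | zfs recv -F tank/SSDBackups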

There's a good post out there to guide you on how to replicate to the localhost rather than an external system.

https://forums.freenas.org/index.php?threads/local-zfs-replication-task.15677/

Look at dusan's post for the clue about the SSH keys (you need to modify the root user's properties to include the key you will use).

To restore, you could do a zfs send | zfs recv in the opposite direction (I guess you will find a post covering that if needed).
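
Roughly like this, from memory (an untested sketch; pool and snapshot names are examples):

# push the backup back onto the rebuilt SSD pool
zfs send -R tank/SSDBackups@manual-backup | zfs recv -F ssd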

I have recently noticed a problem (and thought about a solution) with iocage jails and replication:

Because iocage sets up a mount directly at /mnt/iocage, your replica datasets will think that's where they need to mount as well (i.e. not at /mnt/tank/SSDBackups). This is an issue if your backup pool sorts (I think alphabetically) in front of your SSD pool... not sure about that. Anyway, on the next reboot after your replication job completes, you may find that your jails are running out of the backup mount (which may be read-only; replication target datasets are set that way).

My proposed solution to that is to use zfs set canmount=noauto tank/SSDBackups (for each of your datasets under iocage, including iocage itself).
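
If there are a lot of datasets under there, something like this should catch them all in one go (a sketch; adjust the dataset name to yours):

# set canmount=noauto on the backup dataset and everything below it
zfs list -rH -o name tank/SSDBackups | xargs -n 1 zfs set canmount=noauto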
 

Grinas

Contributor
Joined
May 4, 2017
Messages
174
Sounds like a replication job is what you want (given your hardware constraint preventing the use of a mirror).

Set periodic snapshots on your SSD pool/datasets and use a replication task to send those snapshots to one of your spinning disk pools (you can specify a dataset that doesn't exist, like tank/SSDBackups, as the destination; it will be created).

There's a good post out there to guide you on how to replicate to the localhost rather than an external system.

https://forums.freenas.org/index.php?threads/local-zfs-replication-task.15677/

Look at dusan's post for the clue about the SSH keys (you need to modify the root user's properties to include the key you will use).

To restore, you could do a zfs send | zfs recv in the opposite direction (I guess you will find a post covering that if needed).

I have recently noticed a problem (and thought about a solution) with iocage jails and replication:

Because iocage sets up a mount directly at /mnt/iocage, your replica datasets will think that's where they need to mount as well (i.e. not at /mnt/tank/SSDBackups). This is an issue if your backup pool sorts (I think alphabetically) in front of your SSD pool... not sure about that. Anyway, on the next reboot after your replication job completes, you may find that your jails are running out of the backup mount (which may be read-only; replication target datasets are set that way).

My proposed solution to that is to use zfs set canmount=noauto tank/SSDBackups (for each of your datasets under iocage, including iocage itself).

Can you do a full restore from a snapshot?

I always thought snapshots could only be used to roll back to a previous version if you have a fully working system, since they only contain the changes from the previous snapshot; I didn't know they could be used for full backups.

I'm assuming that to use the snapshots as a backup I will need the first snapshot taken and every snapshot after that to restore the volume to the state it was in before the failure?
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
Please reread sretalla's posting. The key is to set up snapshots and replication. Snapshots are needed to be able to do replication, and replication goes to another pool in your system (if done locally).

http://doc.freenas.org/11/storage.html#periodic-snapshot-tasks
http://doc.freenas.org/11/storage.html#replication-tasks
 

Grinas

Contributor
Joined
May 4, 2017
Messages
174
Please reread sretalla's posting. The key is to set up snapshots and replication. Snapshots are needed to be able to do replication, and replication goes to another pool in your system (if done locally).

http://doc.freenas.org/11/storage.html#periodic-snapshot-tasks
http://doc.freenas.org/11/storage.html#replication-tasks

I read the documentation before posting. It states:
"A replication task allows you to automate the copy of ZFS snapshots to another system over an encrypted connection. This allows you to create an off-site backup of a ZFS dataset or pool."

So does that mean the replication task merges snapshots into the backup on the remote/local machine to create the backup of the dataset? It doesn't say how this is done other than that it uses the snapshots.

---UPDATE---
Sorry, I just realized I was looking at the documentation for a previous version of FreeNAS; it's explained far better in the links you provided.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Hope you got it in the meantime, but the short version is that snapshot and replication tasks produce a full, identical copy of your source pool on the other side (localhost or another host), which can be rolled back just like the original to different snapshot versions (or you can simply mount a different point in time of the dataset/pool without needing to roll anything back).
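
For example, every replicated snapshot is browsable read-only through the hidden .zfs directory, without rolling anything back (the path and snapshot name here are just examples):

# look inside the backup as it was at a given snapshot
ls /mnt/tank/SSDBackups/.zfs/snapshot/auto-20190101/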

Snapshots can sometimes be interdependent, but ZFS/FreeBSD manages that for you and warns you when you're removing a snapshot that's needed; otherwise, snapshots are always "flattened" back into the pool as they are destroyed, so there's no concern about missing something unless you ignored a warning from the system at some point during a manual action.
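
That's also why incremental replication is safe. Something like this (snapshot names are examples):

# send only the blocks that changed between the two snapshots
zfs send -i ssd@auto-1 ssd@auto-2 | zfs recv tank/SSDBackups

only transfers the difference between auto-1 and auto-2, but after the receive the destination holds the complete dataset as of auto-2, not just a diff. So you don't need to keep every snapshot ever taken to do a full restore.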

You can't count on snapshots to be your backup, but they are a great help in performing your backup.
 