How to backup and restore a VM

Louis2

Contributor
Joined
Sep 7, 2019
Messages
177
I know there have been other threads about this subject, but I still do not manage to do it / do not understand how ...

Assume the particular VM is situated in dataset '<pool-x>/<vms>/test_vm'.

If you create a snapshot of a VM or any other zvol, that snapshot refers to a delta, not the whole zvol.
And I want to make a copy of the whole zvol!
- to another dataset in the same pool
- to a dataset in another pool (my actual intention)
- to a pool on a remote machine

And it would have been very nice if that were possible from the GUI !!! ... (as far as I can see and have tried, it is not :rolleyes:)
Whatever.

Patrick Hausen suggested in an earlier thread:
zfs snapshot <pool-name>/path/to/vm-zvol@20220830
zfs send <pool-name>/path/to/vm-zvol@20220830 > /mnt/<pool-name>/some/storage/space/vm-zvol-20220830.img


I tried that, but it did not have the result I hoped for.

What I want is a new zvol which is an exact and full representation of the actual zvol being copied.
Following the procedure above I get a copy of the snapshot, which is only a small delta ...
not the complete zvol as it is at this moment!

So ... how to achieve that ...

Restoring the zvol copy later on probably amounts to:
- stop the test_vm (if it is running)
- delete the dataset '<pool-x>/<vms>/test_vm' (and everything in it)
- recreate that dataset
- copy the backup zvol from the other pool to '<pool-x>/<vms>/test_vm'
- start the VM
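For what it's worth, the restore steps could be sketched like this; the pool/dataset names are placeholders, and with `zfs receive -F` the delete-and-recreate steps may not even be needed, since receive can overwrite the destination:

```shell
# Untested sketch; pool, dataset and snapshot names are placeholders.
# 1. Stop the VM first (GUI or CLI).

# 2. Restore the backup copy over the original zvol. -F forces the
#    destination to be rolled back / overwritten, so manually deleting
#    and recreating the dataset should not be necessary.
zfs send backup-pool/vms/test_vm@20230423 | zfs receive -F pool-x/vms/test_vm

# 3. Start the VM again.
```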

I hope it works this way ...
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
First, a really clean backup of a ZFS VM volume requires downtime of the volume (or VM). You can leave the VM running or the zVol mounted, but the zVol snapshot backup could then be inconsistent (i.e., need a chkdsk / fsck run).

Next, ZFS snapshots are deltas of data. But, they represent the entire state of the zVol at the time of the snapshot. If you make an external copy, (like ZFS Send), it will represent the entire zVol.

I don't have the exact syntax to copy a single zVol to another location, but it is something like:

zfs snapshot POOL/PATH/TO/ZVOL@DATE
zfs send POOL/PATH/TO/ZVOL@DATE | zfs receive POOL/NEW/PATH

You can copy the zVol to:
- A file, locally, or remotely
- Same server & pool, but different path
- Same server, but different pool
- Different server using SSH, to different pool
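The four targets above can be sketched like this; the pool, path and host names are placeholders, and the snapshot is assumed to exist already:

```shell
# Untested sketch; all names are placeholders.
SNAP="tank/vms/myvm@20230423"

# 1. To a file, locally or on any mounted remote share:
zfs send "$SNAP" > /mnt/backup/myvm-20230423.img

# 2. Same server and pool, different path:
zfs send "$SNAP" | zfs receive tank/backups/myvm

# 3. Same server, different pool:
zfs send "$SNAP" | zfs receive backup/vms/myvm

# 4. Different server over SSH, to a pool there:
zfs send "$SNAP" | ssh root@otherhost zfs receive backup/vms/myvm
```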

I don't have any suggestions for using the TrueNAS GUI to make snapshots or replicate them, but it can be done.


ZFS snapshots are neat things. They take up no extra space until the original zVol has changes applied. Eventually snapshots can take up a lot of space, so keeping track of the amount of space they use is important.
 

Louis2

Contributor
Joined
Sep 7, 2019
Messages
177
For info: I did try the suggestions in the original thread by Patrick (much appreciated).

I tried to make a copy of the full VM, based on the snapshot, on another pool.

Since the original zvol (not the snapshot) was, let's say, 100GB, that should result in a same-size zvol on the destination pool. But that did not happen; the destination was only something like 200k.

The only thing I can imagine is that the real copy is a background task executed later, but I did not see the size growing over time.

I will try again ...
I see that the commands you suggest are slightly different ...
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
A ZFS Send & Receive will only copy blocks in use. If you create a 100GB zVol but only use 200KB, it will run fast and copy only 200KB.

As I said, I did not give you the exact syntax of the ZFS Send & Receive. There may be custom options needed, but I would not know those, as I don't copy zVols. The syntax I use for copying my Linux OS root pool for backup is different from what you asked about.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Exact syntax should be the following:

zfs snapshot pool1/zvolname@tobecloned
(this creates a snapshot named "tobecloned")

zfs send pool1/zvolname@tobecloned | zfs recv pool2/new_zvol_name
(this creates an exact copy of said snapshot in the path you want, renaming it however you want)

You will have to set up a script if you want this done automatically every X time: the tricky part is shutting down the VM and restarting it via the CLI.
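A rough sketch of such a script, reusing the placeholder dataset names above; the VM id and the midclt calls are assumptions (on TrueNAS SCALE the middleware client exposes vm.stop / vm.start, but verify with `midclt call vm.query` on your own system before relying on this):

```shell
#!/bin/sh
# Untested sketch: periodic zvol backup with VM downtime only for the
# snapshot itself. All names and the VM id are placeholders.
VMID=1                                  # find yours via: midclt call vm.query
SRC="pool1/zvolname"
DST="pool2/zvolname"
SNAP="$SRC@$(date +%Y%m%d)"

midclt call vm.stop "$VMID"             # a clean backup wants the VM down
zfs snapshot "$SNAP"                    # the snapshot itself is near-instant
midclt call vm.start "$VMID"            # VM can run again during the send
zfs send "$SNAP" | zfs recv -F "$DST"   # copy the full state to pool2
```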
 

Louis2

Contributor
Joined
Sep 7, 2019
Messages
177
From an SSH console I tried:
sudo zfs snapshot Cheetah/Virtual-Machines/xyz@20230422
sudo zfs send Cheetah/Virtual-Machines/xyz@20230422 | sudo zfs receive -F Olifant/BackUp-VMs/xyz

That command was accepted ... but up to now, over an hour later, it has not yet produced a visible result ...

The involved VM contains Ubuntu with Apache, MySQL, a mail system and an SFTP server in their initial state, but it is far bigger than the 152 kB which is the size of the destination dataset (without visible content).

There is an rsync task running, but that one is receiving data via a 1G connection, so the machine is hardly loaded and has plenty of time to clone the zvol, I would say ...

So I do not understand why it does not work :frown:
 

samarium

Contributor
Joined
Apr 8, 2023
Messages
192
If you want a rolling backup, it would seem that you can use the normal replication GUI to set that up; it would only push the latest changes following the initial replication. You would have to make sure that the replication stops if you want to use the replicated VM image, else any changes would get rolled back when the next replication runs. A cleaner way may be to clone the replicated image and then boot the clone, knowing that it will get progressively out of date versus the replicated image, so it would have to be re-cloned; but at least it would not get changed out from under you by a rollback. Seems you have some experimenting to do. You can do the incremental send/recv manually too, to see how it works (IIRC -I or -i), and of course zfs clone and zfs promote are other ways to bend your head out of shape too.
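The manual incremental send/recv mentioned above could look roughly like this; pool, dataset and snapshot names are placeholders, and this is an untested sketch:

```shell
# Untested sketch; all names are placeholders.
# First run: full send of an initial snapshot.
zfs snapshot tank/vms/myvm@base
zfs send tank/vms/myvm@base | zfs receive backup/vms/myvm

# Later runs: snapshot again and send only the delta since the last
# snapshot both sides have in common (-i; use -I instead to also send
# any intermediate snapshots).
zfs snapshot tank/vms/myvm@daily1
zfs send -i @base tank/vms/myvm@daily1 | zfs receive backup/vms/myvm
```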
 

samarium

Contributor
Joined
Apr 8, 2023
Messages
192
From an SSH console I tried:
sudo zfs snapshot Cheetah/Virtual-Machines/xyz@20230422
sudo zfs send Cheetah/Virtual-Machines/xyz@20230422 | sudo zfs receive -F Olifant/BackUp-VMs/xyz

That command was accepted ... but up to now, over an hour later, it has not yet produced a visible result ...

The involved VM contains Ubuntu with Apache, MySQL, a mail system and an SFTP server in their initial state, but it is far bigger than the 152 kB which is the size of the destination dataset (without visible content).

There is an rsync task running, but that one is receiving data via a 1G connection, so the machine is hardly loaded and has plenty of time to clone the zvol, I would say ...

So I do not understand why it does not work :frown:
If the zfs recv is still running, then let it complete; when it stops, check the size. You may be able to see the pool's available space changing with zpool list or zfs list, but it's easier to just wait for the zfs recv to complete.
 

Louis2

Contributor
Joined
Sep 7, 2019
Messages
177
I have no idea if something is happening under the "hood"; I start the command and the command line comes back immediately. As a test I entered the second command again, now with recv instead of receive and with root as owner of the destination dataset. No result yet.

admin@lion[~]$ ps -aux | grep recv
admin 3861554 0.0 0.0 5136 2340 pts/0 S+ 13:30 0:00 grep recv

The machine load is minimal (a couple of percent).
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The new snapshot & zVol should show up with:
zfs list -t all -r Olifant/BackUp-VMs

Also, in general, if you are just copying a dataset or zVol once, you can remove the source and destination snapshots afterwards. They are simply not needed once the one-time copy is complete.
 

Louis2

Contributor
Joined
Sep 7, 2019
Messages
177
Here is the "top" and zfs list output:

[screenshot]
admin@lion[~]$ sudo zfs list -t all -r Olifant/BackUp-VMs
[sudo] password for admin:
NAME USED AVAIL REFER MOUNTPOINT
Olifant/BackUp-VMs 414G 12.2T 144K /mnt/Olifant/BackUp-VMs
Olifant/BackUp-VMs/xyz 152K 12.2T 96K /mnt/Olifant/BackUp-VMs/xyz
Olifant/BackUp-VMs/xyz@20230422 56K - 96K -
Olifant/BackUp-VMs/FreeBSD14 82.4G 12.2T 96K /mnt/Olifant/BackUp-VMs/FreeBSD14
Olifant/BackUp-VMs/FreeBSD14/FreeBSD14-yle96l 81.3G 12.3T 2.10G -
Olifant/BackUp-VMs/FreeBSD14/FreeBSD14-yle96l_FreeBSD14_20230401a 1.13G 12.2T 1.13G -
Olifant/BackUp-VMs/HomeAssistant 52.9G 12.2T 96K /mnt/Olifant/BackUp-VMs/HomeAssistant
Olifant/BackUp-VMs/HomeAssistant/haos 50.8G 12.2T 2.10G -
Olifant/BackUp-VMs/HomeAssistant/haos_HomeAssistant_20220401a 2.10G 12.2T 2.10G -
Olifant/BackUp-VMs/Twonky 279G 12.2T 96K /mnt/Olifant/BackUp-VMs/Twonky
Olifant/BackUp-VMs/Twonky/Twonky-d0t98a 102G 12.2T 85.7G -
Olifant/BackUp-VMs/Twonky/Twonky-d0t98a_Twonky20230403a 86.3G 12.2T 86.3G -
Olifant/BackUp-VMs/Twonky/Twonky-d0t98a_Twonky_20230401a 5.00G 12.2T 5.00G -
Olifant/BackUp-VMs/Twonky/Twonky-d0t98a_Twonky_20230402a 86.3G 12.2T 86.3G -
Olifant/BackUp-VMs/test 96K 12.2T 96K /mnt/Olifant/BackUp-VMs/test
Olifant/BackUp-VMs/test@20230422 0B - 96K -
Olifant/BackUp-VMs/test2 96K 12.2T 96K /mnt/Olifant/BackUp-VMs/test2
Olifant/BackUp-VMs/test2@20230422 0B - 96K -
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I think there is missing syntax: the destination name does not exist. So you would use something like this:

sudo zfs send Cheetah/Virtual-Machines/xyz@20230422 | sudo zfs receive -ev Olifant/BackUp-VMs

We use the "-e" option because you don't have a "Cheetah/Virtual-Machines" dataset in "Olifant/BackUp-VMs", nor do you want one. I also included "-v" to give you the status of the receive.

From the ZFS Receive manual page:
-e  Discard all but the last element of the sent snapshot's file system name, using that element to determine the name of the target file system for the new snapshot as described in the paragraph above.
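What "-e" does to the name can be mimicked with plain shell string handling (dataset names taken from this thread):

```shell
# receive -e keeps only the last element of the sent filesystem name
# and grafts it onto the receive target:
src="Cheetah/Virtual-Machines/xyz@20230422"
fs="${src%@*}"                         # drop the @snapshot suffix
dest_parent="Olifant/BackUp-VMs"
dest="$dest_parent/${fs##*/}"          # keep only the last path element
echo "$dest"                           # Olifant/BackUp-VMs/xyz
```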
 

Louis2

Contributor
Joined
Sep 7, 2019
Messages
177
Now something is happening! However, not very much:
"received 43.9K stream in 1 seconds (43.9K/sec)"

I have an appointment now; I will look into it further this evening.
 

Louis2

Contributor
Joined
Sep 7, 2019
Messages
177
It is just NOT working this way. Though I have to say that I do not know what the procedure should be :eek:

One thing is for sure: you cannot use a just-generated snapshot, you need a snapshot representative of ALL data used by the actual VM instance.
Using that snapshot you can create the intended backup zvol.

sudo zfs send <sourcepool>/<datasetpath>/<dataset>@<the_snapshot_representing_actual_situation> | sudo zfs receive -Fev <destpool>/<new-datasetpath>/<newdataset>
(where I am not sure if you can force the dataset name this way)

The problem is how to create the "ALL-actual-data" snapshot. To be honest I do not know, but I tried things like this:
- stop the VM
- promote the snapshot used by the VM (indicated by the computer symbol)
- delete all snapshots which can be deleted now
- promote the VM snapshot again
- delete all snapshots which can be deleted now
- promote the VM snapshot again
- etc.
Up to the moment that only the base snapshot (xyz-rfg1ef) and the "VM" snapshot are left.

Then send the resulting VM snapshot to the destination; that will result in the intended zvol (or at least comes close, I am not sure it is 100% this way).

And to finish things:
- rename the VM snapshot to the name belonging to the first snapshot
- tell the VM to use that snapshot
- restart the VM

I do NOT guarantee !!!! that the described process is correct, but I think it is the correct direction. This process is IMHO too tricky to do manually; whatever it exactly is, it should be performed via a script behind a GUI.

Also note that these are my actual findings, no less, no more.

If someone has a better solution, I am very interested!

PS
Note that I am not a fan of the incredibly long names becoming longer and longer with each snapshot.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I don't understand what you mean by promote. If you mean the ZFS command zfs promote, that is outright wrong for the intended purpose.

For my Linux computer's recovery, I have an alternate boot media installed. It is then possible to perform a 100% backup of the OS pool (aka root pool / rpool) and make it bootable. I've tried booting off the other media; it has worked 100% of the times I have tried it. For me, I scripted the process and check the log file afterwards. Always good.

The procedure I sent you should work. I would perform the work as root and skip the sudo command; it might be interfering with proper operation.

Stop the VM if you want a clean backup.

sudo su -
zfs snapshot Cheetah/Virtual-Machines/xyz@20230422
zfs send -v Cheetah/Virtual-Machines/xyz@20230422 | zfs receive -e Olifant/BackUp-VMs

Note that compared with my earlier suggestion, the verbosity flag is now on the sender side of the pipe.

When done, you clean out the snapshots:

zfs destroy Cheetah/Virtual-Machines/xyz@20230422
zfs destroy Olifant/BackUp-VMs/xyz@20230422


To be exceptionally clear, ZFS snapshots represent the dataset or zVol at the time of the snapshot. This has been proven over and over again, at least for me. ZFS promote is something different: it swaps a clone with its origin dataset, which is not a way to make a full copy of a zVol.

If this information is not suitable to create a copy of your zVol, then I don't know how to explain it better.
 

Louis2

Contributor
Joined
Sep 7, 2019
Messages
177
Tja ... I am still fighting to get a backup ...

What I would like to create is one zvol representing the VM at a given moment in time. To achieve that I tried the proposed snapshot method and replication tasks, both not leading to the result I hoped for ...

For the test I took a FreeBSD VM which I had not / hardly touched before and created two snapshots, one with and one without recursion:
root@lion[~]# zfs snapshot Cheetah/Virtual-Machines/FreeBSD14@20230423
root@lion[~]# zfs snapshot -r Cheetah/Virtual-Machines/FreeBSD14@20230423_r

The VM looks like this (one the original and one a clone):

[screenshot]

The related snapshots are:

[screenshot]

I created two datasets to store the backup results and then executed the following commands:

root@lion[~]# zfs send -v Cheetah/Virtual-Machines/FreeBSD14@20230423 | zfs receive -e Olifant/BackUp-VMs/freebsd14_tst
full send of Cheetah/Virtual-Machines/FreeBSD14@20230423 estimated size is 42.6K
total estimated size is 42.6K
root@lion[~]# zfs send -v Cheetah/Virtual-Machines/FreeBSD14@20230423_r | zfs receive -e Olifant/BackUp-VMs/freebsd14_r
full send of Cheetah/Virtual-Machines/FreeBSD14@20230423_r estimated size is 42.6K
total estimated size is 42.6K

Then I created an (artificial) daily snapshot for the test, in favor of a replication task:

[screenshot]

The total result is like this:

[screenshot]

No zvols.

Note that if there had been snapshot zvols in the directory, the replication task would have copied them all (I have seen that behavior before).


Then I tried whether it is possible to create a snapshot of a zvol (not that I expected that to work):

[screenshot]

So none of the described actions has led to the result I hoped for ..... :oops: :confused:
I am probably doing very stupid things :eek:
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I can't understand what your issue could be, but I have recently successfully cloned a VM's zvol with this approach (bare metal CORE).
 

Louis2

Contributor
Joined
Sep 7, 2019
Messages
177
I have a CORE machine, and this machine is a SCALE machine.

The CORE machine was mainly intended as a NAS, whereas the SCALE system is a server for NAS but also for VM functionality. ... And I would like to back up the VMs on a second dataset to make sure that in case of a failure or some other problem, I always have a solid backup :)
(which I can also store on the CORE machine)
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
My point is that I am perfectly able to do on my system what you are trying to do on yours; I re-tested just now.
Likely your naming scheme is not helping you; anyway, you can snapshot the zvol from the GUI in the pool/storage section.
 