Share ZVOL Snapshot via iSCSI (for mount on Backup Server)

t0mc@

Cadet
Joined
Jan 14, 2021
Messages
7
Hi everyone,

We have a TrueNAS 12 install with a 5 TB zvol which is shared via iSCSI to a production server that stores data on it. So far, so good.
Now, for doing incremental file backups, we have a second server running Bacula (a backup software). Our plan is: at backup time, take a snapshot of the zvol in TrueNAS, share this snapshot via iSCSI, mount it on the backup server, and do the incremental backup from it.

To test this scenario we created another small zvol (backuptests), mounted it on the production server, created some files on it, and took a snapshot in the TrueNAS GUI:
[screenshots: the test zvol and its snapshot in the TrueNAS GUI]


Now we are at the point of sharing this snapshot via iSCSI in the TrueNAS GUI, which doesn't work... In the extent dialog TrueNAS shows the snapshot volume (as ro, of course, which is quite OK):
[screenshot: the extent dialog listing the snapshot volume (ro)]


But when clicking on "send", an error appears saying that the zvol doesn't exist:
[screenshot: error dialog "Zvol ... does not exist"]



What are we doing wrong? Can't zvol snapshots be shared via iSCSI even though they are shown in the dialog? Or is this scenario complete nonsense?
As iSCSI volumes should / can only be attached to one single server, we thought sharing a snapshot as a "new" volume would be a way...

Thx for any hint!
T0m@
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I don't think you can share snapshots via iSCSI. You would need to clone the snapshot to get something that can be shared... You should also then look into the complications of clones. The manual doesn't say whether zvol snapshots and clones behave the same or differently; it says you can clone, use, and then delete the clone with no harm to the original snapshot, but I have a nagging memory of reading that deleting a clone could somehow delete the original (I'm not sure what I'm really saying here other than: be careful).
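A minimal sketch of that clone step on the command line. Pool, zvol, and snapshot names here are placeholders, not taken from the original post; adjust to your own layout:

```shell
# Hypothetical names: pool "tank", zvol "backuptests", snapshot "manual1".
# A snapshot itself has no block device, but a clone of it does.
zfs clone tank/backuptests@manual1 tank/backuptests-clone

# The clone now appears as a device node and can be selected
# as an iSCSI extent in the TrueNAS GUI:
ls /dev/zvol/tank/backuptests-clone

# When the backup is done, remove the clone again:
zfs destroy tank/backuptests-clone
```

Note that as long as the clone exists, the snapshot it originates from cannot be destroyed.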
 

t0mc@

Cadet
Joined
Jan 14, 2021
Messages
7
OK, I will try this approach... It wouldn't be dramatic if the snapshot got deleted when the cloned volume is removed, as all of this is just for backup purposes.

But what I'm wondering is: why can the snapshot be found in the extent GUI's device dropdown? And on top of that, "snapshot" is also mentioned in the help text:
[screenshot: extent help text mentioning snapshots]
 

t0mc@

Cadet
Joined
Jan 14, 2021
Messages
7
I tried the snapshot -> clone zvol from snapshot approach... Indeed, a snapshot cannot be deleted while clones of it exist, so you first have to delete the cloned volume, then the snapshot... And then, the next day, a new snapshot with the same name has to be taken and a clone with the same name created from it; I assume that even if the iSCSI config still exists (as the names are the same), it wouldn't work.

So this workflow doesn't sound very practical:
  1. create snapshot of the zvol
  2. create clone of the snapshot
  3. mount the snapshot clone on the backup server
  4. do backup stuff
  5. unmount the snapshot clone from the backup server
  6. delete the clone
  7. delete the snapshot
Repeating this every day...
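For what it's worth, steps 1, 2, 6 and 7 of the cycle above can be sketched as a small script. The pool, zvol, and snapshot names are assumptions, and the iSCSI export / mount / backup steps on the backup server (3-5) are left out:

```shell
#!/bin/sh
# Sketch of the daily snapshot/clone/backup cycle described above.
ZVOL=tank/backuptests        # placeholder pool/zvol name
SNAP="$ZVOL@daily"           # placeholder snapshot name
CLONE="$ZVOL-backupclone"    # placeholder clone name

zfs snapshot "$SNAP"         # 1. create snapshot of the zvol
zfs clone "$SNAP" "$CLONE"   # 2. create clone of the snapshot

# 3./4./5. share the clone via iSCSI, mount it on the backup
#          server, run the backup, unmount it (not shown here)

zfs destroy "$CLONE"         # 6. delete the clone
zfs destroy "$SNAP"          # 7. delete the snapshot
```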

What would a better incremental backup process for iSCSI-shared zvols with a dedicated backup server look like?
 

leecz

Cadet
Joined
Jul 20, 2020
Messages
3
It was working on version 11, but after upgrading to 12 I get the same error. I think it's a bug. I need to share a zvol snapshot as an iSCSI target/extent.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
What would a better incremental backup process for iSCSI-shared zvols with a dedicated backup server look like?
Why don't you run a Bacula client on the file server and do incremental backups to the backup server?
Alternatively you could create regular incremental snapshots and replicate them to another ZFS-based system, skipping the file-based backup altogether.
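The replication approach could look roughly like this on the command line (the hostname and dataset names are made up for illustration; TrueNAS can also drive this via replication tasks in the GUI):

```shell
# Day 1: one-time full replication of the zvol to the backup system.
zfs snapshot tank/vol@monday
zfs send tank/vol@monday | ssh backuphost zfs receive backup/vol

# Day 2 onwards: an incremental send transfers only the blocks
# that changed between the two snapshots.
zfs snapshot tank/vol@tuesday
zfs send -i tank/vol@monday tank/vol@tuesday | ssh backuphost zfs receive backup/vol
```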
 

t0mc@

Cadet
Joined
Jan 14, 2021
Messages
7
Why don't you run a Bacula client on the file server and do incremental backups to the backup server?
Alternatively you could create regular incremental snapshots and replicate them to another ZFS-based system, skipping the file-based backup altogether.
For example, when backing up MySQL database files you have to stop MySQL to make sure the files / the backup are consistent, so you have downtime during the backup window. When backing up from a snapshot, no downtime is needed in those cases.
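As an aside, for MySQL specifically the downtime can often be reduced to a brief global read lock held just long enough to take the snapshot, instead of a full stop. A sketch, assuming the data directory lives on a dataset called tank/mysql (dataset and snapshot names are placeholders):

```shell
# Run these in an *interactive* mysql session; the read lock belongs
# to the session, so the snapshot (taken via the client's "system"
# command, which only works interactively) happens while all tables
# are frozen, and the lock is released right afterwards.
#
#   mysql> FLUSH TABLES WITH READ LOCK;
#   mysql> system zfs snapshot tank/mysql@backup;
#   mysql> UNLOCK TABLES;
```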
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
For example, when backing up MySQL database files you have to stop MySQL to make sure the files / the backup are consistent, so you have downtime during the backup window. When backing up from a snapshot, no downtime is needed in those cases.
You will still lose in-flight transactions, but at least all tables will be frozen at the same moment in time. I see.

Well, I cannot offer much help. I simply back up zvols as zvols via incremental replication to a second system - with 2 or 4 weeks' worth of snapshots.
 

t0mc@

Cadet
Joined
Jan 14, 2021
Messages
7
You will still lose in-flight transactions, but at least all tables will be frozen at the same moment in time. I see.

Well, I cannot offer much help. I simply back up zvols as zvols via incremental replication to a second system - with 2 or 4 weeks' worth of snapshots.
I read about this some time ago. As I'm relatively new to the whole ZFS world, I've never tried it. So you constantly keep 15 or 30 snapshots, and older ones get deleted automatically? What about performance / disk space usage?

Nevertheless, I'm with leecz... I also think it is a bug in TrueNAS, since 1. he wrote that exactly this worked in v11, and 2. the GUI offers this functionality.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
So you have 15 or 30 snapshots constantly
15 to 30? Hundreds ... 168 for a week of hourly snapshots per VM ...

older ones will be deleted automatically?
Of course, that's what the snapshot tasks are for. You define frequency and retention and the rest is automatic.

What about performance / Disk Space usage?
Performance of snapshots in ZFS is not an issue. Not at all. Disk space depends on how much of your virtual disks is rewritten, and how quickly. I'll attach a list for our Poudriere VM, which naturally has a lot of writes going on.
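If you want to see where that space goes, the USED column of zfs list shows the space unique to each snapshot (the dataset name is illustrative):

```shell
# USED  = space that would be freed by destroying just this snapshot
# REFER = total data the snapshot references
zfs list -t snapshot -o name,used,refer -s creation tank/vol
```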

HTH,
Patrick
 

Attachments

  • poudriere-snapshots.txt
    12.3 KB · Views: 181

t0mc@

Cadet
Joined
Jan 14, 2021
Messages
7
15 to 30? Hundreds ... 168 for a week of hourly snapshots per VM ...
:D OK... I thought they were daily snapshots, not hourly...

Of course, that's what the snapshot tasks are for. You define frequency and retention and the rest is automatic.
I see... I'll have to look a bit closer at this...

Performance of snapshots in ZFS is not an issue. Not at all. Disk space depends on how much of your virtual disks is rewritten, and how quickly. I'll attach a list for our Poudriere VM, which naturally has a lot of writes going on.
Thx a lot... The performance question came to mind because I know that in VMware ESXi, keeping VM snapshots for a longer time makes the VM's performance get worse...
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
the performance question came to mind because I know that in VMware ESXi, keeping VM snapshots for a longer time makes the VM's performance get worse
That is because VMFS is not copy-on-write. As soon as you create a snapshot, the VMDK is frozen and a continuously growing transaction log is written instead.
ZFS is CoW anyway, so all it takes to create a snapshot is to not free expired blocks and to keep the reference.
 

t0mc@

Cadet
Joined
Jan 14, 2021
Messages
7
That is because VMFS is not copy-on-write. As soon as you create a snapshot, the VMDK is frozen and a continuously growing transaction log is written instead.
ZFS is CoW anyway, so all it takes to create a snapshot is to not free expired blocks and to keep the reference.
I see... thx a lot for clarifying :)
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
One other comment about non-CoW file systems & snapshots (which includes Linux LVM snapshots): when you delete the snapshot, it can take a while to apply the transaction log to the "real" storage. The more that changed, the longer it takes to flush the transaction log. This also implies that the "real" storage is going to be hit with heavy disk writes while the transaction log is flushed. I have personally seen this take hours in a production VMware environment when a snapshot was held longer than it should have been.

ZFS does need some I/O to delete / destroy a snapshot (or a clone or dataset too), so that the freed-up blocks can be recorded as reusable. However, all modern ZFS (both OpenZFS and Solaris ZFS) uses a feature called async destroy, which performs the storage reclaim in the background.
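Whether a given pool has that feature can be checked directly (the pool name "tank" is a placeholder):

```shell
# "enabled" or "active" in the VALUE column means snapshot/dataset
# destroys reclaim their space in the background.
zpool get feature@async_destroy tank
```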

In fact, there is a hint that OpenZFS will include a new feature for async deletion of larger files. Meaning: if you have huge files, it can take a while to reclaim the space, like minutes (for >100 GB files on a busy NAS). But with the new feature, the space of those large files will be reclaimed in the background, like with the async destroy feature.
 

motox

Cadet
Joined
Dec 22, 2021
Messages
7
It was working on version 11, but after upgrading to 12 I get the same error. I think it's a bug. I need to share a zvol snapshot as an iSCSI target/extent.
Is someone at TrueNAS working on this? I also get the same error: when I choose the snapshot [ro] as the device, I get the error that "Zvol ... does not exist", and indeed there actually is no zvol for the snapshot under /dev/zvol/...

Does anyone know how to activate a zvol device for a snapshot?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I think you're looking for zfs clone
 