Snapshot fails: no disk space

Kco3

Cadet
Joined
Dec 7, 2021
Messages
3
The automatic snapshot task keeps failing. There is one RAIDZ1 pool with a single dataset that holds one zvol, and the zvol uses 80% of the available space.

  pool: SSD7
 state: ONLINE
  scan: scrub repaired 0B in 00:12:15 with 0 errors on Sun Nov 28 00:12:15 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        SSD7                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/a9b5707e-3327-11ec-bae3-78ac4459de7c  ONLINE       0     0     0
            gptid/a9af0f6c-3327-11ec-bae3-78ac4459de7c  ONLINE       0     0     0
            gptid/a9d14a26-3327-11ec-bae3-78ac4459de7c  ONLINE       0     0     0
            gptid/a9c869e5-3327-11ec-bae3-78ac4459de7c  ONLINE       0     0     0
            gptid/a9da2790-3327-11ec-bae3-78ac4459de7c  ONLINE       0     0     0

The volume:
zfs list -ro space SSD7/SSD7-VOL1
NAME            AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
SSD7/SSD7-VOL1  9.11T  11.3T  0B        4.95T   6.33T          0B

zfs get -r all SSD7/SSD7-VOL1
NAME PROPERTY VALUE SOURCE
SSD7/SSD7-VOL1 type volume -
SSD7/SSD7-VOL1 creation Mon Dec 6 17:05 2021 -
SSD7/SSD7-VOL1 used 11.3T -
SSD7/SSD7-VOL1 available 9.11T -
SSD7/SSD7-VOL1 referenced 4.96T -
SSD7/SSD7-VOL1 compressratio 1.44x -
SSD7/SSD7-VOL1 reservation none default
SSD7/SSD7-VOL1 volsize 11.2T local
SSD7/SSD7-VOL1 volblocksize 32K -
SSD7/SSD7-VOL1 checksum on default
SSD7/SSD7-VOL1 compression lz4 inherited from SSD7
SSD7/SSD7-VOL1 readonly off default
SSD7/SSD7-VOL1 createtxg 1339912 -
SSD7/SSD7-VOL1 copies 1 default
SSD7/SSD7-VOL1 refreservation 11.3T local
SSD7/SSD7-VOL1 guid 2007483176272222030 -
SSD7/SSD7-VOL1 primarycache all default
SSD7/SSD7-VOL1 secondarycache all default
SSD7/SSD7-VOL1 usedbysnapshots 0B -
SSD7/SSD7-VOL1 usedbydataset 4.96T -
SSD7/SSD7-VOL1 usedbychildren 0B -
SSD7/SSD7-VOL1 usedbyrefreservation 6.33T -
SSD7/SSD7-VOL1 logbias latency default
SSD7/SSD7-VOL1 objsetid 14289 -
SSD7/SSD7-VOL1 dedup off default
SSD7/SSD7-VOL1 mlslabel none default
SSD7/SSD7-VOL1 sync standard default
SSD7/SSD7-VOL1 refcompressratio 1.44x -
SSD7/SSD7-VOL1 written 4.96T -
SSD7/SSD7-VOL1 logicalused 6.59T -
SSD7/SSD7-VOL1 logicalreferenced 6.59T -
SSD7/SSD7-VOL1 volmode default default
SSD7/SSD7-VOL1 snapshot_limit none default
SSD7/SSD7-VOL1 snapshot_count none default
SSD7/SSD7-VOL1 snapdev hidden default
SSD7/SSD7-VOL1 context none default
SSD7/SSD7-VOL1 fscontext none default
SSD7/SSD7-VOL1 defcontext none default
SSD7/SSD7-VOL1 rootcontext none default
SSD7/SSD7-VOL1 redundant_metadata all default
SSD7/SSD7-VOL1 encryption off default
SSD7/SSD7-VOL1 keylocation none default
SSD7/SSD7-VOL1 keyformat none default
SSD7/SSD7-VOL1 pbkdf2iters 0 default
SSD7/SSD7-VOL1 org.truenas:managedby 10.250.0.11 local

The empty space in the dataset is still 2.77 TiB (the remaining 20%).
Because the zvol (SSD7-VOL1) has a single iSCSI target on it that uses the complete space of the volume, there will never be more than 11.2 TiB on the volume.
Why is the remaining 20% not enough to facilitate snapshots?

I have done some research and saw that the value of refreservation has something to do with no space being available for snapshots.
I am not looking for a way to bend best practices or perfectly fine default values. I can recreate volumes if needed, so I can reconfigure whatever variables are necessary.
Just looking for some guidance to make automatic snapshots work, once and for all :)
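If my reading of refreservation is right, the arithmetic would be something like the sketch below. This is only my simplified model of the accounting (not exact ZFS internals), plugging in the numbers from the zfs output above:

```python
# Rough model of why the snapshot fails, using the numbers from the
# zfs output above. All values in TiB; simplified accounting.
avail           = 9.11  # AVAIL reported for SSD7/SSD7-VOL1
usedbyrefreserv = 6.33  # unused part of the refreservation
referenced      = 4.96  # usedbydataset: blocks actually written

# For a dataset with a refreservation, AVAIL includes its own unused
# reservation, so the real free space in the pool is:
pool_free = avail - usedbyrefreserv          # ~2.78 TiB (the "20%")

# A snapshot pins every currently referenced block, so ZFS must be able
# to re-guarantee the refreservation out of *new* free space - in the
# worst case, roughly `referenced` extra TiB.
extra_needed = referenced                    # ~4.96 TiB

snapshot_possible = pool_free >= extra_needed
print(snapshot_possible)  # False: ~2.78 TiB free < ~4.96 TiB needed
```

Which would explain the failure: the leftover 20% of the pool is smaller than the space ZFS must set aside to keep honouring the refreservation after a snapshot.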

Could anyone guide me?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
A zvol should use only 50% of the available space, for best performance.
Other than that, I have never run a zvol that full and tried to snapshot it.
 

Kco3

Cadet
Joined
Dec 7, 2021
Messages
3
I am just looking for the best practice here. Do I understand correctly that you never use more than 50% of the volume, or 50% of the filesystem? And is the space that will be used by snapshots inside the volume/zvol, or in the remaining filesystem space? If I have to give up 50% of the total space because that is the best practice when using snapshots, that's fine. Just looking for some documentation or guidance to know what to do.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Ignoring TiB vs TB for simplicity:

If I have 4 × 1TB disks in a 2-vdev mirror, that's 2TB usable (already at 50% of raw capacity).
If I then create a zvol, you should not use more than 1TB for that zvol (so 25% of raw in total) for best performance.
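Put as quick arithmetic (illustrative numbers only, TB throughout):

```python
# Sizing rule of thumb for the mirror example above.
raw_capacity = 4 * 1.0      # four 1TB disks
usable = raw_capacity / 2   # two 2-way mirror vdevs: half of raw
zvol_max = usable * 0.5     # keep the zvol at no more than 50% of usable
print(zvol_max, zvol_max / raw_capacity)  # 1.0 TB, i.e. 0.25 of raw
```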

Do you have any other snapshots on the dataset/zvol? If so, try deleting them and then see if you can create another snapshot.
 

Kco3

Cadet
Joined
Dec 7, 2021
Messages
3
I have 5 × 4TB disks in RAIDZ1, with about 14TB usable.
So if I want a zvol that will have no problems with snapshots that are kept for 2 days, the capacity of the zvol should be 7TB?
I will allocate the whole zvol to an iSCSI target, so the complete zvol will be used. Is it correct that the remaining space on the vdev will be used for snapshots? Or does the zvol have to have space left in it for the snapshots?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I allocate 50% of the available space to the zvol; the rest, as I understand it, is available to snapshots (but keep to less than 80% utilisation).
So yes - a zvol at around 7TB.
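The same arithmetic for your pool (rough numbers; actual RAIDZ1 usable space varies with TB/TiB conversion and metadata overhead):

```python
# Sizing sketch for 5 x 4TB disks in RAIDZ1, per the 50% rule above.
disks, disk_tb = 5, 4.0
nominal_usable = (disks - 1) * disk_tb  # RAIDZ1 spends one disk on parity: 16TB
usable = 14.0                           # ~14TB in practice, as posted above
zvol_size = usable * 0.5                # 50% rule: size the zvol at ~7TB
print(zvol_size)  # 7.0
```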
 