Zvol Storage Usage


Scampicfx

Contributor
Joined
Jul 4, 2016
Messages
125
Dear guys,

I know there are tons of threads out there discussing the "used" and "available" storage stats in the FreeNAS web GUI. Although I have spent hours reading them, I still cannot answer my questions about zvol usage.

[Screenshot: Unbenannt.JPG (storage overview of the "san" pool in the web GUI)]

I was prompted to look into these stats in more detail because tonight I received "out of space" errors from my snapshot tasks.

The "san"-zpool in the picture above houses two zvols which serve ESXi. The zpool itself consists of 3 mirrored vdevs each containing 2x 8TB disks with 4kn. This gives a total capacity of 24 TB (48 TB raw capacity). Additionally, there is one SLOG-PCIe-SSD with a configured capacity of 20 GB.

I think I understand "used" and "available" correctly when looking at my RAIDZ2 / RAIDZ3 pools. However, when looking at this iSCSI zvol pool (see picture above), I don't completely understand the following four lines... xD So, just to be sure:

- First line: 9.0 TiB (41%) | 12.8 TiB
9.0 TiB + 12.8 TiB = 21.8 TiB.
The zpool consists of 2x 8 TB (mirror) + 2x 8 TB (mirror) + 2x 8 TB (mirror). That is 48 TB raw capacity and 24 TB usable capacity, which equals 21.8 TiB. So everything should be fine at this point.
However, FYI:
When looking at my other RAIDZ2 / RAIDZ3 pools, this first line represents the raw capacity of all disks.
When looking at this san zpool (see picture above), it is not the raw capacity.
I guess the reason is that no parity is used in this zpool; is that correct?

Regarding the first-line statistics of a zpool, I can also recommend this post by @jgreco: Is my available space correct?
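
Just to double-check the unit conversion above, here is the plain arithmetic I used (shell only, not actual pool output):
Code:
# 3 mirror vdevs x 8 TB usable each = 24 TB (decimal), converted to TiB (binary)
echo "scale=1; 3 * 8 * 10^12 / 2^40" | bc
# => 21.8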

- Second line: 21.0 TiB (99%) | 39.3 GiB
What I think I understand: a usage of 21.0 TiB (data0) plus 10.8 GiB (boot) results in roughly 21.0 TiB. That is nearly the complete capacity of the whole zpool, therefore 99% is used, correct?
And because 99% is used, only 39.3 GiB are marked as "available", correct?
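
As far as I understand, the per-dataset breakdown behind these GUI numbers can also be checked on the shell with something like this (I am leaving out my output here):
Code:
# shows AVAIL, USED, USEDSNAP, USEDDS, USEDREFRESERV and USEDCHILD per dataset
zfs list -r -o space san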

- Third line: 10.8 GiB (17%) | 49.3 GiB
This is a small 10G volume. It serves as the boot volume for ESXi.
Code:
# zfs get used,available,referenced,quota,usedbysnapshots,usedbydataset,written,logicalused san/gravityunit-boot
NAME                  PROPERTY         VALUE  SOURCE
san/gravityunit-boot  used             10.8G  -
san/gravityunit-boot  available        49.3G  -
san/gravityunit-boot  referenced       493M   -
san/gravityunit-boot  quota            -      -
san/gravityunit-boot  usedbysnapshots  313M   -
san/gravityunit-boot  usedbydataset    493M   -
san/gravityunit-boot  written          6.18M  -
san/gravityunit-boot  logicalused      1.60G  -


What I think I understand: I guess 10.8 GiB are used because the zvol itself has a capacity of 10G, plus there are some snapshots of this zvol with only minor changes, so only a little extra data consumption. This results in a total usage of 10.8 GiB.

What I don't understand: why are 49.3 GiB available? How are they calculated?

- Fourth line: 21.0 TiB (63%) | 12.1 TiB
This is a 12T zvol. It serves as the data volume for ESXi.
Code:
zfs get used,available,referenced,quota,usedbysnapshots,usedbydataset,written,logicalused san/gravityunit-data0
NAME                   PROPERTY         VALUE  SOURCE
san/gravityunit-data0  used             21.0T  -
san/gravityunit-data0  available        12.1T  -
san/gravityunit-data0  referenced       2.52T  -
san/gravityunit-data0  quota            -      -
san/gravityunit-data0  usedbysnapshots  6.47T  -
san/gravityunit-data0  usedbydataset    2.52T  -
san/gravityunit-data0  written          21.1G  -
san/gravityunit-data0  logicalused      9.98T  -


What I don't understand: how is it possible for this volume to use 21.0 TiB?
Adding up referenced + usedbysnapshots + usedbydataset + written + logicalused doesn't lead to 21.0 TiB... what am I missing?

The "real" usage of this 12T data0 zvol is 3,18 TB, as you can see here:
[Screenshot: Unbenannt2.JPG (actual usage of the data0 zvol)]



What I understand: I received the "out of space" error tonight because the available stat was less than 12 TiB. So I just deleted some snapshots, and the available stat now shows 12.1 TiB. This means the next snapshot should work, but maybe the snapshot after that fails again ;)
I learned this from this thread: https://forums.freenas.org/index.ph...space-outputs-to-fix-failing-snapshots.52740/
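
In case it helps others: to decide which snapshots to delete, they can be listed sorted by space with something like this (largest "used" at the bottom):
Code:
# -s used sorts ascending by the "used" column
zfs list -r -t snapshot -o name,used -s used san/gravityunit-data0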

When typing ...
Code:
zfs list -o name,quota,refquota,reservation,refreservation

... I notice that my data0 zvol has a REFRESERV of 12T.

Code:
NAME                   QUOTA  REFQUOTA  RESERV  REFRESERV
san                     none      none    none       none
san/gravityunit-boot       -         -    none      10.0G
san/gravityunit-data0      -         -    none      12.0T


This means a snapshot only gets triggered when at least 12.0T are free.

I have to admit, I really like this REFRESERV feature, because it ensures that 12T can be written to this zvol at any time :) However, I hope that writes coming from iSCSI users don't get blocked by this feature and that only snapshots are affected by the REFRESERV; is that correct?

However, what I don't understand: when this zvol is "using" 21.0 TiB, how can there still be 12.1 TiB available when the whole zpool only offers a total capacity of 24 TB (21.8 TiB)? Furthermore, this zvol only has a size of 12T. I don't understand the math ;)


Well... There are a lot of questions in this posting. I would like to say thanks for reading!
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
How is it possible for this volume to use 21,0 TiB?
san/gravityunit-data0 usedbysnapshots 6.47T
san/gravityunit-data0 usedbydataset 2.52T
san/gravityunit-data0 refreservation 12.0T
= ~21T
refreservation counts against both the dataset's and the parent dataset's "used" space.
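
Rough arithmetic, rounding aside, with the values taken straight from your output:
Code:
# usedbysnapshots + usedbydataset + refreservation
echo "6.47 + 2.52 + 12.0" | bc
# => 20.99   (~21.0T, matching the "used" value you see)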

This means a snapshot only gets triggered when at least 12.0T are free.
I don't think that is correct. You'll be allowed to take a snapshot as long as the total unreserved space in the pool is >= the referenced space of the dataset. So in your case, if you had 2.52T (the currently referenced space of that dataset) or more of unreserved space in the pool, you could take a snapshot of the san/gravityunit-data0 dataset. Reservations apply to datasets and all children (clones, snapshots, etc.). refreservation applies to datasets but doesn't include space used by clones and snapshots.
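
A quick way to sanity-check that before a snapshot task runs would be something along these lines (adjust the names to your setup; the pool root's available space should roughly reflect the unreserved space):
Code:
# space a new snapshot needs to be able to guarantee: the dataset's referenced size
zfs get -H -o value referenced san/gravityunit-data0
# space still available at the pool root (excludes existing reservations)
zfs list -H -o avail san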

I'd like to see the output of: zfs get all san/gravityunit-data0
Also a screenshot of your iSCSI extents page.
 

Scampicfx

Contributor
Joined
Jul 4, 2016
Messages
125
Dear bigphil,

Thank you for your posting.

Code:
zfs get all san/gravityunit-data0																			
NAME				   PROPERTY				 VALUE					SOURCE													
san/gravityunit-data0  type					 volume				   -														
san/gravityunit-data0  creation				 Wed Nov 22 17:56 2017	-														
san/gravityunit-data0  used					 21.0T					-														
san/gravityunit-data0  available				12.1T					-														
san/gravityunit-data0  referenced			   2.52T					-														
san/gravityunit-data0  compressratio			1.11x					-														
san/gravityunit-data0  reservation			  none					 default													
san/gravityunit-data0  volsize				  12T					  local													
san/gravityunit-data0  volblocksize			 64K					  -														
san/gravityunit-data0  checksum				 on					   default													
san/gravityunit-data0  compression			  lz4					  inherited from san										
san/gravityunit-data0  readonly				 off					  default													
san/gravityunit-data0  copies				   1						default													
san/gravityunit-data0  refreservation		   12.0T					local													
san/gravityunit-data0  primarycache			 all					  default													
san/gravityunit-data0  secondarycache		   all					  default													
san/gravityunit-data0  usedbysnapshots		  6.47T					-														
san/gravityunit-data0  usedbydataset			2.52T					-														
san/gravityunit-data0  usedbychildren		   0						-														
san/gravityunit-data0  usedbyrefreservation	 12.0T					-														
san/gravityunit-data0  logbias				  latency				  default													
san/gravityunit-data0  dedup					off					  default													
san/gravityunit-data0  mlslabel										  -														
san/gravityunit-data0  sync					 always				   local													
san/gravityunit-data0  refcompressratio		 1.14x					-														
san/gravityunit-data0  written				  24.2G					-														
san/gravityunit-data0  logicalused			  9.98T					-														
san/gravityunit-data0  logicalreferenced		2.87T					-														
san/gravityunit-data0  volmode				  default				  default													
san/gravityunit-data0  snapshot_limit		   none					 default													
san/gravityunit-data0  snapshot_count		   none					 default													
san/gravityunit-data0  redundant_metadata	   all					  default													
san/gravityunit-data0  org.freenas:description  Block Size: 64K		  local	  



And here is a screenshot of iSCSI extents:

[Screenshot: Unbenannt.JPG (iSCSI extents page)]


Both zvols are configured with extent type "Device".
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Well, everything else looks OK. You'll either need to reduce the refreservation property or remove it to allow you to create snapshots. The way I tend to set up my iSCSI extents for ESXi is to use the available-space threshold setting in the FreeNAS iSCSI settings and not use a reservation or refreservation on the dataset. You can configure this via the global setting that looks at available pool space (this is how I typically set it), or via the extent, which looks at available dataset space.
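
From the CLI that would look something like this (pick one; the 6T is just an example value, double-check the dataset name first):
Code:
# drop the refreservation entirely (the zvol then behaves like a sparse volume)
zfs set refreservation=none san/gravityunit-data0
# ...or shrink it to a size the pool can still back alongside your snapshots
zfs set refreservation=6T san/gravityunit-data0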
 