Storage volume size mismatch

Status
Not open for further replies.

Tim-T

Cadet
Joined
Jun 7, 2018
Messages
9
I am confused about an apparent mismatch between my zvol size and the pool created inside it. I have 6 identical 6 TB disks in RAIDZ2, meaning 4 are really storage and 2 are allocated as parity.

FreeNAS shows the overall volume as 26 TB, but the pool inside is only 16.6 TB.

Neither number really adds up to me. There are no snapshots on the volume. Can anyone explain the discrepancy? I know there is some difference between TiB and TB, but it's not enough to account for the differences here. 6 x 6 TB should be closer to 36 TB than 26, and I seem to lose another 10 TB by going to a storage pool volume. I would have expected between 20 and 24 TB in the pool, about 36 TB in raw storage, and 12 TB in parity. But none of these numbers is even close...
 

Tim-T

Cadet
Joined
Jun 7, 2018
Messages
9
Also note there are no quota settings on the pool itself. 150 MB is set aside for home folders in a second volume, but that hardly registers.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I think the folks here will want to see:
zpool status
and
zfs list
output before making any kind of helpful comment.
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
Terminology matters; please read the primer and/or the FreeBSD docs on ZFS. For example, there is no such thing as parity disks. What you state doesn't make sense, and even though one can guess at what you mean, it's not efficient troubleshooting. Also, read the rules you agreed to yesterday when you signed up to the forum. There is hardly any useful information in your posts, and I won't drag it out of you.

Having said all that, your question is hardly unique, and if you use search (either here or on Google), the answer you seek you shall find, young padawan.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
This is the age-old question of how ZFS calculates storage capacity. I'll give you a few tidbits, but you are going to need to look things up as previously suggested.

First, your 6 x 6TB drives under RAIDZ2 = 21.6 TB of usable space. I can't say that FreeNAS is reporting it incorrectly; it's just that ZFS reports storage capacities oddly. So my advice is to do a Google search on how ZFS reports storage capacities. The answers you find will likely just confuse you more.

So let's say you are missing some capacity. If you have snapshots, your storage can disappear very quickly. But if you just installed FreeNAS, created your vdev and a dataset, and FreeNAS is reporting that you have only 16.6TB of storage, then you may have a problem. In that case you need to follow our forum rules and provide us with your hardware info, the FreeNAS version you are running, and the output requested above. Also post a screenshot of what you are looking at.
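As a rough back-of-the-envelope check (a sketch only; it ignores ZFS metadata, slop space, and padding overhead, so real pools report somewhat less):

```python
# Rough capacity arithmetic for 6 x 6 TB drives in RAIDZ2.
TB = 1000**4   # drive vendors use decimal terabytes
TiB = 1024**4  # ZFS/FreeNAS report binary tebibytes

disks = 6
drive_size_tb = 6
parity_disks = 2  # RAIDZ2 tolerates two disk failures

raw_bytes = disks * drive_size_tb * TB
data_bytes = (disks - parity_disks) * drive_size_tb * TB

print(f"raw:    {raw_bytes / TiB:.1f} TiB")   # ~32.7 TiB
print(f"usable: {data_bytes / TiB:.1f} TiB")  # ~21.8 TiB before overhead
```

The ~21.8 TiB figure lines up with the 21.6 quoted above once a little overhead is shaved off; the missing "10 TB" in the original question is mostly parity plus the TB-to-TiB conversion.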
 

Tim-T

Cadet
Joined
Jun 7, 2018
Messages
9
Sorry, I'm not used to forums. I navigated away from the reply box and it posted right away.

I can't do this on a tablet...

FreeNAS version is 11.1-u5
 
Last edited:

Tim-T

Cadet
Joined
Jun 7, 2018
Messages
9
Code:
freenas# zpool status tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 0 days 01:54:15 with 0 errors on Mon Jun  4 01:54:16 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/c5d95c11-c605-11e7-822c-ac1f6b26b6fa  ONLINE       0     0     0
            gptid/c65beebb-c605-11e7-822c-ac1f6b26b6fa  ONLINE       0     0     0
            gptid/c6deb4da-c605-11e7-822c-ac1f6b26b6fa  ONLINE       0     0     0
            gptid/c75aa952-c605-11e7-822c-ac1f6b26b6fa  ONLINE       0     0     0
            gptid/c7cb91f4-c605-11e7-822c-ac1f6b26b6fa  ONLINE       0     0     0
            gptid/c847fb9b-c605-11e7-822c-ac1f6b26b6fa  ONLINE       0     0     0

errors: No known data errors

and
Code:
freenas# zfs list
NAME															USED  AVAIL  REFER  MOUNTPOINT
backup1														4.56T  2.47T  4.56T  /mnt/backup1
freenas-boot												   3.31G  24.3G	64K  none
freenas-boot/.system										   39.6M  24.3G	35K  legacy
freenas-boot/.system/configs-0933ed85aabc42888f7684feb8d5b76c  14.5M  24.3G  14.5M  legacy
freenas-boot/.system/cores									 1.79M  24.3G  1.79M  legacy
freenas-boot/.system/rrd-0933ed85aabc42888f7684feb8d5b76c	  19.3M  24.3G  19.3M  legacy
freenas-boot/.system/samba4									3.08M  24.3G  3.08M  legacy
freenas-boot/.system/syslog-0933ed85aabc42888f7684feb8d5b76c	904K  24.3G   904K  legacy
freenas-boot/ROOT											  3.25G  24.3G	29K  none
freenas-boot/ROOT/11.1-U1									   302K  24.3G   826M  /
freenas-boot/ROOT/11.1-U2									   262K  24.3G   833M  /
freenas-boot/ROOT/11.1-U4									   268K  24.3G   837M  /
freenas-boot/ROOT/11.1-U5									  3.25G  24.3G   839M  /
freenas-boot/grub											  7.02M  24.3G  7.02M  legacy
sync2														   577G  2.07T   577G  /mnt/sync2
sys															 101G  3.41T	88K  /mnt/sys
sys/VMs														  88K   250G	88K  /mnt/sys/VMs
sys/jails													  52.9G   207G   116K  /mnt/sys/jails
sys/jails/.warden-template-pluginjail-11.0-x64				  539M   197G   539M  /mnt/sys/jails/.warden-template-pluginjail-11.0-x64
sys/jails/plexmediaserver_1									42.4G   197G  42.0G  /mnt/sys/jails/plexmediaserver_1
sys/pool													   48.1G  1.95T  48.1G  /mnt/sys/pool
tank														   4.33T  16.6T   176K  /mnt/tank
tank/home													   360K   150G   360K  /mnt/tank/home
tank/pool													  4.33T  16.6T  4.02T  /mnt/tank/pool
tank/pool/DVR												   315G   435G   315G  /mnt/tank/pool/DVR

The GUI view that confuses me is:
upload_2018-6-9_0-30-39.png

The overall size of "tank" doesn't add up. Nothing looks obviously wrong here, but the capacity numbers don't match what I expected. The disks are:
upload_2018-6-9_0-32-59.png

The WD Green disks are in the "sys" volume (mirrored), and all the Hitachis are used in "tank" (RAIDZ2).
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
The overall size of "tank" doesn't add up.
Sure it does. The first line shows a raw pool capacity of 6.5 + 26 = 32.5 TiB, which is just right for 6 x 6 TB disks (since 1 TB = 0.909 TiB). The second line shows a "net" capacity of 4.3 + 16.6 = 20.9 TiB, which is about right once you account for RAIDZ2 parity and ZFS overhead.
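For anyone who wants to double-check that conversion, here's a quick sketch (the 32.5 figure is the one quoted from the GUI above):

```python
# Convert vendor TB (decimal, 1000^4 bytes) to the TiB (binary, 1024^4 bytes)
# that ZFS and the FreeNAS GUI report.
tb_to_tib = 1000**4 / 1024**4
print(f"1 TB = {tb_to_tib:.3f} TiB")  # 1 TB = 0.909 TiB

raw_tib = 6 * 6 * tb_to_tib  # six 6 TB disks, before any redundancy
print(f"raw: {raw_tib:.1f} TiB")  # ~32.7 TiB, in line with the 6.5 + 26 = 32.5 shown
```

The small gap between 32.7 and 32.5 is the pool-level accounting ZFS does before presenting capacity.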
 

Tim-T

Cadet
Joined
Jun 7, 2018
Messages
9
I had forgotten about the TiB vs. TB mismatch. Thanks, I just wanted to make sure I hadn't done something wrong.
Plus, I was reading the zfs list output wrong: I was subtracting USED from AVAIL as though AVAIL were the total, which made things seem even worse.

It seems much clearer now
 
Last edited: