zfs size looks odd

Status
Not open for further replies.

phynias

Dabbler
Joined
May 1, 2014
Messages
12
OK, so first off I have to apologize, because I feel really dumb for having to come here and ask this, but I have been looking around and staring at this for a while and I cannot figure out what is going on.

So, as you can see in the attached image, I have a single ZFS pool. My question is: why is there such a huge discrepancy in size between the first line (13.3 TB) and the second (8.9 TB)? Did I set something up wrong when I first configured it? I never noticed this before, so I feel even dumber, but hey, gotta learn.

Also, if it is something I misconfigured, can I fix it now to get back that lost space?

This is running 6 x 3 TB drives.
Output of zpool list:
NAME          SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  7.44G  970M   6.49G  -         -     12%  1.00x  ONLINE  -
zfs1          16.2T  13.3T  2.94T  -         15%   81%  1.00x  ONLINE  /mnt


Thanks so much!
 

Attachments

  • zfs1.png (8.3 KB)

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
The first line is the total raw space, including the space used for parity/redundancy.
The second line is the real usable space left after parity is taken out (RAIDZ1/RAIDZ2).
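
The arithmetic behind that, as a minimal Python sketch. It assumes the pool is a single 6-disk RAIDZ2 vdev (confirmed further down in the thread); the variable names are just for illustration:

# zpool list reports raw space including parity; zfs list reports usable space.
disks = 6             # drives in the vdev (assumed single 6-disk RAIDZ2 vdev)
parity = 2            # RAIDZ2 keeps two drives' worth of parity
raw_alloc_tib = 13.3  # ALLOC column from zpool list above

usable_tib = raw_alloc_tib * (disks - parity) / disks
print(f"~{usable_tib:.1f} TiB usable")  # ~8.9 TiB, matching the second line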
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Your numbers match this calculator if your pool is configured as RAID-Z2. That's a sensible configuration for 6 disks, so you didn't do anything wrong there. However, you do have a problem: your pool is over 80% full, which is the commonly cited threshold beyond which ZFS write performance starts to degrade.
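
For reference, a rough Python sketch of what such a capacity calculator does. The drive count, drive size, and RAID-Z2 level are taken from this thread; the rest (the TB-to-TiB conversion and the parity deduction) is the usual back-of-the-envelope math, not output from any particular tool:

# Drives are sold in decimal TB (10**12 bytes); zpool reports binary TiB (2**40 bytes).
TB, TiB = 10**12, 2**40

drives, drive_tb = 6, 3
raw_tib = drives * drive_tb * TB / TiB        # ~16.4 TiB, close to zpool's 16.2T
usable_tib = raw_tib * (drives - 2) / drives  # RAID-Z2: two drives of parity -> ~10.9 TiB
cap = 13.3 / 16.2                             # ALLOC / SIZE from zpool list

# Prints roughly: raw ~16.4 TiB, usable ~10.9 TiB, ~82% full
# (zpool itself shows CAP 81% because it works from unrounded figures)
print(f"raw ~{raw_tib:.1f} TiB, usable ~{usable_tib:.1f} TiB, ~{cap:.0%} full")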
 

phynias

Dabbler
Joined
May 1, 2014
Messages
12
Thanks guys, that makes sense now.
And I know about the 80%; that's what made me start looking.
Time to get some 6 TB drives =p
 

Alister

Explorer
Joined
Sep 18, 2011
Messages
52
Checking my own system:

6 x 2.0 TB disks is really 6 x 1.819 TiB, which comes to 10.9 TiB, not the 12 you might think you have.
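
The same decimal-versus-binary conversion as a quick Python sketch, using the numbers from this post:

# A "2.0 TB" drive holds 2.0 * 10**12 bytes, which is only about 1.819 TiB,
# so six of them give roughly 10.9 TiB of raw space before any parity.
tib_per_drive = 2.0 * 10**12 / 2**40
print(f"{tib_per_drive:.3f} TiB per drive, {6 * tib_per_drive:.1f} TiB total")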
 