zpool list size changed? (or: does not match zfs list and df -h sizes)


jnitis

Dabbler
Joined
Aug 12, 2012
Messages
12
Greetings,

I have 4x3TB drives in a RAIDZ2. That should net me ~6TB left over after 2 drives are used for parity. The FreeNAS GUI and "df -h" / "zfs list" report this properly (~5.xTB) -- but "zpool list" reports the formatted size of all 4 drives (~11TB).
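
For reference, here's my back-of-the-envelope math (assuming each "3 TB" drive is 3x10^12 bytes, i.e. about 2.73 TiB):

Code:
# rough capacity check for 4x3TB RAIDZ2 (plain sh + bc)
echo "raw    = $(echo '4 * 3 * 10^12 / 2^40' | bc -l) TiB"   # ~10.9 -> what zpool list shows
echo "usable = $(echo '2 * 3 * 10^12 / 2^40' | bc -l) TiB"   # ~5.46 -> roughly what zfs list / df -h show, minus metadata/reservations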

Is this normal behavior? I've done a ton of searching, and some people say it is while others say it isn't, so I'd like to get a definitive answer before I put my FreeNAS box into production.

Another thing bothering me: I *believe* "zpool list" *was* showing the proper total size previously, but I can't confirm it. One of the four drives went offline today (no SMART errors; I simply rebooted, then cleared and onlined the disk, and everything has been fine since), and I'm wondering whether that little episode could be the cause.
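
For what it's worth, the recovery was just the stock commands, roughly as below (the device name is from memory, so treat it as an example):

Code:
# after the reboot: clear the pool errors and bring the dropped disk back online
zpool clear data             # reset the error counters on the pool
zpool online data ada1p2     # example device name -- use whichever disk zpool status shows as offline
zpool status data            # confirm the pool comes back ONLINE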

Please see relevant data below.

Regards,

John

Code:
[root@freenas] /sbin# zpool list
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
data  10.9T   273G  10.6T     2%  ONLINE  /mnt


Code:
[root@freenas] /sbin# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
data   134G  5.05T   134G  /mnt/data


Code:
[root@freenas] /sbin# df -h
Filesystem                 Size    Used   Avail Capacity  Mounted on
/dev/ufs/FreeNASs2a        927M    353M    499M    41%    /
devfs                      1.0K    1.0K      0B   100%    /dev
/dev/md0                   4.6M    1.8M    2.4M    43%    /etc
/dev/md1                   824K    2.0K    756K     0%    /mnt
/dev/md2                   149M     14M    123M    10%    /var
/dev/ufs/FreeNASs4          20M    1.1M     17M     6%    /data
data                       5.2T    135G    5.1T     3%    /mnt/data
/mnt/data/jail-data        5.2T    135G    5.1T     3%    /mnt/data/jail/jail/mnt/plugins
/mnt/data/jail-data/pbi    5.2T    135G    5.1T     3%    /mnt/data/jail/jail/usr/pbi
devfs                      1.0K    1.0K      0B   100%    /mnt/data/jail/jail/dev
procfs                     4.0K    4.0K      0B   100%    /mnt/data/jail/jail/proc


Code:
[root@freenas] /sbin# zpool status
  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            ada0p2  ONLINE       0     0     0
            ada1p2  ONLINE       0     0     0
            ada2p2  ONLINE       0     0     0
            ada3p2  ONLINE       0     0     0

errors: No known data errors
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It looks completely normal to me. Your zpool status says raidz2, so you definitely have a RAIDZ2. Remember that Unix OSes are more powerful than Windows machines. Windows dumbs down a lot of stuff because they don't want to confuse their userbase. The typical FreeBSD userbase is much more technical and can handle knowing the real story of what's going on.

The total size of your zpool really is 10.9TB. That's how much raw space the pool spans across all four disks, and that's why zpool list gives you the true size: ZFS handles the parity, mirrors, etc. beneath it. In your case, any data you write to the array will consume approximately twice the file size because of the parity in your configuration. ;)
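
You can see it in your own numbers by putting the two commands side by side (figures below are the ones you posted; yours will drift as usage changes):

Code:
zpool list data    # SIZE 10.9T, USED 273G  -> raw space, parity blocks included
zfs list data      # AVAIL 5.05T, USED 134G -> usable space, parity already subtracted
# 273G raw vs 134G logical is roughly 2:1 -- exactly the RAIDZ2 overhead on a 4-disk vdev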
 

thornae

Cadet
Joined
Oct 16, 2012
Messages
1
I, too, spent quite a bit of time searching for an answer to this before finding this post.

Thanks for that link - it makes it all clear.
 