Space Confusion (no, not the normal TB vs. TiB)

Status
Not open for further replies.

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Hey all,

I'm just a little confused, hoping someone could help clear things up.

I've got a RAIDz2 array up with 8 drives (I know, not a recommended configuration for performance reasons, but it works for me).

Four of the drives are 3TB, and four are 4TB.

Based on that, I'd expect 6 * 3TB = 18TB of usable space (eight drives minus two for parity, each counted at the smallest drive's 3TB).

Converting from TB to TiB, that should be 18 * 1000^4 / 1024^4 = 16.37TiB.

So, I expect to have a total of 16.37TiB available.
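
That conversion is easy to sanity-check from a shell (assuming bc is available, as it is on FreeBSD):
Code:
# 6 data disks x 3TB each, converted to TiB
echo "scale=2; 6 * 3 * 1000^4 / 1024^4" | bc
16.37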

Now comes the funny part:

The GUI says I have 15.2TiB total.

A df -h shows me:
Code:
Filesystem                  Size    Used   Avail Capacity  Mounted on
RAIDz2-01                    15T      2T     13T    13%    /mnt/RAIDz2-01


and zpool list shows me:

Code:
# zpool list
NAME        SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
RAIDz2-01  21.8T  2.79T  19.0T    12%  1.00x  ONLINE  /mnt

That last one is the craziest!
So, which of these numbers is considered the most accurate (definitely not the last one, unless it assumes a certain compression ratio), and why is the 16.37TiB from my calculation not reflected here?
Am I seeing a 1.1TiB loss to ZFS overhead, on top of what double parity already costs?
Appreciate anything that can enlighten me!
Thank you,
Matt
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
The last one is showing total disk space including parity. So all 8 drives.
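In other words, zpool list reports pool capacity with parity counted in, while zfs list (and df) show what is left for data after parity. A quick way to see both views side by side:
Code:
zpool list RAIDz2-01   # raw pool capacity, parity included
zfs list RAIDz2-01     # usable space, after parity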
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
The last one is showing total disk space including parity. So all 8 drives.


Ahh, that would explain why it is so huge, as it is probably also counting the 1TB of space on each of my 4TB drives that is not being used.

I still can't figure out why I only have 15.2TiB available instead of 6 * 3TB * 1000^4 / 1024^4 = 16.37TiB.

Is all that 1.2TiB being consumed by swap partitions and other overhead, or is there something wrong with my volume?
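
Swap by itself shouldn't explain it, assuming the FreeNAS default of a 2GiB swap partition per data disk:
Code:
  8 * 2GiB = 16GiB of swap in total, versus a ~1.2TiB gap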
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Try posting the results of:
Code:
zpool get all RAIDz2-01 | egrep '%|T'
 
zfs get all RAIDz2-01 | egrep '%|T'
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Try posting the results of:
Code:
zpool get all RAIDz2-01 | egrep '%|T'
 
zfs get all RAIDz2-01 | egrep '%|T'



Thank you for your help.

This is what I'm seeing:

Code:
# zpool get all RAIDz2-01 | egrep '%|T'
NAME       PROPERTY                       VALUE                          SOURCE
RAIDz2-01  size                           21.8T                          -
RAIDz2-01  capacity                       13%                            -
RAIDz2-01  free                           18.9T                          -
RAIDz2-01  allocated                      2.86T                          -
 
# zfs get all RAIDz2-01 | egrep '%|T'
NAME       PROPERTY              VALUE                  SOURCE
RAIDz2-01  used                  2.03T                  -
RAIDz2-01  available             13.2T                  -
RAIDz2-01  referenced            2.03T                  -
RAIDz2-01  usedbydataset         2.03T                  -
RAIDz2-01  written               2.03T                  -
RAIDz2-01  logicalused           2.03T                  -
RAIDz2-01  logicalreferenced     2.03T                  -
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Interesting... We were not seeing everything yet :) so let's see
Code:
zfs list
zfs list -t snapshot
zpool status
P.S. Those could have obscured the facts, so I am glad you do not have any quota, refquota, refreservation, or reservation set (yet).
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Interesting... We were not seeing everything yet :) so let's see
Code:
zfs list
zfs list -t snapshot
zpool status
P.S. Those could have obscured the facts, so I am glad you do not have any quota, refquota, refreservation, or reservation set (yet).



Thank you again! Here is the requested info:

Code:
# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
RAIDz2-01                1.98T  13.2T  1.98T  /mnt/RAIDz2-01
RAIDz2-01/.system        12.1M  13.2T  358K  /mnt/RAIDz2-01/.system
RAIDz2-01/.system/cores  7.44M  13.2T  7.44M  /mnt/RAIDz2-01/.system/cores
RAIDz2-01/.system/samba4  3.03M  13.2T  3.03M  /mnt/RAIDz2-01/.system/samba4
RAIDz2-01/.system/syslog  1.24M  13.2T  1.24M  /mnt/RAIDz2-01/.system/syslog


Code:
# zfs list -t snapshot
no datasets available


Code:
# zpool status
  pool: RAIDz2-01
state: ONLINE
  scan: scrub repaired 0 in 2h26m with 0 errors on Tue May 27 18:08:05 2014
config:
 
NAME                                            STATE     READ WRITE CKSUM
RAIDz2-01                                       ONLINE       0     0     0
  raidz2-0                                      ONLINE       0     0     0
    da1p2                                       ONLINE       0     0     0
    da2p2                                       ONLINE       0     0     0
    da3p2                                       ONLINE       0     0     0
    da4p2                                       ONLINE       0     0     0
    gptid/66e28720-e538-11e3-82a3-001517168acc  ONLINE       0     0     0
    da5p2                                       ONLINE       0     0     0
    da6p2                                       ONLINE       0     0     0
    gptid/17289b87-e51d-11e3-9791-001517168acc  ONLINE       0     0     0
 
errors: No known data errors


I've been wondering about those .system locations. I don't recall them existing before I exported and reimported my volume.
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
9.2.1.5 and the new SMB need the .system dataset (you can also have a persistent syslog there). I'm curious how old your pool is... I thought all the disks should show up as gptid/###...
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
9.2.1.5 and the new SMB need the .system dataset (you can also have a persistent syslog there)

Ahh, thank you.

I'm curious how old your pool is... I thought all the disks should show up as gptid/###...


That is an artifact of having created the pool in a degraded state, which required me to create it manually from the command line and then export and import it.

I used to have a 6-disk pool (4x3TB + 2x2TB) and wanted to upgrade it to an 8-disk pool.

I bought four 4TB drives to accomplish this. The plan was to back up my files to a 2x4TB mirror on my desktop, destroy my old pool, create an 8-disk RAIDz2 pool with 2x2TB, 4x3TB and 2x4TB, copy all my data back, and then swap the two 4TB disks from the mirror in one by one.

Unfortunately, before I even got my 4TB disks, one of my 2TB drives died on me. I didn't have the extra cash to buy yet another disk at the time, so I stuck with the strategy above, except that I created the new pool degraded, using a thin-provisioned image file as one of the 8 disks (roughly as sketched below). To do that I had to create the pool manually from the command line, and when I did, I referenced devices rather than gptids because I was lazy.

After exporting it from the command line, reimporting it into the FreeNAS GUI, copying my backed-up files back, resilvering in the missing disk with one of the drives from the backup mirror, and replacing my last 2TB disk with the second mirror disk, the above is what I wound up with.

The two disks swapped in from the old mirror show up as gptids, whereas all the disks that were there when I created the degraded RAIDz2 are still listed by device name.

I was concerned about this at first: what if the disks move around and trade device names? But they have moved since, and nothing bad seems to have happened, so I am happy. It just makes my zpool status look a little weird.
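
For anyone curious, the degraded-pool trick boils down to something like this (a rough sketch only; truncate here stands in for the dd command quoted further down, the 2TB size matches my smallest drive at the time, and gptid/<new-disk> is whatever gptid the replacement ends up with):
Code:
# create a sparse placeholder file the size of the missing disk
truncate -s 2T /root/disk1.img
# build the RAIDz2 pool with the file standing in for the missing disk
zpool create -f RAIDz2-01 raidz2 /dev/da1p2 /dev/da2p2 /dev/da3p2 /dev/da4p2 /dev/da5p2 /dev/da6p2 /dev/da7p2 /root/disk1.img
# offline the placeholder right away so no data ever lands on it
zpool offline RAIDz2-01 /root/disk1.img
# later, once a real disk is available, resilver it in
zpool replace RAIDz2-01 /root/disk1.img gptid/<new-disk>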
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
You have to redo your pool. I might be able to help you find exactly what went wrong, but you should destroy and rebuild it regardless...

I made a mistake, when I read your initial post. You clearly listed
Code:
# zpool list
NAME      SIZE  ALLOC  FREE    CAP  DEDUP  HEALTH  ALTROOT
RAIDz2-01 21.8T  2.79T  19.0T    12%  1.00x  ONLINE  /mnt
My pool also happens to have 21.8TiB of raw disk space, so subconsciously I did not question that in your case 21.8TiB is the wrong value... You should have approximately:
Code:
  4 * 4TB + 4 * 3TB =
  4 * 4*1000^4/1024^4 + 4 * 3*1000^4/1024^4 =
  4 * 3.64 + 4 * 2.73 =
  14.56 + 10.92 = 25.5 TiB


Please post gpart show and zpool history

P.S. FreeNAS 9.x (or maybe only starting with 9.2.1) automagically places the .system dataset in the first ZFS pool added to the system. Version 9.2.1.6 promises to give users more control over it.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
You have to redo your pool. I might be able to help you find exactly what went wrong, but you should destroy and rebuild it regardless...

Ugh, I was fearing this. I currently don't have anywhere to back my data up to and restore it from. If I do that, I will need to buy another couple of drives for a mirror :(

I made a mistake, when I read your initial post. You clearly listed

My pool also happens to have 21.8TiB of raw disk space, so subconsciously I did not question that in your case 21.8TiB is the wrong value... You should have approximately:
Code:
  4 * 4TB + 4 * 3TB =
  4 * 4*1000^4/1024^4 + 4 * 3*1000^4/1024^4 =
  4 * 3.64 + 4 * 2.73 =
  14.56 + 10.92 = 25.5 TiB


Please post gpart show and zpool history

P.S. FreeNAS 9.x (or maybe only starting with 9.2.1) automagically places the .system dataset in the first ZFS pool added to the system. Version 9.2.1.6 promises to give users more control over it.


As requested:
Code:
# gpart show
=>    63  8388545  da0  MBR  (4.0G)
      63  1930257    1  freebsd  [active]  (942M)
  1930320      63      - free -  (31k)
  1930383  1930257    2  freebsd  (942M)
  3860640    3024    3  freebsd  (1.5M)
  3863664    41328    4  freebsd  (20M)
  3904992  4483616      - free -  (2.1G)
 
=>      0  1930257  da0s1  BSD  (942M)
        0      16        - free -  (8.0k)
      16  1930241      1  !0  (942M)
 
=>        34  5860533101  da1  GPT  (2.7T)
          34          94      - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  5856338696    2  freebsd-zfs  (2.7T)
  5860533128          7      - free -  (3.5k)
 
=>        34  5860533101  da2  GPT  (2.7T)
          34          94      - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  5856338696    2  freebsd-zfs  (2.7T)
  5860533128          7      - free -  (3.5k)
 
=>        34  5860533101  da3  GPT  (2.7T)
          34          94      - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  5856338696    2  freebsd-zfs  (2.7T)
  5860533128          7      - free -  (3.5k)
 
=>        34  5860533101  da4  GPT  (2.7T)
          34          94      - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  5856338696    2  freebsd-zfs  (2.7T)
  5860533128          7      - free -  (3.5k)
 
=>        34  7814037101  da5  GPT  (3.7T)
          34          94      - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  7809842696    2  freebsd-zfs  (3.7T)
  7814037128          7      - free -  (3.5k)
 
=>        34  7814037101  da6  GPT  (3.7T)
          34          94      - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  7809842696    2  freebsd-zfs  (3.7T)
  7814037128          7      - free -  (3.5k)
 
=>        34  7814037101  da7  GPT  (3.7T)
          34          94      - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  7809842696    2  freebsd-zfs  (3.7T)
  7814037128          7      - free -  (3.5k)
 
=>        34  7814037101  da8  GPT  (3.7T)
          34          94      - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  7809842696    2  freebsd-zfs  (3.7T)
  7814037128          7      - free -  (3.5k)


and;

Code:
~# zpool history
History for 'RAIDz2-01':
2014-05-25.18:08:51 zpool create -f RAIDz2-01 raidz2 /dev/da1p2 /dev/da2p2 /dev/da3p2 /dev/da4p2 /dev/da5p2 /dev/da6p2 /dev/da7p2 /root/disk1.img
2014-05-25.18:09:24 zpool offline RAIDz2-01 /root/disk1.img
2014-05-25.18:10:06 zpool export RAIDz2-01
2014-05-25.18:11:19 zpool import -f -R /mnt 17890873096340179923
2014-05-25.18:11:19 zfs inherit -r mountpoint RAIDz2-01
2014-05-25.18:11:19 zpool set cachefile=/data/zfs/zpool.cache RAIDz2-01
2014-05-25.18:11:19 zfs set aclmode=passthrough RAIDz2-01
2014-05-25.18:11:24 zfs set aclinherit=passthrough RAIDz2-01
2014-05-25.18:11:36 zfs create RAIDz2-01/.system
2014-05-25.18:11:45 zfs create RAIDz2-01/.system/samba4
2014-05-25.18:11:55 zfs create RAIDz2-01/.system/syslog
2014-05-25.18:12:04 zfs create RAIDz2-01/.system/cores
2014-05-25.18:36:49 zpool set cachefile=/boot/zfs/zpool.cache RAIDz2-01
2014-05-26.10:25:28 zpool scrub RAIDz2-01
2014-05-26.14:28:48 zpool set cachefile=/data/zfs/zpool.cache RAIDz2-01
2014-05-26.14:31:36 zpool replace RAIDz2-01 2901464765091125052 gptid/17289b87-e51d-11e3-9791-001517168acc
2014-05-26.17:16:39 zpool set cachefile=/data/zfs/zpool.cache RAIDz2-01
2014-05-26.17:23:43 zpool offline RAIDz2-01 da5p2
2014-05-26.17:43:18 zpool set cachefile=/data/zfs/zpool.cache RAIDz2-01
2014-05-26.17:47:07 zpool replace RAIDz2-01 14575517232148871486 gptid/66e28720-e538-11e3-82a3-001517168acc
2014-05-27.14:52:48 zpool export RAIDz2-01
2014-05-27.15:29:07 zpool import -f -R /mnt 17890873096340179923
2014-05-27.15:29:09 zfs inherit -r mountpoint RAIDz2-01
2014-05-27.15:29:09 zpool set cachefile=/data/zfs/zpool.cache RAIDz2-01
2014-05-27.15:29:09 zfs set aclmode=passthrough RAIDz2-01
2014-05-27.15:29:14 zfs set aclinherit=passthrough RAIDz2-01
2014-05-27.15:33:31 zpool set autoexpand=on RAIDz2-01
2014-05-27.15:35:20 zpool online -e RAIDz2-01 /dev/da1p2
2014-05-27.15:35:28 zpool online -e RAIDz2-01 /dev/da2p2
2014-05-27.15:35:37 zpool online -e RAIDz2-01 /dev/da3p2
2014-05-27.15:35:42 zpool online -e RAIDz2-01 /dev/da4p2
2014-05-27.15:35:57 zpool online -e RAIDz2-01 gptid/66e28720-e538-11e3-82a3-001517168acc
2014-05-27.15:36:06 zpool online -e RAIDz2-01 /dev/da5p2
2014-05-27.15:36:14 zpool online -e RAIDz2-01 /dev/da6p2
2014-05-27.15:36:27 zpool online -e RAIDz2-01 gptid/17289b87-e51d-11e3-9791-001517168acc
2014-05-27.15:37:26 zpool export RAIDz2-01
2014-05-27.15:37:59 zpool import -f -R /mnt 17890873096340179923
2014-05-27.15:38:01 zfs inherit -r mountpoint RAIDz2-01
2014-05-27.15:38:01 zpool set cachefile=/data/zfs/zpool.cache RAIDz2-01
2014-05-27.15:38:01 zfs set aclmode=passthrough RAIDz2-01
2014-05-27.15:38:06 zfs set aclinherit=passthrough RAIDz2-01
2014-05-27.15:41:50 zpool scrub RAIDz2-01
2014-05-27.19:26:15 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 17890873096340179923
2014-05-27.19:26:15 zpool set cachefile=/data/zfs/zpool.cache RAIDz2-01
2014-05-28.19:34:46 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 17890873096340179923
2014-05-28.19:34:46 zpool set cachefile=/data/zfs/zpool.cache RAIDz2-01


Also,

Interestingly enough, after mounting the share via NFS on my Ubuntu server, I get the 16TB size I was expecting all along...

Code:
~$ df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda1                23G  1.7G  20G  8% /
none                    4.0K    0  4.0K  0% /sys/fs/cgroup
udev                    487M  4.0K  487M  1% /dev
tmpfs                    100M  512K  99M  1% /run
none                    5.0M    0  5.0M  0% /run/lock
none                    497M    0  497M  0% /run/shm
none                    100M    0  100M  0% /run/user
IP:/mnt/RAIDz2-01  16T  2.0T  14T  14% /mnt/FreeNAS


Though this may just be a TiB vs. TB issue; I can't tell which unit Ubuntu's df command is using...
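
(For what it's worth, GNU df reports powers of 1024 with -h and powers of 1000 with -H, so comparing the two outputs is one way to tell which you are looking at:)
Code:
df -h /mnt/FreeNAS   # sizes in powers of 1024 (TiB, printed as "T")
df -H /mnt/FreeNAS   # sizes in powers of 1000 (TB)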

Thank you again!

--Matt
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
I wonder if this has anything to do with me initially forgetting to set autoexpand to on.

As you can see starting in line 30 in the history, I retroactively set it on, and then did an online -e on each of the drives to get it to expand after the fact. Maybe this didn't work quite right?

I followed the procedure here under the section "Enabling ZFS Pool Expansion After Drive Replacement".

Copied the relevant section from above:
Code:
2014-05-27.15:33:31 zpool set autoexpand=on RAIDz2-01
2014-05-27.15:35:20 zpool online -e RAIDz2-01 /dev/da1p2
2014-05-27.15:35:28 zpool online -e RAIDz2-01 /dev/da2p2
2014-05-27.15:35:37 zpool online -e RAIDz2-01 /dev/da3p2
2014-05-27.15:35:42 zpool online -e RAIDz2-01 /dev/da4p2
2014-05-27.15:35:57 zpool online -e RAIDz2-01 gptid/66e28720-e538-11e3-82a3-001517168acc
2014-05-27.15:36:06 zpool online -e RAIDz2-01 /dev/da5p2
2014-05-27.15:36:14 zpool online -e RAIDz2-01 /dev/da6p2
2014-05-27.15:36:27 zpool online -e RAIDz2-01 gptid/17289b87-e51d-11e3-9791-001517168acc
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
I wonder if this has anything to do with me initially forgetting to set autoexpand to on.

As you can see starting in line 30 in the history, I retroactively set it on, and then did an online -e on each of the drives to get it to expand after the fact. Maybe this didn't work quite right?

I followed the procedure here under the section "Enabling ZFS Pool Expansion After Drive Replacement".

Copied the relevant section from above:
Code:
2014-05-27.15:33:31 zpool set autoexpand=on RAIDz2-01
2014-05-27.15:35:20 zpool online -e RAIDz2-01 /dev/da1p2
2014-05-27.15:35:28 zpool online -e RAIDz2-01 /dev/da2p2
2014-05-27.15:35:37 zpool online -e RAIDz2-01 /dev/da3p2
2014-05-27.15:35:42 zpool online -e RAIDz2-01 /dev/da4p2
2014-05-27.15:35:57 zpool online -e RAIDz2-01 gptid/66e28720-e538-11e3-82a3-001517168acc
2014-05-27.15:36:06 zpool online -e RAIDz2-01 /dev/da5p2
2014-05-27.15:36:14 zpool online -e RAIDz2-01 /dev/da6p2
2014-05-27.15:36:27 zpool online -e RAIDz2-01 gptid/17289b87-e51d-11e3-9791-001517168acc


Come to think of it, I think I missed the part about checking for and waiting out any resilvering when I did the online -e commands.

Maybe I should try doing this again, running a zpool status between each one to see if there is any resilvering activity.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Your partition sizes look OK, the zpool history looks OK too.

autoexpand being set to off is only a partial explanation. What was the size of /root/disk1.img, and how did you create it?
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
In the meantime, you can once again do:
Code:
zpool online -e RAIDz2-01 gptid/66e28720-e538-11e3-82a3-001517168acc
zpool online -e RAIDz2-01 gptid/17289b87-e51d-11e3-9791-001517168acc

Just watch whether the first command results in resilvering (and let it finish) before doing the second one.
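
For example (the status and list calls are only checks, not required steps):
Code:
zpool online -e RAIDz2-01 gptid/66e28720-e538-11e3-82a3-001517168acc
zpool status RAIDz2-01   # wait here until no resilver is in progress
zpool online -e RAIDz2-01 gptid/17289b87-e51d-11e3-9791-001517168acc
zpool list RAIDz2-01     # compare SIZE before and after to see if anything changed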
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Your partition sizes look OK, the zpool history looks OK too.

autoexpand being set to off is only a partial explanation. What was the size of /root/disk1.img, and how did you create it?


This was a sparse image I created using dd, 2TB in size. (I thought it best to match the size of my smallest physical drive at the time.)

Code:
dd if=/dev/zero of=/root/disk1.img bs=2000000000000 seek=1 count=0
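
(For reference, that dd writes no data at all; it just seeks past the 2x10^12-byte mark, which is what makes the file sparse. An equivalent one-liner on FreeBSD would have been:)
Code:
truncate -s 2000000000000 /root/disk1.img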


In the meantime, you can once again do:
Code:
zpool online -e RAIDz2-01 gptid/66e28720-e538-11e3-82a3-001517168acc
zpool online -e RAIDz2-01 gptid/17289b87-e51d-11e3-9791-001517168acc

Just watch whether the first command results in resilvering (and let it finish) before doing the second one.
Thank you again, will do.
I will try to do this after work, but I am also moving this weekend, and not sure I will get to it right away. I will update here with my results when I do it.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Your initial setup should have had 23.62TiB, but I think it had only 21.8TiB. I mean that it was missing the space that was never really available from the sparse file. Here is my calculation of the delta:
Code:
    2TB = 1.82TiB = 23.62TiB - 21.8TiB


Mystery solved! :)

Later, your disk upgrade did not increase the space, which is why you are still seeing 21.8TiB.

P.S.
Where exactly did you place your sparse disk file?
Code:
    4 * 4TB + 2 * 3TB + 2 * 2TB =
    4 * 3.64TiB + 2 * 2.73TiB + 2 * 1.82TiB = 23.62TiB
 