Usable Space After Creating New Raidz2 Volume

Status
Not open for further replies.

p274868

Dabbler
Joined
Mar 22, 2014
Messages
11
I first want to start off by saying that I appreciate all the hard work that has gone into maintaining this great piece of software! I have been using FreeNAS for over a year now and it has worked great.

For the past couple of weeks I have been copying the data off my 12-disk pool, composed of two 6-disk raidz2 arrays, because I wanted to create a single raidz2 with 10 disks. This will eventually allow me to fully fill my SuperMicro 24-bay chassis with two 10-disk raidz2 arrays and one 4-disk raidz2 array.

When creating a 10-disk raidz2 array, the usable space does not add up. A 10-disk raidz2 array with 4TB disks shows an estimated capacity of 29.09 TB, but once the volume is created the available space is only 27.2 TB. Where did the 1.84 TB go? I am using an optimal configuration according to the user's manual. I have tried this on 9.1.1 Release and 9.2.1.3 Release and get the same results. What's interesting is that when I create a 6-disk raidz2 volume, the estimated and available space match.
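For reference, here is a rough back-of-the-envelope check of where the estimated capacity comes from (this assumes the drives hold exactly 4 * 10^12 bytes and ignores any swap, metadata, or padding overhead; bc is available in the FreeNAS shell):
Code:
# 10-disk raidz2 leaves 8 data disks; convert their base-10 capacity to base-2 TiB
echo "scale=2; 8 * 4 * 1000^4 / 1024^4" | bc
# => 29.10, which lines up with the ~29.09 TB estimate the GUI shows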

The specs of my system are:
SuperMicro H8DME-2 motherboard
SuperMicro SuperChassis 846TQ-R900B
32GB of ECC Ram
2 x Six-Core AMD Opteron(tm) Processor 2419 (12 Cores Total)
3 x AOC-SAT2-MV8 Controller Cards
10 x 4TB Seagate Hard Drives
Tested on 9.1.1 and 9.2.1.3 (x64), booting from USB
 

Attachments

  • pool.jpg (60.5 KB)
  • pool1.jpg (57.9 KB)

p274868

Dabbler
Joined
Mar 22, 2014
Messages
11
I have not found a solution. I have additionally created the 10-disk raidz2 array on 8.3.1 Release and encountered the same result. I understand that ZFS needs some space preallocated to run correctly (such as the 2GB-per-drive swap space), but I did not expect the available storage to take such a hit. I came across a bug report where the web GUI's reported storage did not match the storage seen in the shell, but I am still unsure if they are related. I was hoping a third party could post their available space after creating a 10-disk raidz2 array to see if they get similar results.
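As a quick sanity check (assuming the default 2 GiB swap partition on each of the 10 disks; da0 below is just an example device name), the swap space alone is nowhere near 1.84 TB:
Code:
# total swap across 10 disks, in TiB (2 GiB per disk is the FreeNAS default)
echo "scale=4; 10 * 2 / 1024" | bc   # => .0195 TiB, about 20 GiB
# the actual partition layout of a member disk can be checked with, e.g.:
gpart show da0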
 

Hoowahman

Cadet
Joined
Jul 4, 2014
Messages
8
Did you ever find a solution for this? I'm planning on doing the same thing: a 10-disk raidz2 array with 4TB drives. I read that using ashift=9 drastically reduces the overhead, though that might not be a good idea. What did you end up doing?
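If it helps, the ashift a pool actually ended up with can be checked from the shell; something like the following should work on FreeNAS 9.x (tank is a placeholder pool name, and the -U flag points zdb at the cache file FreeNAS keeps under /data/zfs):
Code:
# ashift=9 means 512-byte sectors, ashift=12 means 4 KiB (Advanced Format) sectors
zdb -U /data/zfs/zpool.cache -C tank | grep ashift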
 

p274868

Dabbler
Joined
Mar 22, 2014
Messages
11
I ended up accepting the overhead that the FreeNAS GUI assigned. I was hoping to get an explanation of how the overhead is allocated but never received a response. Since creating the volume I am currently at 87% full. I know it is recommended to stay below 80% full, but I can't justify leaving over 3 TB of storage unused on top of the 1.84 TB that is already unaccounted for. Are you seeing the same 1.84 TB loss of space? I am curious whether you are experiencing the same issue.
 

Hoowahman

Cadet
Joined
Jul 4, 2014
Messages
8
Yeah, after more research I have decided to stay with the default settings and accept the overhead as well. I will not be anywhere close to 80% usage and expect to stay well below it. I haven't built my NAS yet; I'm doing the initial research before putting it together and configuring it.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The reason is that we've discussed this to death... in short summary:

1. What the companies sell as 3TB isn't 3TB. It's 3TB in base-10, but in base-2 you lose some space.
2. You lose space based on the efficiency (or lack thereof) of your block size. Your block size is directly related to how much data is written per write, your compression setting, and how well the data compresses.
3. If you don't understand quotas, reservations, snapshots, and other settings such as the export recycle bin for CIFS, you're going to appear to "lose disk space".

So no, there's no cheat sheet for how much space you lose. You don't argue over how much space NTFS uses, and that is quite often 10% or more of your total disk space. 100% of the time when disk space is allegedly missing, it's one of the things above. So if you are sure that #1 and #2 aren't your problem, then maybe you don't understand ZFS as well as you think. ;)
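To put a rough number on point 1 for the 4TB drives in this thread (just a sketch: it assumes the drives hold exactly 4 * 10^12 bytes and ignores points 2 and 3 entirely):
Code:
# one "4 TB" drive expressed in base-2 TiB: roughly 9% "disappears" before ZFS is even involved
echo "scale=2; 4 * 1000^4 / 1024^4" | bc   # => 3.63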
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
@p274868, I have handcrafted for you two commands:
Code:
zfs get all pool | egrep 'NAME|used|available|ref|compress|quota|reservation|recordsize|copies|dedup|written'
zfs get all pool/.system | egrep 'NAME|used|available|ref|compress|quota|reservation|recordsize|copies|dedup|written'
Please execute them either using Shell in the GUI or using SSH. Then please share with us the output, while enclosing each output inside [CODE][/CODE] tags. Thank you!
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
no_connection, 105 minutes earlier:

    This is as far as a cheat-sheet goes that I have found.
    1.84TB seems in line with the 1.39TB for 3TB drives.
    (site did not load directly, so link through archive)
Very interesting indeed... I was expecting some snapshots, quotas, the usual suspects...

My RAID-Z2 has 6 Advanced Format 4TB disks, just redone using 9.2.1.5, and my overhead is only 0.2919 TiB = 299 GiB. That is even less than the 0.26028858 * 4/3 TiB = 0.34705144 TiB I would have expected after reading the article.

I have calculated my data space as 4 * 4 * 1000^4 / 1024^4 = 14.55 TiB (the GUI rounds up to 14.6 TiB; coincidentally, using that value would give a 0.34 TiB loss).

My used and available are 9.85 + 4.41 = 14.26 TiB. Here is the relevant pool information:
Code:
[root@freenas] ~# zfs get all Volume_2 | \
egrep 'NAME|used|available|ref|compre|quota|reservation|recordsize|copies|dedup|written'
NAME    PROPERTY              VALUE                  SOURCE
ZFStest  used                  9.85T                  -
ZFStest  available            4.41T                  -
ZFStest  referenced            9.85T                  -
ZFStest  compressratio        1.00x                  -
ZFStest  quota                none                  local
ZFStest  reservation          none                  default
ZFStest  recordsize            128K                  default
ZFStest  compression          off                    local
ZFStest  copies                1                      default
ZFStest  refquota              none                  local
ZFStest  refreservation        none                  default
ZFStest  usedbysnapshots      0                      -
ZFStest  usedbydataset        9.85T                  -
ZFStest  usedbychildren        35.7M                  -
ZFStest  usedbyrefreservation  0                      -
ZFStest  dedup                off                    local
ZFStest  refcompressratio      1.00x                  -
ZFStest  written              9.85T                  -
ZFStest  logicalused          9.84T                  -
ZFStest  logicalreferenced    9.84T                  -
[root@freenas] ~#
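(For completeness, the 0.2919 TiB overhead quoted above is just the calculated data capacity minus used plus available; here it is as a one-liner, assuming bc in the FreeNAS shell.)
Code:
# data capacity of 4 x "4 TB" data disks, minus what zfs reports as used + available
echo "scale=4; 4 * 4 * 1000^4 / 1024^4 - (9.85 + 4.41)" | bc   # => .2919 TiB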
For the pool's safety and my good sleep, I impose both a quota and a refquota (I removed them from the pool for this post), so I only noticed my missing space after reading the complaints in these posts...
 