RAIDZ1 Size seems small

stamps_

Dabbler
Joined
Sep 6, 2016
Messages
14
Hey Guys,

Not sure if this is a common question, couldn't find anything that directly related to my confusion.

I'm using 6 4TB WD Reds in my volume, set up as RAIDZ1. I'm only seeing 16.5TB in my root dataset and 12.6TB in the dataset I created. After using Bidule0hm's ZFS calculator, I found that I should have somewhere around 14TB usable in the dataset I create.

I took a couple of screenshots here.

It just seems like I should have way more space for the number of drives I have.


Thanks in advance
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
From @Bidule0hm's page regarding the calculator he created:
"The usable data space is the total data space minus the total overhead and the minimum recommended free space."
This means the USABLE space is based on the maximum recommended used capacity of 80%.
ZFS begins to lose performance (speed) as the volume capacity exceeds this level.
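
For a rough back-of-the-envelope version of what the calculator is doing (assuming roughly 3.64 TiB per "4 TB" drive; the overhead figure below is an approximation, not the calculator's exact method):

Code:
6 x 3.64 TiB                          ≈ 21.8 TiB raw
minus 1 drive of RAIDZ1 parity        ≈ 18.2 TiB
minus ZFS metadata/padding (a few %)  ≈ 17.5 TiB
times 0.8 (80% recommended max fill)  ≈ 14 TiB "usable", matching the ~14 TB figure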

The link to the calculator seems to be broken at this time.

As an aside, using RAIDZ1 with large-capacity disks is heavily NOT recommended, primarily due to the increased
resilver time required for the larger drives.
This leaves the degraded pool vulnerable for a much longer period, increasing the chance of a second drive
failure and the loss of the entire volume/pool with RAIDZ1.

ZFS does some strange capacity calculations when showing available space as well.
Use the forum search feature with the words "missing space" and you'll see what I mean.
 

stamps_

Dabbler
Joined
Sep 6, 2016
Messages
14
Thanks BigDave. That answers my question.

Also, I have a complete backup of the volume/pool, but I appreciate the concern for my configuration with RAIDZ1.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Thanks BigDave. That answers my question.

Also, I have a complete backup of the volume/pool, but I appreciate the concern for my configuration with RAIDZ1.
You are (of course) most welcome!
The principal contributors in here will always point out these types of things when they're observed; if I had not done so,
someone else certainly would have.
Dave
 

stamps_

Dabbler
Joined
Sep 6, 2016
Messages
14
Instead of creating a new thread, I decided to continue the conversation here.

I'm a little disappointed, and I'm not sure if it's just my setup. I managed to migrate all of my data using rsync.

Same details as shared above, but now the parent dataset (reds) shows only 10.2TB and the child dataset is down to 8.7TB.

Here are two recent screenshots:

[screenshots attached]


This seems like a very low yield for 6 4TB disks.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
The top 'reds' is total storage including parity, and the second 'reds' is usable storage not counting parity. Everything looks fine with your setup, other than that using RAIDZ1 is stupid.
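
If you want to see the same two numbers from the command line (a quick check, assuming the pool is named "reds" as in your screenshots):

Code:
# pool-level view: raw capacity, parity included
zpool list reds
# dataset-level view: usable space, with parity already subtracted
zfs list reds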
 

stamps_

Dabbler
Joined
Sep 6, 2016
Messages
14
I appreciate the concern about using RAIDZ1; everything is backed up appropriately. This is for accessibility.

It just seems strange to me that out of 24TB of disks (advertised, probably closer to 21TB) I'm only able to utilize 8.7TiB, even using RAIDZ1. My older box has 3 3TB drives and I'm able to utilize 6-ish TB.

Am I expecting too much of ZFS?

If I am, I suppose I could use the onboard controllers (Marvell and Intel) of my mobo.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
This isn't a ZFS problem. This is just simple math: you should have about 18.19 TiB of usable space on your system, and if you add up the second line you get about that.

What is the output of zpool status?

Don't use the Marvell controllers; they usually don't work.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
4TB drives are actually 3.638TiB in size, and you are using one of them for parity, so you don't have 24TB of disk; you've got 18.19TiB of raw usable storage before ZFS padding and metadata. Your system is fine.
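
The TB-vs-TiB gap is just decimal versus binary units:

Code:
"4 TB" drive = 4,000,000,000,000 bytes / 2^40 bytes per TiB ≈ 3.638 TiB
6 drives minus 1 for RAIDZ1 parity = 5 data drives
5 x 3.638 TiB ≈ 18.19 TiB raw usable, before ZFS padding and metadata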
 

stamps_

Dabbler
Joined
Sep 6, 2016
Messages
14
Here's the output of zpool status:

Code:
config:

        NAME                                                STATE     READ WRITE CKSUM
        reds                                                ONLINE       0     0     0
          raidz1-0                                          ONLINE       0     0     0
            gptid/888668a7-74a5-11e6-8bc8-d05099c09f62.eli  ONLINE       0     0     0
            gptid/894d4abf-74a5-11e6-8bc8-d05099c09f62.eli  ONLINE       0     0     0
            gptid/8a0ff83b-74a5-11e6-8bc8-d05099c09f62.eli  ONLINE       0     0     0
            gptid/8ad3d44b-74a5-11e6-8bc8-d05099c09f62.eli  ONLINE       0     0     0
            gptid/8bade942-74a5-11e6-8bc8-d05099c09f62.eli  ONLINE       0     0     0
            gptid/8c74777f-74a5-11e6-8bc8-d05099c09f62.eli  ONLINE       0     0     0

errors: No known data errors


If everything is okay, then help me understand:

18.19 TiB total raw - 3.638 TiB of RAIDZ1 losses = 14.552 TiB

12.2 TiB originally available in the child dataset, which was explained to me above as roughly 80% of the true maximum.

8.7 TiB available in the dataset after the data migration.

Why such a loss in usable storage?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
[screenshot attached]
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Also what is the output of zpool list and zfs list?
 

stamps_

Dabbler
Joined
Sep 6, 2016
Messages
14
Thanks guys, I really do appreciate all of this (even the slight hostility) ;).

One more thing... Which is what really prompted my questioning.

If I have 8.7 TiB available, then why am I having an issue copying files to the "media" dataset?

[screenshot attached]


The total amount I'm trying to copy is 3.3TiB.
 

stamps_

Dabbler
Joined
Sep 6, 2016
Messages
14
A scrub is currently running.

zpool list:

Code:
NAME            SIZE   ALLOC    FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot   7.44G    632M   6.82G         -      -     8%  1.00x  ONLINE  -
reds           21.8T   10.2T   11.5T         -    21%    46%  1.00x  ONLINE  /mnt


zfs list:

Code:
NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
reds/.system                                           4.96M  8.67T   166K  legacy
reds/.system/configs-7f4d67ae16c94917b949456bb9f364ad  1.39M  8.67T  1.39M  legacy
reds/.system/cores                                     1.56M  8.67T  1.56M  legacy
reds/.system/rrd-7f4d67ae16c94917b949456bb9f364ad       153K  8.67T   153K  legacy
reds/.system/samba4                                     562K  8.67T   562K  legacy
reds/.system/syslog-7f4d67ae16c94917b949456bb9f364ad   1.15M  8.67T  1.15M  legacy
reds/jails                                              857M  8.67T   198K  /mnt/reds/jails
reds/jails/.warden-template-pluginjail                  535M  8.67T   535M  /mnt/reds/jails/.warden-template-pluginjail
reds/jails/plexmediaserver_1                            321M  8.67T   855M  /mnt/reds/jails/plexmediaserver_1
reds/jake                                              8.16T  8.67T  8.16T  /mnt/reds/jake
reds/media                                              179K  8.67T   179K  /mnt/reds/media
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Which dataset is your CIFS share?
 

stamps_

Dabbler
Joined
Sep 6, 2016
Messages
14
Which dataset is your CIFS share?

Two CIFS shares, actually: one for the parent "reds" and one for the user "jake."

Was attempting a copy from CIFS jake to reds/media.

I suspect you'll be telling me this is wrong :)
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Why such a loss in usable storage?
You have 21T raw (16.9T usable) and you are using 8.2T (it looks like it's in the "jake" sub-dataset). Nothing seems out of the ordinary there. As for the copy, is there a quota set somewhere in FreeNAS, and are you sure the folder you are copying is only 3.3TB?

Was attempting a copy from CIFS jake to reds/media.
If they are on the same server, you can save a lot of time by just copying them using the CLI.
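
For example, from a shell on the FreeNAS box itself (just a sketch; the source folder name here is a placeholder, so adjust both paths to match your actual layout):

Code:
# local copy on the server, no CIFS round trip; -a preserves permissions and timestamps
rsync -a --progress /mnt/reds/jake/some_folder/ /mnt/reds/media/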
 

stamps_

Dabbler
Joined
Sep 6, 2016
Messages
14
Thanks guys, I think my confusion is resolved.

I will dig in a bit more with those links.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I'm guessing there's a quota set up somewhere
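
One quick way to check for that from the CLI (quota and refquota are standard ZFS properties, so this should work as-is on the "reds" pool):

Code:
# lists any quota or refquota set on the pool and every child dataset
zfs get -r quota,refquota reds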
 