Pool not using all available space

Status
Not open for further replies.

saber07

Cadet
Joined
Mar 2, 2017
Messages
7
I recently expanded my pool by replacing six 2 TB drives with six 8 TB drives (RAIDZ2). I'm not sure of the exact terminology here, but in the screenshot below the volume shows 33 TiB available, yet I only have 21.2 TiB available to actually use. I've made sure to set autoexpand on and used zpool online -e storage <device> to grow the pool, but I don't know why it isn't using the full space. Does anyone know what I'm missing here?
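
Roughly, the steps I ran (with <device> standing in for each replaced disk):

zpool set autoexpand=on storage      # enable automatic pool expansion
zpool online -e storage <device>     # expand onto each replaced drive
zpool list storage                   # SIZE should now reflect the new drives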

Screenshot: http://imgur.com/9G3gLZB

Thanks
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Hi there,

Using the calculator @Bidule0hm created, the numbers actually look fine. ;)

Everything is where it should be:
A RAIDZ2 pool of six 8 TB drives gives you a total raw pool size (including redundancy) of ~43 TiB (numbers from the first line of your screenshot: 33.2 TiB free + 10.4 TiB used).
And you have a total usable data space of ~29 TiB, minus some ZFS overhead (second line of your screenshot: 7 TiB used + 21.2 TiB free).
Reminder: you should additionally avoid going over 80% of used space (22.9 TiB in your case), or everything will slow down to a crawl.
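
You can see both views from the command line, by the way (assuming the pool is named storage, as in your zpool online command):

zpool list storage   # raw pool size, parity included (the ~43 TiB view)
zfs list storage     # usable space after parity (the ~29 TiB view)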
 

MiG

Dabbler
Joined
Jan 6, 2017
Messages
21
Funny, I had exactly the same setup and question. Does this mean the 21.2 TiB shown as available falls inside that 80% limit and can therefore be safely filled completely?
("safely" in the sense of not severely affecting resilvering time after a drive replacement)
 
Last edited:

snaptec

Guru
Joined
Nov 30, 2015
Messages
502
No. You always see the overall available space. You have to make sure that 20% of the pool stays free.
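
A quick way to keep an eye on that (assuming the pool is named storage, as above):

zpool list -o name,size,allocated,free,capacity storage   # keep capacity below 80%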


Sent from iPhone using Tapatalk
 

MiG

Dabbler
Joined
Jan 6, 2017
Messages
21
Would it be possible to simply set an appropriate quota ("17.0 TiB") on that dataset to prevent this?

This is my first ZFS setup, and apart from being an operational storage array it's also a bit of a test case for me. Coming from hardware RAID, I must say I'm a bit surprised by the massive additional overhead (48 TB -> 17 TiB)...
 

snaptec

Guru
Joined
Nov 30, 2015
Messages
502
Why 17 TiB?
The overall available space is 28 TiB. In this scenario 7 TiB are in use, that's why only 21 are available.

You will get alert emails if the pool gets too full.
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
MiG said: This is my first ZFS setup, and apart from being an operational storage array it's also a bit of a test case for me. Coming from hardware RAID, I must say I'm a bit surprised by the massive additional overhead (48 TB -> 17 TiB)...

Why would you want to strip the space down even more? :eek:
You will be able to fill roughly 23 TiB (80% of 29 TiB) without any issue.
It's common best practice to stop there, because you will get FreeNAS alerts and because that's the point where ZFS switches to a slower "space-filling" write mode (from there on, performance degrades severely).

The space lost to overhead and parity data is the price you pay for free snapshots, bit-rot protection, checksumming, protection from two simultaneous drive failures, and many other fancy things that are just cool. :cool:
To me that space is not lost, but it's all a matter of perception ;)
 

MiG

Dabbler
Joined
Jan 6, 2017
Messages
21
Ah, I misread the column header. It just happened to coincide exactly with the thread starter's value.

darkwarrior: note that, because of the above, my question was based on different figures; I'm in the process of filling the volume and there's already about 6 TB on there.

The question about using the quota system still stands. Slowing down to a crawl does not sound enticing in case of a drive failure, so I'd rather simply make 20% unavailable. Are there any (other) downsides to this approach that I might not be aware of?
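
For reference, I'm watching usage while it fills with something like this (pool name is a placeholder):

zfs list -o space <pool>   # shows AVAIL/USED, including space held by snapshots and children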
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
I was deliberately exaggerating a bit in what I wrote above; it's not "slowing down to a crawl" as soon as you hit 80%, but it is noticeably slower. :D
I'm currently using 86% of my available storage and I can feel a difference, even if write speeds are still OK.
That being said, you really don't want to reach 95% used, because then you will truly experience how slow crawling can be while ZFS searches for free blocks to write to...

The idea of a quota to avoid that scenario is actually a good one, and there are several ways to achieve it:
- dataset quotas (see the sketch below)
- writing one huge file that serves as a "placeholder/marker"
- etc.
You might find others on the forum ;)
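
A minimal sketch of the first option (dataset name and size are hypothetical; pick a quota that keeps ~20% of the pool free):

zfs set quota=23T storage/data   # cap the dataset at 23 TiB; storage/data is an example name
zfs get quota storage/data       # verify the setting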
 

MiG

Dabbler
Joined
Jan 6, 2017
Messages
21
Thanks for explaining! The first one is the option I'm considering. Unlike the now set-in-stone drive layout, I fortunately have plenty of time to decide :)
 