Maximum amount of space usable


adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
I know the recommended maximum is 80%, but what is the absolute maximum you could get to without potentially losing data?

I renamed one of the drives I back up to my NAS, so it created a new copy (~1.4TB), and after deleting the old folder I realised the space won't actually be freed until the snapshots expire, so I'm running fairly tight on space (just over 90%).

I understand the performance issues, but what I read on the Oracle site suggested not going over 95% or this could potentially cause problems. Is this right, and what would happen if the pool filled to 100%?

Obviously trying to avoid this by adding as little data as possible until snapshots free up the reserved space.

Might need to start thinking about a new enclosure with more disks too!
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Might need to start thinking about a new enclosure with more disks too!
In the above sentence, strike thinking about and replace with BUILDING!
Since you are already over the "line" by 10%, back up your data now!
I know the recommended maximum is 80%, but what is the absolute maximum you could get to without potentially losing data?
No one can really guarantee that, but if you keep writing to your pool, my guess is that you will soon find the answer...
 
Joined
Jan 9, 2015
Messages
430
I renamed one of the drives I back up to my NAS, so it created a new copy (~1.4TB), and after deleting the old folder I realised the space won't actually be freed until the snapshots expire, so I'm running fairly tight on space (just over 90%).
Do you have any snapshots that are fairly large that you can manually remove through the GUI? It might buy you a little time, depending on how much space they are taking up. The snapshot size is the "Used" field under Storage->Snapshots.
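
If you'd rather check from the shell (just a rough sketch of the same idea, assuming you have SSH access to the box), something like this lists every snapshot sorted by how much space it holds on its own:
Code:
zfs list -t snapshot -o name,used,referenced -s used

The USED column is the space that would be freed by destroying that snapshot alone.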
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
My backup is part of the problem! I'm using ZFS replication to back up to a 2nd FreeNAS box, so I don't want to mess around with the snapshots, otherwise I could simply free up the space. I don't expect to get over 95%, but I was interested to know what might happen if I did.
 
Joined
Jan 9, 2015
Messages
430
I see.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
The 80% rule is just a rule set by humans to stop you filling your pool too much. However, AFAIK there is a 90% rule which is set by ZFS itself: when you cross it, ZFS switches from performance optimization to space optimization, so your performance will be abysmal... but you won't lose data (the primary goal of ZFS is data reliability) ;)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The 80% rule is just a rule set by humans to stop you filling your pool too much. However, AFAIK there is a 90% rule which is set by ZFS itself: when you cross it, ZFS switches from performance optimization to space optimization, so your performance will be abysmal... but you won't lose data (the primary goal of ZFS is data reliability) ;)

It switches at 95%, not 90%.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Oh, sorry, I always saw 90%, at least now I know, thanks ;)
 

Tywin

Contributor
Joined
Sep 19, 2014
Messages
163
I know the recommended maximum is 80%, but what is the absolute maximum you could get to without potentially losing data?

This is a thing? Back up your data and try. No sane file system should mulch your data when it gets full; it should simply stop allowing writes.
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
Which value is the 95% based on in the image below? I was assuming 2.

[Attached screenshot: freenas1_space.png, showing two pool capacity values labelled 1 and 2]
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Pretty sure it will be #2.

This is silly though. Arguing about how to maximize performance, then trying not to hit that dreaded 95%. Hilarious, because if you truly cared about performance you'd be looking at 80%, period.

Feel free to cut off your nose to spite your face. Won't make you look any better though.
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
Who's arguing about maximising performance?

I was below 80% (although pushing it) before I ended up backing up some data twice due to renaming a drive. I'll be around the same once the snapshots release the deleted data.

Now that I understand a little more about FreeNAS and have a pretty basic system running as I want, I've asked in another thread for some advice on a new build, with a view to keeping things below 80% for some time to come.

You don't always have to be so grumpy CJ :D
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm not grumpy. I just don't get why people are even spending their time worrying about the 95% thing. There are tons of people who fill to 94% (just to stay below that 95%), then complain when performance is slow. They'll do this forever too!

H-E-L-L-O. You can't break some of the rules, but not all of the rules, then expect everything to be a-okay. ;)
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
To be honest, I'm still happy with the read/write performance at 93% as it's still much faster than my previous NAS solutions.

Not so with the Plex performance, but what can you expect from the AMD processor in an HP Microserver :)

Will look forward to blazing speeds with an Intel Xeon processor and a pool below 50% in the coming months...

ps: hope you liked my puppy dog avatar? You should try one every now and again ;)
 

Tywin

Contributor
Joined
Sep 19, 2014
Messages
163
I'm not grumpy. I just don't get why people are even spending their time worrying about the 95% thing. There are tons of people who fill to 94% (just to stay below that 95%), then complain when performance is slow. They'll do this forever too!

H-E-L-L-O. You can't break some of the rules, but not all of the rules, then expect everything to be a-okay. ;)

95% of what is still a valid question, though; doubly so for 80%. Say (2) above shows 80%, but (1) only shows 75%, and let's say that at my rate of data generation it would take ~6 months for (1) to reach 80%. Should I replace it now, or wait 6 months for new hardware, when I can afford twice the upgrade? :confused:

This is of course just an example. Even for rough guidelines with a conservative margin, it is important to know what they are referencing.

My gut actually says that (1) is the more important number, since that's the total raw space on the pool, but I am admittedly not an expert :D

p.s. Since some people like car analogies, consider if the speed limit on a highway were listed in RPM. RPM of what, my tyres, my driveshaft, my engine? Some reference tyre of a specific diameter? I think we can all admit that would be pretty silly :p
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I don't see how anything but 80% or 95% of 2 makes sense. My reasoning is that you can keep 1 constant and vary 2 by choosing different vdev layouts, because 1 shows raw capacity but 2 shows capacity after the overhead of redundancy is accounted for.
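
To make the distinction concrete (just an illustrative sketch; "yourpool" is a placeholder for the actual pool name), you can compare the two views from the shell:
Code:
# raw pool capacity, before redundancy is taken out (roughly value 1)
zpool list -o name,size,allocated,free,capacity yourpool
# usable space after parity/redundancy overhead (roughly value 2)
zfs list -o name,used,avail yourpool

On a RAIDZ pool the SIZE reported by zpool list will be noticeably larger than USED+AVAIL from zfs list, because the former still counts the space that ends up as parity.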
 

Tywin

Contributor
Joined
Sep 19, 2014
Messages
163
I don't see how anything but 80% or 95% of 2 makes sense. My reasoning is that you can keep 1 constant and vary 2 by choosing different vdev layouts, because 1 shows raw capacity but 2 shows capacity after the overhead of redundancy is accounted for.

Changing the vdev layout will change how much data it takes to reach a certain percentage full, yes, but (1) tells you how many blocks are available in the pool for ZFS to put things in. To me that seems to be the critical measure: data comes in, ZFS mixes in some parity and chops it up into blocks that have to go onto disks, then looks for where it can stuff them. Changing the vdev layout just changes how much parity gets sifted in and how many blocks it gets chopped into.

The easiest thing would probably be to look at the source for the switch at 95% to see what metric it checks. I honestly have no idea where in the source that would be. It's likely neither (1) nor (2), but something that is representative of one or the other.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
You may be right, but to me the issue is copy-on-write (COW), which I expect to be more dependent on usable space than raw space. On the other hand, intuitively, the closer your pool is to being full, the closer 1 and 2 will be to each other anyway. Either way, I won't argue with your assertion that reading the code is the only way to know for sure.
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
Well, the GUI warning about capacity of the pool is based on #1, the raw drives.

This is the command it runs to get the % used for each pool; the value comes from the raw drives, or #1 in the screenshot.
Code:
zpool list -H -o cap

https://github.com/freenas/freenas/blob/master/gui/system/alertmods/zpool_capacity.py

I haven't seen the 95% part of the ZFS code though, which could still be based on #2. This is just the GUI from FreeNAS, which warns at 80% and marks it critical at 90%.
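
For what it's worth, you can approximate the same alert from the shell (this is just a sketch of the idea, not the actual FreeNAS code):
Code:
# warn at 80% and flag critical at 90% of raw pool capacity, mirroring the GUI thresholds
zpool list -H -o name,cap | awk '{ c = $2 + 0; if (c >= 90) print $1 ": CRITICAL (" $2 ")"; else if (c >= 80) print $1 ": warning (" $2 ")" }'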
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The problem is that people are confusing a lot of things here.

ZFS won't ever shred data, but it does have a failure mode where, if you actually manage to fill a pool to 100%, you might not be able to commit the writes needed to update metadata when removing files to clear space. So you really do want to avoid hitting 100%.
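
A common belt-and-braces trick for that failure mode (sketched here as a suggestion, not something anyone in this thread has set up; "yourpool" is a placeholder, and the size is arbitrary) is to park a reservation on an otherwise empty dataset, so there is always some space you can hand back in an emergency:
Code:
# reserve some space that normal writes can't consume
zfs create yourpool/reserved
zfs set reservation=10G yourpool/reserved
# if the pool ever fills up, release it so deletes can proceed:
# zfs set reservation=none yourpool/reserved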

The problem with a NAS, however, is that there's another layer, the protocol layer, between ZFS and the client. The various protocols are definitely capable of doing odd things when the underlying storage is acting funny, and ZFS will surely act funny (as in exceedingly slow writes, maybe poor read performance, etc) as the pool approaches 100%. This may cause some protocols to time out and attempt reconnection, which may make the problem worse, and exercises poorly-tested code paths that are designed to handle very unusual situations and which may result in corruption. Some combinations of client and server may result in data being lost.

That IS a very real problem and it isn't the fault of ZFS, or really of FreeNAS. It is just a problem people don't expect and don't plan for and then they can get royally hosed. I keep saying that systems need to be designed with plenty of excess capacity. Part of that is so that performance never becomes so poor that you're being pushed into those contingency code paths in the clients and servers. You never want that to happen.

95% of what is still a valid question, though; doubly so for 80%.

95% of pool capacity, period. See "zpool list."

The 80% is debatable to a greater extent, and is a rule of thumb. It is the point at which ZFS often shows problems maintaining good performance for typical uses. 80% is probably not applicable to a pool dedicated to iSCSI, where the number is probably no more than 60%, and is probably more like 30-50%, depending on how much performance you want to maintain.

The problem with all of this is that so much is dependent on so many different things. You may well be able to ride a pool up to 98% when using it for long term archival purposes and never experience a problem because you're never rewriting data, so fragmentation remains low.
 