80% capacity fill rule - How far past that is safe?


pclausen

Patron
Joined
Apr 19, 2015
Messages
267
So I'm just about to hit 80% capacity in my volume as seen here:

[Attached screenshot: freenas80%.PNG]


The primary purpose of my FreeNAS server is to store media that is served up to about 10 clients using Emby.

I have 120TB raw capacity via 5 raidz2 vdevs with 10 disks in each. 4 of the vdevs use 2TB drives and the 5th one uses 4TB drives.

I have begun the process of replacing the 2TB drives in one of the vdevs with 6TB WD RED drives. Once I'm done, my raw capacity will increase by 40TB.

Thing is, those 6TB REDs are expensive suckers, so I'm doing 2 per month as my "toy" budget allows. At this rate it will take 4 months before I'm done, and I'm guessing I'll be close to 90% capacity by then.

All my data is backed up (have a buddy with a similar setup that was kind enough to mirror my media archive for me).

Given my configuration and use, is pushing 90% advisable, or should I just bite the bullet, pull out the credit card, and order 8 more drives now? (I have already purchased 2).
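
For anyone else doing the same dance, the rough command-line shape of a per-disk swap is below (I'm actually doing it through the FreeNAS GUI; the pool name tank and the gptid labels are placeholders):

Code:
# pool name and disk labels are placeholders
# let the vdev grow automatically once its last disk is swapped
zpool set autoexpand=on tank

# take the old disk out of service (if reusing the same bay) and resilver onto the new one
zpool offline tank gptid/old-2tb-disk
zpool replace tank gptid/old-2tb-disk gptid/new-6tb-disk

# watch the resilver; the extra capacity only appears after all 10 disks
# in the vdev have been replaced
zpool status tank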
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
On a pool where you're not ever removing files (freeing space) or doing anything other than exclusively adding data, you can safely go up to 98-99%. ZFS simply keeps allocating the next free contiguous range of space and will do so until the space is exhausted. You DO NOT WANT TO FILL YOUR POOL under any circumstances.

Unfortunately for you, with jails and things like Transmission running, that does not apply: you will start to experience pain, then painful pain, then severe pain, then agony as your write speeds drop, because you are writing other stuff to the pool, and doing that introduces fragmentation, which is where the 80% thing comes from.

It won't actually BREAK anything to pass the 80% mark, but it could eventually get bad enough that you're cursing ZFS and want to swear off it for the rest of your life. Every time you free space on your pool, you create little fragmented regions of free space in between the other stuff that's on your pool. As the pool fills toward its ultimate capacity, ZFS is forced to allocate those useless little bits of scattershot space in order to fulfill your space requests, and filling those all in is a miserable experience.
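
If you want to keep an eye on where you stand, the pool-level numbers are a one-liner; "tank" is a placeholder for whatever your pool is called, and note that the FRAG column is an estimate of free-space fragmentation, not file fragmentation:

Code:
# "tank" is a placeholder pool name
zpool list -o name,size,allocated,free,capacity,fragmentation tank

# or just the two numbers that matter here
zpool get capacity,fragmentation tank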
 

pclausen

Patron
Joined
Apr 19, 2015
Messages
267
Got it. Really appreciate the great explanation! That makes total sense. Yeah, there's a lot of moving files around with Transmission and I also have an outbound dir where I copy new stuff to ship off to my buddy once a month. He's too paranoid to even run BTSync, let alone Transmission. :)

Well, I'll keep a close eye on any Black Friday deals I can find on 6TB REDs. Hopefully my wife won't kill me when she sees the credit card bill. Maybe I'll show her your response and explain why I had to do it. LOL
 

Fuganater

Patron
Joined
Sep 28, 2015
Messages
477
IDK if you saw my post but Newegg has the 6TB HGST NAS drives for $229. You can buy 5 per customer.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Got it. Really appreciate the great explanation! That makes total sense. Yeah, there's a lot of moving files around with Transmission and I also have an outbound dir where I copy new stuff to ship off to my buddy once a month. He's too paranoid to even run BTSync, let alone Transmission. :)

Yeah, the fragmentation's the killer with ZFS. If you can manage to overcome the feeling of massive waste, you just throw space at it and it gets much better. I'm throwing 48TB of raw space at the problem to arrive at 7TB of decent VM storage.

Well, I'll keep a close eye on any Black Friday deals I can find on 6TB REDs. Hopefully my wife won't kill me when she sees the credit card bill. Maybe I'll show her your response and explain why I had to do it. LOL

Don't get your wife mad at me.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
@jgreco, is there any kind of defrag tool/routine available for ZFS, or does that just not make sense considering the architecture?
 

pclausen

Patron
Joined
Apr 19, 2015
Messages
267
@Fuganater, thanks. I'll keep an eye out for the HGSTs as well.

@jgreco, just kidding about my wife. She couldn't care less about all these forums I 'waste my time on'. :) You must be running a lot of mirror vdevs in parallel or something. Those Samsung 4TB SSDs are coming early next year you know... ;)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
@jgreco, is there any kind of defrag tool/routine available for ZFS, or does that just not make sense considering the architecture?

It doesn't make sense given the architecture. When you have snapshots and all the other complicated features of ZFS, keeping track of where all the blocks are becomes a very challenging proposition. A given block could be a member of the current pool image plus dozens of snapshots. Trying to "defrag" that one block could result in needing to rewrite many metadata blocks.
 

Tywin

Contributor
Joined
Sep 19, 2014
Messages
163
@jgreco, is there any kind of defrag tool/routine available for ZFS, or does that just not make sense considering the architecture?

No, there is no such tool. It does make sense for the architecture, but the complexity of the operation has prevented it from being implemented. If block pointer rewrite or ZFS device removal ever becomes a thing, you might see some defragmentation tools crop up. In the meantime, if you have another ZFS system available, you can replicate out, wipe and recreate the pool, then replicate back in.
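
A minimal sketch of that round trip, assuming a second box reachable as backuphost with a pool called backup (all names and the vdev layout are placeholders):

Code:
# snapshot everything and push it to the other system
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | ssh backuphost zfs receive -F backup/tank

# destroy and recreate the pool with your real layout, then pull the data back
zpool destroy tank
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
ssh backuphost zfs send -R backup/tank@migrate | zfs receive -F tank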
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You must be running a lot of mirror vdevs in parallel or something. Those Samsung 4TB SSDs are coming early next year you know... ;)

Yup, but I need them to be at a reasonable price. Finally we're at a point where consumer SSDs are a perfectly reasonable price and consumer HDDs are stupid-cheap.

We're actually a lot closer to that situation this year than in years past. The SanDisk Ultra II 960GB is going for $199. That was ~$500 a year or so ago.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
In the meantime, if you have another ZFS system available, you can replicate out, wipe and recreate the pool, then replicate back in.

No need for something so drastic. You can move things that you suspect of being highly fragmented off of the pool (or if you have enough space, even just elsewhere on the pool). The act of reading-and-rewriting-elsewhere a fragmented file will defragment things to the extent reasonably possible.
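
Per file it's nothing fancier than this (paths are made up; the scratch location just needs enough free space for the copy):

Code:
# paths are placeholders
# the fresh copy gets written into new, less fragmented space;
# the old blocks only free up once no snapshot still references them
cp /mnt/tank/media/movie.mkv /mnt/tank/scratch/movie.mkv
mv /mnt/tank/scratch/movie.mkv /mnt/tank/media/movie.mkv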
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
By the way, I should probably have mentioned somewhere that @pclausen 's proposed 90% isn't likely to fall into the "tragic catastrophe of frozen zpool of molasses"... it should be fine, but understand that it'll be somewhat slower than you're used to. Sorry, I get off on one train of thought sometimes.
 

Tywin

Contributor
Joined
Sep 19, 2014
Messages
163
No need for something so drastic. You can move things that you suspect of being highly fragmented off of the pool (or if you have enough space, even just elsewhere on the pool). The act of reading-and-rewriting-elsewhere a fragmented file will defragment things to the extent reasonably possible.

If that file is part of one or more snapshots, won't the snapshot(s) remain fragmented, and the newly written copy just take up additional space?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
If that file is part of one or more snapshots, won't the snapshot(s) remain fragmented, and the newly written copy just take up additional space?

Yes, but most people don't hold on to snaps indefinitely, so even that resolves itself over time.

It is, however, a problem with no generally great solutions - just workarounds.
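
You can watch how much space the snaps are still pinning; the USEDSNAP column only comes back down as the snapshots referencing those old blocks get destroyed ("tank" again being a placeholder):

Code:
# per-dataset space breakdown, including what snapshots still hold
zfs list -o space -r tank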
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
My general recommendation is to keep wives happy; then a slower pool is only a transitional inconvenience :D

P.S.
When you are significantly expanding your storage, your RAM might start slowing you down...
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
@pclausen is your wife an Emby client user? She should understand performance slowdowns.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
No need for something so drastic. You can move things that you suspect of being highly fragmented off of the pool (or if you have enough space, even just elsewhere on the pool). The act of reading-and-rewriting-elsewhere a fragmented file will defragment things to the extent reasonably possible.

I vaguely remember that at some point someone had mentioned forcing a rewrite of the entire pool by somehow using the zfs export and import commands at the same time on the same server.

I imagine that it would take a LONG time on a large pool, during which you'd need it to be offline?

Does anyone know what I am talking about?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes. It's one technique to accomplish the defragmentation method I suggested. The pool doesn't need to be offline, but things get complicated if you're trying to do it on a live pool. As in it's your responsibility to figure out how to handle changed files, etc., that happen during the operation. Also it doesn't work on a pool that's more than ~40% full, because you're copying a snapshot from the pool, back to the pool, so you'll end up at around 80% (and if you let a fragmented pool get fuller than that, things will get slow-ish).

It is therefore only useful for specific scenarios.
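
For the curious, the bare-bones shape of it on a single dataset is something like this; the names are placeholders, and anything written to the dataset while the copy runs has to be handled separately (e.g. with an incremental send afterwards):

Code:
# dataset names are placeholders
zfs snapshot tank/media@rewrite
zfs send tank/media@rewrite | zfs receive tank/media-rewritten

# once you're happy with the copy, swap the datasets and reclaim the old space
zfs rename tank/media tank/media-old
zfs rename tank/media-rewritten tank/media
zfs destroy -r tank/media-old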
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Yes. It's one technique to accomplish the defragmentation method I suggested. The pool doesn't need to be offline, but things get complicated if you're trying to do it on a live pool. As in it's your responsibility to figure out how to handle changed files, etc., that happen during the operation. Also it doesn't work on a pool that's more than ~40% full, because you're copying a snapshot from the pool, back to the pool, so you'll end up at around 80% (and if you let a fragmented pool get fuller than that, things will get slow-ish).

It is therefore only useful for specific scenarios.

Ahh, that's too bad.

Would be nice if you could do this block by block.
 