As well as saying that there is no longer an 80% usage limit.
It's Reddit. There are no usage limits, and non-ECC works better than ECC, and ZFS works better with less memory, and all that crap. We lie like rugs over here at FreeNAS Forums and all that.
Most of the advice we give here is in pursuit of the safety of your data and the pleasantness of your ZFS experience.
For example, while it is technically correct that there is no "80%" usage
limit, and you really can get all the way up to about 98% before things start going really wonky, the problem is that over time, this can result in insane fragmentation, which will cause performance problems.
We've historically pegged a safe limit at around 80% for average uses, but this can be much lower (25-50%) for specific applications such as block storage or databases. The 80% originally came from Sun/Oracle, who have since upped that to 90% sometime after ZFS v5. The lower number is always going to be better.
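If you want to turn those rules of thumb into actual numbers for capacity planning, the arithmetic is trivial. Here's a throwaway Python sketch; the 40 TiB pool size is just an example, and none of these thresholds are anything ZFS itself enforces:

```python
# Back-of-the-envelope arithmetic for the occupancy rules of thumb above.
# The thresholds are forum guidance, not hard limits enforced by ZFS.

def usable_tib(pool_tib: float, occupancy_limit: float) -> float:
    """How much capacity you can plan on if you stop filling at the given occupancy."""
    return pool_tib * occupancy_limit

pool = 40.0  # hypothetical 40 TiB pool

for label, limit in [("general use", 0.80), ("block storage", 0.50), ("databases", 0.25)]:
    print(f"{label}: plan for about {usable_tib(pool, limit):.0f} TiB usable")
```

The point being: if you bought 40 TiB for a VM datastore, you really bought about 20.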
https://www.ixsystems.com/community/threads/sizing-for-zvol-with-compression.61945/
One of the *many* places I discuss fragmentation and the 50% rule. Wait, what?! There's also a 50% rule?!
https://extranet.www.sol.net/files/freenas/fragmentation/delphix-steady-state.png
This graph shows a fascinating property of ZFS: what write speeds look like once a lot of write/free activity has gone on and pool performance has settled into a steady state. This is the thing most benchmarks fail to actually test, but it's the thing you may have to live with if you have a long-lived pool.
So the real problem here is that ZFS mitigates fragmentation in two ways: for writes, it relies on large amounts of contiguous free space. For reads, it prays that sequential data was written contiguously, and when that isn't the case it relies on ARC/L2ARC to paper over the seeks. On a pool with plenty of free space, say 10% occupancy, you can get SSD-like write speeds. On a pool with very little free space, say 95% occupancy with fragmentation, write speed will be very bad because the system has to work hard, seeking all over the disk to find free space.
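To see why contiguous free space evaporates, here's a toy Python sketch. Scattering allocations at random is a crude stand-in for years of write/free churn, not a model of the actual ZFS allocator, but it shows the same basic effect: the largest contiguous free extent shrinks sharply as occupancy climbs.

```python
import random

def largest_free_run(used: list) -> int:
    """Length of the longest run of contiguous free blocks."""
    best = cur = 0
    for u in used:
        cur = 0 if u else cur + 1
        best = max(best, cur)
    return best

def churned_pool(blocks: int, occupancy: float, seed: int = 42) -> list:
    """Mark blocks used at random -- a crude stand-in for long-term write/free churn."""
    rng = random.Random(seed)
    used = [False] * blocks
    for i in rng.sample(range(blocks), int(blocks * occupancy)):
        used[i] = True
    return used

for occ in (0.10, 0.50, 0.95):
    run = largest_free_run(churned_pool(100_000, occ))
    print(f"{occ:.0%} full: largest contiguous free extent = {run} blocks")
```

A nearly empty churned pool still has big free extents to write into; a nearly full one only has crumbs, so every write turns into a scavenger hunt.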
But if you look at the Delphix graph, something else becomes clear... you've lost most of your write performance by the time you're maintaining a busy pool at 50% occupancy. It doesn't drop to that low performance level immediately; it does so over time, as rewrites add to the fragmentation, until you finally plateau at that steady-state number.
So the real question is whether this matters. If you're storing VM block data or running SQL on it, it absolutely matters: you want to run mirrors, and you want to seriously consider limiting pool occupancy to 25-50%.
If you're creating a file archive of write-once never-removed data, you can wind it all the way up to 98% with minimal impact, because fragmentation is created by the free/rewrite cycle, which won't happen (much) on such a pool.
On the average use case pool, if you don't need high performance, you can probably get out to 90% without much drama, and 95% with some drama. But keeping it below 80% will be faster.
So it isn't a simple answer, and there's a variety of numbers that are relevant. It's a multivariable game of time, write frequency, and pool occupancy. Most people don't want to earn a degree in CS filesystem theory to understand their ZFS system, so, yes, we simplify this a bit and put out numbers that are likely to be reasonable. 80% for a normal pool is generally safe. 50% for a block storage pool is generally the point at which you should consider stopping. If you don't mind the performance hit you can go as far as you like, up to about 98%. After that point, ZFS is pretty much guaranteed to suck intensely no matter what you're doing.
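If you'd rather enforce a cap than remember it, ZFS quotas can do that. Here's a Python sketch that spits out the `zfs set quota=` command matching the rules of thumb above. The pool and dataset names are invented for illustration, and note that a dataset quota only effectively caps pool occupancy if essentially everything on the pool lives under that dataset:

```python
# Encode the rules of thumb from this thread and emit the 'zfs set quota'
# command to enforce them. 'quota' is a real ZFS dataset property; the
# dataset names below are made up.

CAPS = {"general": 0.80, "block": 0.50, "archive": 0.98}

def suggested_quota(pool_size_tib: float, workload: str) -> float:
    """Occupancy cap in TiB for the given workload type."""
    return pool_size_tib * CAPS[workload]

def quota_command(dataset: str, pool_size_tib: float, workload: str) -> str:
    """The zfs(8) command that would enforce the cap on a top-level dataset."""
    return f"zfs set quota={suggested_quota(pool_size_tib, workload):.0f}T {dataset}"

print(quota_command("tank/vms", 40.0, "block"))      # cap a block-storage dataset at 50%
print(quota_command("tank/archive", 40.0, "archive"))  # archives can run much fuller
```

That way the pool complains loudly at your chosen line instead of silently fragmenting its way past it.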
I suppose Reddit didn't bother to fill in most of those details...?