Understanding ZFS behaviour regarding disk writes and RAM usage!

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I do run 2 VMs and Plex Server on it.
This is far more likely to be the cause of your memory woes than any "raw capacity" argument.

Right now you only have 0.4G (400M) free - the ARC is downsized to 9.5G from what I would guess is a normal size closer to the 12G range because of added consumption under the "Services" heading from VMs and Plex.
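As a sanity check, those dashboard categories should roughly sum to the installed 16GB. A quick sketch of the arithmetic, using the figures quoted above (the small remainder is kernel overhead):

```python
# Sketch of the dashboard accounting, using the figures quoted above.
free_gib = 0.4        # "Free"
zfs_cache_gib = 9.5   # "ZFS Cache" (the ARC, shrunk under memory pressure)
services_gib = 5.9    # "Services" (VMs, Plex, other daemons)

accounted_gib = free_gib + zfs_cache_gib + services_gib
print(round(accounted_gib, 1))  # 15.8 - the rest of the 16 GiB is kernel overhead
```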

The "1GB per 1TB" guideline (not a rule) is for logical data written (before compression) as a rough suggestion for how much RAM is needed to hold metadata and some portion of hot data in RAM to accelerate it. Bear in mind that 1GB for each 1TB, if it's devoted entirely to caching live data, lets you hold 0.1% of your data in RAM. That's it - one tenth of a percent. For more performance, you want a higher RAM:space ratio; if you're mostly accessing new data (like different shows every time) or sequential data, you can get away with less.
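To make the one-tenth-of-a-percent claim concrete, here is the arithmetic as a small sketch (the function name is mine, purely illustrative):

```python
# Illustrative arithmetic for the "1GB per 1TB" guideline: if every byte
# of that RAM were devoted to caching live data (it isn't - metadata and
# the OS take their share), what fraction of the data could be held hot?
def cacheable_fraction(ram_gib: float, data_tib: float) -> float:
    return (ram_gib * 2**30) / (data_tib * 2**40)

print(f"{cacheable_fraction(1, 1):.2%}")    # 0.10% - one tenth of a percent
print(f"{cacheable_fraction(16, 24):.2%}")  # a 16GB RAM / 24TB data system
```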

Writing generalized guidelines for this is very difficult, because there will always be someone who says "Oh, I ran my 100TB on 8GB and it was great!" followed immediately by someone who says "Oh, my 100TB system was so slow on 32GB that I had to replace the mainboard for one that could do 64GB, why didn't anyone warn me?"
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
jgreco, how much RAM would be too much then if that's the case?
My NAS currently has 6x4TB and it has maxed out my poor 16GB system. If I upgrade to 6x8TB in the near future, should I get 64GB RAM or is 32GB enough? I do run 2 VMs and a Plex Server on it.

The dashboard says the memory is used as follows:
Free: 0.4 GiB
ZFS Cache: 9.5 GiB
Services: 5.9 GiB
Does that mean I have 9.5GB left for VMs or whatnot?

Does the 1GB per 1TB rule apply to occupied space only, or the whole disk size?


Thanks

You need to deduct VM memory completely, because that memory is not available to FreeNAS, and jails also consume some. Having "16GB" but making lots of it unavailable is not something the rule of thumb accounts for. If you're running two 2GB VM's and a Plex server, you might only have a system that's effectively 10GB RAM right now.
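That deduction is simple arithmetic. A sketch with the figures from the example above (the Plex footprint is an assumed number, for illustration only):

```python
# Sketch with assumed figures: memory pinned by VMs and jails is
# invisible to ZFS, so deduct it before applying any rule of thumb.
total_gb = 16
vm_gb = 2 + 2      # two 2GB VMs, as in the example above
plex_gb = 2        # assumed Plex footprint, not a measured value

effective_gb = total_gb - vm_gb - plex_gb
print(effective_gb)  # 10 - "effectively 10GB RAM"
```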

So the thing that matters is that in times past, low RAM led to instability, system corruption, and bad performance. We haven't seen the instability or corruption lately, but bad performance is a natural side effect of under-resourcing ZFS, and that has not changed.

The thing is, when I wrote the stuff back in the manual years ago, I was deliberately vague. There isn't a one-size-fits-all answer. I was hoping that the home hobbyist who was desperately praying for a less-costly solution would size it based on usable pool space or even maybe just occupied space, while someone looking for business-grade guidance would err on the other side and take it to mean raw space, and end up with a better performing system.

But the other thing is now that RAM is even cheaper than it used to be, people are pushing out to 32GB, 64GB, and more. Once you get out to these sizes, it's basically mostly a question of performance, and it is completely possible that a light duty NAS would never notice the difference between 32GB and 64GB. So follow this advice instead: do not be a slot stuffer. Do not stuff your slots with lower density RAM. If you have four slots and you fill that with four 8GB modules, then if it turns out you are unhappy, you have to pull modules to expand, and they become wasted money. Instead, get two 16GB modules.

https://www.ixsystems.com/community/threads/is-my-hardware-enough.44613/post-299628

I've always been deliberately vague about that, because this isn't a mathematical equation. You know how you can hook up a pop-up camper to your Ford F350 truck and you KNOW that's going to work, period. Or you can hook it up to a hitch on your Toyota Avalon and be comfortable. But you start moving down the line to the Nissan Altima, and you're getting close to a point where you might not want to try pulling that through the Rockies. And hooking it up to your Toyota Prius? Forget about it. Unless you're rolling downhill all the way.

We generally know that for the average case, you need a certain minimum amount of RAM for a given amount of disk to work well. It's not an absolute rule, and in almost every case, more RAM is better than less RAM.

So for the guy who comes in wanting to know if he can build an 8TB usable disk space pool on an 8GB RAM system, we kinda expect that to be reasonable. Not a stellar performer. I can throw a workload at that and make it cry, but for average home user use, it's "fine."

Then the business guy who wants a small office fileserver, I don't feel bad about telling him that he should get 16GB for his 4x4TB RAIDZ2, and that even though it'll probably work okay on 8, it'll be zippy on 16. They're usually fine hearing that, and more often than not, they were kinda figuring it that way anyways, because they weren't hoping for the cheapest solution possible, but rather the best.

Once you start giving people absolute rules, they start applying them and (entertainingly/annoyingly) try to explain to you how you're wrong about how they've applied some rule of thumb that you wrote.

ZFS has so few absolutes.

So if you have two small VM's and a Plex, deduct their memory from your total before you think about what the NAS has left to work with.
 

Big Data Guy

Dabbler
Joined
May 14, 2020
Messages
15
But the other thing is now that RAM is even cheaper than it used to be, people are pushing out to 32GB, 64GB, and more. Once you get out to these sizes, it's basically mostly a question of performance, and it is completely possible that a light duty NAS would never notice the difference between 32GB and 64GB. So follow this advice instead: do not be a slot stuffer. Do not stuff your slots with lower density RAM. If you have four slots and you fill that with four 8GB modules, then if it turns out you are unhappy, you have to pull modules to expand, and they become wasted money. Instead, get two 16GB modules.
Thanks guys, I think I will keep my 16GB RAM then. It's an old system and 16 is the maximum it can take.
I may add another VM since it seems there is another 9GB RAM available.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Thanks guys, I think I will keep my 16GB RAM then. It's an old system and 16 is the maximum it can take.
I may add another VM since it seems there is another 9GB RAM available.

No, you weren't listening. There isn't "another 9GB RAM available". Your VM's reduce the amount of memory ZFS and FreeNAS has to work with. Your NAS really does need a fair amount of RAM for itself.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
ZFS Cache: 9.5 GiB
Services: 5.9 GiB
Does that mean I have 9.5GB left for VMs or whatnot?

I may add another VM since it seems there is another 9GB RAM available.
While you can add another VM and your system may work, performance will not be helped at all. I would recommend that you read up on what the cache actually does; it will help you understand how FreeNAS/FreeBSD works. Since your hardware has a hard limit of 16GB RAM, you are stuck with that unless you replace it - we all understand that - but we don't want you to think you have RAM sitting unused. If you need a truly fast system with a huge amount of IOPS and throughput, you would need new hardware. So add your new VM, and if your system performance suffers, you'll know why.

One thing FreeNAS/FreeBSD will do when you are running out of RAM is start pushing less-critical contents out to the hard drives to free up RAM. While this is a nice safety net, you never, ever want it to actually happen, because it will slow down your overall system performance. Again, something you should read about if you are serious about building a proper FreeNAS system - but if those performance issues do not impede your experience, you can probably live with it. I'm sure there are quite a few folks out there who have built FreeNAS systems running on very little RAM. I tested this out many moons ago just to see how it worked; it worked, but the hard drive was accessing like crazy and it slowed my overall system performance.

So my recommendation is to do a Google search for something like "freenas cache tutorial" or "freenas cache ram", or read Chapter 24, "ZFS Primer", in the FreeNAS user guide. There are a lot of resources out there to help you understand how the system works. And if you walk away with more questions, that's good - a lot of us have questions. Do some more searching, and if you can't find an answer, post it and maybe someone will have it.

Good luck and don't worry, 16GB RAM is not "too" small, but you cannot just throw VMs at it and expect it to work like a huge server.
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
I am seeing so much over-analysis here from folks wondering "how much RAM do I need?"... and the community giving very valid answers...
I go by the mantra: the more the better for ZFS.

Unix/Linux is not Windows. Under Windows, 95% memory usage is the death knell of the system; for FreeNAS, high usage is fine. If you are not swapping, everything is fine.

So how much RAM is too much? No such thing. How much RAM is too little? I would say anything less than 16 gigs. How much for ZFS to show you what it can do? Lots. One disk on ZFS? Forget it - no amount of caching is going to push you much beyond your one HDD. It's when you have multiple HDDs and tons of RAM that a data-integrity-focused filesystem can shine. ZFS is not a filesystem built around performance; its first concern is data integrity. Keep that in mind when trying to figure out "performance problems".

Look in my signature and you will see my R520 config. I have 48TB raw storage installed, configured in mirrored vdevs. I have 4 computers here; two of them are my workstation and my kids' workstation. We each use our own iSCSI connection to hold our individual Steam, Blizzard, Epic and other game caches. Having that absurdly huge read cache means the data usually doesn't have the latency of streaming off the hard drives. Otherwise it's overkill, which I like <G>. I intend to fully populate the machine with the total 768GB of RAM it can hold. Why? Caching. I have no SSDs in the box (it cannot hold any SSDs or other flash media natively). It is using more than 97% of the RAM for everything, including the file cache. The amount of free RAM stays right about where you see it in the pic; the cache changes based on how I am using the system, like VMs, or if I reboot (that clears the cache out really fast <G>). Since I am running VMs on this machine, that changes depending on what I am running and whether I delete the VMs or not.

My other system in my signature is a small Dell T20 that is purely a file server. It has 10 computers hooked to it, with usually no more than 2 active at once. Even if both of them are active at the same time it's not a huge load on that machine, as all of the machines are 100-meg connected. No worries for a small machine to keep up with, since we are not loading huge amounts of files in real time - it's simply files that get stored (documents, media files, etc.) that do not require huge amounts of performance from the filesystem. :)
 

Attachments

  • nas status.jpg (28 KB)