Sufficient amount of memory

Status
Not open for further replies.
Rilo
Hi there,

I remember reading somewhere that it is possible to have TOO MUCH RAM for a (FreeNAS) ZFS filesystem. I can't find it anymore, so I hope someone here can shed some light on this: is it true, and if so, why?

And for my case specifically, would 32GB in my setup be OK?

Motherboard: Supermicro X10SLH-F
HDDs: 6x WD Red 4TB SATA-3 (WD40EFRX) in RAID-Z2 - 24TB raw, 16TB usable storage
CPU: Intel® Xeon E3-1245 v3
32 GB Memory: 4x Samsung 8GB M391B1G73QH0-YK0
Case: Cooler Master Silencio 352
PSU: Be Quiet! Dark Power Pro 10 550W (BN200)
UPS: Cyberpower CP900EPFCLCD
 

joeschmuck

Old Man
Moderator
The opposite is true: the more RAM, the better ZFS performs. FreeNAS will use all available RAM as a cache (the ARC), which is faster than an L2ARC on a hard drive or SSD. As for whether that amount of RAM is enough: yes, it is, although you could run 16GB and probably achieve the same results, depending on what you plan to use this build for. For normal home use (backups, streaming media, running a few of the jailed plugins), 16GB should be fine. If you plan to do a lot more with your system, like serving many people in a work environment, 32GB would likely be preferred.

Hope this helps.

EDIT: If there is an upper limit for RAM, I have not heard of one but I'm certain it's above 128 GB.
 
Rilo
Thanks. I think it was not about a memory limit, but about some performance drop with an excessive amount of memory... not sure though.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Hi Rilo,

Short Answer: I think what you're referring to isn't so much "too much RAM" but "too much RAM for your underlying disk."

Expanded (Hopefully I don't make a mess of this)
ZFS tries to make its best guess at how much data to stuff into a transaction group, but IIRC it bases this off your total RAM. So if you've got a system with 128GB of RAM connected by 10GbE, it might think "oh, you can handle 4GB in a transaction group" or something like that. And if you've got 24 fast SAS drives backing this up in a RAIDZ2 setup with 175MB/s sequential throughput off each drive, then you can eat about 2.73GB/s at the pool level (assuming 4x 6-drive vdevs) - with the default txg timeout of 5 seconds, you can absorb ~13.7GB in that time at the pool, which is more than a 4GB txg, so you're fine.

But if you're running, say, eight disks in a mirrored setup, and they're slower SATA that can only do 100MB/s - well, now your pool throughput is 400MB/s. Which over a txg is 2GB. Which is less than 4GB. Ruh roh. So you start stuffing your transaction group and five seconds later it's full at 4GB. You quiesce and start to sync it to the pool, and open a new txg for the next batch. Five seconds later, txg2 is full - but txg1 is only half-synced to disk. 2.5s stall. txg1 finishes, txg2 starts, and "txg3" is made available for writes. Five seconds later, txg3 is full - but again, txg2 is still writing. 2.5s stall.
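The arithmetic behind the two scenarios above can be sketched in a few lines. This is a rough model, not real ZFS internals: the 5-second figure is the default txg timeout mentioned above, and only data drives (not RAIDZ parity drives) are counted toward sequential bandwidth.

```python
# Rough model of HoneyBadger's two scenarios. Illustrative only;
# real ZFS txg sizing is more involved than this.

def pool_seq_throughput(vdevs, data_drives_per_vdev, mb_per_drive):
    """Aggregate sequential write throughput in MB/s, counting only
    data drives (RAIDZ parity drives don't add usable bandwidth)."""
    return vdevs * data_drives_per_vdev * mb_per_drive

def absorbable_per_txg(throughput_mb_s, txg_timeout_s=5):
    """How much data (in GB) the pool can sync during one txg interval."""
    return throughput_mb_s * txg_timeout_s / 1024

# Scenario 1: 4x 6-wide RAIDZ2 vdevs (4 data drives each), 175 MB/s SAS
fast = pool_seq_throughput(4, 4, 175)   # 2800 MB/s, i.e. ~2.73 GB/s
# Scenario 2: 4 mirror vdevs of slower 100 MB/s SATA drives
slow = pool_seq_throughput(4, 1, 100)   # 400 MB/s

print(absorbable_per_txg(fast))  # ~13.7 GB: a 4 GB txg fits easily
print(absorbable_per_txg(slow))  # just under 2 GB: a 4 GB txg stalls
```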

Hopefully I haven't made too much of a mess of trying to explain it this early in the morning.
 

cyberjock

Inactive Account
Good example, except ZFS will only stall once or twice.

First, ZFS does some simple basic testing when mounting the pool to get an idea of what kind of performance your pool can handle. It uses this to help determine the size of a transaction group.

Second, if it ends up with transaction groups that are too big and stalling the system the ZFS scheduler will deliberately throttle the incoming writes and reads to compensate.
 
Rilo
That sounds familiar indeed, HoneyBadger, and your explanation was very clear to me (here it is almost evening). However, this doesn't seem to be an issue, according to cyberjock. But is the ZFS scheduler going to PERMANENTLY correct the size of the transaction group that was determined by the simple tests performed when mounting the pool? In other words, is this change/throttle permanent, or will it stall again, with the scheduler then throttling the reads/writes again?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Thanks for the corrections @cyberjock. IIRC the basic testing is still all sequential, and while ZFS does a great job of coalescing random write I/O, random reads that have to hit the pool can still affect it. I don't think you can ever totally eliminate ZFS "breathing".

At the risk of needing another correction, Rilo: the transaction sizing will be persistent after mounting (so the txg size will always be the same), but the automatic throttling of reads/writes will only happen until ZFS feels the txg won't overwhelm the pool. If you're constantly hammering the server with more data than the pool can absorb, you'll always have a throttle going on, and the only way to solve that is a faster underlying pool (more or faster spindles).
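The "permanent throttle" point above boils down to a simple inequality: if sustained ingest exceeds what the pool can sync, the backlog of dirty data grows without bound, so ZFS must keep delaying writers. A toy sketch (illustrative numbers, not real ZFS internals):

```python
# Toy model: backlog of unsynced ("dirty") data under sustained load.
# If ingest > sync rate, the backlog grows and throttling never ends.

def dirty_backlog(ingest_mb_s, sync_mb_s, seconds):
    """Dirty data (MB) accumulated after `seconds` of sustained load."""
    return max(0, (ingest_mb_s - sync_mb_s) * seconds)

# A 10GbE client (~1000 MB/s) against a 400 MB/s pool: backlog grows,
# so the throttle stays engaged for as long as the load continues.
print(dirty_backlog(1000, 400, 10))
# The same client against a 2800 MB/s pool: no backlog, no sustained throttle.
print(dirty_backlog(1000, 2800, 10))
```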
 
Joined
Mar 6, 2014
Messages
686
Thanks guys, your answers were really clear and helpful!
 

cyberjock

Inactive Account
Just for those who are interested: there are a bunch of tunables that let you tweak what ZFS thinks your pool performance is. You can also override it and tell it what the performance is (but this isn't recommended, because it's a lot of trial and error to really get right).

There are also a bunch of tunables for changing ZFS's breathing characteristics. When I did large transfers over 10Gb from my RAMdrive on my desktop (hey, I HAD to see what happens), I made my pool sneeze all over itself for about 5 seconds. Got a good laugh when I saw almost 10GB transfer in less than 10 seconds flat, then stop for 5 seconds.

The reality of it is that ZFS's breathing shouldn't normally be a problem, which is why we don't talk about it much except with people who really dig into the details of how ZFS works. Usually they want to tweak ZFS for better performance and haven't figured out that the defaults are probably best. ;)
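For reference, a few of the tunables in question on the FreeBSD/FreeNAS side live under the `vfs.zfs` sysctl tree and can be set from `/boot/loader.conf`. The names below are from the FreeBSD 10-era OpenZFS port and may differ on your build; verify with `sysctl -a | grep vfs.zfs` before touching anything, and heed the advice above that the defaults are probably best.

```
# /boot/loader.conf - example write-throttle/txg tunables (assumed
# names; do NOT set these blindly, the defaults are usually best)
vfs.zfs.txg.timeout="5"              # seconds per transaction group
vfs.zfs.dirty_data_max="4294967296"  # cap on outstanding dirty data (bytes)
```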
 

joeschmuck

Old Man
Moderator
@HoneyBadger and @cyberjock,
Great explanations, both of you. I'd never heard of the issue you two discussed, maybe because I don't have a 10Gb network, which would be very nice BTW. Maybe in 5+ years, when prices are sure to have dropped a bit more.
 
Rilo
Just for those that are interested, ..... to people who are really detailed about how ZFS works. ... ;)


Very interested indeed... still a lot to learn...
 