ARC Hit Ratio Extremely Low After Upgrade to 9.10

Status
Not open for further replies.

NightNetworks

Explorer
Joined
Sep 6, 2015
Messages
61
So after upgrading to FreeNAS 9.10 I am seeing that my ARC hit ratio is now sitting at around 6% to 8%, where it used to be around 70%. I did some research online and it appears that this is the new normal, due to changes involving a script that had been running in the background... (https://forums.freenas.org/index.php?threads/solved-differing-arc-usage-in-freenas-9-10-2.48831/) My question is: what should this new normal be? Do I now need to add more RAM? Performance appears to be fine; I can still fully saturate my 1Gb network connection.
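For what it's worth, the ratio in those graphs is just hits divided by total lookups over the ARC counters (on FreeBSD these are exposed as the `kstat.zfs.misc.arcstats.hits` and `kstat.zfs.misc.arcstats.misses` sysctls). A minimal sketch of the calculation, using illustrative counter values rather than live ones:

```python
def arc_hit_ratio(hits: int, misses: int) -> float:
    """Return the ARC hit ratio as a percentage."""
    total = hits + misses
    if total == 0:
        return 0.0
    return 100.0 * hits / total

# Illustrative counters; on a real FreeBSD system you would read them with
#   sysctl -n kstat.zfs.misc.arcstats.hits
#   sysctl -n kstat.zfs.misc.arcstats.misses
hits, misses = 1_200_000, 18_000_000
print(f"{arc_hit_ratio(hits, misses):.2f}%")  # a low ratio, ~6%
```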

Thanks!
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,974
I wouldn't worry about it. Mine seems to run around 36% after it's been up for a while.
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
Totally normal. FreeNAS had been lying about its ARC hit ratio for quite some time. Allan Jude was kind enough to provide us with an explanation, which @DrKK and I shared in that thread.
 

NightNetworks

Explorer
Joined
Sep 6, 2015
Messages
61
So... (making some assumptions here, so correct me if I am wrong.) Since the data we had been getting in the past regarding ARC hit ratios was not correct, and we are now seeing that the actual ARC hit ratios are very low, would this change the advice about using an L2ARC drive? What I mean is that @cyberjock mentions in his FreeNAS 9.10 guide that you should not use an L2ARC drive until you have a minimum of 64GB of RAM, because the L2ARC index uses RAM. However, my thinking now is that since the ARC hit ratios are actually pretty low (in my case 6% with 16GB of RAM), doubling my RAM to 32GB would theoretically only bring me to a 12% ARC hit ratio, and going to 64GB of RAM would theoretically only bring me to 24%. Whereas an L2ARC drive of, say, 250GB would result in a much larger amount of data being cached, which I would think equates to higher performance. I guess what I am getting at is that I really don't see the point of dumping all this RAM into a FreeNAS server now... you are never going to be able to put enough RAM in a system to cache any significant amount of data, so a large SSD for L2ARC would seem more logical at this point.

Is there any truth to what I am saying? Or am I wrong (assuming that I am)?

Thanks!
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,974
Is there any truth to what I am saying? Or am I wrong (assuming that I am)?
No, your thinking is incorrect. ZFS sizes the ARC based on available memory: add more RAM and the ARC grows, caching more data. And since the L2ARC index lives in RAM, adding an L2ARC before you are already maxing out your existing ARC would hurt performance, because you would be giving up ARC (memory, which is far faster than any SSD) to index the L2ARC.

Confused yet?
 

NightNetworks

Explorer
Joined
Sep 6, 2015
Messages
61

I think that I follow what you are saying... That being said, going by this information posted by @m0nkey_ in this thread...

"What I took away from that, is what we're seeing now should be about normal for the average FreeNAS user. Obviously, the more you read from cache, the higher your hit ratio goes. Allan explained that if you have more RAM than storage, then you will see almost 100% hit ratio, but because most of us have on average 16GB RAM, and 6TB+ of storage, we're unlikely to see these high hit rates, unless of course you're re-reading the same data over and over again in cache."

When I have 6TB of storage and only 16GB of RAM, I am really not caching anything, which is clearly shown by my ARC hit ratio of only 6%. So my line of thought is: while, yes, RAM is much faster than an SSD, and yes, you lose some RAM to the L2ARC index, an SSD is still a lot faster than an HDD. So I would think the logic that an L2ARC will make things slower due to the loss of RAM is no longer valid given the new data we have.

For example, if I have a favorite movie stored on my FreeNAS box that is, say, 25GB in size and I watch it frequently, that movie/file could never be cached in RAM as it's too large. However, with a 250/500GB L2ARC drive it could be cached, and that SSD would be faster than the HDD it normally sits on.

Thoughts? (Not saying that you are wrong, I am just trying to make sense of all of this.)
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Hi there,

I'm afraid your idea that the ARC hit rate doubles linearly every time you double your RAM is not really accurate ...
If you effectively never read the same file twice, you could see your hit rate drop below 2% ...

Theoretically, your thinking about how L2ARC works is correct, but because of how L2ARC works in practice (already outlined above: indexes in RAM, less RAM for ARC, etc.), you will not have a pleasant user experience.

The big thing behind that is what I call "smart caching":
when you're watching your favorite movie (the same 25GB one, for example's sake), ZFS will prefetch blocks of the file you have not watched yet into ARC and evict those you have already watched, while you continue watching.
Magic inside(TM) :cool:

Personally, I would love to have a small SSD acting as an L2ARC for my pool (with iSCSI extents), but I'm restraining myself from installing one as long as I can't have more RAM ... :p
 

rungekutta

Contributor
Joined
May 11, 2016
Messages
146
I read somewhere that each block in the L2ARC needs about 400 bytes of RAM for indexing (RAM which could otherwise have been used for the ARC). If true, then with a ZFS block size of 4k that would mean an approx. 10:1 ratio, i.e. a 100GB L2ARC draws 10GB of RAM. Worth it? Depends on the data set, I guess.

But there does seem to be much scepticism toward L2ARC overall on this forum. I also don't buy (as often stated) that L2ARC should be automatically dismissed unless RAM is already maxed out; that seems a bit simplistic to me. RAM may be cheap, but SSD is cheaper per GB, and for 10x leverage maybe the relative speed trade-off (between RAM and SSD) could be worth it in some (many?) cases. Even at the smaller end, with a server with 14GB usable for caching: say 6GB of those are used to index an L2ARC on SSD, and the (now slightly slower) read cache totals 8+60GB instead of 14GB? Surely there must be cases where that makes sense, particularly with slow disks...
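The arithmetic above can be sketched out as follows. Note the per-record header size (~400 bytes here; ~380 by some accounts) and the record sizes are illustrative assumptions; the real overhead depends on the record sizes actually stored in the L2ARC:

```python
HEADER_BYTES = 400  # assumed per-record L2ARC header size held in RAM

def l2arc_ram_overhead(l2arc_bytes: int, record_bytes: int) -> int:
    """RAM consumed indexing an L2ARC of the given size,
    assuming a uniform record size."""
    records = l2arc_bytes // record_bytes
    return records * HEADER_BYTES

GB = 1024**3
# 100GB L2ARC of 4K records: roughly 10GB of RAM (the ~10:1 ratio above)
print(l2arc_ram_overhead(100 * GB, 4096) / GB)
# The same L2ARC full of 128K records (the ZFS default recordsize)
# costs only a fraction of that RAM
print(l2arc_ram_overhead(100 * GB, 128 * 1024) / GB)
```

This is why the ratio is so workload-dependent: small-block workloads (VMs, iSCSI zvols) pay far more index RAM per GB of L2ARC than large sequential files do.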
 
Last edited:

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Hi there,

not really ... ;)
1. L2ARC is not a replacement for ARC.
2. The presence of an L2ARC will not make your system faster if it leaves your ARC pressured/starved.
3. L2ARC is meant to help serve read I/Os, keeping the pool free for write I/Os, for instance.

You can read more here, from a trustworthy source: :D
At what point L2ARC makes sense
2x256GB of L2ARC with 32/64GB of RAM
 

rungekutta

Contributor
Joined
May 11, 2016
Messages
146
Thanks. That top link actually seems to confirm what I wrote, more specifically 380 bytes as opposed to 400. But I understand your points: that L2ARC is not a replacement for ARC, and that RAM is orders of magnitude faster than SSD (which is orders of magnitude faster than HDD!). The rest surely is a function of the characteristics of the data set and how it's accessed? For any given data size and workload there must be a configuration of ARC+L2ARC that makes the ideal setup; obviously the full data set in ARC is always ideal, but not always feasible given real-world restrictions on physical RAM capacity and budget. Surely it is a trade-off, then, between a) the speed penalty of L2ARC vs ARC and b) the 5x or 10x leverage in size of L2ARC vs ARC. Everything else has to be rules of thumb more than hard and fast rules, no?
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Indeed, all of this is very dataset and use case specific.
For simple home use (file serving, Plex, no VMs) L2ARC brings almost no benefit. This is why the general recommendation has always been: max out your RAM before thinking about L2ARC.

L2ARC really starts to be useful in very random-I/O-heavy scenarios like VM hosting, NFS and iSCSI (my case, for example).
I have 5TB of zvols served up through iSCSI that could really benefit from L2ARC, but I really don't have enough RAM for that.

The general idea is: once you arrive at around 64GB of RAM (approx. 50GB of ARC) you can apply the 1:4 ARC:L2ARC sizing ratio/"rule of thumb". So the result will be ~200GB of L2ARC (more will not be used).
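That rule of thumb reduces to a quick sketch; the ~80% RAM-to-ARC fraction and the 1:4 ratio are the forum heuristics quoted above, not hard limits:

```python
def l2arc_size_rule_of_thumb(ram_gb: float,
                             arc_fraction: float = 0.8,
                             arc_to_l2arc: int = 4) -> float:
    """Suggested L2ARC size (GB) from the 1:4 ARC:L2ARC heuristic.

    arc_fraction approximates how much of system RAM ends up as ARC
    (roughly 50GB of ARC out of 64GB of RAM, per the post above).
    """
    arc_gb = ram_gb * arc_fraction
    return arc_gb * arc_to_l2arc

print(l2arc_size_rule_of_thumb(64))  # ~200GB, matching the figure above
```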
 

rungekutta

Contributor
Joined
May 11, 2016
Messages
146
Hi darkwarrior. Thanks, I think we understand each other, I just have to be a bit difficult again ;). Why would 50GB of ARC somehow be universally "appropriate"? Wouldn't the general objective be to fit the frequently-used working set in cache? So if the ARC itself can be made large enough (by way of available RAM), then happy days and problem solved (in the best way possible); otherwise the second-best option is a combination of ARC+L2ARC that fits the working set, and the required size of the L2ARC is then more or less given. It's hard to see how rules of thumb could apply beyond avoiding the extremes (e.g. ARC approaching zero due to an L2ARC oversized relative to available RAM).

How much RAM do you have out of interest?
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
How much RAM do you have out of interest?

Yes, we are on the same wavelength ;)

50GB of ARC is not universally appropriate; it's an example. Additionally, once you have reached 64GB it starts to be difficult to justify spending more hard-earned cash on RAM (to go up to 128GB, for example) unless you're aiming to use the server for something else too.
Of course, the ideal solution would be to have enough RAM to cache everything useful, but not many people can afford to buy enough RAM to hold the whole working set in cache.
Most modern hardware platforms (Intel Skylake, for instance) support up to 64GB of RAM anyway. If you want more, you need to go to the Xeon E5 lineup.

Currently, I have a FreeNAS VM with ~20GB of RAM, and since I want to upgrade at some point I'm looking for second-hand servers with 128GB of RAM, to continue virtualizing and finally be able to use an L2ARC. Yay! :p
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,363
Sounds like you need to add RAM and an L2ARC.

IIRC, the L2ARC works as a victim cache for the ARC.
 

Evi Vanoost

Explorer
Joined
Aug 4, 2016
Messages
91
The 'default' L2ARC settings are relatively conservative. These are my current settings; my L2ARC sits about 50% 'full' with a hit ratio of ~30%, while my ARC has a 97% hit rate. I have 700GB of L2ARC and ~400GB of ARC, serving a mainly random-I/O workload over 200TB.

Depending on your workload, you may not even need an L2ARC; test your performance with and without to see whether you 'need' it. In my case, it only seems to get used during bursts of high activity; the rest of the time, my disks are fast enough (and also have 128MB of read cache each) to handle the workload.

Code:
vfs.zfs.l2arc_norw: 0
vfs.zfs.l2arc_feed_again: 1
vfs.zfs.l2arc_noprefetch: 0
vfs.zfs.l2arc_feed_min_ms: 200
vfs.zfs.l2arc_feed_secs: 1
vfs.zfs.l2arc_headroom: 2
vfs.zfs.l2arc_write_boost: 10000000 (10MB/s)
vfs.zfs.l2arc_write_max: 40000000 (40MB/s)
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336

Out of curiosity:
What version of FreeNAS are you running? Prior to 9.10?
What are your hardware specs? :D:p
 

NightNetworks

Explorer
Joined
Sep 6, 2015
Messages
61
Yes, we are on the same wavelength ;)

Currently, I have a FreeNAS VM with ~20GB of RAM and since I want to upgrade at some point, I'm currently looking for 2nd Hand servers with 128GB of RAM, to continue Virtualizing and be finally able to use a L2ARC. Yay !:p

What!? Yeah, sorry man, if you are running FreeNAS in a VM for any purpose other than testing, I feel like that pretty much discredits most of your advice.

Source, https://forums.freenas.org/index.ph...nas-in-production-as-a-virtual-machine.12484/
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336

Well, sorry to disagree. :D
Many people advocate that FreeNAS should not be virtualized at all, and that was the policy on the forums for a long, long time, because people were making very bad mistakes and then losing data.
But this has changed: iXsystems published an official statement on that matter back in 2015 ... :rolleyes:

I know how to virtualise and I know how to deal with ZFS, so I'm running it in a VM following best practices, and I have never had issues. And anyway, should something happen, I have backups...

So, feel free to ignore me and my advice. I will do the same with yours :cool:
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Actually it just shows your ignorance of how a VM works and what possibilities it creates. If someone knows how to run FreeNAS in a VM, that person probably knows more than you ever will.
darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Actually it just shows your ignorance of how a VM works and what possibilities it creates. If someone knows how to run FreeNAS in a VM, that person probably knows more than you ever will.

:cool:
 