When is ARC too small?

Status
Not open for further replies.

Lars Jensen

Explorer
Joined
Feb 5, 2013
Messages
63
Using FreeNAS 9.3 (an old-timer, but very stable) for NFSv4

Xeon CPU (12c/24t)
128 GB RAM
2 vdevs of 6x1TB SAS 7.2k drives
Intel P3700 PCIe for SLOG (ZIL)

Serving lots of small files (currently about 2 TB of live data in total) over NFS to aggressive NFS clients.

Recently, nfsd load has become very high during peak hours, probably due to increased demand for data.

So I wonder: is a 128 GB ARC too small for 2 TB of data?

(I have already ordered 256 GB of RAM in case it needs more, and I plan a move to FreeNAS 11.1.)

Attached arc_summary.txt.
Code:
ARC Size Breakdown:
	Recently Used Cache Size:	87.01%	90.49 GiB
	Frequently Used Cache Size:	12.99%	13.51 GiB
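If it helps, I believe arc_summary derives those percentages from the ARC kstats, so the raw values (in bytes) can be read directly:

Code:
# ARC targets in bytes: p = MRU target, c = total ARC target,
# size = current ARC size; the MRU/MFU split above is derived
# from these
sysctl kstat.zfs.misc.arcstats.p
sysctl kstat.zfs.misc.arcstats.c
sysctl kstat.zfs.misc.arcstats.size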
 

Attachments

  • arc_summary.txt
    9.3 KB

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
Your ARC is being very efficient.
Code:
ARC Total accesses:				   468.25b
   Cache Hit Ratio:	   99.53%   466.04b
   Cache Miss Ratio:	   0.47%   2.21b
   Actual Hit Ratio:	   99.00%   463.57b
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Your ARC is being very efficient.
Code:
ARC Total accesses:				   468.25b
   Cache Hit Ratio:	   99.53%   466.04b
   Cache Miss Ratio:	   0.47%   2.21b
   Actual Hit Ratio:	   99.00%   463.57b
I wonder, sir.

He is using 9.3. You will recall that the way 9.3 was engineered, it would report obscene ARC hit stats, because of so many hits on a superfluous process or something that was part of FreeNAS.

His ARC hit ratio on 9.3 (which is very, very high, even considering the thing I'm talking about) may not shed as much light as you think.
 

Lars Jensen

Explorer
Joined
Feb 5, 2013
Messages
63
Thanks for the feedback, but I still wonder whether the ARC is too small, causing load on the nfsd process while it waits for data from the vdev disks in the pool.

NFS traffic is pretty consistent, around 20-30 Mbit/s on average (on a 2x10G interface).
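In case it is useful, I can watch per-vdev activity while the load is high; my understanding is that consistently busy vdevs during the nfsd spikes would point at the disks:

Code:
# per-vdev bandwidth and IOPS, refreshed every 5 seconds
zpool iostat -v 5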
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Thanks for the feedback, but I still wonder whether the ARC is too small, causing load on the nfsd process while it waits for data from the vdev disks in the pool.

Somewhat in jest: rule #1 of ZFS is "you never have enough RAM." More RAM rarely hurts. ;)

Your pool is set up as 2x 6-drive vdevs in what I assume is RAIDZ2, so random I/O capability will be rather low and a cache miss gets penalized pretty heavily. You could run arcstat.py 5 during those peak hours when things seem slow and look for a high miss% then.
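Something along these lines; the exact columns vary a bit between arcstat versions, so treat the names as approximate:

Code:
# print ARC stats every 5 seconds; watch miss% (overall miss
# rate) and arcsz (current ARC size) during the busy window
arcstat.py 5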

My other thought: I don't believe FN 9.3 has support for compressed ARC, and I don't remember when lz4 became the default (I think it was 9.10 for both). Upgrading to a newer release (like FN 11.1-U6) should give you significant gains if your data is compressible.
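As a rough check on compressibility (note: "tank" below is a placeholder for your pool name, and compressratio only reflects data written since compression was enabled):

Code:
# current compression setting and achieved ratio for the pool
zfs get compression,compressratio tank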
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
He is using 9.3. You will recall that the way 9.3 was engineered, it would report obscene ARC hit stats, because of so many hits on a superfluous process or something that was part of FreeNAS.
Completely forgot about this scenario. I suggest upgrading to 9.10.2-U6 at a minimum, since the issue was resolved by that release, and then monitoring the ARC to see how it performs.
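The raw hit/miss counters are also exposed via sysctl, so you can sample them twice and diff the deltas to get a hit ratio for an interval rather than since boot; a minimal sketch:

Code:
# cumulative ARC hit/miss counters since boot; sample twice
# and diff to compute a hit ratio over an interval
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses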
 