Small ARC?

cwagz

Dabbler
Joined
Jul 3, 2022
Messages
35
I am very new to TrueNAS and ZFS.

I recently converted my home Hyper-V / Windows file server over to TrueNAS Scale.

The system had 32GB of ram when I first installed TrueNAS, but I quickly realized I needed more to support the VMs I wanted to run. I upgraded the RAM to 64GB and I now have allocated 28GB of RAM to my 6 VMs.

The main storage pool is a RAIDZ1 (4 x 8TB HDDs, plus 1 x 256GB NVMe for cache) holding archived data (photos and such) along with the image backup repository for all the other computers in the house.

VMs are all running on a mirrored pool (2 x 1TB NVMe).

The question I have is regarding ARC size. I was under the impression that ZFS would pretty much consume all my leftover RAM for ARC, but every time I check, it seems like my ARC size is only around 3.2GB. Is this correct? Is it possible the ARC settings are still based on my initial installation, which only had 32GB of RAM?

2022-08-04 09_04_59-Window.png


2022-08-04 09_06_09-Window.png
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
I wonder if it's simply a matter of time. Leave the machine on for a week and come back. Chances are, all free RAM will be allocated...
 

cwagz

Dabbler
Joined
Jul 3, 2022
Messages
35
I wonder if it's simply a matter of time. Leave the machine on for a week and come back. Chances are, all free RAM will be allocated...
Thank you for the reply. Uptime is almost 6 days and no change. I also just had lightroom copy a few GB of data to the library on my pool and the arc size did not change during the entire operation. Is there a way for me to check to see if it is limited somehow?
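In case it's useful to anyone else checking the same thing, something like this should show the configured cap and the live ARC size, assuming the stock OpenZFS-on-Linux paths (just a sketch, not TrueNAS-specific tooling):

```shell
#!/bin/sh
# Sketch, assuming stock OpenZFS-on-Linux paths; run on the NAS itself.
# A zfs_arc_max of 0 means "use the built-in default".
cap=/sys/module/zfs/parameters/zfs_arc_max
stats=/proc/spl/kstat/zfs/arcstats

if [ -r "$cap" ]; then
    echo "zfs_arc_max: $(cat "$cap")"
fi
# arcstats rows are: name  type  value (the "size" row is in bytes)
if [ -r "$stats" ]; then
    awk '/^size / {printf "ARC size: %.2f GiB\n", $3 / 1024^3}' "$stats"
fi
```

`arc_summary`, which ships with OpenZFS, gives a much more detailed report of the same kstats.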


1659910223396.png
 

Chris3773

Dabbler
Joined
Nov 14, 2021
Messages
17
You could try changing the zfs_arc_max size; however, the default is 1/2 of total system RAM, so it should already be using the rest of your RAM.

This can be changed with the command:
echo SIZE_IN_BYTES >> /sys/module/zfs/parameters/zfs_arc_max

You will need to add it as a pre-init command under "Init/Shutdown Scripts" for the change to persist across reboots.
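For example, the Linux default of half of RAM works out like this on a 64GB system (arithmetic sketch only; the actual echo still needs root):

```shell
#!/bin/sh
# Example arithmetic: half of 64 GiB, i.e. the Linux OpenZFS default
# ARC cap for a 64GB system, in bytes.
gib=$(( 1024 * 1024 * 1024 ))
half_of_64g=$(( 64 * gib / 2 ))
echo "$half_of_64g"    # prints 34359738368

# Then, as root (and via a pre-init script so it survives reboots):
# echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max
```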
 

cwagz

Dabbler
Joined
Jul 3, 2022
Messages
35
You could try changing the zfs_arc_max size; however, the default is 1/2 of total system RAM, so it should already be using the rest of your RAM.

This can be changed with the command:
echo SIZE_IN_BYTES >> /sys/module/zfs/parameters/zfs_arc_max

You will need to add it as a pre-init command under "Init/Shutdown Scripts" for the change to persist across reboots.
I just opened the file you referenced and the value in the file is 3602014208 bytes. This is 3.6GB! I will try changing it to half my RAM size and see what happens. Thanks!
 

cwagz

Dabbler
Joined
Jul 3, 2022
Messages
35
ARC started growing immediately after echoing the value 34359738368. It is already up to almost 7GB after just a few minutes.

Any ideas on why zfs_arc_max is being set to 3602014208 (3.6GB) by the system? I created a post-init command to fix the ARC size, but it seems like a bug that it is being set so small on my system.
 
Joined
Oct 22, 2019
Messages
3,641
You could try changing the zfs_arc_max size; however, the default is 1/2 of total system RAM, so it should already be using the rest of your RAM.

FreeBSD wins with a much better default setting. :cool:

From the OpenZFS GitHub:
If set to 0 then the maximum size of ARC is determined by the amount of system memory installed:
  • Linux: 1/2 of system memory
  • FreeBSD: the larger of all_system_memory - 1GB and 5/8 × all_system_memory
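For a 64 GiB machine, those two rules work out very differently. A quick sketch of the arithmetic (my numbers, derived from the quoted formulas):

```shell
#!/bin/sh
# Sketch of the two OpenZFS defaults from the quoted docs, for 64 GiB of RAM.
gib=$(( 1024 * 1024 * 1024 ))
total=$(( 64 * gib ))

linux_max=$(( total / 2 ))          # Linux: 1/2 of system memory
a=$(( total - gib ))                # FreeBSD candidate 1: RAM - 1 GiB
b=$(( total * 5 / 8 ))              # FreeBSD candidate 2: 5/8 of RAM
if [ "$a" -gt "$b" ]; then freebsd_max=$a; else freebsd_max=$b; fi

echo "Linux default:   $linux_max bytes (32 GiB)"
echo "FreeBSD default: $freebsd_max bytes (63 GiB)"
```

So out of the box, FreeBSD would cap the ARC at 63 GiB while Linux caps it at 32 GiB, on identical hardware.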
 

cwagz

Dabbler
Joined
Jul 3, 2022
Messages
35
FreeBSD wins with a much better default setting. :cool:

From the OpenZFS GitHub:
I just changed it to 0 and it is still growing. Thanks for the default value. Is anyone allowed to file a bug report? I was looking through the bug reporting system, but it did not seem like I had permission to file a bug.
 
Joined
Oct 22, 2019
Messages
3,641
I just opened the file you referenced and the value in the file is 3602014208 bytes. This is 3.6GB!

Did you at any point create a "Tuneable" to change the default value?

Did you at any point enable "Autotune"?

If the answer is "no" to both of the above questions, then it means that, for whatever reason, TrueNAS SCALE explicitly set that parameter to 3.6 GiB.

Something about that just doesn't seem right. More users would have noticed their ARC remaining suspiciously low on SCALE.

Since I'm not on SCALE, I can't really share my parameter. However, on Core, the parameter ships with the default (i.e., "0"):
vfs.zfs.arc_max: 0

It behaves as expected.

---

UPDATE: Upon reading more posts here and elsewhere, it's possible that TrueNAS SCALE tries to be "smart" about your ARC and explicitly sets a fixed amount in the zfs_arc_max parameter upon first-time installation, perhaps to give priority to Apps / pods / containers?
 
Last edited:

cwagz

Dabbler
Joined
Jul 3, 2022
Messages
35
Did you at any point create a "Tuneable" to change the default value?

Did you at any point enable "Autotune"?

If the answer is "no" to both of the above questions, then it means that, for whatever reason, TrueNAS SCALE explicitly set that parameter to 3.6 GiB.

Something about that just doesn't seem right. More users would have noticed their ARC remaining suspiciously low on SCALE.

Since I'm not on SCALE, I can't really share my parameter. However, on Core, the parameter ships with the default (i.e., "0"):
vfs.zfs.arc_max: 0

It behaves as expected.
I did not create any tuneables / Autotune or change any default values. I am actually not sure where these are listed or how I would change them.

My only thought was that when I had 32GB of RAM installed, I allocated all but about 4GB to virtual machines. I then upgraded to 64GB of RAM, but I never saw the ARC size grow after the upgrade. Maybe SCALE set it based on my original RAM amount and what I had allocated to virtual machines? That is all I could think of.
 
Last edited:
Joined
Oct 22, 2019
Messages
3,641
@morganL and @Kris Moore is this a known thing? Is it intentional?

Does SCALE explicitly set a value for zfs_arc_max rather than leave it at the default? Does it set a value based on the specs of the system on which it was originally installed? Does it change the value based on usage (such as Apps, VMs, etc.)? Does it change the value if the user upgrades their RAM at a later date?

Here is another example of a SCALE user with 128 GB of RAM. Their situation is not as extreme as the one in this thread, but it's still noticeable (and they discovered that the zfs_arc_max parameter was not left at the default of "0"):

Something (TrueNAS?) is setting it to a lower value than the Linux default, which should either be "0" or about 68719476736, depending on whether it evaluates the formula.


With TrueNAS Core, even after upgrading my RAM, the parameter remains at "0" (i.e., the ZFS default).
 

cwagz

Dabbler
Joined
Jul 3, 2022
Messages
35
This might be related to something that could be throwing off the calculations on my system. Even when I have a lot of free RAM showing on the dashboard I get this:
1659931042239.png


It will start out at about 6GB after a reboot and slowly decrease to 0.00.

Here is current free RAM:
1659931100077.png

Cache is still increasing since I set it to "0".

There is another thread regarding this as well, but nobody there was complaining about ARC size issues.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
@morganL and @Kris Moore is this a known thing? Is it intentional?

Does SCALE explicitly set a value for zfs_arc_max rather than leave it at the default? Does it set a value based on the specs of the system on which it was originally installed? Does it change the value based on usage (such as Apps, VMs, etc.)? Does it change the value if the user upgrades their RAM at a later date?

Here is another example of a SCALE user with 128 GB of RAM. Their situation is not as extreme as the one in this thread, but it's still noticeable (and they discovered that the zfs_arc_max parameter was not left at the default of "0"):




With TrueNAS Core, even after upgrading my RAM, the parameter remains at "0" (i.e., the ZFS default).

Linux tunings for ZFS ARC are very different.

There are some improvements being made in 22.02.3. Check that out and then see where we are.
One fix below:

NAS-113422
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I was under the impression that ZFS would pretty much consume all my leftover RAM for ARC
This is true on BSD-based implementations of ZFS, i.e. CORE. But SCALE is Linux, and last I checked, Linux's memory system and ARC interact differently.

Ideally, Linux ZFS would get to a point where it's possible to automagically dedicate all the "free" memory to ARC. I don't think it's there yet.
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
As an update, here's a talk from ZFS Summit 2020 which goes into the difference in ARC sizing between Linux and BSD/Illumos

ZFS caches data in the ARC. The size of the ARC cache is determined dynamically by memory pressure in the system. This mechanism is separate from the kernel's "page cache", and these two caches sometimes don't get along well. This talk will explain how the ARC decides how big to be, comparing behavior on Linux and illumos.

(slides unfortunately seem to be locked away now)
 
Joined
Oct 22, 2019
Messages
3,641
This is true on BSD-based implementations of ZFS, i.e. CORE. But SCALE is Linux, and last I checked, Linux's memory system and ARC interact differently.
An interesting thing to explore, but I can't ignore the elephant in the room: a system with 64 GiB of RAM has its ARC capped at under 4 GiB. By forcing the above parameter to "0" (i.e., the "default" on Linux systems), @cwagz noticed their ARC grows with usage to a more reasonable amount, and there is less unused "free" memory.

Reverting to the "default" value set by upstream OpenZFS yields better numbers, and supposedly better performance. Less than 4 GiB is not much room for data to jostle in the ARC when you've got 64 GiB of total physical RAM.
 

cwagz

Dabbler
Joined
Jul 3, 2022
Messages
35
Thanks, everyone, for all the help and great information. Here is where I am 24 hours after changing zfs_arc_max to "0":

1660006285951.png


1660006360318.png


1660006377528.png


When 22.02.3 is released, I will remove the init command and see what the system does on its own.
 

cwagz

Dabbler
Joined
Jul 3, 2022
Messages
35
Just updated to 22.02.3. After reboot I ran:

nano /sys/module/zfs/parameters/zfs_arc_max

zfs_arc_max is set to 3602016256. This is still 3.6GB.

I will manually set this back to 0 for now.
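For anyone else doing the same, here's a sketch of the check-and-reset logic. The function argument is just a hypothetical convenience so it can be pointed at a copy of the file; the real path is the stock OpenZFS one:

```shell
#!/bin/sh
# Sketch: report the current zfs_arc_max and, if it is pinned to a fixed
# value, print the command that restores the upstream default (0).
# The path argument is for illustration; on the NAS it is
# /sys/module/zfs/parameters/zfs_arc_max.
check_arc_max() {
    param=$1
    if [ ! -r "$param" ]; then
        echo "cannot read $param"
        return 1
    fi
    current=$(cat "$param")
    if [ "$current" -eq 0 ]; then
        echo "zfs_arc_max is 0 (upstream default)"
    else
        echo "zfs_arc_max is fixed at $current bytes"
        echo "reset with: echo 0 > $param   (as root)"
    fi
}

# On the NAS itself:
# check_arc_max /sys/module/zfs/parameters/zfs_arc_max
```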

Linux tunings for ZFS ARC are very different.

There are some improvements being made in 22.02.3. Check that out and then see where we are.
One fix below:

NAS-113422

I did not see this issue in the release notes for 22.02.3. Maybe it didn't make it?
 

cwagz

Dabbler
Joined
Jul 3, 2022
Messages
35
On 22.02.3 it looks like I am stuck with an ARC of 3.6GiB even when setting zfs_arc_max to 0.

Any ideas or suggestions?
 