TrueNAS SCALE not using all memory for cache

boynep

Dabbler
Joined
Jan 9, 2012
Messages
29
Hello everyone,

For some reason my TrueNAS CORE installation started rebooting constantly after working fine for 4+ years. I thought it was time to test TrueNAS SCALE and upgraded to TrueNAS-SCALE-21.06-BETA.1.

However, something I have noticed between CORE and SCALE is that SCALE doesn't use all memory for ARC like CORE used to. I will be using SCALE purely as a NAS and nothing else, so I don't need to reserve memory for other services like Docker or VMs. It feels like wasted memory.

I have a script that reads through the pool and backs it up to the cloud, so the files have definitely been read. While using CORE it would use all the memory given to it except for a couple of GiB.

Are there any parameters/tunables I can use to force it to use more memory?

System:
TrueNAS SCALE
Version: 21.06-BETA.1
Virtualised, with 3 LSI HBAs passed through.
64 GB of DDR4 ECC memory assigned.
Pool of 2 × RAIDZ2 vdevs, each of 6 × 4 TB disks.

Would appreciate any pointers.
 

Attachments

  • Capture.JPG (23.1 KB)

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
This is normal (Default) behavior for OpenZFS on Linux and a bit different from the BSD variant.
You can file a Jira suggestion if you want iX to change this, or manually look up the OpenZFS parameter required.
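If you want to see what it is actually doing right now, one rough way (assuming the stock OpenZFS tooling that ships with SCALE) is to compare the live ARC size and ceiling against the configured cap:

# current ARC size and hard limit as reported by the kernel module
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats
# configured cap; 0 means "use the Linux default of roughly half of RAM"
cat /sys/module/zfs/parameters/zfs_arc_max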
 

boynep

Dabbler
Joined
Jan 9, 2012
Messages
29
This is normal (Default) behavior for OpenZFS on Linux and a bit different from the BSD variant.
You can file a Jira suggestion if you want iX to change this, or manually look up the OpenZFS parameter required.
Thank you for your response. I will raise it as a feature request. However, do you know which parameters I should be modifying?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Thank you for your response. I will raise it as a feature request. However, do you know which parameters I should be modifying?
Not off the top of my head, but there are enough resources out there, one of which is the OpenZFS manual itself. You should be very capable of finding it out :)
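For anyone landing here from search, the parameters can also be looked up locally; roughly like this (the manual page name and section vary by OpenZFS release, so treat it as a pointer rather than gospel):

# list the ARC-related module parameters and their descriptions
modinfo zfs | grep -i arc_max
# tunables manual page (zfs-module-parameters(5) on OpenZFS 2.0, zfs(4) on 2.1+)
man 5 zfs-module-parameters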
 

JoeR

Cadet
Joined
Jul 4, 2021
Messages
1
Mine is working the same way and has been very stable. I am using 32 GB of ECC RAM. My benchmarks on the pool with CrystalDiskMark show I'm still getting the same read/write speeds with SCALE as I was with CORE. I haven't added an NVMe cache to the pool yet as I don't see a need for it. The caching speeds in Plex are the same.
 

Chris3773

Dabbler
Joined
Nov 14, 2021
Messages
17
The default setting for zfs_arc_max on Linux is 1/2 of total system RAM. If ZFS is the primary application, the OpenZFS documentation does recommend increasing the value.

This can be changed at runtime with the command:
echo SIZE_IN_BYTES > /sys/module/zfs/parameters/zfs_arc_max
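For example, on the 64 GB system above you might allow ARC to grow to 56 GiB (an arbitrary illustrative figure, not a recommendation):

# 56 GiB expressed in bytes
echo $(( 56 * 1024 * 1024 * 1024 )) > /sys/module/zfs/parameters/zfs_arc_max
# confirm the new cap took effect
cat /sys/module/zfs/parameters/zfs_arc_max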

You will need to add it as a preinit command under "Init/Shutdown Scripts" for the change to persist across reboots.
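If you would rather not hard-code a byte count, a minimal sketch of a preinit command that caps ARC at roughly 90% of installed RAM (an untested example; adjust the fraction to taste) would be:

echo $(awk '/MemTotal/ {printf "%d", $2 * 1024 * 0.9}' /proc/meminfo) > /sys/module/zfs/parameters/zfs_arc_max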
 