winnielinnie
I find that the metadata size did surpass 4 GiB, but came nowhere close to 8 GiB.
I wonder if it reports a different value with...
arc_summary | grep "Metadata cache size (current)"
...during your tests?
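A crude way to watch both counters side by side during a test (a minimal sketch, assuming TrueNAS CORE / FreeBSD, where the kstat is exposed via sysctl):
Code:
# Poll both metadata counters every 10 seconds while the rsync/SMB test runs.
while true; do
    date
    sysctl kstat.zfs.misc.arcstats.metadata_size
    arc_summary | grep "Metadata cache size (current)"
    sleep 10
done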
If I strictly use full rsync crawls (directory tree crawls of the entire datasets) for three different datasets (used by three different clients), plus navigation via the SMB shares, then I never appear to exceed the following values:
Code:
kstat.zfs.misc.arcstats.metadata_size: 2296452608 (2.14 GiB)
Metadata cache size (current): 3.2 GiB
At no point do I witness or experience any aggressive metadata eviction, and my tests (rsync crawls, directory tree listings, SMB browsing) remain snappy. All the while, my arc.meta_min is still set to 4294967296 (4.0 GiB).
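For reference, here is how I read the floor back (the exact OID is an assumption on my part; OpenZFS 2.x on FreeBSD exposes it as vfs.zfs.arc.meta_min, while older releases used vfs.zfs.arc_meta_min):
Code:
# Read back the metadata floor (OID name assumed for OpenZFS 2.x / FreeBSD).
sysctl vfs.zfs.arc.meta_min
# On my box this prints:
# vfs.zfs.arc.meta_min: 4294967296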
It's "working as advertised", in that until metadata exceeds 4.0 GiB in the ZFS ARC, metadata is not aggressively evicted in favor of userdata.
This is why I haven't yet tried increasing the threshold: I haven't needed to.
Technically, I could just go ahead and set it to 8 GiB, under the assumption that the threshold will never be reached, which would always give metadata higher priority over userdata in the ARC.
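The arithmetic and the (assumed) sysctl invocation would look like this; on TrueNAS you would normally persist it as a tunable rather than set it ad hoc:
Code:
# 8 GiB in bytes: 8 * 1024 * 1024 * 1024 = 8589934592
# OID name assumed (vfs.zfs.arc.meta_min on OpenZFS 2.x / FreeBSD).
sysctl vfs.zfs.arc.meta_min=8589934592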
(Your recent test is interesting! That's why I'm wondering what would happen if you retry it, but this time monitor the other value with the command shared below.)
arc_summary | grep "Metadata cache size (current)"
I'm seriously wondering if running that same "check files numbers/size" task (whether over SMB or NFS) requires extra metadata to be read into memory, above and beyond what is required for rsync tasks, directory crawls, and browsing.
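Purely for illustration, my guess at what such a check involves (the commands and path below are hypothetical stand-ins): counting files and summing sizes stats every file in the tree, which forces every dnode through the ARC on the server side:
Code:
# Hypothetical stand-in for a "check files numbers/size" task over an NFS/SMB mount.
find /mnt/tank/dataset -type f | wc -l   # file count; path is a placeholder
du -sh /mnt/tank/dataset                 # total size; also walks the entire tree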