This isn't exactly a question; I'm just sharing an interesting parameter setting. I'm using TrueNAS SCALE, on which the default values for the L2ARC write parameters are:
l2arc_write_max: 8388608
l2arc_headroom: 2

(descriptions here: https://openzfs.github.io/openzfs-docs/man/4/zfs.4.html)
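For reference, on a Linux-based OpenZFS system like SCALE you can read the live values from sysfs (these are the standard OpenZFS module-parameter paths on Linux; other platforms expose them differently):

```shell
# Read the current L2ARC feed tunables from the ZFS kernel module
cat /sys/module/zfs/parameters/l2arc_write_max   # bytes written to L2ARC per feed interval
cat /sys/module/zfs/parameters/l2arc_headroom    # scan-window multiplier of l2arc_write_max
```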
Essentially, every second, ZFS will look at the last two 8 MB (8388608-byte) stretches of the ARC about to be evicted, and write them to L2ARC if they aren't already there.
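The arithmetic behind that (scan window = l2arc_headroom × l2arc_write_max, per the zfs.4 man page):

```shell
# Default scan window per one-second feed interval:
# l2arc_headroom (2) * l2arc_write_max (8388608 bytes)
echo $(( 2 * 8388608 ))   # 16777216 bytes, i.e. 16 MiB of ARC scanned per pass
```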
I haven't done much benchmarking, but these parameters seem ridiculously low to me for today's recommended RAM sizes and current drive speeds. I found a post from 2012 that agrees ;) https://www.truenas.com/community/threads/zfs-and-l2arc-tuning.7931/
Then I read the docs and found this interesting bit about l2arc_headroom:
"ARC persistence across reboots can be achieved with persistent L2ARC by setting this parameter to 0, allowing the full length of ARC lists to be searched for cacheable content."

That's potentially very cool. If it can look at the *whole ARC*, then it doesn't matter that it only writes 8 MB/s. I'm just surprised that I can effectively change this parameter by a factor of several thousand (from covering 16 MB to something like 80 GB) without it causing any weird performance issues (if it has to scan the full ARC every second). It's vulnerable to churn, just like setting a really high l2arc_write_max, but that's fine in some situations (discussed in this presentation linked from the TrueNAS docs: https://www.snia.org/sites/default/...ices_for_OpenZFS_L2ARC_in_the_Era_of_NVMe.pdf ).
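If you want to try the same thing, here's a sketch of the runtime change on a Linux/OpenZFS box. This is the generic sysfs mechanism, not anything TrueNAS-specific; it needs root, and the values do not survive a reboot, so on SCALE you'd typically also add these as post-init commands:

```shell
# Let the L2ARC feed thread search the full ARC eviction lists
# (0 = unlimited headroom, per the zfs.4 man page)
echo 0 > /sys/module/zfs/parameters/l2arc_headroom

# Optionally raise the per-interval write cap as well, e.g. to 128 MiB
echo $(( 128 * 1024 * 1024 )) > /sys/module/zfs/parameters/l2arc_write_max
```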
I'm testing it now. After a reboot it may be faster than before, but I was expecting an even bigger difference. There's no excessive CPU usage, but the ARC is only 6 GB so far, and there's not as much writing to L2ARC as I would expect.
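One way to quantify "not as much writing as I would expect" is to watch the L2ARC counters in the ARC statistics kstat (the path and field names below are from Linux OpenZFS; other platforms expose the same counters through their own kstat interfaces):

```shell
# L2ARC size and feed activity from the ARC statistics kstat;
# re-run periodically and diff to see the actual fill rate
grep -E '^l2_(size|asize|hits|misses|writes_sent|writes_done)' \
    /proc/spl/kstat/zfs/arcstats
```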