ilmarmors
Dabbler
Joined: Dec 27, 2014
Messages: 25
I have a warm-storage server with two pools: 1) 3 vdevs of 11 disks each in RAIDZ2 (long-term, usage close to write once, read many; one dataset with 1M recordsize) and 2) 1 vdev of 2 disks in a mirror (scratch space for incoming data upload, processing, and preparation for ingest into the long-term pool; multiple datasets with 128K recordsize). The server has 128GB RAM and an Intel OPAL D7-P4610 1.6T NVMe for L2ARC.
I have FreeNAS-11.3-U5 installed. I can probably migrate to TrueNAS-12.0 when U2 is out, or upgrade to TrueNAS-12.0-U1 sooner if there is a good reason to do so.
My main goal is to keep as much filesystem metadata in the L2ARC as possible. I don't have a random-access load — mainly rsync, find, du, and streaming of files (usually 1–30MB each). The biggest gain I have seen is when the ARC has the filesystem metadata cached, but the ARC (RAM) can't fit everything, and during normal use the metadata gets evicted from the ARC by the file data itself.
The ideal solution would be a shared L2ARC, where ZFS itself balances L2ARC usage among multiple pools, but that is not possible currently and probably won't be for some time — https://github.com/openzfs/zfs/issues/9859
The FreeNAS interface only allows adding whole devices as L2ARC to a pool. I have only one NVMe drive, and for reasons beyond my control I won't be able to upgrade this particular server.
How safe or dangerous is the following workaround: partition the NVMe disk manually and add the individual partitions as L2ARC cache to the different pools? In my case I created two partitions — 128G for the scratch pool and the remainder for the big tank pool:
root@freenas# gpart create -s GPT /dev/nvd0
nvd0 created
root@freenas# gpart add -t freebsd-zfs -a 1m -l l2arca -s 128G /dev/nvd0
nvd0p1 added
root@freenas# gpart add -t freebsd-zfs -a 1m -l l2arcb /dev/nvd0
nvd0p2 added
root@freenas# zpool add scratch cache nvd0p1
root@freenas# zpool add tank cache nvd0p2
Running zpool status shows nvd0p1 and nvd0p2 under the cache sections of the scratch and tank pools respectively.
On the pool status page, the FreeNAS UI likewise shows /dev/nvd0p1 and /dev/nvd0p2 in the respective cache sections, instead of nvd0 (the full device, which is what can be attached to a single pool via the UI).
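One variation I'm considering (untested on my box, so treat it as a sketch): attaching the cache vdevs by the GPT labels I already created with `-l`, instead of by device node, since nvdX numbering could in principle change across reboots or hardware changes:

```shell
# Hypothetical alternative to adding by device node: reference the
# partitions via their GPT labels (l2arca / l2arcb, created above),
# which stay stable even if the nvd0 device number changes.
zpool add scratch cache gpt/l2arca
zpool add tank cache gpt/l2arcb
```

Losing an L2ARC device is not fatal to a pool, so this is more about avoiding confusion in zpool status than about data safety.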
Are there any downsides to the approach I took? Something that might bite me down the road? Things I should keep in mind during upgrades, config restores, or anything else in the future?
Is there any way to let ZFS know that I prefer caching filesystem metadata in the L2ARC? If yes, what is the correct way to configure that? I would like to avoid long sequential file reads or writes expunging metadata from the L2ARC.
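From what I've read so far, the per-dataset `secondarycache` property looks like the closest fit — it controls what is eligible for L2ARC (`all`, `metadata`, or `none`) and is inherited by child datasets — though I'm not sure it fully solves the eviction problem. A sketch of what I have in mind (dataset names are just examples from my setup):

```shell
# Hypothetical sketch: restrict the L2ARC feed to metadata only for the
# long-term pool, so bulk sequential file reads can't push metadata out.
# "tank" is the pool root; the property is inherited by all datasets below it.
zfs set secondarycache=metadata tank

# The scratch pool keeps the default, caching both data and metadata.
zfs get secondarycache scratch tank
```

The trade-off, as I understand it, is that with `secondarycache=metadata` the file data itself never lands in L2ARC for that pool, which is probably fine for my streaming workload but may not suit everyone.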