Oh right. Yes. You’re running out of cache space for metadata.
So a thing about L2ARC, the secondary cache: its index lives in ARC, the primary RAM cache, and takes away from it. You want it large enough to solve the problem, but not so large it makes the problem worse.
Do you have a small 128GB SSD? If so, that could be added as L2ARC as a test. If not, someone will be along presently to help calculate the amount of metadata nearly 2 million files generate, the amount of ARC the L2ARC will eat, and the optimal size of that L2ARC.
About adding L2ARC: that’s reversible, but adding a disk to the pool proper is not. The latter can have catastrophic impact on pool redundancy, so best to be very deliberate when adding read cache and to make sure it’s really being added as read cache.
Come TrueNAS 12, which should be out in summer, you have another option: Remove the L2ARC again, get a second SSD of the same size, and add them as a mirror pair to the pool storage as a “special allocation vdev” that will only hold metadata. For that, definitely calculate how much metadata you will hold, first. This is also not reversible and will require careful planning.
What is your “pool layout” currently? Number of disks, in what kind of vdev? As in, 6-wide raidz2, or 3 mirrors, or something else ...
How much data do you have total in those files, expressed in TiB?
Here is a very rough formula for estimating metadata size, taken from a reddit thread:
“If it's helpful for anyone, here are the estimates I used for sizing some metadata vdevs. Metadata is roughly the sum of:
a) 1 GB per 100k multi-record files.
b) 1 GB per 1M single-record files.
c) 1 GB per 1 TB (recordsize=128k) or 10 TB (recordsize=1M) of data.
d) 5 GB of DDT tables per 60 GB (recordsize=8k), 1 TB (recordsize=128k) or 10 TB (recordsize=1M) of data, if dedup is enabled.
e) plus any blocks from special_small_blocks.”
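To make those rules of thumb easier to apply, here’s a small calculator sketch. The function name and its parameters are my own invention for illustration; the constants come straight from the quoted estimates a) through d), which mix GB/TB loosely, so treat the result as a ballpark only.

```python
def estimate_metadata_gib(multi_record_files=0, single_record_files=0,
                          data_tib=0.0, recordsize_kib=128, dedup=False):
    """Ballpark ZFS metadata size in GiB, per the rules of thumb above."""
    total = 0.0
    total += multi_record_files / 100_000       # a) 1 GB per 100k multi-record files
    total += single_record_files / 1_000_000    # b) 1 GB per 1M single-record files
    # c) 1 GB per 1 TB at recordsize=128k, or per 10 TB at recordsize=1M
    total += data_tib * (1.0 if recordsize_kib <= 128 else 0.1)
    if dedup:
        # d) 5 GB DDT per 60 GB (8k), per 1 TB (128k), per 10 TB (1M)
        ddt_per_tb = {8: 5 / 0.06, 128: 5.0, 1024: 0.5}.get(recordsize_kib, 5.0)
        total += data_tib * ddt_per_tb
    return total

# ~2 million multi-record files, dedup off, data size not counted yet:
print(estimate_metadata_gib(multi_record_files=2_000_000))  # -> 20.0
```

Rule e), special_small_blocks, isn’t modeled here since it depends entirely on your block-size distribution.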
So 20GiB for a), plus some more depending on the total TB of data ... and assuming no dedup (hopefully, you don’t have enough RAM for dedup), a 128GiB SSD should be good as an L2ARC right now. If you have one “flying about” to use as a test balloon.
(L2ARC size in kilobytes) / (typical recordsize -- or volblocksize -- in kilobytes) * 70 bytes = ARC header size in RAM.
So ... around 70MiB of RAM for a 128GiB L2ARC, give or take? Someone who can do this better needs to check my math ...
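Plugging the numbers into the formula above as a sanity check (assuming recordsize=128k throughout, which won’t be exact for a mixed pool):

```python
# One ~70-byte ARC header per record cached in L2ARC.
l2arc_bytes = 128 * 2**30     # 128 GiB L2ARC
recordsize = 128 * 2**10      # 128 KiB typical recordsize
header_bytes = 70             # ARC header per cached record, per the formula

records = l2arc_bytes // recordsize            # 1,048,576 records
ram_mib = records * header_bytes / 2**20
print(f"{ram_mib:.0f} MiB of ARC consumed")    # -> 70 MiB of ARC consumed
```

So the 70MiB figure checks out for 128k records; smaller records would cost proportionally more RAM.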
Lastly, how much data is free on your pool? If there is enough free space for the move, and if the bulk of your photos is at 1M or bigger, then you could potentially reduce the amount of needed metadata some by creating a new dataset with recordsize=1M and moving all your photos there. That’d be a lengthy move and definitely requires a sanity check before going down that path.
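To illustrate why the recordsize=1M move helps (a sketch, assuming the photos are large enough to actually fill 1M records): metadata scales with block count, and larger records mean far fewer blocks per TiB.

```python
# Blocks per TiB of data at two recordsizes -- fewer blocks, less metadata.
tib = 2**40
for rs_kib in (128, 1024):
    blocks = tib // (rs_kib * 2**10)
    print(f"recordsize={rs_kib}k: {blocks:,} blocks per TiB")
# 128k  -> 8,388,608 blocks per TiB
# 1024k -> 1,048,576 blocks per TiB (8x fewer)
```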