Yeah I did think it did not make sense when I read this. For reference's sake I read it here:
https://www.servethehome.com/buyers...nas-nas-servers/top-picks-freenas-l2arc-ssds/
STH is still basically a home enthusiast's site, and has a lot of "bigger, badder = better" sort of feel to it at times. It's kinda like when you go off to the local hot rod show and they tell you all the cool things they did to a Mack truck to make it "better."
The flames on the side do NOT actually make it go faster.
I read that to suggest that if you are using more than 1GbE you would need an NVMe but I probably read it wrong.
It assumes that you have some magic workload where you will constantly be accessing content only present in L2ARC and feeding it out over the network. This is ... unlikely. Hot ARC content will be in the ARC. You will periodically have less-hot cached long runs of sequential data coming in from L2ARC -- yes -- ok -- occasionally. But it's not going to be particularly common.
The thing that L2ARC is necessary for when doing VM hosting is fragmentation. ZFS mitigates read fragmentation through use of the L2ARC. When highly fragmented, and especially when overfull, a ZFS pool will have poor read performance, because even a read of blocks that you might think are "sequential" can be coming in from different areas of the pool, incurring a seek penalty. That seek penalty can cripple a HDD pool down to the point where it is doing just maybe a thousand IOPS. So what you want is LOTS OF L2ARC. You want SO MUCH L2ARC that the system only rarely ever has to go fetch a block from the pool, because after the previous read, it sent it out to L2ARC. The set of blocks you need cached like this is called the "working set."
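To put a rough number on how badly seeks cripple a fragmented HDD pool, here's a back-of-envelope sketch. The block size is an assumption for illustration; the ~1000 IOPS figure is from the paragraph above:

```python
# Back-of-envelope: why a fragmented HDD pool stalls and a big L2ARC helps.
# RECORD_SIZE is an illustrative assumption, not a measurement.

HDD_POOL_IOPS = 1_000        # fragmented pool, per the post: maybe a thousand IOPS
RECORD_SIZE = 16 * 1024      # assumed average size of the scattered reads (16 KiB)

# Effective read throughput when every "sequential" block needs a seek:
throughput_mb_s = HDD_POOL_IOPS * RECORD_SIZE / 1_000_000
print(f"fragmented HDD pool: ~{throughput_mb_s:.0f} MB/s")
```

At roughly 16 MB/s, even 1GbE can outrun the pool, which is why the goal is to keep the working set in ARC/L2ARC and only rarely touch the disks at all.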
Now the other thing is that you have more than one VM running, and it is very likely that each of those will be working off its own set of interesting disk blocks. If you were pulling ~30 running VM's worth of disk blocks from the pool, each needing 50 IOPS average, you now have 1500 IOPS required on average. This may exceed what your pool can easily deliver, but it is well within the performance envelope of even a very slow SSD.
So what you want is for all the blocks VM's commonly access to be loaded into L2ARC. This might be a good fraction of the total size of your pool. Most of that is NOT going to need to be pulled in at 3GBytes/sec over NVMe. And if you get a 500GB NVMe drive (SN750, yo!) for $120, okay, that's great, but I can get two 860 EVO 500's for the same price, and in aggregate that still gets me about 1GByte/sec of L2ARC read capacity -- far more than I'm likely to need.
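The arithmetic in the two paragraphs above can be sketched out explicitly (the per-drive SSD bandwidth is an assumption in the ballpark of a SATA 860 EVO; the VM counts are the ones used above):

```python
# Aggregate demand vs. L2ARC supply, using the post's illustrative numbers.

VM_COUNT = 30
IOPS_PER_VM = 50
required_iops = VM_COUNT * IOPS_PER_VM
print(f"required: ~{required_iops} IOPS")          # hard for a HDD pool,
                                                   # trivial for any SSD

SATA_SSD_MB_S = 520        # assumed sequential read per SATA SSD (approximate)
aggregate_mb_s = 2 * SATA_SSD_MB_S                 # two 500GB drives as L2ARC
print(f"~{aggregate_mb_s} MB/s aggregate L2ARC read bandwidth")
```

Roughly 1 GByte/sec from the two-SATA-SSD setup, which is the point: you rarely need NVMe-class bandwidth for L2ARC, you need capacity and IOPS.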
Ok, so that makes sense. Fortunately, since I USB boot FreeNAS, my two internal SSD slots are unused. Interestingly though, you recommended using 500GB SSD's; if I were to do that it would give me 1TB of L2ARC (which is great), but I thought this wasn't recommended as it would exceed the 5:1 ratio of L2ARC:ARC?
I've done it, and I did it before the L2ARC indirect pointer changes, which is the era where that recommendation comes from. By the way, it's really freaky to run zpool iostat on your pool and see no pool reads, just writes...
The ratio is a curious thing because it is based on a number of assumptions that aren't guaranteed to be true. The better, more modern advice would be to keep an eye on the L2ARC statistics and the memory pressure it is causing on the system. If you *need* to, you can forcibly limit the amount of L2ARC used. So I would say it's fine to go big as long as "big" != "stupid big."
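The memory pressure mostly comes from the per-buffer L2ARC headers that must live in ARC (RAM). Here's a rough sizing sketch; the header size and average record size are assumptions for illustration, and the hit/miss counters are hypothetical sample values standing in for the real ones you'd read from `kstat.zfs.misc.arcstats` (e.g. `l2_hits`, `l2_misses`, `l2_size`, `l2_hdr_size`) on a FreeNAS box:

```python
# Rough estimate of ARC memory consumed by L2ARC headers, plus a hit ratio.
# HDR_BYTES and AVG_RECORD are illustrative assumptions; check your system's
# kstat.zfs.misc.arcstats values rather than trusting these numbers.

L2ARC_BYTES = 1024**4              # ~1TB of L2ARC (two 500GB SSDs)
AVG_RECORD = 64 * 1024             # assumed average cached record size
HDR_BYTES = 70                     # assumed per-buffer header cost held in ARC

headers = L2ARC_BYTES // AVG_RECORD
overhead_mib = headers * HDR_BYTES / 1024**2
print(f"~{overhead_mib:.0f} MiB of ARC spent on L2ARC headers")

# Hit ratio from (hypothetical) l2_hits / l2_misses counters:
l2_hits, l2_misses = 9_500_000, 500_000
ratio = l2_hits / (l2_hits + l2_misses)
print(f"L2ARC hit ratio: {ratio:.1%}")
```

The takeaway matches the advice above: watch the header overhead and hit ratio rather than obeying a fixed ratio, since smaller average record sizes inflate the RAM cost of a big L2ARC considerably.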