Best use for 2x Spare NVMe SSDs? Mirrored ZIL or ZIL+L2ARC?

boredandlazy

Dabbler
Joined
Jul 16, 2023
Messages
11
I've always used QNAP devices, but I've decided to move to a more DIY solution like TrueNAS CORE, and now I'm a little confused by all the information available regarding the various logs, caches, etc.
I have 6x SSDs in my build (2 will be used for a mirrored boot drive) and I'm about to go through the installation process, so I'd like tips on how I should utilise the 4 spare drives. I plan on using 2x in a mirror to store apps (MySQL etc.) and a single VM, which leaves me with 2 drives for other purposes.
I'm wondering if I'd be better off using both for a mirrored ZIL or 1 for a ZIL/SLOG and 1 for L2ARC?
I was comfortable enough with a mirrored ZIL but then I read that if you have 64+GB of RAM then L2ARC is helpful. I of course happen to have 64GB of RAM installed.

The use case for my NAS is a large storage pool on HDDs which will host personal files and media files for streaming around the house, whilst the SSD storage pool is for a Windows Server VM and various other services/apps.

So, which option would people recommend I go for?

Thanks. :)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I was comfortable enough with a mirrored ZIL but then I read that if you have 64+GB of RAM then L2ARC is helpful. I of course happen to have 64GB of RAM installed.
It's very likely that your disks are not suitable for SLOG, as they almost certainly lack power loss protection. So, using them does not give you any degree of protection over sync=disabled.
As for L2ARC, it's hard to say in the abstract. If the ARC hit rate were low, with the ghost list hit rate significantly higher than zero, L2ARC may help. But... if the L2ARC is "as fast as" your pool, you may end up losing performance, as you'd be eating up ARC with L2ARC pointers. This is a good trade-off if the L2ARC is much faster than the pool, but a lot more circumstantial in this case. The L2ARC read path is still faster than finding things on the pool, but the speedup may not compensate for the degradation imposed by restricting ARC space.
You could also just add the SSDs as another mirror vdev to the pool, for more capacity and (conceptually, at least) performance.

Put more succinctly, just because you have two SSDs, that doesn't mean you have to use them.
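If you want to put numbers on the "hard to say in the abstract" part before deciding, the ARC statistics are easy to pull from the shell. A minimal sketch (the sysctl names below are the FreeBSD/CORE ones; SCALE exposes the same counters under /proc/spl/kstat/zfs/arcstats):

    arc_summary    # overall hit ratio, ARC size, and an L2ARC section if one is present
    sysctl kstat.zfs.misc.arcstats.mru_ghost_hits kstat.zfs.misc.arcstats.mfu_ghost_hits

High ghost hits mean reads that would have been served from ARC if it were bigger, which is the scenario where L2ARC tends to pay off.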
 

boredandlazy

Dabbler
Joined
Jul 16, 2023
Messages
11
It's very likely that your disks are not suitable for SLOG, as they almost certainly lack power loss protection. So, using them does not give you any degree of protection over sync=disabled.
As for L2ARC, it's hard to say in the abstract. If the ARC hit rate were low, with the ghost list hit rate significantly higher than zero, L2ARC may help. But... if the L2ARC is "as fast as" your pool, you may end up losing performance, as you'd be eating up ARC with L2ARC pointers. This is a good trade-off if the L2ARC is much faster than the pool, but a lot more circumstantial in this case. The L2ARC read path is still faster than finding things on the pool, but the speedup may not compensate for the degradation imposed by restricting ARC space.
You could also just add the SSDs as another mirror vdev to the pool, for more capacity and (conceptually, at least) performance.

Put more succinctly, just because you have two SSDs, that doesn't mean you have to use them.

Thanks. I'm guessing that when you refer to disks with power loss protection you're referring to the SSDs where the SLOG would be stored? Not the main storage pool which is populated with WD RED Plus Drives?

I'm still just getting my head around the change in philosophy moving from EXT4 on QNAP to ZFS.

One reason I always found a read cache necessary on QNAP was that it made browsing through the media files on the HDD RAID pool much faster, but I'm guessing this will be handled natively by the ARC when using ZFS? Or is this a case where L2ARC would probably be beneficial?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Thanks. I'm guessing that when you refer to disks with power loss protection you're referring to the SSDs where the SLOG would be stored?
Correct.
main storage pool which is populated with WD RED Plus Drives?
So there's a pool on HDDs? Well, that changes the calculus a bit, but the basics are the same. Measure first, add L2ARC later if needed.
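For what it's worth, L2ARC is also the low-risk experiment: a cache device can be added and removed at any time without affecting pool integrity. Roughly (pool and device names here are just placeholders; the GUI does the same thing):

    zpool add tank cache nvd2    # attach one SSD as L2ARC
    zpool remove tank nvd2       # take it out again if it doesn't help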
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I was comfortable enough with a mirrored ZIL but then I read that if you have 64+GB of RAM then L2ARC is helpful. I of course happen to have 64GB of RAM installed.
Yes, but it also depends on the size of the drives. How big are the SSDs?
The ARC-to-L2ARC ratio for best performance should be around 1:4 if I'm not wrong, which means that with 64GB of RAM you'd want roughly a 250GB drive (excluding overprovisioning).
 

boredandlazy

Dabbler
Joined
Jul 16, 2023
Messages
11
Yes, but it also depends on the size of the drives. How big are the SSDs?
The ARC-to-L2ARC ratio for best performance should be around 1:4 if I'm not wrong, which means that with 64GB of RAM you'd want roughly a 250GB drive (excluding overprovisioning).
All 4 SSDs are 1TB Samsung 980s (Non-Pro).
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
All 4 SSDs are 1TB Samsung 980s (Non-Pro).
From my understanding they are way too big for your amount of RAM... Your ARC would need 25GB in order to reference a single drive.
 

mrpasc

Dabbler
Joined
Oct 10, 2020
Messages
42
Might be worth considering those two 980s as a metadata-only special vdev? Should help with "browsing large directories on the HDD pool".
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Might be worth considering those two 980s as a metadata-only special vdev? Should help with "browsing large directories on the HDD pool".
Well, depending on your pool layout, maybe... their true value in such a role would be, imho, the large space for small files, which negatively impact performance on HDDs.

But it really depends on your configuration, since once they are in place they are an essential part of the pool, and losing them (as a vdev) means losing the pool (unlike L2ARC). Generally you want the same level of redundancy as your other vdevs, and since you only have 2 of them if I understand right, anything beyond RAIDZ1 or a 2-way mirror for your HDD pool isn't an option.
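For reference, the commands behind that would look roughly like this (pool, device and dataset names are placeholders); special_small_blocks is the per-dataset knob that lets the special vdev absorb small records as well as metadata:

    zpool add tank special mirror nvd2 nvd3      # becomes an integral part of the pool
    zfs set special_small_blocks=32K tank/media  # optionally send records of 32K or less to the SSDs

The value has to stay below the dataset's recordsize, otherwise everything lands on the SSDs.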

I'd wait for @Etorix's (sorry, wrong ping) @Ericloewe's opinion about those drives though; I'm not an expert regarding L2ARC.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
L2ARC has no special requirements; "fast" is the order of the day. Obviously, low latency, lots of IOPS potential, and lots of bandwidth are positives. Same goes for metadata vdevs, although in that role they must be used in a reliable configuration.

Thanks to the twin magic tricks of DMA and "leeching off of the host's DRAM", DRAM-less NVMe SSDs are surprisingly capable these days, to the point of competing with - and beating - older PCIe 3.0-era SSDs.
 

boredandlazy

Dabbler
Joined
Jul 16, 2023
Messages
11
Thanks everyone for the input thus far. I know a lot of this type of information is available in a lot of different places, but having it concentrated into something related specifically to my situation is helping me learn a lot.

So this is what I now plan to try initially. My HDD pool will be running with asynchronous writes, as it's just media files, photos, documents, etc.
Running asynchronously means I won't need a separate ZIL/SLOG for this pool, so the regular read/write caching in RAM should provide all the performance increase I'd want over regular spinning disks.
The 4 NVMe drives will be set up in either striped mirrors or a RAIDZ1. This will be hosting a VM and MySQL in a jail, so I will have synchronous writes enabled, but with these being NVMe, having the ZIL on the same pool should still perform well enough? In this use case, would RAIDZ1 or striped mirrors provide better performance?
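Roughly speaking, these are the two SSD layouts I'm weighing, as far as I understand them (names are just placeholders, and I'd be building this through the GUI anyway):

    zpool create fast mirror nvd0 nvd1 mirror nvd2 nvd3   # 2x 2-way mirrors, ~2TB usable
    zpool create fast raidz1 nvd0 nvd1 nvd2 nvd3          # RAIDZ1, ~3TB usable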
 

samarium

Contributor
Joined
Apr 8, 2023
Messages
192
I wonder how DRAM-less interacts with ZFS data integrity requirements? Will we see the same problems re RAID controllers caching / thinking they know best?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I wonder how DRAM-less interacts with ZFS data integrity requirements?
It changes very little. Controllers still have internal SRAM for housekeeping and minimal caching. Plus, since they're all using the host's DRAM anyway, I wouldn't be super comfortable, though I'm not sure what's really typical or not.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
The 4 NVMe drives will be set up in either striped mirrors or a RAIDZ1. This will be hosting a VM and MySQL in a jail, so I will have synchronous writes enabled, but with these being NVMe, having the ZIL on the same pool should still perform well enough? In this use case, would RAIDZ1 or striped mirrors provide better performance?
VMs and (small) database transactions are strong pointers to mirrors, not raidz, and sync writes.
Depending on the number of drives, a dedicated SLOG may actually slow you down compared to the ZIL being distributed across the NVMe drives. But a small Optane SLOG would avoid double writes to the data drives.
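If you do test a SLOG later, attaching and detaching one is harmless, so it's easy to benchmark both ways (pool and device names are placeholders):

    zpool add fast log nvd4       # single SLOG; mirror it if you care about the last few seconds of sync writes
    zpool remove fast nvd4        # log devices can be removed at any time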
 