Looking for advice

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Would you recommend the WD Black SN750 as a SLOG? I mean, you said "appropriate SLOG device" but I'm not quite sure what device that is.

No. Consumer-grade SSDs generally lack power loss protection. This is commonly implemented as a few large capacitors integrated into the SSD that allow the SSD controller and flash to run for a fraction of a second after power is lost, and allows the SSD to guarantee that data the OS thinks has been written to the SSD is _actually_ written to the SSD.

If you do not have this feature (or equivalent, such as Optane), then you do not have a SLOG which can guarantee sync writes. There's a list of SSDs over in the Resources section, I believe.

If you are operating under the mistaken impression that SLOG is some sort of write cache, please allow me to disabuse you of the notion. Given the two options of "sync=disabled" or the world's fastest SLOG, "sync=disabled" is always faster. If you do not need sync writes, disabling them is a great performance boost in many cases.
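For reference, sync behavior is a per-dataset ZFS property, and the legal values are `standard`, `always`, and `disabled`. A quick sketch (the pool/dataset name `tank/vmstore` is a placeholder):

```shell
# Inspect the current sync setting on a dataset
zfs get sync tank/vmstore

# Disable sync writes entirely (the "world's fastest SLOG can't beat this" option)
zfs set sync=disabled tank/vmstore

# Restore the default: honor sync requests from applications
zfs set sync=standard tank/vmstore
```

Remember that `sync=disabled` means acknowledged writes can be lost on power failure, which is exactly the trade-off being discussed here.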
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Currently, and only for testing purposes, I have a zvol created on the RAIDZ2 in order to share a block device over iSCSI.

This isn't going to be particularly quick, as block storage such as VMFS demands rapid random I/O, which RAIDZ isn't very good at delivering - especially if it's also trying to fulfill requests from four other datasets against the same physical disks. Putting some of those 6x1TB SSDs to work, on the other hand, should give you massively improved read results, and better writes (assuming they aren't QLC NAND such as a Samsung QVO).

Everything works almost as expected. The only thing I need to review is that when I restart the VMware ESXi server, the iSCSI configuration stops working, but this is a topic for another forum :)

You might be pleasantly surprised how many VMware nerds lurk around here. It's definitely not expected for a restart of the hypervisor to make the iSCSI extent unreachable - the other way around, certainly, but ESXi should rescan the bus and remount on boot.

I'm having fun with my homelab. It's true that I don't want to reinstall a VM every month, but if I lose anything, it shouldn't be a big problem. Also, I have backups of the VMs, so I'm not that reckless :P

To be honest, in this scenario, I don't believe you "need" a SLOG or sync writes. By your own admission, you're not storing data here that you can't afford to lose. It's a homelab, you're willing to rebuild, and you even have backups you can restore from. That shows an excellent understanding of your own personal risk tolerance, so leaving sync=standard set may just be the best option here, rather than picking up a SLOG device at a premium.

If you do not have [in-flight PLP] (or equivalent, such as Optane), then you do not have a SLOG which can guarantee sync writes.

Well, you probably still do - it's just only marginally better than putting said sync writes on the in-pool ZIL itself. Unless the device is lying about sync writes (glares at his old OCZ Vertex 2) then it's still the same "safe, but slow" as writing to your pool devices.
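If you do later pick up a PLP-capable device, attaching it as a dedicated log vdev is a one-liner; a sketch, where the pool name `tank` and device path are illustrative:

```shell
# Attach a dedicated log (SLOG) device to an existing pool
zpool add tank log /dev/nvme0n1

# SLOGs can also be mirrored for safety:
#   zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

# A log vdev can be removed later without data loss
zpool remove tank /dev/nvme0n1
```

Either way, the pool falls back to the in-pool ZIL when no log vdev is present, so you lose speed, not safety.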

There's a list of SSD's over in the Resources section, I believe.

Not yet; maybe I should work on that.

 

faktorqm

Dabbler
Joined
Jan 18, 2019
Messages
25
Thanks all for the input.

I will be totally honest with you because you deserve it: I know that SLOG is not a write cache, but I only know it because I've read posts and guides written by you and other gurus on this forum repeating the same sentence over and over again. It's not something I truly understand; it's just something I know it's not, because you've said so many times (and I choose to believe without reading a ZFS book).

Would setting the NVMe disk as L2ARC have any positive effect on performance? As far as I understand, it would not, because I have more than 1GB of RAM per TB (11TB RAIDZ2 + 3TB mirrors, 14TB in total, and 32GB RAM). Is this right?
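For what it's worth, the arithmetic behind that rule of thumb checks out; a quick sketch using the figures from the post:

```shell
# Rough check of the "1GB of RAM per TB of pool" rule of thumb
pool_tb=14   # 11TB RAIDZ2 + 3TB of mirrors
ram_gb=32

awk -v ram="$ram_gb" -v tb="$pool_tb" \
    'BEGIN { printf "%.2f GB of RAM per TB of pool\n", ram / tb }'
# prints "2.29 GB of RAM per TB of pool"
```

At roughly 2.3GB per TB, the system comfortably clears the guideline, so RAM is not starved, and an L2ARC would mostly consume ARC headers for little gain at this scale.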

I can grab a used Optane for 32€. Just in case it helps.

@HoneyBadger
It's just a VMware bug somewhere. There are some workarounds. But yes, it should work "automagically" XD
 