SOLVED 1 SSD as SLOG for two separate pools?

fahadshery

Contributor
Joined
Sep 29, 2017
Messages
179
Hi,

I have been doing some benchmarking on my Dell R720. I can clearly see the massive difference a ZIL drive makes when using NFS shares vs iSCSI.
I have two pools that I would ideally like to add a ZIL to, but I have no spare slots for two separate SSDs in the server. So I have one SSD that I could use as a ZIL.

The question is: is it even OK to do that? What are the pitfalls? And if it's OK, how do I do it?

I added the zil by:

Code:
zpool add -f [pool name] log /dev/da[x]


As I mentioned above, I added one SSD to my striped-mirror (2 x 2) pool running SAS hard drives. I would love to partition it and add the other half of this SSD to the second 2 x 2 pool, so that I could use sync=always on both (the second pool hosts Proxmox VMs and the first pool runs a Postgres DB).

Many thanks
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Presumably you mean a SLOG drive. There is always a ZIL, and it is part of your pool.


Be aware that you need an appropriate drive for use as a SLOG. Adding a random SSD without proper characteristics, including power loss protection, does almost NOTHING for you. It's like driving around in a car with air bags, thinking they'll save you, without bothering to fasten your seatbelt.



The fastest ZIL access is always to disable sync writes. The reason to place the ZIL on a SLOG drive is to speed up the horribly slow in-pool ZIL mechanism. But people sometimes get all bollixed up by the fact that you can also just turn off sync writes. Disabling sync writes is ALWAYS faster than ANY sync write. If you're not going to implement the sync write mechanism correctly, with a PLP SSD or other "safe" mechanism, you should just disable it.
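The sync write behavior is controlled per dataset via the ZFS `sync` property; a minimal sketch, with hypothetical dataset names (`tank/vms`, `tank/scratch`):

```shell
# Hypothetical dataset names; adjust to your pool layout.
zfs get sync tank/vms               # show the current setting (default: standard)

zfs set sync=always tank/vms        # force every write through the ZIL;
                                    # this is where a proper PLP SLOG pays off

zfs set sync=disabled tank/scratch  # fastest, but the last few seconds of
                                    # writes can be lost on power failure
```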

You should definitely NOT be adding daX devices to your pool, as the OS is not guaranteed to retain your preferred device ordering. You can use geom to assign partition labels which are then safe to use as long as they are unique.

The main downside to a shared SLOG device is that there's contention for the device.
 

fahadshery

Contributor
Joined
Sep 29, 2017
Messages
179
Presumably you mean a SLOG drive. There is always a ZIL, and it is part of your pool.

ok thanks. Yes, that's what I meant.
Be aware that you need an appropriate drive for use as a SLOG. Adding a random SSD without proper characteristics, including power loss protection, does almost NOTHING for you. It's like driving around in a car with air bags, thinking they'll save you, without bothering to fasten your seatbelt.


Do you have any recommendations for a SAS3 SSD? I got a 12 Gb/s one:
Code:
=== START OF INFORMATION SECTION ===
Vendor:               NETAPP
Product:              X447_S1633800AMD
Revision:             NA04
Compliance:           SPC-4
User Capacity:        800,166,076,416 bytes [800 GB]
Logical block size:   512 bytes
Physical block size:  4096 bytes
LU is resource provisioned, LBPRZ=1
Rotation Rate:        Solid State Device
Form Factor:          2.5 inches
Logical Unit id:      0x5002538a756075d0
Serial number:        S20LNWAG600735
Device type:          disk
Transport protocol:   SAS (SPL-3)
Local Time is:        Sun Dec 26 14:22:38 2021 GMT
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Enabled
Read Cache is:        Enabled
Writeback Cache is:   Disabled

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK


You should definitely NOT be adding daX devices to your pool, as the OS is not guaranteed to retain your preferred device ordering. You can use geom to assign partition labels which are then safe to use as long as they are unique.

The main downside to a shared SLOG device is that there's contention for the device.
I tried to find the device by id but I don't know how to do it. I re-added the SLOG using the GUI, with a warning, but it is showing as added:
[screenshot of the pool topology showing the log device]


How do I use geom? Any example of using it, please?

thank you for taking the time to answer.
 

fahadshery

Contributor
Joined
Sep 29, 2017
Messages
179
And the question remains: since I don't have any extra slot available to put in another SLOG, can I use one SSD, split it in half, and assign it to two pools?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
ok thanks. Yes, that's what I meant.

Do you have any recommendations for a SAS3 SSD? I got a 12 Gb/s one:

I'm far too lazy to take the time to do the research to find out what a random NetApp SSD might be doing under the covers. The upside is that NetApp usually doesn't "cheap out", so I'd say your chances are north of 50/50 that it has PLP etc. The usual suspects are stuff like the Intel datacenter SSDs (S3710 etc.). A lot of the true SAS stuff is likely to have PLP, but may be pricey.

how do I use geom? any example of using it pls?

From a blank disk named "da99" (use "camcontrol devlist" to identify your actual disk)

# gpart create -s GPT da99

creates a GPT partition table, then

# uuidgen

It'll be some big ugly hex string, to use as the partition label. Cut and paste that string as needed below.

# gpart add -t freebsd-zfs -l your-ugly-hex-string -s 100G -a 4k da99

adds a FreeBSD ZFS partition to it using a "label" of whatever the uuidgen command generated, size of 100G, aligned to 4K boundaries

I do this stuff regularly (like several times a week for a decade) for UFS filesystems when building virtual machines. The ZFS bits here are ... hypothetical, but should be close to correct. Then add that to the pool. You used

# zpool add -f [pool name] log /dev/da[x]

above which is generally the right-ish strategy for adding a device, but the problem is that daX devices are not nailed down. You instead want something like

# zpool add -f [pool name] log /dev/gptid/your-ugly-hex-string-here

You can then add a second partition of whatever size and apply that to the other pool.
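Spelled out, that second-partition step might look like this (untested sketch; `da99`, `pool2`, the size, and the label string are placeholders):

```shell
uuidgen                        # generate another unique label string

# Add a second 100G freebsd-zfs partition, 4K-aligned, carrying that label:
gpart add -t freebsd-zfs -l your-second-hex-string -s 100G -a 4k da99

# Note: labels set with -l appear under /dev/gpt/, while the partition's
# auto-generated rawuuid (shown by "gpart list") appears under /dev/gptid/.
# Either path is stable across reboots:
zpool add -f pool2 log /dev/gpt/your-second-hex-string
```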

and the question remains.
since I don't have any extra slot available to put in another SLOG. can I use 1 SSD, split it half and assign it to 2 pools?

Well, it was implied in my previous answer that you could do that.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Do you have any recommendations for a SAS3 SSD? I got a 12 Gb/s one:
The best choice for a SLOG would be a fast NVMe drive with PLP; ideally an Optane drive or AIC (even a consumer-grade 900p will do if this is not a critical business use). Contention for a single SAS device is not going to play well.
 

fahadshery

Contributor
Joined
Sep 29, 2017
Messages
179
From a blank disk named "da99" (use "camcontrol devlist" to identify your actual disk)

# gpart create -s GPT da99

creates a GPT partition table, then

# uuidgen

It'll be some big ugly hex string, to use as the partition label. Cut and paste that string as needed below.

# gpart add -t freebsd-zfs -l your-ugly-hex-string -s 100G -a 4k da99

adds a FreeBSD ZFS partition to it using a "label" of whatever the uuidgen command generated, size of 100G, aligned to 4K boundaries

I do this stuff regularly (like several times a week for a decade) for UFS filesystems when building virtual machines. The ZFS bits here are ... hypothetical, but should be close to correct. Then add that to the pool. You used

# zpool add -f [pool name] log /dev/da[x]

above which is generally the right-ish strategy for adding a device, but the problem is that daX devices are not nailed down. You instead want something like

# zpool add -f [pool name] log /dev/gptid/your-ugly-hex-string-here

You can then add a second partition of whatever size and apply that to the other pool.
This has worked beautifully. Thank you so much. Now I have a SLOG for each of my fast striped-mirror pools and it's working great!

Have a great festive season
 

fahadshery

Contributor
Joined
Sep 29, 2017
Messages
179
The best choice for a SLOG would be a fast NVMe drive with PLP; ideally an Optane drive or AIC (even a consumer-grade 900p will do if this is not a critical business use). Contention for a single SAS device is not going to play well.
understood. This is just for my homelab. Not business critical or anything.

Cheers,
 

fahadshery

Contributor
Joined
Sep 29, 2017
Messages
179
above which is generally the right-ish strategy for adding a device, but the problem is that daX devices are not nailed down. You instead want something like

# zpool add -f [pool name] log /dev/gptid/your-ugly-hex-string-here
The only problem I still have is that your-ugly-hex-string-here is not found under /dev/gptid/, so I had no choice but to use the /dev/da13p1 notation.
Not sure why it's not in there.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I wonder if I did that wrong. I'm sorry, but I don't really have time to go exploring. It may be that there's an automatically generated gptid in there. You could check it out with "gpart list" and/or "gpart show". You seem enterprising enough to have a good chance of figuring it out. ;-)
 

fahadshery

Contributor
Joined
Sep 29, 2017
Messages
179
I wonder if I did that wrong. I'm sorry, but I don't really have time to go exploring. It may be that there's an automatically generated gptid in there. You could check it out with "gpart list" and/or "gpart show". You seem enterprising enough to have a good chance of figuring it out. ;-)
You were right: it generates a rawuuid, which you can get from the gpart list command.

Now you can run:
Code:
# zpool add -f [pool name] log /dev/gptid/rawuuid


and that's it!
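For anyone else hunting for the rawuuid, it can be pulled out of the `gpart list` output with a short awk filter; a sketch (the uuid value here is made up):

```shell
# On a live system you would pipe real output:
#   gpart list da13 | awk '$1 == "rawuuid:" {print $2}'
# Demonstrated below on a captured sample line from "gpart list":
sample='   rawuuid: 11111111-2222-3333-4444-555555555555'
uuid=$(printf '%s\n' "$sample" | awk '$1 == "rawuuid:" {print $2}')
echo "$uuid"                  # the string to use under /dev/gptid/
```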

Thank you so much for taking the time to help. I am finally at peace with my pools and performance :cool:

Have a great festive season!
 