What are my options for a cheap SATA SSD for SLOG use?

imsocold

Dabbler
Joined
Dec 13, 2021
Messages
43
I am currently in the process of upgrading my home nas/lab

It has mixed use as a target for backups, a host for videos, and a Plex media server.

I am using an Avoton C2550 with 12 SATA ports, no support for NVMe, and a single PCIe slot that will house a Quadro P400.
The plan is to use one of the SATA ports for an SLOG device.

The pool will consist of 2 vdevs of 4-disk RAIDZ1.

The pool is shared over NFS and AFP.

I have been looking for small, cheap SATA drives with PLP support; I am already way over budget on this build.

Can someone point me in the proper direction?
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Why not have a single VDEV with 8 drives in RAIDZ2?

As to SLOG: I really don't think it makes sense. How much do you know on the subject?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
SLOG won't have any effect unless you are using NFS or iSCSI
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Can someone point me in the proper direction?

Yup. I suggest you go read


and then come back and tell us if any of it seems relevant. If you're not doing VM storage for a different machine, not running databases, not doing critical transactions, etc., you probably do not need a SLOG device. Remember, a SLOG device will significantly slow you down over plain async writes (and before anyone decides to contradict me, recall that the OP said a SATA SLOG).
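
If it helps make the distinction concrete, here's a minimal sketch in generic POSIX terms rather than anything ZFS-specific (the /mnt/tank path is just a placeholder) of the only kind of write that ever touches the ZIL, and therefore an SLOG, versus the plain async writes everything else uses:

```python
import os

# Plain async write: it lands in RAM (ZFS dirty data) and is flushed with the
# next transaction group. The ZIL/SLOG is never involved.
with open("/mnt/tank/scratch/async.bin", "wb") as f:   # placeholder path
    f.write(b"x" * 4096)

# Sync write: the application demands durability before the call returns, by
# opening with O_SYNC or calling fsync(). With sync=standard, these are the
# writes ZFS commits to the ZIL (and to an SLOG, if one exists). NFS COMMITs,
# databases, and hypervisor virtual disks behave like this.
fd = os.open("/mnt/tank/scratch/sync.bin", os.O_WRONLY | os.O_CREAT | os.O_SYNC)
os.write(fd, b"x" * 4096)
os.close(fd)
```

If your workload never issues writes like the second one, an SLOG is just a brick occupying a SATA port.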
 

imsocold

Dabbler
Joined
Dec 13, 2021
Messages
43
Why not have a single VDEV with 8 drives in RAIDZ2?

As to SLOG: I really don't think it makes sense. How much do you know on the subject?

To get the write benefit of striping across 2 vdevs.
I know that the best practice is RAIDZ2, but I'm balancing the risk against the need for more space.
The way I understand it, the write speed of the entire pool would then be limited to the speed of a single drive.

I have read quite a bit on the SLOG, and I understand that I wouldn't see much benefit for what I am doing most of the time (i.e. streaming movies off of Plex), but small synchronous writes would go to the SLOG first before being flushed to disk.

I also understand that this is fairly rough on the SLOG device. That said, any write performance increase is appreciated.
Obviously this may not be necessary in my use case.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
I'll second both @ChrisRJ's and @jgreco's sentiments.

I still wanted one to play with on my block storage / NFS mirror pool. I'm guessing the Intel S3500s got used in a lot of casino slot machine equipment and get routinely replaced when they hit the 85%-wear-remaining point. I picked up a couple of 120GB S3500s for all of $18 each on eBay, sold out of Nevada; they were a pretty regular listing there for a while, though I haven't checked lately. They work, have PFP capacitors, attach via SATA, etc...

Result of my experiments: Didn't make much difference, even for what I thought were heavy NFS workloads. One is still in the NAS, mostly because I haven't powered down to remove it, and the other is now attached to a Raspberry Pi4 via a USB3 enclosure. Save your money!
 

imsocold

Dabbler
Joined
Dec 13, 2021
Messages
43
Yup. I suggest you go read


and then come back and tell us if any of it seems relevant. If you're not doing VM storage for a different machine, not running databases, not doing critical transactions, etc., you probably do not need a SLOG device. Remember, a SLOG device will significantly slow you down over plain async writes (and before anyone decides to contradict me, recall that the OP said a SATA SLOG).

They say that it's not good to meet your heroes.
I actually read your post, and I also read something similar that was posted on the iXsystems blog back in 2015. The issue is twofold: no one is completely clear on exactly when one can benefit from an SLOG, and I have seen quite a few posts telling people they do not need an SLOG for their specific use case. My assumption, and this may be nonsense, is that with such a small number of disks the pool is constrained by the speed of spinning rust, so an SLOG might help with writes. I might be wrong; hell, I'm probably wrong. I am just trying to build a cheapish NAS, but there is no need to be mean about it. I don't do this for a living, and parsing through dozens of threads filled with contradictory information is tedious and time-consuming.

Thank you for your contribution, and I mean that sincerely; without smart people helping out, things are much more difficult than they need to be. But there is simply no reason to be rude.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
my assumption, and this may be nonsense, is that with such a small number of disks the pool is constrained by the speed of spinning rust
You are constrained by the speed of your RAM for async writes and by the speed of spinning rust for sync writes. Only in the latter case will you benefit from an SLOG device.
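
If you want to see that gap on your own pool, a rough sketch like this will do (Python, plain POSIX calls; the path is a placeholder for a dataset on the spinning-rust pool with sync=standard):

```python
import os
import time

PATH = "/mnt/tank/scratch/sync-test.bin"   # placeholder dataset path
BLOCK = b"x" * 4096
COUNT = 2000

def small_writes(force_sync):
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.monotonic()
    for _ in range(COUNT):
        os.write(fd, BLOCK)
        if force_sync:
            os.fsync(fd)        # every write must reach the ZIL before returning
    elapsed = time.monotonic() - start
    os.close(fd)
    return COUNT / elapsed

print(f"async: ~{small_writes(False):,.0f} writes/s (RAM speed)")
print(f"sync:  ~{small_writes(True):,.0f} writes/s (in-pool ZIL, i.e. rust speed)")
```

The second number is the only one an SLOG can improve; the first one it cannot touch.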
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
But there is simply no reason to be rude.

I don't see how that was rude. I pointed you at a good resource, then I explicitly called out the common use cases for SLOG, and then gave you the most critical bit of information to know about SATA SLOG. Please explain how that was rude.

no one is completely clear on exactly when one can benefit from an SLOG

Well, that's really not true. Again, I literally gave you the answer in my reply. If you were too busy trying to be offended by my response to get that, well, I'm sorry. Part of this is that I am not paid to sit here and compose lengthy responses. I do a LOT of heavy lifting by pointing people at the pre-written resources I've bothered to compose in great detail. I often respond to these posts in between other things I'm doing. That may be terse. It's not intended to be rude.

This seems to be a candidate for Postel's Law.
 

imsocold

Dabbler
Joined
Dec 13, 2021
Messages
43
You are constrained by the speed of your RAM for async writes and by the speed of spinning rust for sync writes. Only in the latter case will you benefit from an SLOG device.
I will be running three VMs on this device, and using it as a backup target for 3 Macs.
In your opinion should I just drop the slog?
I plan to run the VMs in a container on a separate SSD.

This is largely academic at this point. So far it seems the consensus is that I should not bother with the SLOG.

Thanks to all for the help.
 

imsocold

Dabbler
Joined
Dec 13, 2021
Messages
43
I'll second both @ChrisRJ's and @jgreco's sentiments.

I still wanted one to play with on my block storage / NFS mirror pool. I'm guessing the Intel S3500s got used in a lot of casino slot machine equipment and get routinely replaced when they hit the 85%-wear-remaining point. I picked up a couple of 120GB S3500s for all of $18 each on eBay, sold out of Nevada; they were a pretty regular listing there for a while, though I haven't checked lately. They work, have PFP capacitors, attach via SATA, etc...

Result of my experiments: Didn't make much difference, even for what I thought were heavy NFS workloads. One is still in the NAS, mostly because I haven't powered down to remove it, and the other is now attached to a Raspberry Pi4 via a USB3 enclosure. Save your money!
Thanks. I’ll save my money.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
three VMs on this device, [...]
In your opinion should I just drop the slog?

If you are running the VM's on the NAS alongside ZFS, then, if the NAS fails, both ZFS and the VM's go out simultaneously. This isn't a problem for ZFS, and usually isn't a problem for VM's.

The problem we are normally trying to avoid with SLOG is that if you have a separate hypervisor, and are not using sync writes (either via in-pool ZIL or via SLOG), then if the filer crashes or reboots, AFTER the hypervisor has written data but BEFORE the filer has committed it to disk, then when the filer comes back up, those writes have disappeared from the VM's virtual disks. This could be harmless (or nearly so), but can get dangerous when the VM was writing important filesystem metadata to its virtual disks. From the VM's point of view, the hard drive not only didn't write data it said had been written, but the old data that used to be there suddenly reappeared, and the VM has no idea that this has transpired. That could be really bad if it was a critical file or critical metadata.

There is a much smaller edge case when you are running VM's ON the NAS. During a crash, there can still be filesystem writes in progress, but the committed activity will "break off" at a transaction group boundary. This will appear similar to a real machine suddenly having been powered off. Since the VM is going to suddenly stop operating (along with the NAS), this only turns into a fsck upon reboot. The SLOG will reduce the "window of loss" here, but unless you're running critical transactions that cannot be lost, like a bank's accounting system, this usually doesn't matter.

In the end, only you can really decide what your tolerance level for loss is. ZIL/SLOG-backed sync writes are only going to SAVE you during crashes (panic, power loss, etc.), but enabling sync writes is going to be an ongoing performance tax you pay with every write.
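
If it helps to visualise that window of loss, here's a toy model in Python. It is emphatically not how ZFS is implemented; it just illustrates txg boundaries versus a synchronous log: whatever is still in the open txg at crash time is gone unless it was also logged.

```python
# Toy model of the "window of loss" above. Not ZFS code -- it only illustrates
# transaction-group boundaries vs. a synchronous log (ZIL/SLOG).

class ToyPool:
    def __init__(self, sync):
        self.sync = sync
        self.disk = []       # data committed at a transaction group boundary
        self.zil = []        # data persisted to the log before acknowledging
        self.open_txg = []   # dirty data still sitting in RAM

    def write(self, record):
        self.open_txg.append(record)
        if self.sync:
            self.zil.append(record)   # the per-write "tax": an extra log commit

    def commit_txg(self):
        # Txg boundary: dirty data reaches the pool, the log can be discarded.
        self.disk.extend(self.open_txg)
        self.open_txg.clear()
        self.zil.clear()

    def crash(self):
        # RAM vanishes; only the pool plus replayed log records survive.
        self.disk.extend(self.zil)
        self.open_txg.clear()
        self.zil.clear()
        return list(self.disk)

for sync in (False, True):
    pool = ToyPool(sync)
    pool.write("A"); pool.write("B")
    pool.commit_txg()          # A and B are safe either way after this point
    pool.write("C")            # crash strikes before the next txg boundary
    print(f"sync={sync}: survives -> {pool.crash()}")
# sync=False: survives -> ['A', 'B']        (C silently lost)
# sync=True:  survives -> ['A', 'B', 'C']   (C replayed from the log)
```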
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Since the VM is going to suddenly stop operating (along with the NAS), this only turns into a fsck upon reboot.
Or just run ZFS in the VMs as well. I like managing filesystems with ZFS, so I put some ZFS in my ZFS so I could not get headaches while I don't get headaches.

Come to think of it... ZFS delegated datasets but for VMs. I wonder how difficult that would be... I'm guessing the only major work would be this new guest driver, the host side would be fairly straightforward.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Or just run ZFS in the VMs as well

This is impractical for those of us who run highly compartmentalized networks. ZFS on FreeBSD is generally thought to be unstable with less than 1GB of memory. About 2/3rds(?) of the VM inventory here are 256MB FreeBSD systems, many doing infrastructure-type things like routing or DHCP.

But it's a good way to justify more RAM. :smile:
 

imsocold

Dabbler
Joined
Dec 13, 2021
Messages
43
If you are running the VM's on the NAS alongside ZFS, then, if the NAS fails, both ZFS and the VM's go out simultaneously. This isn't a problem for ZFS, and usually isn't a problem for VM's.

The problem we are normally trying to avoid with SLOG is that if you have a separate hypervisor, and are not using sync writes (either via in-pool ZIL or via SLOG), then if the filer crashes or reboots, AFTER the hypervisor has written data but BEFORE the filer has committed it to disk, then when the filer comes back up, those writes have disappeared from the VM's virtual disks. This could be harmless (or nearly so), but can get dangerous when the VM was writing important filesystem metadata to its virtual disks. From the VM's point of view, the hard drive not only didn't write data it said had been written, but the old data that used to be there suddenly reappeared, and the VM has no idea that this has transpired. That could be really bad if it was a critical file or critical metadata.

There is a much smaller edge case when you are running VM's ON the NAS. During a crash, there can still be filesystem writes in progress, but the committed activity will "break off" at a transaction group boundary. This will appear similar to a real machine suddenly having been powered off. Since the VM is going to suddenly stop operating (along with the NAS), this only turns into a fsck upon reboot. The SLOG will reduce the "window of loss" here, but unless you're running critical transactions that cannot be lost, like a bank's accounting system, this usually doesn't matter.

In the end, only you can really decide what your tolerance level for loss is. ZIL/SLOG-backed sync writes are only going to SAVE you during crashes (panic, power loss, etc.), but enabling sync writes is going to be an ongoing performance tax you pay with every write.
I don't actually care about the VMs that much outside of their use for serving media; they can be rebuilt relatively easily. The reason I am running ZFS is to prevent silent corruption of my music collection, which includes hard-to-find audio from 25 years ago, specifically to prevent bit rot. The only reason I am running the VMs on the same box is so that they don't need to communicate over the network. Apparently my belief that the SLOG would increase write performance was out of line, so it's no longer part of the plan.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
In that case, I think you're good. Skip the SLOG. Let ZFS keep your bits safe. Running data for VM's over the network may not be as huge a deal as you think; most media systems don't pull in content from their storage at gigabit-plus speeds, and even if they do, it's probably not a huge bottleneck. Many media streamers are still limited to 100Mbps wired ethernet... I find it amazing how many hypervisors with 10G+ networking still only average less than 1Gbps speeds.

And 10G networking is a fun upgrade too, not even that expensive these days. Don't fear the 10G:

 

imsocold

Dabbler
Joined
Dec 13, 2021
Messages
43
In that case, I think you're good. Skip the SLOG. Let ZFS keep your bits safe. Running data for VM's over the network may not be as huge a deal as you think; most media systems don't pull in content from their storage at gigabit-plus speeds, and even if they do, it's probably not a huge bottleneck. Many media streamers are still limited to 100Mbps wired ethernet... I find it amazing how many hypervisors with 10G+ networking still only average less than 1Gbps speeds.

And 10G networking is a fun upgrade too, not even that expensive these days. Don't fear the 10G:

I would love to do 10G, but it would blow my budget. The home switch only has one 10G link, which should be used for the uplink but which I repurposed for my ESXi box. The motherboard I am using for this build is an Avoton C2550 with only 1-gig links and a single PCIe slot that I am going to use for my hardware-accelerated transcoding card.
So at the very least, going 10-gig would require a new switch and a different motherboard.
I know I don't need 10G, but for those moments when I want to transfer large amounts of data, I get impatient.
That said, given the brain trust here, does anyone know if those PCIe bifurcation cards with OCuLink support anything other than SAS? My case has room for an additional card, but the motherboard only has one slot. The motherboard supports PCIe bifurcation, though, which leads me to believe that with the right resources I may be able to stuff an additional card in the case.
 

imsocold

Dabbler
Joined
Dec 13, 2021
Messages
43
Never mind on the bifurcation bit. Typing that out gave me the right keyword to Google it myself, and what I found was that I may be able to send everything back and just expand my existing box using a bifurcation card and an external SATA controller.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Not clear on what cards you're looking at.

OCuLink is a bit of an ill-defined distraction in the first generation gear, IMO.

If you look through the Bitcoin GPU mining forums, you find lots of intriguing hardware for bifurcation. The weirdest stuff too...


I guess that's handy if you only need x1 to each of your GPU's. :smile:

If you have a Supermicro or AsRock Rack board, I think they both support x4x4x4x4 bifurcation on the Avotons, so there's probably some breakout that would work for that. I don't have any handy though.
 

imsocold

Dabbler
Joined
Dec 13, 2021
Messages
43
Not clear on what cards you're looking at.

OCuLink is a bit of an ill-defined distraction in the first generation gear, IMO.

If you look through the Bitcoin GPU mining forums, you find lots of intriguing hardware for bifurcation. The weirdest stuff too...


I guess that's handy if you only need x1 to each of your GPU's. :smile:

If you have a Supermicro or AsRock Rack board, I think they both support x4x4x4x4 bifurcation on the Avotons, so there's probably some breakout that would work for that. I don't have any handy though.

The entire reason I was forced to upgrade was that I wanted more SATA ports than my motherboard supported, and my case needed additional 3.5" bays.
The P400 is an x4 card sitting in an x16 slot.

To get the additional SATA ports, I had to get a new motherboard and case.
Had I known about bifurcation, I would have just upgraded to a microATX case, split the PCIe lanes up, and saved the money spent on a new motherboard, RAM, and PSU.

If the sites selling them weren't so dodgy, I would abort the current upgrade and go for it.

This is what I am looking at:


With this, you could basically run your existing PCIe slot out to external enclosures: one for the hardware acceleration card and another for an additional 6 SATA ports. This would be awesome for extending the array in the future, or even as temporary expanded storage during an upgrade.

Instead of ZFS sending over the 1-gig link, it would go at PCIe speed to an external enclosure.

Sorry for the spam. I don’t see how to edit posts so I am typing as I’m working this out.
 