I have no experience with that type of card, so you'll have to get someone else's opinion on it.
I wonder if an M.2 SSD will work.
It is a SATA controller. It should work, but I can't say how reliable or fast it would be.

I don't buy anything from China. I just found this one on eBay:
https://www.ebay.com/itm/SEDNA-PCI-...091892&hash=item3625784bcf:g:ugQAAOSwZaNaBYlg
I don't know of any that are optimal for a SLOG. You need power-loss protection, high write endurance, and low latency. If you can find a drive with those things, I don't see why it would not work.
From what people are saying, PLP is built into the Intel S3700 SSD; I just have to find a home for it. Either suck it up and buy a PCIe Intel S3700 SSD, or look for a PCIe controller that can work with the S3700 for low latency.
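If the S3700 does end up as the SLOG, attaching it to an existing pool is straightforward with the standard `zpool` CLI. This is just a sketch; the pool name `tank` and device names `da6`/`da7` below are placeholders, not anything from this thread.

```shell
# 'tank' and 'da6'/'da7' are placeholder names; substitute your own
# pool and device. Attach a single dedicated log (SLOG) device:
zpool add tank log da6

# Or, for extra safety, attach a mirrored pair as the SLOG:
# zpool add tank log mirror da6 da7

# Confirm the log vdev shows up under a "logs" section:
zpool status tank
```

Note that a SLOG can also be removed again with `zpool remove`, so trying one out is low-risk.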
Only if sync=disabled, or sync=standard and using iSCSI with ESXi in the OP's scenario. OP stated that this setup is for a small business... I wouldn't recommend running with either mode, only sync=always, hence the recommendation for a good SLOG device. The pool doesn't have close to enough disks that an on-pool ZIL would be faster than a SLOG. I'd still highly recommend a good SLOG... just not sure about your choice of that PCI-e SATA card jobby.
The SLOG causes a performance reduction.

"Only if sync=disabled or sync=standard and using iSCSI with ESXi in the OP's scenario."
That makes no sense. The SLOG stops causing a performance reduction with sync=disabled, but it also becomes pointless.

"OP stated that this setup is for a small business...I wouldn't recommend running with either mode, only sync=always."
That ought to be dependent on the risk/reward calculation.

"The pool doesn't have close to enough disks that an on-pool ZIL would be faster than a SLOG."
That also makes no sense. There is literally no situation where the in-pool ZIL would be faster than a SLOG.
https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/
2. I am playing catch-up and learning about SLOG devices and technology on here, as I am new to FreeNAS, so given my business case I am not sure whether or not a SLOG device is required.
First point... you're taking what I said out of context by omitting what you said: that it reduces performance.
While I suppose that's true to an extent, if you don't use it (meaning having the ZFS options set the way I said), then you're potentially putting company data at risk, with the only benefit being increased write speed. You're reading my text wrong, or I'm no English major, or both.
Second point... that's ultimately up to the business leaders.
Third point... incorrect. If you had a very large pool of disks in a mirrored-vdev setup, say 44 10k or 15k SAS disks, and had an S3700 SLOG (or better yet, a large pool of SSDs), then an SSD SLOG would be slower than an on-pool ZIL (with a PCIe NVMe drive, maybe not). The point is, it's definitely possible, but unlikely in most scenarios, and that was my point.
Yes, and I've agreed with this the whole time... I'm sure I could have worded my sentence better. End of story.

I literally quoted what you quoted of mine. I did not omit it.
So let me be extra-crispy clear here. The SLOG will cause a performance reduction if it is used, and if it isn't used, then there is no frickin' point in having a SLOG device, so better not to spend the money.
Agreed. It may be beneficial in this user's case to think about creating multiple zvols: one with sync=always for business-critical apps like his/her Exchange server, etc., and one with sync=disabled, for increased speed where the data can tolerate slightly higher risk, although the former would require a SLOG or at least the in-pool ZIL.

The benefit of increased write speed is very significant to most environments. If I can tell you "I can give you a massive speed increase, while decreasing your cost, for a modest increase in risk," many environments take that.
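The multiple-zvol idea above can be sketched with the standard `zfs` CLI. The pool name `tank` and the zvol names/sizes here are made-up placeholders, not something the OP specified.

```shell
# Placeholder names and sizes; adjust for your own pool and VMs.
# Business-critical zvol: every write must reach stable storage
# (the SLOG, if present) before it is acknowledged.
zfs create -V 200G -o sync=always tank/vm-exchange

# Lower-priority zvol: writes are acknowledged from RAM and flushed
# later. Faster, but in-flight data is lost on a crash or power cut.
zfs create -V 500G -o sync=disabled tank/vm-scratch

# The sync property can also be changed on an existing dataset:
zfs set sync=always tank/vm-exchange

# Verify the settings:
zfs get sync tank/vm-exchange tank/vm-scratch
```

Because `sync` is a per-dataset property, this split costs nothing to set up and can be changed at any time without recreating the zvols.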
Again, I agree, and this is how my company works, but you're generalizing. I was speaking to the fact that this is a 20-person shop, and it may be wise to talk this over with somebody so the risks/rewards are understood by more than just the "IT guy." No offense, OP ;-)

Actually, it's usually up to the IT people and their manager; it usually isn't something that reaches the CTO.
Yes, I also agree with this. I absolutely should have left that comment out, because no correct setup would ever be configured like that. I was merely alluding to the fact that it's possible to have an in-pool ZIL that would perform decently without a SLOG if you had a crazy setup. You said, "There is literally no situation where the in-pool ZIL would be faster than a SLOG." While there is no reasonable solution where you'd likely ever want this, it is possible, and that was the point. People do dumb things with their systems all of the time; you of all people can attest to that. Again... I should have left this out, as it doesn't apply here.

So yes, there are edge cases where you're doing something idiotic, but even for the SAS HDDs you would find that not to be true. Basically you need to create a situation where the SLOG device is actually *slower* than the individual pool devices, but who would do that?

The write path for the SLOG device is optimized for SLOG, whereas the write path to the in-pool ZIL follows the whole pool write path and is not optimized for the needs of the ZIL. So if you create a pool of mirrored S3700 SSDs, try to use the in-pool ZIL on that, and then compare it to a separate SLOG device, you'll still find that the SLOG works better, because you're not going through the general pool write path and are instead using the optimized SLOG write path. You actually have to go full-stupid and move on to using a crappy SSD or HDD that's slower than your S3700 pool. You could also put your SLOG on magtape. But all of these cases are idiotic.
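One way to see the pool-write-path vs. SLOG-write-path difference for yourself is to benchmark small synchronous writes against the pool before and after attaching a log device. A sketch using fio follows; the directory `/mnt/tank/fiotest` is a hypothetical pool mountpoint, and this obviously needs a real pool to run against.

```shell
# Run once with no log device, then again after 'zpool add tank log <dev>',
# and compare the reported IOPS/latency. Small 4 KiB O_SYNC writes
# approximate a sync=always workload.
fio --name=syncwrite \
    --directory=/mnt/tank/fiotest \
    --rw=write --bs=4k --size=256m \
    --sync=1 --ioengine=psync \
    --numjobs=1 --runtime=30 --time_based
```

With `--sync=1`, every write carries O_SYNC, so each one must be committed to the ZIL (in-pool or SLOG) before fio can issue the next; that is exactly the path a good SLOG accelerates.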
So basically, when there is a request to write to disk, the request can be synchronous or asynchronous.
Now, I am no expert on what decides which type, but it looks like the running application requests the type. In ESXi, if you use NFS, the ESXi kernel will always request synchronous writes. If using iSCSI, this is not the case; I believe the program in the VM determines the request type.

Anyway, for FreeNAS, a synchronous request means that FreeNAS must wait until the write request is written to permanent media before it returns the OK to the requester. This means the requester (in your case, ESXi) must wait for the acknowledgement. If you are waiting on spinning disk, your wait can be slow to very slow. If it is an asynchronous request, then FreeNAS can reply after it has the request cached in memory and does not have to wait until it is actually written to disk. This greatly speeds things up, as you can imagine.

The problem comes in when you have a stoppage of some kind (NAS lockup, power loss, etc.). It is possible to lose the requests that are still in memory before they were written to disk. This may or may not be a big deal, depending on what was in memory. Since you are planning to run the entire VM from your NAS, I would suggest you tell FreeNAS to always use synchronous writes on your iSCSI drive, and then use a fast SLOG device to speed up the writes. How FreeNAS actually does the writes I will leave up to you to research (it actually uses a ZIL before data is written to disk). Anyway, the decision to use a SLOG to hold your ZIL comes down to whether you are okay with possible corruption or loss of data if there is a NAS lockup or power failure, etc.
Once again, I am not an expert on exactly all the requests that go on and such, but you get the big picture of what can happen if you choose not to use synchronous writes.
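The synchronous-vs-asynchronous distinction described above can be felt from any shell, independent of ZFS. This is just an illustrative sketch (the file names are made up): with GNU dd, `oflag=dsync` forces each block to reach stable storage before the next is issued, much like sync=always, while the plain run lets the kernel buffer the writes, like an asynchronous request.

```shell
# Buffered write: the kernel acknowledges from RAM and flushes later
# (analogous to an asynchronous request / sync=disabled).
dd if=/dev/zero of=/tmp/buffered.bin bs=1M count=16 2>/dev/null

# Synchronous write: each block must hit stable storage before dd
# continues (analogous to sync=always). Expect this run to be
# noticeably slower, especially on spinning disk.
dd if=/dev/zero of=/tmp/stable.bin bs=1M count=16 oflag=dsync 2>/dev/null

ls -l /tmp/buffered.bin /tmp/stable.bin
```

Timing the two runs (e.g. with `time`) makes the cost of waiting for stable storage obvious, and that cost is exactly what a fast SLOG is meant to shrink.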