FreeNAS Mini XL+ build

Plutotype

Cadet
Joined
Aug 8, 2019
Messages
3
Hi folks,
FreeNAS Mini XL+
Can I put an Intel Optane drive into the M.2 x2 slot (SLOG) and another Intel Optane into the PCIe x4 slot (L2ARC) on the mobo?
Can I use 8 x WD Red Pro 12TB drives? Any overheating concerns with these?
Where is the OS installed? On the SATADOM (orange colour)?
The mobo offers another three SATA ports; can I use them for three 2.5" SSDs and create a separate ZFS pool out of them? (I'd need to mount the third in the chassis somehow.)
Thanks
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Can I put an Intel Optane drive into the M.2 x2 slot (SLOG) and another Intel Optane into the PCIe x4 slot (L2ARC) on the mobo?
Given that SLOG is often a bottleneck in sync writes, I would flip those two around: use the x4 slot for a high-performance Optane SLOG, and the M.2 for a larger-capacity L2ARC. Performance there is not as critical, so Optane likely isn't necessary, nor the most cost-effective choice.
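
If you end up attaching them from the CLI rather than the UI, it's one command each. A rough sketch, assuming the pool is named "tank" and the Optanes show up as nvd0/nvd1 (check nvmecontrol devlist for the real names):

    # attach one NVMe device as SLOG and the other as L2ARC (names assumed)
    zpool add tank log nvd0
    zpool add tank cache nvd1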
 

Plutotype

Cadet
Joined
Aug 8, 2019
Messages
3
Yes, it depends on the workload scenario (write/read operations ratio). In my home 10GBASE-T network the bottleneck would still be the 10GbE link, due to its maximum transfer speed of about 1000 MB/s for a single client. Even a cheap 800p Optane would help with writes when there are 8 HDDs in RAIDZ2 behind it at approx. 600 MB/s (as found in the STH review). The solution is to use link aggregation on the client, on the network switch, and on the FreeNAS Mini XL+ (I don't know if the NAS supports link aggregation). 2x 10GbE would allow theoretical 2000 MB/s speeds (single client-to-NAS scenario).
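
For reference, on the FreeBSD side an LACP lagg would look roughly like this (the ix0/ix1 interface names and the address are just assumptions, the switch ports need LACP configured too, and in FreeNAS you would normally set this up through the network UI rather than the shell):

    # create a lagg interface and bond the two 10GbE ports with LACP (names assumed)
    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport ix0 laggport ix1 192.168.1.10/24 up

One caveat, though: LACP balances per flow, so a single TCP stream still tops out at one link's speed; you'd need something like SMB multichannel or several parallel transfers to actually approach 2000 MB/s.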
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Given that SLOG is often a bottleneck in sync writes, I would flip those two around: use the x4 slot for a high-performance Optane SLOG, and the M.2 for a larger-capacity L2ARC. Performance there is not as critical, so Optane likely isn't necessary, nor the most cost-effective choice.

That seems a curious statement.

Sync writes are by definition going to be of a lock-step nature, which means they're always going to be slow-ish. Reading from the stuff above, if the M.2 slot is x2 at PCIe 3.0, that's still nearly 2 GBytes/sec, which is like two saturated 10Gbps Ethernet links, and that just isn't likely to happen. This suggests to me that the x2 is perfectly fine for SLOG. You might reduce latency just a bit by going to x4, but I'm not buying it unless someone demonstrates that with actual numbers.
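
Rough arithmetic for anyone who wants to check the numbers (assuming PCIe 3.0 and its 128b/130b encoding):

    PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~985 MB/s usable per lane
    x2 link:  2 x 985 MB/s = ~1.97 GB/s
    10GbE:    ~1.2 GB/s of payload per link after protocol overhead

So an x2 link is in the same ballpark as two saturated 10GbE links, and sync write traffic is unlikely to ever get there.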

On the other hand, L2ARC operations tend to move data in large chunks, and I can see more benefit to an x4 device there. If nothing else, it seems like keeping the x4 slot available for a multi-M.2 card like the AOC-SHG3-4M2P being discussed in another thread would be a better use of the slot.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Even a cheap 800p Optane would help with writes when there are 8 HDDs in RAIDZ2 behind it at approx. 600 MB/s (as found in the STH review).

Such a setup suggests database or VM storage usage, in which case you shouldn't be using RAIDZ2. The STH review is kinda strange since it doesn't seem to envision (or understand?) the common use cases. If you have 8 HDDs and a SLOG, they should be in mirrors, and you should be able to squeeze up to around 750-900 MBytes/s writes out of the raw pool, and ~1600-2000 MB/s reads. Adding a SLOG will reduce the writes there somewhat, and in that case you want low latency.
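
For eight disks, that means striping four 2-way mirrors. A sketch, with pool and device names assumed:

    # four striped 2-way mirrors from eight disks, plus an NVMe SLOG (names assumed)
    zpool create tank \
        mirror ada0 ada1 \
        mirror ada2 ada3 \
        mirror ada4 ada5 \
        mirror ada6 ada7 \
        log nvd0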

RAIDZ2 is more for large file archival storage, and most of the time when people are busy applying a SLOG to a RAIDZ, it's either because they don't understand that the SLOG is not a write cache, or they don't understand the other performance issues or pool design issues related to RAIDZ and block style storage, which I've discussed endlessly here over the years.
 

Plutotype

Cadet
Joined
Aug 8, 2019
Messages
3
The scenario would not be VM storage or databases, nothing enterprise. I just want to use Optane SSDs to saturate a 10GbE network for large file copying, sharing, zip/unzip, and streaming over 10GbE, or possibly 2x 10GbE using link aggregation. With 64GB RAM, there could be a 118GB 800p for SLOG and a 280GB 900p for L2ARC instead of the 2.5" SATA cache SSDs which iXsystems builds into the XL+ model.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The scenario would not be VM storage or databases, nothing enterprise. I just want to use Optane SSDs to saturate a 10GbE network for large file copying, sharing, zip/unzip, and streaming over 10GbE, or possibly 2x 10GbE using link aggregation. With 64GB RAM, there could be a 118GB 800p for SLOG and a 280GB 900p for L2ARC instead of the 2.5" SATA cache SSDs which iXsystems builds into the XL+ model.

Honestly I can't figure out what the hell this means. I've read it over several times and it's entirely contradictory when compared to facts and reality.

SLOG is not a write cache. Adding a SLOG will ALWAYS cause a pool to be slower than it would be if you simply used async writes. If you want fastest writes, use ZFS defaults (which will only do sync for metadata updates) or turn off sync entirely. You CAN NOT GET ANY FASTER THAN THIS. SLOG will only slow you down.
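
To be concrete, sync behavior is a per-dataset property; a quick sketch, with the dataset name assumed:

    zfs get sync tank/share           # "standard" is the ZFS default
    zfs set sync=disabled tank/share  # fastest writes; a crash can lose the last few seconds of data
    zfs set sync=always tank/share    # force every write through the ZIL; this is where a SLOG matters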

You don't need Optane SSDs for L2ARC either. That's an expensive boondoggle. Optane is stupid-expensive. Use a pair of cheap SATA SSDs to get a ton of very fast L2ARC, and at the same time you'll be able to move stuff to/from L2ARC at around 1 GByte/sec. If you really think you're going to have so much in cache that you need faster L2ARC, use a standard NVMe SSD for L2ARC. But that's highly unlikely. ZFS is generally very hesitant to push sequential workloads out to L2ARC and will tend to pull them from the pool anyway.
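
If you want to see whether an L2ARC is actually earning its keep, the counters are easy to pull; a sketch for FreeBSD/FreeNAS, pool name assumed:

    zpool iostat -v tank 5                      # per-vdev I/O, with the cache device broken out
    sysctl kstat.zfs.misc.arcstats | grep l2_   # L2ARC hit/miss/size counters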
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
That seems a curious statement.

Sync writes are by definition going to be of a lock-step nature, which means they're always going to be slow-ish. Reading from the stuff above, if the M.2 slot is x2 at PCIe 3.0, that's still nearly 2 GBytes/sec, which is like two saturated 10Gbps Ethernet links, and that just isn't likely to happen. This suggests to me that the x2 is perfectly fine for SLOG. You might reduce latency just a bit by going to x4, but I'm not buying it unless someone demonstrates that with actual numbers.

On the other hand, L2ARC operations tend to move data in large chunks, and I can see more benefit to an x4 device there. If nothing else, it seems like keeping the x4 slot available for a multi-M.2 card like the AOC-SHG3-4M2P being discussed in another thread would be a better use of the slot.
My corollary is that the entire purpose of SLOG is to reduce sync write latency as much as possible; ergo, putting the device on the x4 interface makes more sense to get any possible gains there. Beyond the Optane P4801X, there's also a wider variety of high-performance SLOGs in the HHHL PCIe form factor as opposed to M.2 (is the onboard slot capable of fitting an M.2 22110 board?), and cooling them might be easier as well.

L2ARC might move a greater volume of data, but the overall transfer rate would likely still be less than the 2GB/s from the x2 link.

It does beg for a bit of benchmarking though to see if latency numbers are significantly impacted by link width.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I always assumed they wouldn't be, given ample bandwidth (which even x2 qualifies as when it comes to SLOG), but I'd be very interested in seeing numbers, if anyone wants to try.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
My corollary is that the entire purpose of SLOG is to reduce sync write latency as much as possible; ergo, putting the device on the x4 interface makes more sense to get any possible gains there.

Reducing write latency is fine, but mostly you're going to see a benefit from moving from SATA/SAS (see my list of traversals in the SLOG/ZIL sticky) to NVMe, not from widening an already-sufficiently-wide NVMe data path.

Beyond the Optane P4801X, there's also a wider variety of high-performance SLOGs in the HHHL PCIe form factor as opposed to M.2 (is the onboard slot capable of fitting an M.2 22110 board?), and cooling them might be easier as well.

That could be a good argument, but only because it's grounded in practical realities. It doesn't actually have anything to do with PCIe width. It's safe to say that it'd be easier to cool an HHHL card in a PCIe slot, but this is a correct statement whether it's x1, x4, or x16. Likewise, physical fit or cooling for M.2 is a challenge. M.2 is also likely to be higher density, which may affect those outcomes.

Tangentially, I am actually doing heat/stress testing of inexpensive NVMe SSDs right now and it's a bit of a crapshoot. I'm generally convinced that it's good to use a heatsink but to only pad the controller, leaving the flash free to get warmer. I haven't seen any degradation on read activities, and it only seems to hurt if you have high write volumes. This seems to concur with what some others have noted. It's unfortunately a bit difficult to find information from reputable sources on this topic, alas.

L2ARC might move a greater volume of data, but the overall transfer rate would likely still be less than the 2GB/s from the x2 link.

I agree, but that same argument applies to SLOG.

It does beg for a bit of benchmarking though to see if latency numbers are significantly impacted by link width.
 

BNoir

Cadet
Joined
Apr 22, 2016
Messages
5
Hi Plutotype,

It's funny that everyone began to discuss whether the PCIe Optane is better for SLOG than the M.2 or vice versa, but didn't answer your question of whether you can even use it in a FreeNAS Mini XL+.
I had the same question, so I did a little research. According to the test at https://www.servethehome.com/freenas-mini-xl-plus-review-8-bays-10gbe/, the iXsystems "motherboard is similar" to the Supermicro A2SDi-H-TF.
And according to the data sheet at https://www.supermicro.com/en/products/motherboard/A2SDi-H-TF, it only accepts 2242 and 2280 M.2 SSDs.
What a pity.

To the rest of you: don't get me wrong, your answers and discussions were quite interesting, but I think someone should have pointed this out, or at least said "not sure, you have to check for M.2 22110". The way you answered, one could think an M.2 22110 is possible.

So with the Optane P4801X not possible, what's the next best option using an M.2 2280 or 2242?

@Plutotype, some notes on your other questions:
Yes, you can use all the internal SATA ports for any kind of usage (SLOG/L2ARC, or a separate pool). As iXsystems added fans for the HDDs, I guess there should be no overheating problem. I think the OS is on the SATADOM. I don't have any experience with the FreeNAS Mini XL+ yet, only with the older Mini XL.
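
For the separate SSD pool it's a one-liner from the shell; a sketch with assumed device names (RAIDZ1 is just one option for three disks):

    zpool create ssdpool raidz1 ada3 ada4 ada5  # or: mirror two of them and keep one as a spare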
But I just ordered the FreeNAS Mini XL+ ... :smile:

Greetings
BNoir
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Don't get me wrong, your answers and discussions were quite interesting, but I think someone should have pointed this out, or at least said "not sure, you have to check for M.2 22110".

You mean like this post here, calling into question the sizing of the onboard M.2? ;)

(is the onboard slot capable of fitting an M.2 22110 board?)

If the slot is physically limited to 2280-size SSDs, that certainly reinforces the decision to use it for L2ARC and put the SLOG in the PCIe slot (although the unit reviewed by STH seems to have a 480GB SATA SSD; anyone know the specifics?)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
To the rest of you: don't get me wrong, your answers and discussions were quite interesting, but I think someone should have pointed this out, or at least said "not sure, you have to check for M.2 22110". The way you answered, one could think an M.2 22110 is possible.

Well, as @HoneyBadger says, someone *did* point that out, and further, you're incorrect. It *is* possible.

A 22110 can be slotted in either by using a PCIe adapter card in the available PCIe slot, or by using an NVMe M.2 extension.

Whether or not you want to use a valuable PCIe slot on this, or risk signal degradation on an extension, is, of course, a different discussion.
 

BNoir

Cadet
Joined
Apr 22, 2016
Messages
5
Thanks @HoneyBadger and @jgreco for your answers.
I didn't want to piss anyone off.

I was referring more to the beginning of the discussion, where you didn't mention the physical compatibility but discussed which slot is better for SLOG and L2ARC. Sorry if my words appeared too aggressive.

As far as I understand the specifications of the FreeNAS Mini XL+ / Supermicro A2SDi-H-TF, there is only one PCIe 3.0 x4 slot, so if Plutotype uses it with an adapter, he can't use an Optane 900P / Optane P4800X in the PCIe slot at the same time.
Interesting idea about the NVMe M.2 extension. Anyhow, I have the FreeNAS Mini XL and space is really tight in there, so I'm curious whether it works.

What do you think about using an Optane 800P in the M.2 2280 slot? If you want low latencies, this might be the best solution in the FreeNAS Mini XL+.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
No harm done; I certainly wasn't upset by it, more gently poking fun at the mutual overlooking. :)

The consumer Optane cards will physically fit into the slot, but they don't have the same endurance rating as the enterprise P-series cards (and there's wording in the paperwork stating that Intel will void the warranty if the drive is used in an "enterprise" or "shared server" workload), so if the server is intended to support a business, I'd shy away from them for that reason alone.

Given the physical slot constraints, I'd suggest the path of least resistance: assign the less-demanding L2ARC role to the M.2 slot and fit a SLOG into the PCIe slot.

Still curious as to what model SSD is being used in the XL+. The numbers line up roughly with a DC S4600.
 