First TrueNAS custom build

iSource

Cadet
Joined
Mar 4, 2024
Messages
6
Hi everyone,

First post here, although I've been "stealth" reading along for a while now. :wink:

So, after several years and iterations of QNAP and Synology boxes, it's now time for me to move to TrueNAS. The two main reasons are performance and better expected value for the money. In addition, I finally want to benefit from ZFS and ECC in terms of data integrity (checksums/self-healing), something that hasn't been part of my storage ecosystem until now.

This TrueNAS system will become *the* new main network storage and supersede the current off-the-shelf boxes from the manufacturers mentioned above (which in turn will be repurposed as one on-site and one off-site backup destination). TrueNAS will be used for:
  • file storage (roughly 30 TB right now)
    • small files and large files
    • NFS and SMB
    • systems/users are directly working on those shares
  • iSCSI target (PVE cluster with three nodes)

Regarding the hardware, I've already prepared a build, though I would be more than happy if you could have a look at it and check whether there are any no-gos that might cause minor or major issues running TrueNAS SCALE:

[ This configuration has been updated in response to feedback as well as further research on my side. Details can be found in this post. ]
Type | Model
Mainboard | Supermicro X13SCH-F (MBD-X13SCH-F-O)
CPU | Intel Xeon E-2414, 4C/4T, 2.60-4.50GHz (BX80715E2414)
CPU Cooler | Noctua NH-L9i-17xx
RAM | [2x] Micron 32GB, DDR5-4800, ECC (MTC20C2085S1EC48BR)
OS (SSD) | Samsung SSD 980 500GB, M.2 2280, PCIe 3.0 x4 (MZ-V8V500BW)
PCIe/M.2 Add-In Card (for OS SSD) | RaidSonic IB-PCI208-HS (60830)
Data (HDD) / Manufacturer A | [4x] WD Ultrastar DC HC560 20TB, 512e, SATA (WUH722020BLE6L4 / 0F38785)
Data (HDD) / Manufacturer B | [4x] Toshiba Cloud-Scale Capacity MG10ACA 20TB, 512e, SATA (MG10ACA20TE)
SLOG | [2x] Intel Optane SSD P1600X 58GB, M.2 2280, PCIe 3.0 x4 (SSDPEK1A058GA01)
PSU | Corsair SF-L Series SF850L, 850W, SFX-L, ATX 3.0 (CP-9020245-EU)
Case | SilverStone Case Storage CS381 V1.2, microATX (SST-CS381)

[ Original configuration, kept for reference: ]
Type | Model
Mainboard | Supermicro X13SCL-IF (MBD-X13SCL-IF-O)
CPU | Intel Xeon E-2436, 6C/12T, 2.90-5.00GHz (BX80715E2436)
RAM | [2x] Samsung 16GB, DDR5-4800, ECC (M324R2GA3BB0-CQK)
OS (SSD) | Samsung SSD 980 500GB, M.2 2280, PCIe 3.0 x4 (MZ-V8V500BW)
Data (HDD) | [6-8x] WD Ultrastar DC HC560 20TB, 512e, SATA (WUH722020BLE6L4 / 0F38785)
HBA | Broadcom SAS 9300-8i, PCIe 3.0 x8 (H5-25573-00/LSI00344)
PSU | SilverStone SFX Series ST45SF-G (Rev. 2.0), 450W, SFX (SST-ST45SF-G v2.0)
Case | Jonsbo N3, Mini-ITX (N3 Black)

A few remarks:
  • I need to look for the proper SAS-to-SATA cables. At a quick glance, I should need two breakout cables, each with an SFF-8643 connector on one end and four SATA connectors on the other. Is this correct?
  • The listed Jonsbo case is definitely preferred. Unfortunately, it's hard to get (at least here in Germany). Therefore, the SilverStone Case Storage DS380 (SST-DS380B/71062) might be an alternative, though it seems it won't fit the HBA together with all eight drive bays (apparently some kind of bracket on the drive cage has to be replaced to accommodate the length of the PCIe/HBA card). Can anyone who has this case confirm this?
  • If so, do you have a suggestion for either a suitable (and still affordable) half-length HBA card or another case of that type (i.e. as small as possible for eight drives)? Hot-swap would be nice, but I could go without it if necessary.

Last but not least, I would like to hear your thoughts regarding the pool layout for this system:
  • Performance is quite important (notably but not only due to the iSCSI workload).
  • Of course I don't want to sacrifice availability, so RAID0 is not an option. :wink:
  • I've been thinking back and forth and have come to the conclusion that instead of a RAIDZ2, I'm going for (in hopes of using the proper terminology) a pool of four two-way mirror vdevs.
  • In other words: I would create four mirrored pairs (four RAID1 groups in RAID terms) and stripe across those pairs, essentially building a classic RAID10 array.
  • How did I come to that conclusion? Because from what I understand, I would
    • eliminate the additional parity calculations of RAIDZ1/2, hence getting more performance out of the pool,
    • still have single-drive fault tolerance (up to four failed drives in the best case, one per mirror)
    • and have faster rebuild/resilver times (a straight copy from the surviving mirror instead of parity reconstruction).
  • Is there anything wrong with that, or anything else I need to consider?
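For reference, the layout described above can be expressed directly with the ZFS CLI. This is only a sketch: the pool name tank and the serial suffixes in the device paths are placeholders, and on TrueNAS you would normally build this in the web UI anyway; the CLI form just makes the vdev structure explicit.

```shell
# Four two-way mirror vdevs, striped together by ZFS (classic RAID10).
# One WD and one Toshiba per mirror, matching the two-manufacturer plan.
# ashift=12 matches the 4K physical sectors of these 512e drives.
zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/ata-WDC_WUH722020BLE6L4_SERIAL1 /dev/disk/by-id/ata-TOSHIBA_MG10ACA20TE_SERIAL1 \
  mirror /dev/disk/by-id/ata-WDC_WUH722020BLE6L4_SERIAL2 /dev/disk/by-id/ata-TOSHIBA_MG10ACA20TE_SERIAL2 \
  mirror /dev/disk/by-id/ata-WDC_WUH722020BLE6L4_SERIAL3 /dev/disk/by-id/ata-TOSHIBA_MG10ACA20TE_SERIAL3 \
  mirror /dev/disk/by-id/ata-WDC_WUH722020BLE6L4_SERIAL4 /dev/disk/by-id/ata-TOSHIBA_MG10ACA20TE_SERIAL4
```

Usable capacity is 4 x 20 TB (half of the 160 TB raw), and each mirror tolerates the loss of one of its two drives.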

Thanks for reading so far! I'm really looking forward to your input. :smile:
 
Last edited:

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
What's your use-case?
 

iSource

Cadet
Joined
Mar 4, 2024
Messages
6
Hi Chris,

I'm not sure I understand your question, sorry. Did you perhaps overlook this paragraph?

TrueNAS will be used for:
  • file storage (roughly 30 TB right now)
    • small files and large files
    • NFS and SMB
    • systems/users are directly working on those shares
  • iSCSI target (PVE cluster with three nodes)
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
I haven't played around with iSCSI yet, so I don't really know what kind of power your application needs, but your CPU/mainboard seems overkill for what I would interpret as file storage only. I'm happy to be corrected, though.
Additionally, the CPU was released just a few months ago; the general consensus here is to go for older, proven hardware rather than the latest and greatest.

Your 450W PSU seems a little on the low end. With 6-8 HDDs I'd go for more, preferably 650W+, but see for yourself:
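As a rough sanity check on the wattage, here is a back-of-the-envelope sketch; every figure is an assumption, not a measurement, so check the actual datasheets for your parts.

```shell
# Worst case is power-on, when all HDDs spin up simultaneously.
drives=8          # number of 3.5" HDDs
spinup_w=25       # ~2 A at 12 V per drive during spin-up (assumed)
cpu_w=80          # CPU package power, rough guess
base_w=60         # board, RAM, fans, SSDs, HBA (assumed)
echo "worst-case startup draw: ~$(( drives * spinup_w + cpu_w + base_w )) W"
# prints: worst-case startup draw: ~340 W
```

Staggered spin-up (if the controller supports it) lowers the peak considerably, which is why a quality 450W unit often still works; the headroom of a 650W unit just removes the guesswork.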


Get the 250 GB version of the Samsung SSD for your boot drive; you don't need 500 GB.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
For iSCSI please have a deep look at this

 

iSource

Cadet
Joined
Mar 4, 2024
Messages
6
CPU / mainboard seems overkill
Even I would agree, despite being the one who came up with that build. :grin: But one thing I've learned from several past projects: usage can change quite fast, and this system is here to stay.

go for older, proven hardware rather than the latest and greatest
Just to properly understand your comment: Is this about Linux kernel compatibility or have there been major CPU/hardware-related issues in the recent past?

As far as the release notes go, SCALE Cobia uses a 6.1.x kernel, which should support Intel Raptor Lake (not to mention the upcoming 6.6 kernel with Dragonfish). I first considered a Rocket Lake Xeon, but since the prices aren't actually competitive (i.e. a two-year-old platform at basically the same price tag), I didn't want to go that way.

450W PSU seems a little on the low end. [...] go for more, 650W+ preferably
Noted and applied. If I could edit my initial post (why can't I?), I would update the parts list.

Anyway, I would then install a SilverStone 650W SFX PSU.

Get the 250 GB [...] ssd for your boot drive [...] don't need 500 GB
Yep, I thought so and had first configured it that way. However, it's only about a 6 EUR difference, so I'll go for the larger version with the higher TBW rating; who knows when it might come in handy.

For iSCSI please have a deep look at this
Thanks!

So my thoughts and gut feeling regarding "mirror is better suited than RAIDZx" weren't that far off from real-world examples, good to know. :smile:

Additionally, I've updated the configuration from 2x 16 GB to 2x 32 GB DIMMs (unfortunately I can't edit my first post... I now can, thanks!).



UPDATED PARTS LIST:
<see initial post>
 
Last edited:

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Alright, cool. Good component choices, a lot of potential in a small box. Love the build idea here; it's like a TrueNAS Mini on steroids.
I'd love to see how your thermals/acoustics turn out. That case is rather interesting in its pricing/availability. I kind of want one. It'll probably just be loud compared to an appliance-like unit. Their "two bay design" forces you into small fans, which is understandable but unfortunate.

I guess the most important question is: how many clients are connecting via iSCSI, and how many concurrent SMB sessions might you expect?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
The Jonsbo case looks pretty cool. If I ever want to replace my Fractal Node 304 (6 drives) with something that holds 8 drives, I'd look at that :)

Meanwhile, the SilverStone DS380 has a reputation for cooking drives, so look into cooling solutions, for example
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Just to properly understand your comment: Is this about Linux kernel compatibility or have there been major CPU/hardware-related issues in the recent past?
I haven't heard of the latter. From reading a lot here, that's the key message I took away. I would imagine that compatibility issues may not end solely with Linux kernel support, but I can't confirm that.

Regarding your PSU choice, did you come up with 650W from the calculator?

You selected a Gold-rated PSU; may I suggest looking at Seasonic? The GX750 is 20 EUR cheaper, and you can even get the Prime PX 650W for 150 EUR. I scored the Prime PX 750W for 120 EUR a few months ago.
Seasonic offers 10 to 12 years of warranty.
 

iSource

Cadet
Joined
Mar 4, 2024
Messages
6
First of all, thanks to the moderators for giving me further privileges, especially allowing me to edit my posts. :smile:

[...] PSU choice, did you come up with 650W from the calculator?
Basically, yes. Although I have to admit that I find those peak measurements in the article a bit overstated, I do see the point in general.

may I suggest looking at seasonic? [...] GX750 [...] PX 650W [...]
If I didn't miss something, those are all ATX PSUs, which won't fit in the intended case, as I need SFX(-L).

However, that reminded me of the "PSU Tier List" over at Cultists Network, which in turn led me to one of the Corsair SF series PSUs, seemingly the best SFX PSUs you can get. Since the "old" SF750 costs a bit more than the new SF850L, I'll opt for the latter (which probably leaves me with roughly 250W of headroom, but anyway).



With the feedback from all of you, new information (e.g. regarding availability of certain parts) and some (read: a lot of) additional reading on my side, I've updated the intended build (see initial post).

The Jonsbo N3 case will be replaced by a SilverStone Case Storage CS381, because a) availability is basically nonexistent, b) things might get a bit too cramped in there (thermals), and c) I'll gladly take the additional DIMM/M.2/PCIe slots that come with a µATX board instead of Mini-ITX. That case carries a hefty, rather unreasonable price tag. But here we are...

In addition, I downgraded the CPU from a 6C/12T to a 4C/4T Intel Xeon, since that should still offer enough performance for the planned iSCSI workload (particularly as I'm not using any RAIDZ).

Since I learnt that VM iSCSI data (and some NFS traffic) should be written synchronously, I also added an SLOG mirror with two Intel Optane P1600X 58 GB drives.

Considering the board/lane topology, I will have those two Optane drives attached "directly" to the PCH (i.e. using the two available M.2 slots). The OS Samsung M.2 SSD will therefore be connected via a PCIe/M.2 NVMe add-in card in the CPU-attached PCIe x4 slot.
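On the ZFS side, the Optane mirror would be attached as a log vdev after pool creation. A sketch only, assuming a pool named tank, an iSCSI zvol at tank/pve-iscsi, and placeholder device paths; on TrueNAS this is normally done through the web UI instead.

```shell
# Attach the two Optane P1600X drives as a mirrored SLOG
# (placeholder device paths; use your real /dev/disk/by-id/ entries).
zpool add tank log mirror \
  /dev/disk/by-id/nvme-INTEL_SSDPEK1A058GA_SERIAL1 \
  /dev/disk/by-id/nvme-INTEL_SSDPEK1A058GA_SERIAL2

# An SLOG only ever absorbs synchronous writes; to make sure the iSCSI
# zvol actually routes its writes through it, force sync on that dataset:
zfs set sync=always tank/pve-iscsi
```

With sync=standard instead, only writes the initiator explicitly flags as synchronous would hit the SLOG.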

Last but not least, I've now also incorporated two HDD manufacturers/brands, which I will distribute evenly across the four two-way mirrors, so that each mirror contains one drive of each model.

Thoughts and further feedback on this updated build are highly appreciated! :wink:
 
Last edited:

DigitalMinimalist

Contributor
Joined
Jul 24, 2022
Messages
162
Don't you find these "cheap" 16GB Optane modules a little bit slow in write speed (145MB/s)?

Unfortunately, it’s getting very difficult to find Optane drives for good prices
 

iSource

Cadet
Joined
Mar 4, 2024
Messages
6
Don't you find these "cheap" 16GB Optane modules a little bit slow in write speed (145MB/s)?
It could be way better, absolutely. But as you mentioned yourself...

[...] very difficult to find Optane drives for good prices
Although the actual sequential throughput is far from "SSD-like", I strongly assume it will still perform way better than direct writes to the spinning-disk array, at least with regard to latency and IOPS.

But still, I'd be more than happy if any of you could give me a feasible recommendation for an SLOG mirror, that is, one that a) isn't used/refurbished and b) won't cost me 400+ EUR per drive.

For example, I've been trying to find a Radian RMS-300/[8|16]G, but it seems you can't get your hands on one of those unless you happen to be an OEM. I don't know their price either, but as I can't get them, it doesn't matter anyway.

Then again, the top end of "normal" SSDs (e.g. Samsung 980/990 Pro), even when insanely overprovisioned, doesn't seem to be an option either, despite TBW ratings of 600TB (1TB model) or even 1.2PB (2TB model). Probably because they don't use SLC and, more importantly, have no PLP (power-loss protection).

So, do you have some reasonable recommendation?
 

DigitalMinimalist

Contributor
Joined
Jul 24, 2022
Messages
162
I'm scouting various platforms (in the EU) for used Optanes, or fire sales of new ones, but no luck so far.

In the end, we're looking for high IOPS with low latency and high durability...
Exactly what Optane offers.
Would love to find a cheap
Intel Optane SSD DC P4801X 100GB (M.2 22110/M-Key/PCIe 3.0 x4, SSDPEL1C100GA01)
or
Intel Optane SSD P1600X 118GB (M.2 2280/M-Key/PCIe 3.0 x4, SSDPEK1A118GA01)

Just found a new Intel Optane SSD P1600X 58GB for 45€ on Amazon... I should probably give it a try

For US people:
Newegg


As an alternative, I bought refurbished Micron 7300 Pro 960GB drives (enterprise PCIe 3.0 NVMe with PLP) for 40€ apiece last year, though their 4K write performance is also poor at around 30k IOPS

Update: I just ordered 2x P1600X 118GB for 160€ from Amazon US… too curious to try
 
Last edited:

iSource

Cadet
Joined
Mar 4, 2024
Messages
6
new Intel Optane SSD P1600X 58GB for 45€ on Amazon
Thank you very much!

Got curious when I read your post, and now two of those are on their way. :wink: I still don't know how I missed them, but I'm glad you dropped that hint.

Two of those 58 GB drives hardly cost me more than a single one of those 16 GB Optane modules; that's a win-win in terms of price and performance.
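Capacity-wise, 58 GB is far more than an SLOG will ever use anyway; it only holds a few seconds of in-flight sync writes. A back-of-the-envelope sketch (the figures are assumptions):

```shell
# ZFS commits transaction groups every ~5 s by default, so the SLOG
# holds at most a few seconds of incoming synchronous writes.
link_mb_s=1250   # 10 GbE line rate in MB/s (assumed network ceiling)
txg_seconds=5    # default zfs_txg_timeout
echo "worst-case SLOG usage: ~$(( link_mb_s * txg_seconds / 1000 )) GB"
# prints: worst-case SLOG usage: ~6 GB
```

Even doubling that for safety, the 58 GB drives leave enormous headroom, which also helps their endurance through wear leveling.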

Would love to find some cheap [...] Intel Optane SSD P1600X 118GB
Also available on Amazon for 80 EUR, brand new, Amazon directly (so no Marketplace gamble).
 

DigitalMinimalist

Contributor
Joined
Jul 24, 2022
Messages
162
I ordered the 118GB from Amazon.us via Amazon Germany = free shipping :)

As you mentioned, the 58GB is Marketplace; in addition, 58GB is really small, and the 118GB size allows for more uses:
special vdev, SLOG, boot drive for Windows... more flexible

I'm glad you dropped that hint
I would not have looked again if you hadn't asked :wink:
I had an eye on eBay for the exact same model: 135 EUR used...
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
You really don't need to mirror the SLOG. If it fails, the pool just gets slower. The only time a failure matters is if:
1. you get an unexpected reboot (i.e. a kernel crash), and
2. the SLOG fails on boot with data still on it.
 