Build help: NAS with 10x 10TB raw storage, home use, price range (w/o disks) 1000 € / $

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
16GB Crucial CT16G4WFD8266 DDR4-2666 ECC DIMM CL19 Single x2

Intel Core i3 8100 4x 3.60GHz So.1151 BOX

Supermicro X11SCH-F Intel C246 So.1151 Dual Channel DDR4 mATX Retail

Fractal Design SSD Bracket Kit - Type A - black x3

Fractal Design Define R6 Black USB-C

650 Watt Seasonic Prime Ultra Modular 80+ Titanium

EKL Ben Nevis Tower Cooler

SATA 6Gb/s 4xSATA to SFF-8087 x2

The above list is my current shopping cart for a new NAS. Additionally, I want to boot the system either from the internal USB3 port with an M.2 SSD to USB3 adapter, or via SATADOM (DELL INNODISK SATADOM-ML 3SE 64GB). The HBA should be an LSI 9207-8i, or an LSI 9305-16i if I can get one for a good price. Disks are WD100EZAZ (the ones shucked from WD MyBook desktop enclosures).

I need to make sure that the system can deal with the PWDIS pin on the disks - 3.3V on SATA should be easily removable.

Specs:

* Light usage, approx. 4 clients simultaneously
* jail for torrent client
* no re-encoding (or at least no real-time re-encoding)
* plain storage for media files as main function
* somewhat quiet, no screamer

Is there anything wrong with this selection?
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
EKL Ben Nevis Tower Cooler
You may want to make sure this doesn't block any PCIe slots or RAM slots.

Fractal Design SSD Bracket Kit - Type A - black x3
Where are you mounting these? Your case comes with 2 pre-installed. When you purchase additional trays straight from Fractal Design they come in packs of 2. If you purchase 3 kits you'll end up with 6 additional trays plus the 2 that come with the case.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Nothing wrong with it. You’ll pay a premium per TiB for those 10TB over going with 8TB, and I’m assuming you’ve considered that and have accepted the premium.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I need to make sure that the system can deal with the PWDIS pin on the disks - 3.3V on SATA should be easily removable.
Good video on ways to deal with the problem. Don't make permanent changes to your hardware.
https://www.youtube.com/watch?v=9W3-uOl4ruc
Another good video on solutions:
https://www.youtube.com/watch?v=fnISM_LMuss
HBA should be LSI 9207-8i or LSI 9305-16i when I get it for a good price.
There is no reason in the entire world to put a 12Gb SAS controller like the LSI 9305-16i in a home NAS. That thing can run 1024 drives. Are you ever going to have that many drives? Do you plan on having a stack of 12Gb SAS SSDs in your system? There is no price over $60 that makes it worth considering, and they run super hot too.

This is all the SAS controller you need for any quantity of mechanical SATA drives, up to about 80 drives:
https://www.ebay.com/itm/HP-H220-6Gbps-SAS-PCI-E-3-0-HBA-LSI-9207-8i-P20-IT-Mode-for-ZFS-FreeNAS-unRAID/162862201664
If you have more than 80 drives or if you want a pool of SSDs, you might then need a better SAS controller. Not for any other reason.
 

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
From the reviews I read, the 10TB drives are better in terms of quality than the 8TB, and also quieter. That's why I waited until they got into my price range. I agree, the 8TB are less expensive in terms of €/TB.
 

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
@PhiloEpisteme Thanks for the advice re: the CPU cooler. I'll double-check that it's not in the way. If there are recommendations for a better / smaller one, I am all ears. Regarding the bracket kits: I want to have 10 drives inside; I assumed that I need 5 (one for each drive), and 2 are pre-installed.

Is this wrong?

@Chris Moore: The 9207-8i is the one I am probably going to settle for. There was an eBay auction for the other one that stood at 20€, so I thought it might be a good opportunity. If it has disadvantages like running hot, I am certainly only buying the 9207-8i; they are pricey enough here. And thanks for the links. I had already seen some stuff on YouTube regarding this matter, but these were new to me.
 

Stevie_1der

Explorer
Joined
Feb 5, 2019
Messages
80
If there are recommendations for a better / smaller one, I am all ears.

The Noctua NH-U9S should fit without blocking the RAM slots.
It seems widely used and I haven't read about any clearance problems.
Or, a little cheaper, the Cryorig M9i; it could partly block the first RAM slot, depending on the module height and the height at which the cooling fins start.

I scaled a photo of the board and measured the dimensions for my own upcoming build to check which coolers would fit. :D


Fractal Design SSD Bracket Kit - Type A - black x3
The trays you have listed are for 2.5" SSD drives (no anti-vibration rubbers), and two of those come with the chassis, they are mounted at the back of the mainboard tray.
For mounting your HDDs, you need the Define R6 HDD Tray - Black (available in white, too) instead, 6 come with the chassis.

Additionally, I want to have system either from the internal USB3 port with a M.2 SSD - USB3 adapter, or via SATADOM (DELL INNODISK SATADOM-ML 3SE 64GB).
You have 2x M.2 PCIe connectors on the mainboard, so why not use them?
Or are you planning to use SLOG and/or L2ARC?
If not, get a cheap M.2 PCIe (not SATA) SSD and use that for boot.

Or just a regular SATA SSD, hooked up to a mainboard port and mounted to an SSD bracket behind the mainboard tray.
You have 8 ports from the HBA, and 8 ports on the mainboard, so with 10 data disks you still have 6 free SATA ports available.

I need to make sure that the system can deal with the PWDIS pin on the disks - 3.3V on SATA should be easily removable.
Good video on ways to deal with the problem. Don't make permanent changes to your hardware.
I would just remove the 3.3V pin from the PSU connectors (those connectors that are plugged directly into the PSU) with a pin remover, and put some electrical tape around the removed pin.
No need to fiddle around with tiny pieces of Kapton tape or to use adapters.
 

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
The Noctua NH-U9S should fit without blocking the RAM slots.
It seems widely used and I haven't read about any clearance problems.
Good idea, I like Noctua, and they have a sale here on the NH-U9S with some cosmetic issues but 100% guaranteed performance. I'll switch to that.

The trays you have listed are for 2.5" SSD drives (no anti-vibration rubbers), and two of those come with the chassis, they are mounted at the back of the mainboard tray.
For mounting your HDDs, you need the Define R6 HDD Tray - Black (available in white, too) instead, 6 come with the chassis.
Thanks, I caught this during a second read of the order list. Switched to the HDD tray.


You have 2x M.2 PCIe connectors on the mainboard, so why not use them?
Or are you planning to use SLOG and/or L2ARC?
If not, get a cheap M.2 PCIe (not SATA) SSD and use that for boot.

I was planning on keeping the M.2 ports for a small SSD pool, but generally speaking, it's a good alternative; I'll see what I can get. I do have 32GB SSDs with a USB3 adapter and good cooling, so I wanted to use them first; anything else seems like a waste for the system pool. They are running in two of my other systems, and I like this solution a lot.

Or just a regular SATA SSD, hooked up to a mainboard port and mounted to an SSD bracket behind the mainboard tray.
You have 8 ports from the HBA, and 8 ports on the mainboard, so with 10 data disks you still have 6 free SATA ports available.

Maybe I'll use one of the SSDs I have lying around for that; also worth evaluating. It saves some money and reuses old hardware.


I would just remove the 3.3V pin from the PSU connectors (those connectors that are plugged directly into the PSU) with a pin remover, and put some electrical tape around the removed pin.
No need to fiddle around with tiny pieces of kapton tape or using adapters.
This is what I plan to do now: modular PSU, then removing the 3.3V pin on the cables which supply the disks. An easy, elegant solution which keeps the disks in their original state. Maybe it's an idea to put a switch into the cable, so the PWDIS pin can be used for its intended purpose when the need arises. But that's probably too much fiddling.
 

Snow

Patron
Joined
Aug 1, 2014
Messages
309
I would label the cable you mod, so down the road, if you swap out parts to a different NAS or upgrade or anything, you know that you removed that pin from that cable. Just an idea :P
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I would label the cable you mod, so down the road, if you swap out parts to a different NAS or upgrade or anything, you know that you removed that pin from that cable. Just an idea :p
That is the reason I think it is better to use an adapter than to mod the cable. I would probably put the Kapton tape on the power connector of the drive if I didn't want to use an adapter.
 

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
I just got the Kapton tape 10 minutes ago, but I'll first try to use a cable without 3.3V to the disks. I am going to label it properly on both ends anyway.

From Seasonic: "Each PRIME Ultra Series power supply will also ship with a SATA 3.3 adapter to support the “Power Disable” (PWDIS) feature of the newer, high-capacity hard drives." I'll report back on how this works in practice.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Yeah, Seasonic ships Molex to SATA adapters. Two, if memory serves. No thank you. I removed the 3.3V pins on the PSU side connector and wrapped them in electrical tape. Simpler.

As for labeling: Which SATA device would conceivably use 3.3V, ever? And isn’t a pin with electrical tape around it obvious enough? ;)
 

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
If it's just Molex to SATA adapters, I'll probably just remove the 3.3V pins on the PSU-side connector as well.

Regarding labeling: if the system runs for 5 years, I bet you won't remember why there was tape around a pin when you have to open it up. I'd better play it safe with some helpful hints.
 

John Doe

Guru
Joined
Aug 16, 2011
Messages
635
Can confirm the WD Red 10TB are much quieter and run less warm than the 3TB WD Red.
The parts are looking good, if they match your needs.

I went a step further with ESXi, pfSense etc. and it is great to have an all-in-one box. If you want to push it further, consider a Xeon CPU with more cores.

What about your logical setup in FreeNAS? 10 disks should be RAIDZ3 if they are in one vdev.
 

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
I want to have a RAIDZ2 setup with 8 data + 2 parity = 10 disks as the sweet spot for this vdev config. Yes, I am willing to take that risk over RAIDZ3. I have not heard many horror stories from people losing three disks in a RAIDZ2, as opposed to the data loss from two failed disks with RAIDZ1. I read somewhere (Backblaze? Can't find it at the moment) that the rate of a second disk failure while resilvering is 8%, which would give a rate of a third disk failure of less than 1%. I can live with that.
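
As a rough sanity check of that estimate (a sketch only, using the quoted 8% figure and assuming a third failure during the same resilver is independent and equally likely - optimistic, since drives from the same batch tend to age together):

```python
# Hypothetical back-of-the-envelope check, not a real reliability model.
p_second = 0.08                 # quoted chance of a 2nd drive dying during a resilver
p_third = p_second * p_second   # assume a 3rd failure is independent and equally likely
print(f"Estimated chance of a third failure during resilver: {p_third:.2%}")  # ~0.64%
```

That lands in the "less than 1%" range, with the caveat that correlated failures would push the real number higher.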

Regarding other options, this should be an old-school storage server and not much beyond that. I thought about the Xeon briefly, but it brings nothing to the table that justifies the price difference for me. For other stuff, I would use another machine, maybe with 10GbE adapters added on both machines if fast transfers become essential. But that's a future extension.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
As per https://www.ixsystems.com/community/resources/zfs-raid-size-and-reliability-calculator.49/ , assuming those EZAZ are He10 drives with a 2.5M-hour MTBF, you get an MTTDL of 30,800 years with raidz1, 43,379,000 years with raidz2, and 5.7 billion (10^9) years with raidz3.

That directly contradicts the "raidz1 is the devil" story. Probably because there is an assumption that drives that age at the same rate tend to fail at the same time, which would change the calculations.

This calculator warns us that a realistic best estimate would be 250k hours MTBF, never mind the 2.5M-hour manufacturer spec. https://www.servethehome.com/raid-calculator/raid-reliability-calculator-simple-mttdl-model/

With that, I get a raidz2 data loss likelihood of 0.2% after 10 years, raidz1 4.4% (okay, that's uncomfortable), and raidz3 0.00001%.

And that's cutting the stated manufacturer MTBF by an order of magnitude. To me, a 0.2% likelihood of data loss over 10 years is acceptable; backups are a thing, after all. In a system without backups, or where data loss would be catastrophic, we'd want raidz3 - but wait, you wouldn't build like that in the first place. If downtime can't be accepted, there'd be two systems in HA, TrueNAS rather than FreeNAS, possibly in separate DCs connected via low-latency dedicated links. After all, power goes out too, power supplies fail, and a NAS wants an update and a reboot now and then.
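
If you want to play with numbers like these yourself, here is a minimal sketch of the simple MTTDL model that calculators of this kind are based on. The MTBF and resilver-time inputs below are assumptions for illustration, and the results will not match the linked calculators exactly, since each uses its own model and defaults:

```python
from math import exp

def mttdl_hours(n_disks, parity, mtbf_h, resilver_h):
    """Simple MTTDL estimate: data is lost when parity+1 drives fail within
    overlapping resilver windows. Ignores correlated failures and UREs,
    so it is optimistic."""
    # MTTDL ~ MTBF^(p+1) / (n * (n-1) * ... * (n-p) * resilver^p)
    numerator = mtbf_h ** (parity + 1)
    denominator = resilver_h ** parity
    for k in range(parity + 1):
        denominator *= (n_disks - k)
    return numerator / denominator

def loss_probability(mttdl_h, years):
    """Chance of at least one data-loss event over the period, assuming a
    constant failure rate (exponential model)."""
    return 1 - exp(-(years * 8760) / mttdl_h)

# Assumed inputs: 10 drives, 250k-hour real-world MTBF, ~1 week resilver.
for parity, name in [(1, "raidz1"), (2, "raidz2"), (3, "raidz3")]:
    m = mttdl_hours(10, parity, mtbf_h=250_000, resilver_h=168)
    print(f"{name}: MTTDL ~ {m / 8760:,.0f} years, "
          f"10-year loss chance ~ {loss_probability(m, 10):.5%}")
```

Whatever the exact inputs, the shape of the result is the same: raidz1 is uncomfortable with drives this size, raidz2 is fine if you keep backups, and raidz3 buys margin you will probably never notice.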
 