A "doing more with less" home FreeNAS build analysis/thoughts


AndyN

Cadet
Joined
May 7, 2013
Messages
4
Hi everyone! I've finally decided to take the plunge and build a FreeNAS home server.
I will list the components that I'm thinking of getting, with some reasoning as to why, given my use case. Feel free to add your $0.02 and/or constructively critique my choices. Hopefully it will be useful for others in my shoes.
Speaking of use case - mine is as follows:
  • room for at least 8 drives, ideally 10-12 for 2 raidz2 vdevs or 1 raidz3 + a mirror (rough capacity math in the sketch after this list)
  • small physical footprint, definitely not a rack or full tower
  • used as media/personal archive with potential to stream 720p/1080p content to TV/PC (1-2 streams at most)
  • system backup destination for my PC
  • download station - BT and/or FTP
  • clients (PC + laptop) will likely use home WiFi to upload/download data (but will connect via Cat5/6 for initial data load/dump)
  • possibly (depending on system noise level) act as HTPC as well, but that's secondary to above uses
So basically no hardcore CPU or I/O requirements IMO, but more drives in less space (thus title of the thread :))
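For what it's worth, here's the rough usable-capacity math behind the layouts mentioned above (a minimal sketch assuming 8 TB drives and ignoring ZFS metadata/padding overhead, so treat the numbers as approximate):

```python
# Back-of-the-envelope usable capacity for the pool layouts I'm weighing.
# Assumes 8 TB drives and ignores ZFS metadata/padding overhead, so the
# numbers are approximate.

DRIVE_TB = 8

def raidz_usable(drives: int, parity: int, size_tb: float = DRIVE_TB) -> float:
    """Usable space of one raidz vdev: (drives - parity) data drives."""
    return (drives - parity) * size_tb

# Option A: two 5-wide raidz2 vdevs (10 drives)
option_a = 2 * raidz_usable(5, 2)
# Option B: one 8-wide raidz3 vdev plus a 2-way 8 TB mirror (10 drives)
option_b = raidz_usable(8, 3) + DRIVE_TB

print(f"2x raidz2 (5-wide):       ~{option_a:.0f} TB usable, survives 2 failures per vdev")
print(f"raidz3 (8-wide) + mirror: ~{option_b:.0f} TB usable, raidz3 survives 3 failures")
```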

With that in mind, here is what I found:
Case: Fractal Design Node 804.
  • Pros - space for up to 12 drives: 10x3.5" + 2x2.5"; decent ventilation options; compact - 35x31x39 cm (WxHxD)
  • Cons - no hot-swap support (hopefully won't need that often); cable management could be tricky
I couldn't find a better case that isn't a full tower and can handle 10 3.5" drives out of the box. Perhaps I missed something? Interestingly, it allows for alternate placement of one HDD cage (see here and here). I wonder if you could smuggle in 4 more disks that way (for a total of 14 3.5" spots, if you somehow obtain an extra HDD cage from them)? Not sure how PSU cable management would work with a cage placed that close, though. They claim to support PSUs up to 26 cm deep and the one I'm getting is only 17 cm deep. Any Node 804 owners here?

Motherboard (not decided yet):
X11SSH-CTF
  • Pros: 16 ports (SATA+SAS), M.2 PCI-E slot
  • Cons: expensive (515 USD)
I would've bought the X11SSH-F (costs almost half as much) if it had more than 8 ports. I looked for an HBA card (LSI SAS 2308/9217-8i/9207-8i) and the only economically viable options were used cards, which I'd rather not risk (every component is going to be bought new). New LSI HBA cards cost about the same as (or more than) the price difference between the motherboards I am considering.
or
X11SSL-CF
  • Pros: cheaper than the X11SSH-CTF (by 35%/185 USD); 14 ports; no 10GbE LAN chip I don't need - my home router only handles 1GbE anyway
  • Cons: no M.2 PCI-E - I wanted to use that for booting FreeNAS - overkill?
I wanted to use mirrored USB sticks to boot FreeNAS at first, but after reading suggestions here, I reconsidered in favor of an SSD. Having a built-in M.2 slot saves a SATA/SAS port/space for the data drives. I could try getting an M.2 PCI-E adapter card for this purpose, but then the question is FreeBSD driver support (will it blend/boot?) and total cost vs. going for the X11SSH-CTF directly. Will have to investigate.

CPU: Xeon E3-1230 v6 - the cheapest 4-core/8-thread Xeon I could find. Should be enough.

RAM: 2x Micron 16 GB ECC VLP UDIMM DDR4-2400 (on the Supermicro QVL)

PSU: Seasonic Prime SSR-750TD - 750W is probably overkill, but I am hoping that would translate into less generated heat and less fan noise. Has glowing reviews and cabling to power 10 SATA drives out of the box.
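To sanity-check my "probably overkill" hunch, here's a rough peak-power estimate (just a sketch: the CPU TDP is Intel's spec number, but the per-drive spin-up and idle figures are generic ballpark assumptions, not datasheet values):

```python
# Rough worst-case power budget for this build. The per-component figures
# are assumptions/ballpark TDP and spin-up numbers, not measurements.

parts_w = {
    "Xeon E3-1230 v6 (72 W TDP)": 72,
    "motherboard, RAM, fans (estimate)": 40,
    "10x 3.5\" HDD spinning up (~25 W each, assumed)": 250,
    "2x 2.5\" HDD (~5 W each, assumed)": 10,
}

for name, watts in parts_w.items():
    print(f"{name}: {watts} W")

peak_w = sum(parts_w.values())
print(f"Estimated worst-case draw: ~{peak_w} W")  # comfortably below even a 650 W unit
```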

GPU: optional, only if I decide to use the box as an HTPC. Two concerns here:
  1. Driver support at the OS level
  2. Conflict with the onboard IPMI/BMC (Aspeed AST2400). Don't want the two to fight and potentially brick my install/corrupt pool data
Any pointers/thoughts on this appreciated. I imagine integrated graphics (e.g. the Intel HD Graphics P630 in a Xeon E3-1245 v6) is useless, since the MB doesn't have any video output besides the VGA port driven by the BMC chip?

3.5" HDDs: Initially I wanted HGST 8TB Deskstar NAS drives:
  • Pros - supposedly better reputation in terms of reliability
  • Cons: 7200RPM => could be noisier/hotter than WD Reds. Also pricier than WD Reds, but not by much (adds up though, if you do x10 :))
I'll probably buy 8TB WD Reds unless there is conclusive evidence of inferior QC compared to HGST (which is owned by WD anyway lol).

2.5" HDDs: Don't know - potentially 2x 1TB WD Reds in a mirror, or just used individually. One of the 2.5" spots could hold an SSD for the FreeNAS boot pool if I decide on an M.2-free/cheaper MB. I probably won't need an L2ARC or SLOG device, if I understood cyberjock's powerpoint/pdf correctly (excellent work by the way!)

So...am I reasonable or are the above components nonsense?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Cons: no M.2 PCI-E - I wanted to use that for booting FreeNAS - overkill?
Seriously? Just get a cheap PCIe-to-M.2 adapter card and use one of the PCIe slots. Instant money saver.

but then the question is FreeBSD driver support
Drivers have been in place for quite a while now. NVMe works fine, AHCI has been working fine for over a decade.

PSU: Seasonic Prime SSR-750TD - 750W is probably overkill, but I am hoping that would translate into less generated heat and less fan noise. Has glowing reviews and cabling to power 10 SATA drives out of the box.
Yes, it is overkill. I'm not sure if there's a Prime Titanium 650, but that would be better for 10ish drives.

GPU: optional, only if I decide to use the box as an HTPC.
That's not going to work. At best, you could run the HTPC side in a VM, but that option is still not properly supported and it's a crappy one.

I'll probably buy 8TB WD Reds unless there is conclusive evidence of inferior QC compared to HGST (which is owned by WD anyway lol).
Reds are fine, RMAs are painless when needed.

2.5" HDDs:
But what for?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
3.5" HDDs: Initially I wanted HGST 8TB Deskstar NAS drives:
  • Pros - supposedly better reputation in terms of reliability
  • Cons: 7200RPM => could be noisier/hotter than WD Reds. Also pricier than WD Reds, but not by much (adds up though, if you do x10 :))
I'll probably buy 8TB WD Reds unless there is conclusive evidence of inferior QC compared to HGST (which is owned by WD anyway lol).

I refer to this article from a company called Backblaze when I say that you would be better off spending your money on Seagate drives.
Read the article if you want; they have (as of the date of the article) 5120 Seagate ST8000DM002 drives installed and they say the annualized failure rate is comparable to the HGST drives. They are in the process of replacing more than 4000 HGST and Western Digital drives with Seagate drives. I don't understand all the hate on Seagate. They made a bad model once, but that doesn't mean all Seagates are always bad.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
2.5" HDDs: Don't know - potentially 2x 1TB WD Reds in a mirror, or just used individually. One of the 2.5" spots could hold an SSD for the FreeNAS boot pool if I decide on an M.2-free/cheaper MB. I probably won't need an L2ARC or SLOG device, if I understood cyberjock's powerpoint/pdf correctly (excellent work by the way!)
I use a pair of 2.5" laptop hard drives (40GB) for my boot pool. The boot pool does not need to be fast, as it is only used to boot from; once the boot image is loaded into RAM, those disks are only used to store configuration data. So, low usage and low speed. A pair of hard drives mirrored with FreeNAS works well, is reliable for a long time, and you can use all the same utilities to monitor their health that you use for any other drive in the NAS. You just have to schedule SMART tests using cron, because those drives are not listed in the GUI menu for that. It would be a waste of space to use a larger drive.
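As a rough illustration of that, a cron-driven script along these lines can kick off the tests (just a sketch - the smartctl calls are standard, but the device names are placeholders you need to adjust for your own boot devices):

```python
#!/usr/bin/env python
# Sketch: kick off short SMART self-tests on the boot-pool drives from cron,
# since those drives don't show up in the GUI's SMART test scheduler.
# The device names below are placeholders - adjust them for your system.

import subprocess

BOOT_DEVICES = ["/dev/ada0", "/dev/ada1"]  # assumed boot mirror members

for dev in BOOT_DEVICES:
    # The drive runs the short self-test in the background; results show up
    # later in 'smartctl -a <device>'.
    subprocess.run(["smartctl", "-t", "short", dev], check=True)
```

A weekly schedule is plenty for drives that see this little use.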
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
They made a bad model once
A truly horrible model, rather recently.

As for Backblaze's analysis... It's mostly crap, at least what they publish. If I had the time, I'd properly analyze their raw data, because their analysis is simplistic and not statistically sound.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The one 3TB model. I never memorized the model number, but it's the one with a failure rate approaching 100% after a year or two.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The one 3TB model. I never memorized the model number, but it's the one with a failure rate approaching 100% after a year or two.
Yes, that is the one I was thinking of also. I think the drive mechanicals were the same or similar for the 750 GB, 1.5 TB and 3 TB models, because those all had very high failure rates where I work. The 2TB and 4TB models have been much better.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Yes, that is the one I was thinking of also. I think the drive mechanicals were the same or similar for the 750 GB, 1.5 TB and 3 TB models, because those all had very high failure rates where I work. The 2TB and 4TB models have been much better.

Personally, I had over a 120% failure rate on about a dozen 1.5TB drives (the replacements failed too) and eventually stopped bothering to RMA them.

Anyway, the Seagate 8TB IronWolfs seem fine, but they run hot (they're conventional 7200rpm drives).

Most of the OP's choices seem pretty good.
 

AndyN

Cadet
Joined
May 7, 2013
Messages
4
Hi all,
Thanks for the valuable inputs.
Changes/updates:

Case: Based on pictures of other people's builds (here), it looks like my "3 HDD cages" fantasy is just that - I don't see where the PSU cables would go in that case (pun intended) or how they'd reach the MB in the other compartment/half of the case.

PSU: The 650W Seasonic [spec here] has 6 SATA, 5 peripheral (= molex?) & 4 PCI-E 6/8-pin connectors. Would it be possible to power 10x 3.5" + 2x 2.5" SATA drives off that (preferably without starting a fire)? I assume molex->SATA adapters would cover me (+ 1 Y-split), but would it be safe? There is a whole thread here about this and I would say it is inconclusive. YT videos and Reddit posts about burnt molex->SATA cables are scary. Maybe the 750W version [spec here] is better? Overpowered, true, but at least almost all SATA power cables would be stock (it has 10 SATA connectors). What do you think? Or what 650W PSUs would you suggest that have 10 SATA power connections without messing around with molex adapters?

Motherboard - going for the X11SSL-CF + an M.2 adapter card. Thanks for the tip. Card choices:
Akasa - cheapest
Addonics - also quite cheap
Asus - slightly more expensive
SuperMicro AOC-SLG3-2M2, which can house two M.2 SSDs (costs 2.5x more too :)), but someone wrote (see post) that only one of the two M.2 ports actually works. Not sure it is worth it, even if it has the SuperMicro logo on it. What do you think? The board has a PCI-E x8 slot, so driving two SSDs could be OK

GPU - no problem, will use Raspberry Pi 3 with RasPlex & plug it between NAS and my TV. Should be doable (and small...I hope).

As to why 2.5" HDDs - why not have an extra 1TB mirror if you can? Or a 5TB one (if using the Seagate ST5000LM000)? There is the question of height limitations - WD Reds are 9.5mm high and the Seagate is even 15mm, so perhaps they wouldn't even fit into the dedicated case spots (didn't find any info on the Fractal Design site). Or, worst case, an SSD for SLOG or L2ARC.

As for 3.5" HDDs - I would prefer quieter and cooler drives - it is a small case and it will be in the living room - so probably WD Reds; people mostly say they're very quiet. But I am not buying drives yet - that will probably take a while, as I want to space out my purchases in the dubious belief that it reduces the chance of a simultaneous drive failure. :)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
PSU: The 650W Seasonic [spec here] has 6 SATA, 5 peripheral (= molex?) & 4 PCI-E 6/8-pin connectors. Would it be possible to power 10x 3.5" + 2x 2.5" SATA drives off that (preferably without starting a fire)? I assume molex->SATA adapters would cover me (+ 1 Y-split), but would it be safe? There is a whole thread here about this and I would say it is inconclusive. YT videos and Reddit posts about burnt molex->SATA cables are scary. Maybe the 750W version [spec here] is better? Overpowered, true, but at least almost all SATA power cables would be stock (it has 10 SATA connectors). What do you think? Or what 650W PSUs would you suggest that have 10 SATA power connections without messing around with molex adapters?

It is not safe to split SATA power connectors, as a single SATA power connector can only power about 1.5 drives. You can run 4 drives relatively safely off a single molex connector, though. This does not mean that you can hang sixteen drives off a single 4-connector peripheral molex run. The actual gauge of the wire from the PSU comes into effect, and then the voltage drop across the run.

The best thing to do is to get a PSU which has enough sata/molex connectors to power your drives, and then use the connectors which are closest to the PSU.

For example, with my build I have six passive backplanes; each backplane has a single molex connector and powers four drives. I have 3 molex runs from the PSU, and I use the first two molex connectors on each run to power two of the backplanes. The 1000W PSU has a single 12V rail. It has 6 peripheral/SATA ports on the back, which can be used with the modular SATA or peripheral cables. They sell additional SATA cables, but it came with all the cables I needed.

I probably could've gotten away with the 850W PSU, but then I would've needed to purchase some more molex cables... or possibly overload the existing ones. As it is I have 8 drives hanging off each PSU peripheral/sata port. And that leaves another 3 ports for SSDs etc.
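To put some rough (assumed) numbers on those per-connector limits - the per-drive currents below are generic ballpark figures for 3.5" drives, not datasheet values, and the wiring limit is the figure commonly quoted for 18 AWG runs:

```python
# Rough current check for a single molex/SATA power run. The per-drive
# figures are generic assumptions; check the actual drive datasheet.

SPINUP_12V_A = 2.0       # assumed per-drive spin-up draw on the 12 V rail
RUNNING_12V_A = 0.7      # assumed per-drive running draw on the 12 V rail
WIRE_LIMIT_A = 11.0      # rough limit often quoted for 18 AWG wiring

for drives in (4, 16):
    spinup = drives * SPINUP_12V_A
    running = drives * RUNNING_12V_A
    verdict = "fine" if spinup <= WIRE_LIMIT_A else "too much for one run"
    print(f"{drives:>2} drives: ~{spinup:.0f} A spin-up, ~{running:.1f} A running -> {verdict}")
```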


Motherboard - going for the X11SSL-CF + an M.2 adapter card. Thanks for the tip. Card choices:
Akasa - cheapest
Addonics - also quite cheap
Asus - slightly more expensive
SuperMicro AOC-SLG3-2M2, which can house two M.2 SSDs (costs 2.5x more too :)), but someone wrote (see post) that only one of the two M.2 ports actually works. Not sure it is worth it, even if it has the SuperMicro logo on it. What do you think? The board has a PCI-E x8 slot, so driving two SSDs could be OK

If the motherboard supports PCIe bifurcation, then you can use that adapter to run multiple M.2 cards. If your motherboard doesn't, then you need an adapter with a PCIe switch. I know SuperMicro's current X10 boards support bifurcation (i.e. Xeon-D and Xeon E5), but I'm not sure about the X11 E3 boards.

 

AndyN

Cadet
Joined
May 7, 2013
Messages
4
I've checked regarding PCI bifurcation, looks like the board I am going for doesn't support it. No problem - will save on the adapter and SSD.

I got the case and PSU yesterday and the SATA power cables are an issue - the connectors are too widely spaced for the HDD cage - see pic1 and pic2. I tried a different cage placement - if it is above the PSU, then it looks like a total no-go, but even in the alternate location next to the PSU, the cable would be in the way of the disks in the cage above it. What kind of SATA power cables did the Node 804 designers have in mind when they made this? I'll make some more measurements, but it looks like I'll return the PSU and look for a different one based on its SATA power cable spacing more than anything else... Unless you guys see a way out of this - I'd rather not bend the cable inward into the space between the drives - it is going to be hot there and it will likely touch the PCB and metal plate of the drives.
 

ChriZ

Patron
Joined
Mar 9, 2015
Messages
271
You can try using one SATA power cable for drives 1, 3, 5... and a second one for drives 2, 4, 6... etc.
That's what I have done on some occasions, though I admit I had sleeved cables on hand, which are easier to work with in this configuration.
 

AndyN

Cadet
Joined
May 7, 2013
Messages
4
I thought about the odd/even approach, but it doesn't look like it would be possible with these cables and available space. I need 50 mm spacing between the SATA power connectors, instead of the 120 mm provided by Seasonic.
Googled a bit and behold, salvation is here: https://store.cablemod.com/configurator/, for only $25 per quad SATA cable (+$20 shipping +EU import duties). Given that I need at least two and perhaps another one for the drives in the front section, I'm looking at $100 minimum. The case cost $120...not sure if I should laugh or cry at this point...such a trivial thing as SATA power cables tripped up the setup.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Why... don't you just bend the cables a little bit so the extra 70mm form an arc behind the drives?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I thought about the odd/even approach, but it doesn't look like it would be possible with these cables and available space. I need 50 mm spacing between the SATA power connectors, instead of the 120 mm provided by Seasonic.
Googled a bit and behold, salvation is here: https://store.cablemod.com/configurator/, for only $25 per quad SATA cable (+$20 shipping +EU import duties). Given that I need at least two and perhaps another one for the drives in the front section, I'm looking at $100 minimum. The case cost $120...not sure if I should laugh or cry at this point...such a trivial thing as SATA power cables tripped up the setup.

Here is what you do: use these: http://www.ebay.com/itm/LOT-OF-5-SA...t-replace-broken-connector-cable/252285533366

Then you put the connectors where you need them. I made all custom cables for the last build I did that wasn't in a rack mount case with hot-swap hard drive bays. Works like a champ. I suggest that you put no more than five on any single line from the power supply.
Since all the wires on your power supply are black, you need to be very careful to make sure you have the wiring done right.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I thought about the odd/even approach, but it doesn't look like it would be possible with these cables and available space. I need 50 mm spacing between the SATA power connectors, instead of the 120 mm provided by Seasonic.
Googled a bit and behold, salvation is here: https://store.cablemod.com/configurator/, for only $25 per quad SATA cable (+$20 shipping +EU import duties). Given that I need at least two and perhaps another one for the drives in the front section, I'm looking at $100 minimum. The case cost $120...not sure if I should laugh or cry at this point...such a trivial thing as SATA power cables tripped up the setup.

If you just need more connections and you don't want to build it yourself, these are well made:
http://www.ebay.com/itm/D-type-IDE-...male-Power-Connector-Cable-18AWG/172791351036

I bought two sets of them and have used them for quite a while now. This is the kind of connector that you don't need to worry about melting.
 