Pheran's 80TB FreeNAS Build with photos: Kaby Lake Edition

Pheran

Patron
Joined
Jul 14, 2015
Messages
280
Greetings FreeNAS community! This thread is a follow-up to my original Pheran's 32TB FreeNAS build with photos thread. Here I design and build a FreeNAS server with the following objectives, trying to show as much detail as I can:

1. Rock-solid, reliable hardware that is considered "approved" by the FreeNAS community
2. Powerful and relatively high-capacity with reasonable power consumption
3. Quiet!

This first post will basically be the parts list and some rationale for each part. I'll start with a summary and then go into details. I'll provide pricing for most parts; however, I cannot guarantee that you can replicate my prices because I take advantage of a lot of "Slickdeals" to get discounts.

Part Summary

Fractal Design Define R5 Case - $100
Fractal Design Dynamic GP-14 White Case Fan - $14
EVGA Supernova 550 G2 Power Supply - $71
Supermicro X11SSL-CF Motherboard - $219
Intel Xeon E3-1230 V6 Processor - $222
2x Crucial CT16G4WFD824A 16GB DDR4-2400 ECC UDIMM - $336
2x SanDisk Ultra Fit 32GB USB Flash Drive - $27
2x CableCreation Mini SAS SFF-8643 Host to 4x SATA Target 0.5m Cable - $27
Intel X550-T2 10G NIC (see below)
8x 10TB NAS Drives (see below)

Server Cost (no storage or 10G NIC) - $1016

Here's a photo of the parts, minus the memory, SATA cables, and NIC, which haven't arrived yet, and the USB flash drives, which I forgot.

viK4xEx.jpg


Case: Fractal Design Define R5

3ws6CH2.jpg


This is the same case I used for my first FreeNAS build, and I was very happy to use it again because this case is fantastic. You've got sturdy construction, 8x 3.5" drive trays, great noise isolation, filtered fans, a nice clean look, and just a level of quality that's difficult to put into words. One example is that the drives mount to the removable drive trays with rubber grommets that help absorb the vibration and noise from each drive. If you commandeer the 2x 5.25" bays you could even cram in a few more drives, though I don't do that. This case sits right next to my desk in the den - I'm literally sitting maybe 1m away from my previous build right now, and I barely even notice it's running. The Define R6 is out now, but it only includes 6 drive trays, and it's unclear how you would get more even though there's space for them. My only fear is that the nearly flawless R5 case will be discontinued because of the R6.

Fan: Fractal Design Dynamic GP-14 White

28kduSB.jpg


The Define R5 comes with two of these built in (front/rear), but there's space for another in the front and you definitely want it, because that's the cooling for half of the drive bays. It's super easy to flip down the filter and install the second fan. The one thing you need to know is that you must use the long fan screws that come with the R5 case; the short ones included with the fan won't work for this fan position. Also make sure to feed the fan power cable out of the top right corner; the upper fan's cable comes out of its bottom right corner, so the two cables will end up together.

Power Supply: EVGA Supernova 550 G2

2JoVsP9.jpg


The traditional wisdom on the FreeNAS forum is to use Seasonic power supplies, and I did that with my first build. Unfortunately Seasonic has stopped making power supplies at a reasonable wattage that also have a good number of SATA power connectors (8+), so I had to turn elsewhere. I chose this EVGA power supply because of my positive experiences with other EVGA products and their support, because it offers 9 (count 'em!) SATA connectors out of the box, and because of its perfect score at jonnyGURU.com. There is a bit of controversy over how this PSU handles its PWR_OK signal during brownout/power loss, but I don't care about this because this server will always be connected to a UPS. This power supply is fully modular, so you connect only the cables you need and leave the others in the thoughtfully provided fabric bag that comes in the box.

Motherboard: Supermicro X11SSL-CF

YVbkV5Z.jpg


My previous build used the Supermicro X10SL7-F; this board is essentially the same feature set, updated for socket 1151, which supports Skylake and Kaby Lake CPUs. You get 6 onboard SATA ports, plus an LSI (now Broadcom) 3008 RAID controller, which can support 8 SAS or SATA drives. This controller can be run in IT mode so that the RAID functionality doesn't get in the way of ZFS. You also get IPMI onboard, which is invaluable for managing the server without needing any monitor or keyboard; you can even manage it remotely as if you were sitting in front of it. Unlike the older board, this one doesn't provide 8 separate drive connectors for the LSI 3008; instead you get two SFF-8643 connectors, so you need two breakout cables that split each SFF-8643 out into 4 SATA connectors. A big advantage of socket 1151 systems over their predecessors is that they support 64GB of RAM instead of 32GB.

I chose this model because having 14 drive connectors gives me plenty of wiggle room in case I want to add something later (a fast SSD pool or boot device), or if I ever decide to virtualize FreeNAS I can do it properly by handing the LSI controller over to a VM. If you don't care about those issues, you could definitely cut costs by downgrading to the X11SSM-F, which simply provides 8 native SATA ports without any SAS controller chip. I'm intending to install a 10G NIC (discussed later), but if you want 10G without a separate NIC you could upgrade to the X11SSH-TF (8 SATA) or X11SSH-CTF (8 SATA + LSI 3008), both of which provide onboard 10GBASE-T ports using the Intel X550 chipset.

It's getting late and I need to get farther into the assembly to get some more photos, so I'm going to have to continue this in the next post.
Jump directly to part 2.
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
I have that power supply, and found out, eventually, that several of the SATA power connectors on the included cables were nonfunctional. I posted that in a thread here, which I now can't find. It might have been just an assembly problem with that batch of cables, but I got two of them that were bad in the same way: the SATA power connector on the end of the cable worked; one or more of the others on the same cable did not. The support experience with EVGA was not pleasant, but eventually I got replacement cables, which I have not tried yet.

Probably not a problem on yours, but something to watch for.

And agreed on the Seasonic cables: a Seasonic Focus 550+ includes only two SATA power cables, one with four connectors and one with two connectors. Seasonic did not answer email on where to find more cables.
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,230
Nice! I've been thinking it's time to upgrade my server, which is almost identical to your 32TB build, as I'm creeping up to the 80% storage level.

Looks like a lot of the components I'd started looking at for mine, although I was probably going to cheap out on 8x8TB drives. Is that the board with the built-in SAS controller, as I can't see many SATA ports?
 

Pheran

Patron
Joined
Jul 14, 2015
Messages
280
Nice! I've been thinking it's time to upgrade my server, which is almost identical to your 32TB build, as I'm creeping up to the 80% storage level.

Looks like a lot of the components I'd started looking at for mine, although I was probably going to cheap out on 8x8TB drives. Is that the board with the built-in SAS controller, as I can't see many SATA ports?

Yes, it is the board with the LSI/Broadcom 3008. Read the part about the SFF-8643 connectors. :)
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,230
Yes, it is the board with the LSI/Broadcom 3008. Read the part about the SFF-8643 connectors. :)

Sorry, only got as far as the last picture! I'll watch for further updates with interest....
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
2x Crucial CT16G4WFD824A 16GB DDR4-2400 ECC UDIMM - $???
Will 32GB be enough for an 80TB build? I can remember a rule of thumb: 1GB per TB.
 

Pheran

Patron
Joined
Jul 14, 2015
Messages
280
I'm back with part 2 of the parts guide!

CPU: Intel Xeon E3-1230V6

bBlTaN1.jpg


I chose the Xeon E3-1230 V6 for a number of reasons. First, I wanted the latest-generation Kaby Lake (Xeon V6) CPU as it has the most robust video encoding/decoding features, which should assist Plex with transcoding. My Plex server may even need to handle 4K transcodes, so I need all the horsepower I can get. The Xeon CPU also supports ECC RAM for the highest reliability with FreeNAS. I specifically chose the 1230-level E3 because it's the lowest E3 CPU that supports hyperthreading (essentially it's a Core i7 equivalent), which also gives a substantial boost to overall transcoding performance. Fair warning - there is a risk that an X11 motherboard with an older BIOS revision won't boot a Kaby Lake CPU without a BIOS upgrade. Fortunately I didn't run into this problem, as my motherboard already had the latest BIOS 2.0c on it.

Memory: 2x Crucial CT16G4WFD824A DDR4-2400 ECC UDIMM

JZIrccf.jpg


Since Micron is the only 16GB DDR4-2400 RAM even listed as tested for this board, I chose Crucial/Micron ECC RAM. Using ECC RAM will provide the highest reliability for FreeNAS and your data. I highly recommend using 16GB DIMMs as shown here, because that way you will be able to reach the maximum 64GB of the motherboard when you populate all 4 slots. The timing of this build was unfortunate for the memory, because DDR4 prices are high right now due to a shortage. I would have preferred to just load up with 64GB right away, but with these prices it doesn't make sense. Hopefully I can add another 32GB later when prices are lower.

Boot Device: 2x SanDisk 32GB Ultra Fit USB 3.0 Flash Drives

4NlwnGI.jpg


The FreeNAS OS is very small, so a simple flash drive makes a good boot device. Flash drives aren't as reliable as other media, so it's best to run them in a mirrored pair to reduce the possibility of boot device failure. I like these SanDisk Ultra Fit drives because of the tiny form factor - they'll practically be flush with the back of the case. So you can leave them installed without worrying about snapping off a USB stick, even if you need to move the server around for any reason. The other advantage of USB is that you won't consume any SATA ports - not a problem with this motherboard, but valuable if you go with a board that's limited to 8 or fewer SATA connections.
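
If you want to confirm the mirror is actually healthy once FreeNAS is installed, here's a rough sketch of a check you could run on the box (or just run the same zpool command by hand). It assumes the default FreeNAS boot pool name, freenas-boot; adjust if yours differs.

```python
# Sketch: verify the mirrored USB boot pool is healthy.
# Assumes the default FreeNAS boot pool name "freenas-boot"; adjust if yours differs.
import subprocess

def boot_pool_status(pool: str = "freenas-boot") -> str:
    """Return the raw `zpool status` output for the boot pool."""
    return subprocess.run(
        ["zpool", "status", pool],
        capture_output=True, text=True, check=True
    ).stdout

if __name__ == "__main__":
    status = boot_pool_status()
    print(status)
    if "DEGRADED" in status or "UNAVAIL" in status:
        print("WARNING: one of the USB boot sticks may have failed - check and replace it.")
    else:
        print("Boot pool looks healthy.")
```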

Drive Cables: 2x CableCreation MiniSAS SFF-8643 to 4 SATA 0.5m

K1HxEaz.jpg


The Supermicro X11SSL-CF only has 6 discrete SATA connectors; those are from the Intel chipset. The 3008 RAID controller provides two SFF-8643 ports, each of which can support 4 drives. In order to make use of these with individual drives (rather than a backplane/enclosure), you need SATA breakout cables. Each of these cables turns one SFF-8643 connector into 4 SATA connections.

Drives: 7x Seagate IronWolf 10TB and 1x HGST Deskstar NAS 10TB

0CbTLNK.jpg


WARNING: I do not recommend the use of Seagate IronWolf drives with FreeNAS! These storage drives are moving over from my original 32TB build, which was later upgraded with 8x 10TB IronWolf drives. Sadly, that upgrade turned into a mess. I don't know exactly where the problem lies, but my original 4TB HGST NAS drives were solid as a rock. The 10TB IronWolf drives, on the other hand, constantly time out and drop out of the RAIDZ2 volume, maybe once a week or so. Sometimes multiple drives will fail at once. They aren't permanently failed - if you reboot the server they come right back and do a quick resilver for the data they missed. I can only speculate that there's a bug either in the IronWolf firmware or somewhere in FreeBSD - but I lean toward the IronWolf, since other drives worked fine. I've started to slowly replace the IronWolf drives with HGST, but I've only been able to do one so far, hence the 7/1 drive mix. On the plus side, the HGST drives are logically exactly the same size as the IronWolf, so there's no problem swapping them out. If you want 10TB drives, I would recommend HGST NAS or WD Red. I don't know if this problem extends to other sizes of IronWolf drives, but I have no intention of ever using any others. It will be very interesting to see if this problem persists on my brand-new build with all-new hardware other than the drives. It would be wonderful if it went away, but I'm not counting on it.
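
For anyone trying to catch these dropouts in the act, here's a rough sketch that scans /var/log/messages for disk timeout/retry lines. The regex patterns are only approximations of typical FreeBSD CAM error messages, so treat them as assumptions and adjust them to match what your system actually logs.

```python
# Sketch: scan the FreeBSD system log for disk timeout/retry messages.
# The regex patterns are approximations of typical CAM error lines; adjust
# them to match what your own /var/log/messages actually contains.
import re
from collections import Counter

LOG = "/var/log/messages"  # assumed default log location
PATTERNS = [
    re.compile(r"\((da\d+):.*(timeout|timed out)", re.IGNORECASE),
    re.compile(r"\((da\d+):.*CAM status", re.IGNORECASE),
]

def count_disk_errors(path: str = LOG) -> Counter:
    """Count suspicious log lines per da device."""
    hits = Counter()
    with open(path, errors="replace") as f:
        for line in f:
            for pat in PATTERNS:
                m = pat.search(line)
                if m:
                    hits[m.group(1)] += 1
                    break
    return hits

if __name__ == "__main__":
    for dev, n in count_disk_errors().most_common():
        print(f"{dev}: {n} suspicious lines")
```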

NIC: Intel X550-T2 10Gbps

omrmfbn.jpg


The Supermicro X11SSL-CF comes with two onboard 1 Gbps network interfaces, and you can certainly use those. But the unfortunate truth is that gigabit networks can't even begin to keep up with a well-built NAS system. Consider that a modern high-density NAS drive can put out 100 MB/sec or more - that's 800 Mbps in networking terms. So that single drive can nearly saturate a gigabit (1000 Mbps) network port. This NAS has 8 of those drives working in tandem - gigabit doesn't even have a prayer of keeping up. Sure, it will work, but if you are wondering why all your SMB transfers are stuck around 110 MB/sec, that's because you are saturating the gigabit network. Link aggregation (using multiple 1 Gbps ports) doesn't really solve this problem either, because it only helps with multiple clients. Each client system is still limited to 1 Gbps. The real solution is 10-gigabit, but implementing it can be fairly expensive and sometimes complex for home users. I make it a priority because I frequently deal with full-fidelity Blu-ray or even UHD 4K rips on my server, so I'm copying around files in the 30-60 GB range. Using 10 Gbps makes this work well; it's painfully slow over gigabit connections. There are many options for 10G connectivity; I'm not going to go into all of them here but I'm happy to talk about them in more detail if anyone has questions.
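
To put some rough numbers on that, here's a back-of-the-envelope sketch comparing transfer times for the file sizes I mentioned. It assumes you can sustain the full link rate, which real-world SMB overhead will eat into, so treat the results as best-case estimates.

```python
# Back-of-the-envelope transfer time estimates, ignoring protocol overhead.
def transfer_time_seconds(file_gb: float, link_gbps: float) -> float:
    bits = file_gb * 8 * 1000**3         # decimal GB -> bits
    return bits / (link_gbps * 1000**3)  # link rate in bits/sec

for size_gb in (30, 60):
    t1 = transfer_time_seconds(size_gb, 1.0)
    t10 = transfer_time_seconds(size_gb, 10.0)
    print(f"{size_gb} GB: ~{t1/60:.0f} min at 1 Gbps, ~{t10/60:.1f} min at 10 Gbps")
```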

That's all for now, the next post will begin detailing the build process.
Jump directly to Part 3.
 

tarnar

Cadet
Joined
Apr 9, 2018
Messages
7
Hi Pheran, that's looking really slick. I was wondering if you could answer a simple question about the motherboard itself, specifically with the SAS connectors.

On which side of the SFF-8643 connector does the retention clip sit? Toward the back of the case, or toward the front of the case?

I'm asking because I'm looking to build a small form factor (short depth, either 2U or 3U) storage system and this motherboard is high on my list.

But the SATA breakout cable is a concern in the small form factor, it's probably going to go right up into where drives are going to be. So I'm looking into right-angle connectors, but want to be sure I know what I'm getting myself into. Hence, understanding which way the cable will go after the right-angle connector - toward the front or back of the case.

Worst-case, I might be able to get the cable made 'backwards' (which would make it a pain to disconnect, but that's not a showstopping concern).

P.S. I recently built a similar system with a plain X11SSM-F + LSI2308. FYI I was able to pick up some Kingston memory that isn't on the SuperMicro "blessed" list but it was on Kingston's compatible list for that motherboard. KVR24E17D8/16

edit: wait, that's an X11SSM-F, not an X11SSL
 

Pheran

Patron
Joined
Jul 14, 2015
Messages
280
Hi Pheran, that's looking really slick. I was wondering if you could answer a simple question about the motherboard itself, specifically with the SAS connectors.

On which side of the SFF-8643 connector does the retention clip sit? Toward the back of the case, or toward the front of the case?

Sorry it took me a while to reply, I haven't had time to work on my build lately, but I'm getting back to it. The retention clip on the SFF-8643 connector is towards the front of the case (the edge of the motherboard).
 

Pheran

Patron
Joined
Jul 14, 2015
Messages
280
OK, it's time to put this thing together. First, case prep and build. The first thing I did was to install the EVGA power supply, which is shown in the PSU photo in the first post; it fits nicely into the case with plenty of clearance toward the front. Next, I installed the I/O shield included with the Supermicro board onto the back of the case after snapping out some of the metal cutouts to match the motherboard connectors. Note to motherboard manufacturers: why do you still include D-sub VGA connectors in 2018??

yrq5DBt.jpg


The next step is the motherboard standoffs. Fractal Design includes a helpful plastic tool that lets you use a Phillips screwdriver to tighten the standoffs. I did have to remove a pre-installed standoff in the center of the case that Fractal apparently considers universal, but it doesn't match up with this Supermicro board. The standoff locations are coded with letters like "AMI" to indicate positions for ATX, MicroATX, and MiniITX, but the standards aren't consistent enough to depend on these; you are better off just examining the mounting hole layout on your motherboard.

DdJTzYX.jpg


Qe3XKBQ.jpg


After getting the second front fan installed, the case itself is good to go!

v3eQyZa.jpg


Now I mount the motherboard in the case. Some install guides recommend installing the CPU and/or memory before doing this, but I personally prefer that the motherboard be secured in place before dealing with those installs, even though it's a little less accessible once it's in the case. Sorry this photo is a bit overexposed.

47rW8U2.jpg


Now I pop open the CPU socket, remove the plastic insert, and install the shiny Xeon CPU. Make sure you get the orientation right: there's a small triangle on the corner of the CPU indicating the pin 1 location, and a corresponding triangle on the socket shield shown here. Getting this right will also make the small cutouts on the side of the chip line up correctly with the tabs in the socket.

LqiaUMX.jpg


Next, don't forget the CPU fan! In the old days, if you left this off the CPU would actually smoke or catch fire. Thankfully (disappointingly?), modern systems have thermal-limiting circuitry that just shuts down the system before the fireworks start. Let me issue a complaint to Supermicro here: you SUCK at documenting fan connectors. Sadly, this hasn't changed since the last generation of boards. You have 5 fan connectors on this board, FAN1-4 and FANA, and there is no clear documentation about what any connector is for. You might naturally think that since FANA is different from all the others, it must be for the CPU - nope. I believe the correct CPU fan connector, as on the X10 boards, is FAN1, though any connector will power up the fan. I just don't want the board to freak out because it thinks there's no CPU fan.

qhLzb3S.jpg


Next I installed the two 16GB ECC DIMMs into the blue sockets, which are the recommended slots if you only have two DIMMs. Finally I connected all of the power cables to the board, as well as the case power/LED and USB connectors. I had the same problem with the EVGA power supply as I did with the Seasonic PSU in my previous build - the main board power cable isn't long enough to route around the back of the board. I was able to route the additional 8-pin CPU power cable on the back. The 10G NIC isn't installed yet; I'm holding off until I qualify the memory and do any necessary firmware upgrades on the board. Yes, I know, the cabling could stand to be a little neater.

fWY4I23.jpg


That's all for now, coming up next is using IPMI to power up the beast and memory testing.
 

Pheran

Patron
Joined
Jul 14, 2015
Messages
280
Part 4 of the build is here already!

One thing I forgot to mention in the last build post: there are two options for connecting the 3 case fans. You can either connect them to fan headers on the motherboard, or to the fan controller that is built into the Define R5. The advantage of connecting them to the motherboard is that you can monitor and potentially control their speed through software. The advantage of using the fan controller is that you get a switch right on the front of the case that lets you select between low/medium/high fan speed. I don't get too excited about fan or temperature monitoring, so I use the fan controller, though I admit I just leave mine on high all the time because it's still quiet at that setting. The photo below is kind of a mess, but it does show the fan controller connections. The large connector on the left is the SATA power feed into the controller, and the two smaller connectors next to it, as well as the one off to the right, are the connections from each case fan.

Uw5TwSv.jpg


With that out of the way, it's time to actually power up! Since IPMI is so awesome, I resolved to never once connect this new build to a monitor or keyboard. For the initial tests, we only need 2 connections to the server - power and the IPMI network. There are 3 ethernet connections on this motherboard. The two next to each other are the actual data connections for the server - I left those completely disconnected for now. The third connection, above a couple of USB ports, is dedicated to IPMI. The thing to know about IPMI (Intelligent Platform Management Interface) is that it's run by a SoC (system on a chip) that is completely separate from the rest of the server. Even when the server is powered off, as long as the PSU is connected to power and the physical 1/0 switch on the PSU is flipped on, IPMI is up and running.

The IPMI interface will get an address from DHCP by default; the only problem is finding out what it is. Since I had it connected to a managed switch, I was able to look up the IPMI MAC address on my switch port and then check my router to see what IP address it handed to that MAC. Even if you don't have a managed switch, you may be able to check the client list on your router to see what new device showed up on the network. If not, you can always hook up a keyboard and monitor to access the BIOS and set the IPMI IP to something you know. Once I found the IPMI MAC, I was able to set up my router to hand out a static IP (192.168.1.8) to the IPMI port so I could easily access it going forward. When you first connect to IPMI with your web browser, you'll have to click through a certificate warning and then you'll see this.

nJKDnle.png
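
If you'd rather script the "find the BMC" step than eyeball your router's client list, here's a rough sketch that looks for Supermicro-looking MAC prefixes in the local ARP table. The OUI list is just a few prefixes I believe Supermicro uses, so treat it as an assumption and verify against the MAC on your board's IPMI sticker.

```python
# Sketch: look for a Supermicro BMC in the local ARP table.
# The OUI prefixes below are ones commonly attributed to Supermicro; verify
# against the MAC printed on your board/IPMI sticker before trusting a match.
import subprocess

SUPERMICRO_OUIS = ("00:25:90", "0c:c4:7a", "ac:1f:6b")  # assumed prefixes

def arp_table():
    """Return (ip, mac) pairs parsed from `arp -an` output (FreeBSD/Linux style)."""
    out = subprocess.run(["arp", "-an"], capture_output=True, text=True, check=True).stdout
    pairs = []
    for line in out.splitlines():
        parts = line.split()
        ip = next((p.strip("()") for p in parts if p.startswith("(")), None)
        mac = next((p for p in parts if p.count(":") == 5), None)
        if ip and mac:
            pairs.append((ip, mac.lower()))
    return pairs

if __name__ == "__main__":
    for ip, mac in arp_table():
        if mac.startswith(SUPERMICRO_OUIS):
            print(f"Possible IPMI interface: {ip} ({mac})")
```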


The default user/password is ADMIN/ADMIN, but please change the password after you get in. Right at the first screen, you get some useful info. You can see the IPMI firmware revision, the BIOS version of the server, the IPMI IP and the ethernet MAC addresses of all the interfaces.

kTAZ8V4.png
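
About changing that default password: you can do it from the IPMI web UI, but if you prefer doing it from a shell on another machine, here's a rough sketch that drives ipmitool over the network. It assumes ipmitool is installed and that the ADMIN account is user ID 2, which is typical on Supermicro boards but worth confirming with the user list first; obviously swap in your own IP and passwords.

```python
# Sketch: change the default ADMIN password on the BMC using ipmitool over LAN.
# Assumes ipmitool is installed locally and that ADMIN is user ID 2 on this board
# (verify with the `user list 1` output first). Replace the placeholder values.
import subprocess

IPMI_HOST = "192.168.1.8"     # the BMC's IP from the previous step
OLD_PASSWORD = "ADMIN"        # factory default
NEW_PASSWORD = "use-something-long-here"
ADMIN_USER_ID = "2"           # assumption - confirm with `user list 1`

def ipmi(*args: str) -> str:
    cmd = ["ipmitool", "-I", "lanplus", "-H", IPMI_HOST,
           "-U", "ADMIN", "-P", OLD_PASSWORD, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("user", "list", "1"))   # sanity check: which user ID is ADMIN?
    ipmi("user", "set", "password", ADMIN_USER_ID, NEW_PASSWORD)
    print("Password updated - log in to the web UI with the new one to confirm.")
```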


The feature I most frequently use is under the Remote Control menu and is called iKVM/HTML5. This will give you a window that is the console of your server without requiring any special software (like Java) on your client system.

YEpyNvx.png


So the blank iKVM window is pretty boring, since the server is off. But go to the Power Control menu and select Set Power On to make the magic happen. You are now in full control of the console of your server with no need for a keyboard, mouse or monitor attached to it. In fact it could be in another state or country as long as you have IP access to the IPMI interface.

au6PLhd.png


There is one downside to using the iKVM/HTML5 interface. Since it only needs a browser, there's no way to use it to mount remote media on the server, as that requires a higher level of access to your client computer. I use iKVM/HTML5 whenever I need routine console access, but for installations, we need a little more. So I'm going to switch to the Console Redirection feature on the Remote Control menu. This requires that you have Java installed on your client and that you click through a number of security prompts. The key feature you can see in this console application is that first menu - Virtual Media.

hoz2Zbo.png


I want to run a memory test on the server to verify that my 32GB RAM is good. Originally I was going to try to be fancy and use IPMI virtual media to boot the test, but for some reason I had trouble getting it to boot correctly this way in UEFI mode. So, I just made a USB stick with memtest86 7.5 on it and plugged it into the server. The server was still defaulting to legacy boot (I'll have to see if I can turn that off), so I used F11 to get to the boot menu and explicitly chose to boot the USB stick in UEFI mode. If you don't do this you end up with the legacy memtest86 4.3.7 application.

W0BIWEm.png


I'm going to leave this memory test running for a while, so this will be the end of this part. Until next time, may your data always stay safe!
 

tarnar

Cadet
Joined
Apr 9, 2018
Messages
7
Thanks for the updates Pheran.

Re: IPMI, there's a menu option in there to use an SMB mount to boot ISO images, so you don't need to use the Java iKVM.

I'm moving forward with a bigger version of the system I posted on here (thread on low power/small form factors) using the same motherboard as you. I hope the SATA breakout cables fit, as I didn't end up ordering right-angle ones.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Will 32GB be enough for an 80TB build? I can remember a rule of thumb: 1GB per TB.
Above about 16GB of RAM, the memory is all about cache for the filer functions of the NAS. Cache might not help a lot if you are constantly doing something different. It depends on the work the filer is doing, but in most home use cases, having a large amount of RAM doesn't really help because you are not often looking at the same file. You can look at the ARC Hit Ratio in your reporting tab to see how often the data stored in the ARC (Adaptive Replacement Cache) is being referenced.
upload_2018-6-9_9-42-11.png
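
If you'd rather pull that number from a shell instead of the reporting graph, here's a rough sketch that computes the ARC hit ratio from the FreeBSD ZFS counters. It assumes the standard kstat.zfs.misc.arcstats sysctls are present, which they should be on stock FreeNAS, but check with `sysctl kstat.zfs.misc.arcstats` if in doubt.

```python
# Sketch: compute the ARC hit ratio from FreeBSD sysctl counters.
# Assumes the standard kstat.zfs.misc.arcstats.{hits,misses} sysctls exist.
import subprocess

def sysctl_value(name: str) -> int:
    out = subprocess.run(["sysctl", "-n", name], capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

if __name__ == "__main__":
    hits = sysctl_value("kstat.zfs.misc.arcstats.hits")
    misses = sysctl_value("kstat.zfs.misc.arcstats.misses")
    total = hits + misses
    ratio = 100.0 * hits / total if total else 0.0
    print(f"ARC hit ratio since boot: {ratio:.1f}% ({hits} hits / {misses} misses)")
```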
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Greetings FreeNAS community! This thread is a follow-up to my original Pheran's 32TB FreeNAS build with photos thread. Here I design and build a FreeNAS server with the following objectives, trying to show as much detail as I can:

1. Rock-solid, reliable hardware that is considered "approved" by the FreeNAS community
2. Powerful and relatively high-capacity with reasonable power consumption
3. Quiet!
Very nice photos. I hope we get to see more.
I'm going to leave this memory test running for a while, so this will be the end of this part. Until next time, may your data always stay safe!
Is that memory test done yet?
 

Pheran

Patron
Joined
Jul 14, 2015
Messages
280
I'm happy to say that this server is finally stable after dealing with 2 years of the Seagate Ironwolf 10TB nightmare. I just upgraded it from FreeNAS 11.2U4.1 to 11.2U6 with no issues. I've also got some storage and memory upgrades planned for it! Soon 80TB will become 112TB, courtesy of WD Gold 14TB drives. What I don't know yet is if I'm going to have problems with the obnoxious SATA power disable feature on the WD Golds, but it seems likely. I already had to mod one of my SATA connectors for an HGST drive; I may need to mod the rest as well.
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,230
Sorry to hear of the problems, as I'd missed the other thread. I'm about to buy a very similar system, although I'd pretty much decided on 8x 8TB Seagate drives. This has certainly pushed me away from the 10TB model. Do you still have to flash the SAS controller with this board, and is it the same as the X10 boards?
 

Pheran

Patron
Joined
Jul 14, 2015
Messages
280
Sorry to hear of the problems, as I'd missed the other thread. I'm about to buy a very similar system, although I'd pretty much decided on 8x 8TB Seagate drives. This has certainly pushed me away from the 10TB model. Do you still have to flash the SAS controller with this board, and is it the same as the X10 boards?

Yes, you need to flash the onboard controller to IT mode. It's similar to the X10 boards, just a newer generation; the X11 boards with onboard controllers have the LSI 3008 instead of the LSI 2308. If I were you, I'd avoid Seagate like the plague and go with WD or HGST. The IronWolf debacle proved to me they can't be trusted for NAS drives.
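
For reference, before trusting the controller with a pool you can check which firmware it reports. Here's a rough sketch using Broadcom's sas3flash utility, assuming it's installed and on the PATH; the actual erase/flash steps and the IT firmware image come from Supermicro's downloads for this board and aren't automated here.

```python
# Sketch: check whether the onboard SAS3008 reports IT or IR firmware.
# Assumes Broadcom's sas3flash utility is installed and on the PATH; the actual
# flashing steps (erase + write the IT firmware from Supermicro) are not automated here.
import subprocess

def controller_report() -> str:
    """Return sas3flash's detailed report for the detected SAS3 controller."""
    return subprocess.run(
        ["sas3flash", "-list"],
        capture_output=True, text=True, check=True
    ).stdout

if __name__ == "__main__":
    report = controller_report()
    print(report)
    # Crude check: the Firmware Product ID line normally ends in (IT) or (IR).
    if "(IT)" in report:
        print("Controller appears to be running IT firmware - good for ZFS.")
    else:
        print("Controller may still be on IR firmware - flash the IT image before building a pool on it.")
```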
 

John Doe

Guru
Joined
Aug 16, 2011
Messages
635
nice pictures!
 