The Time has Come.....First FreeNAS Build


ikecomp22
Dabbler · Joined Nov 2, 2017 · Messages: 11
Hi All -

I've been lurking the forums for a couple of weeks and researching parts, and I believe I have a pretty good selection; I'm ready to build. I just have one question, which I'll get to below. I mainly plan to use the box as a NAS. I might set up Plex Media Server and one VM, but that's the most I plan to do.

Parts
  • Motherboard: Supermicro X10SL7-F
  • Processor: Intel Xeon E3-1231 V3
  • Chassis: Supermicro SC846E16-R1200B with BPN-SAS2-846EL1 (I may replace the power supplies with something lower-rated since it will be in a home environment, maybe 2x Supermicro PWS-721P-1R?)
  • Memory: Crucial 16GB (2 x 8GB) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3L 1600 (PC3L 12800) Server Memory Model CT2KIT102472BD160B - I will eventually expand this to 32 GB
Now to my question. The motherboard has four SATA 3Gb/s ports, two SATA 6Gb/s ports, and eight SAS2 ports. My question is: how would I connect the backplane (BPN-SAS2-846EL1) to the motherboard? Is it possible? If not, would I need something like an LSI 9211-8i, and if so, how would I connect it so it sees all 24 drives?

I read through the SAS primer in the Resources section, but obviously some things are still a little unclear to me, so I'm just trying to get direct answers for my particular build.

Thanks in advance, and let me know if I need to provide more information.
 

Chris Moore
Hall of Famer · Joined May 2, 2015 · Messages: 10,080
(I may replace the power supplies with something lower-rated since it will be in a home environment, maybe 2x Supermicro PWS-721P-1R?)
I would leave the power supplies that are in it. Having more capacity does not make them use more electricity. The chassis was built with high-capacity power supplies to accommodate 10,000 or 15,000 RPM drives being worked hard in a data center, plus the surge capacity to get all the drives spinning at startup. The system you describe will likely be a much lower load, so the supplies will only draw the power they need. There's not much point in spending extra money to replace them.
My question is: how would I connect the backplane (BPN-SAS2-846EL1) to the motherboard? Is it possible?
You need a reverse-breakout cable. Here: https://www.ebay.com/itm/24-2-Feet-...6-Pin-Reverse-Breakout-Cable-Red/222545474669
The system will work with just one, but using two gives you access to all the available SAS lanes, which could improve performance.
The backplane (BPN-SAS2-846EL1) is a SAS expander backplane. It works kind of like a network switch in that traffic to all 24 drives is passed through the eight lanes of connectivity in the two SFF-8087 ports on the back of the backplane. There should also be a third port on the backplane that acts as an 'output', which would let you cascade to another expander backplane, for example in a system with a 12-bay backplane in the back. Don't worry about that.
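Once everything is cabled, an easy sanity check from the FreeNAS (FreeBSD) shell is to ask the HBA what it sees behind the expander; a minimal sketch using the stock camcontrol tool (device names will vary):

    # List every disk visible through the LSI 2308 and the expander
    camcontrol devlist

    # Count the da* disk devices; it should match the number of populated bays
    camcontrol devlist | grep -c "da[0-9]"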
 

ikecomp22
Dabbler · Joined Nov 2, 2017 · Messages: 11

Thanks for the fast reply.

For the power supply, that makes sense. I saw it mentioned that there were quieter options in the Reddit post below, so I thought it was worth asking.
https://www.reddit.com/r/DataHoarder/comments/683wxv/question_about_a_supermicro_powersupply/

For the backplane, thanks for the explanation. I never thought of it like a network switch for traffic, but it makes sense. So basically, I could plug two cables into the back of the backplane, plug the other ends (eight connectors total) into the eight SAS2 ports on my motherboard, and I should be good to go. Plus, I would still have the six SATA ports left on the motherboard for OS or cache drives.
 

Chris Moore
Hall of Famer · Joined May 2, 2015 · Messages: 10,080
For the power supply, that makes sense. I saw it mentioned that there were quieter options in the Reddit post below, so I thought it was worth asking.
Well, if you want it quiet, that might be a reason to make some changes. I would start with the other fans in the chassis first, because the power supplies are not cheap. I replaced all the fans (except in the power supplies) in all three of my rack-mount chassis because the original fans were LOUD. I suggest getting the system where it is going to live, turning it on, and deciding for yourself how tolerable the sound is. I was happy with mine after changing just the main system fans: three 92mm fans on the fan wall between the drives and the system board, and two 80mm fans at the back of the system board. Your system fans may be different because there are many options for how the system could be configured.
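If you want numbers before deciding, the X10SL7-F's IPMI exposes the fan sensors, so you can see how fast the stock fans are actually spinning; a sketch assuming ipmitool is available (the BMC address and credentials below are placeholders):

    # Read the fan sensor records from the BMC (address and login are examples)
    ipmitool -I lanplus -H 192.168.1.100 -U ADMIN -P ADMIN sdr type Fan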
So basically, I could plug two cables into the back of the backplane, plug the other ends (eight connectors total) into the eight SAS2 ports on my motherboard, and I should be good to go. Plus, I would still have the six SATA ports left on the motherboard for OS or cache drives.
Yes, and this chassis has an option for mounting hard drives in the system board compartment that you can use as boot drives.
[Photo: boot drive mounting in the Supermicro SC846E16-R1200B] https://www.newegg.com/Product/Product.aspx?Item=N82E16816101828

As for the cabling: if you are standing at the front of the server (where the drives go) looking toward the back, the connectors on the backplane that you want to use are the two on the left. The one at the extreme left is the "first" one, then there is another just closer to center; both of those should go to the system board. The third port, even closer to center on the backplane, is the "output" that could cascade to another SAS expander.
[Photo: Supermicro BPN-SAS2-846EL1 backplane; in this photo, the connectors to use are the two on the right.]
 

Ericloewe
Server Wrangler, Moderator · Joined Feb 15, 2014 · Messages: 20,194
Why Haswell? Why not Skylake?

ikecomp22
Dabbler · Joined Nov 2, 2017 · Messages: 11

That makes sense. I'll try swapping the system fans first once I get my case in. Also, thanks for the layout information; I was looking at a PDF of the backplane and wasn't quite sure what the third port was used for.

I should get my motherboard by Friday, and I assume that once I flash the onboard controller (an LSI 2308 chip) to IT mode I should be good to start my build.
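For what it's worth, the usual approach is to boot to DOS or a UEFI shell with Supermicro's firmware package for the board and run LSI's sas2flash utility; a rough sketch of the typical sequence (the .ROM file name below is a placeholder; use the files from Supermicro's X10SL7-F IT-mode package):

    sas2flash -listall          # confirm the controller is found; note the current (IR) firmware
    sas2flash -o -e 6           # erase the existing flash; do not reboot until a new image is written
    sas2flash -o -f 2308IT.ROM  # write the IT-mode firmware (placeholder file name)
    sas2flash -listall          # verify the controller now reports IT firmware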
 

ikecomp22
Dabbler · Joined Nov 2, 2017 · Messages: 11
Why Haswell? Why not Skylake?

To be perfectly honest, I looked at the X11SSL-CF for a while, but the Haswell version was a little cheaper, and the RAM and CPU for the Skylake version of the board were also a little more expensive. If I feel the need to upgrade, I could always sell the motherboard, CPU, and RAM later, but this fits my needs for now. Thanks for weighing in, though.
 

Ericloewe
Server Wrangler, Moderator · Joined Feb 15, 2014 · Messages: 20,194
If it's something like 20-30 bucks total, Skylake might still be worth it, at least for the potential for 64GB of RAM.
 

Chris Moore
Hall of Famer · Joined May 2, 2015 · Messages: 10,080
Why Haswell? Why not Skylake?
Isn't DDR3 less expensive right now? I have not been keeping track of the prices, but someone told me that DDR4 RAM had really gone up.
I should get my motherboard by Friday, and I assume that once I flash the onboard controller (an LSI 2308 chip) to IT mode I should be good to start my build.
Sounds like a plan. Let us know how it goes. What are you planning for drives?
 

ikecomp22
Dabbler · Joined Nov 2, 2017 · Messages: 11
If it's something like 20-30 bucks total, Skylake might still be worth it, at least for the potential for 64GB of RAM.

Just taking the motherboard, the Haswell version was around $210 new, while the Skylake version of the same board was around $280 online. The CPUs were about the same price, and when I checked the scores on cpubenchmark.net, Skylake was 9707 and Haswell was 9631. The RAM was a similar story: 2x8GB of ECC DDR3 is about $200, and a single 16GB stick of ECC DDR4 is about the same price. While I could go with just one stick for now, I would prefer two for dual-channel support, and if I'm going to go Skylake I don't see the point of buying 2x8GB sticks of DDR4.
 

ikecomp22
Dabbler · Joined Nov 2, 2017 · Messages: 11
Isn't DDR3 less expensive right now? I have not been keeping track of the prices, but someone told me that DDR4 RAM had really gone up.

Sounds like a plan. Let us know how it goes. What are you planning for drives?

I'll be starting the build with 12x 4TB drives. Three will be 5400 RPM Seagate drives I already have in an external enclosure; the other nine will be WD Reds. I'll be using a RAIDZ2 config for the volume. I know the Seagate drives aren't necessarily NAS-rated, but I hope to mitigate the heat and vibration issues by spacing them out so there is at least one empty bay between them and the other drives. Hopefully that minimizes the wear and tear. Plus, I'm somewhat cheap, so I don't want to buy 12 brand-new drives at this point, lol.
 

Chris Moore
Hall of Famer · Joined May 2, 2015 · Messages: 10,080
All 12 disks in my Irene-NAS are Seagate Desktop drives and I have been happy with that for more than 5 years. I don't think there is anything to worry about.
I would not suggest putting all 12 disks in a single vdev.

 

ikecomp22
Dabbler · Joined Nov 2, 2017 · Messages: 11
All 12 disks in my Irene-NAS are Seagate Desktop drives and I have been happy with that for more than 5 years. I don't think there is anything to worry about.
I would not suggest putting all 12 disks in a single vdev.

Sent from my SAMSUNG-SGH-I537 using Tapatalk

Yeah, I've had pretty good success with Seagate drives; I haven't had one die before the five-year mark (knock on wood). The incentive to go with the Red drives was twofold: 1) I wanted my first build to be as close to a recommended build as possible, and 2) I had a 20%-off coupon for the WD store, so it seemed like the perfect time to buy drives in bulk.

As for the 12-drive vdev, why would you recommend against it? Most of the posts I've read say 12 drives should be OK, but any more than that might be pushing it. What is the maximum number of drives you would recommend for a RAIDZ2 config?
 

BigDave
FreeNAS Enthusiast · Joined Oct 6, 2013 · Messages: 2,479
You might find that this thread answers your question, especially cyberjock's post #8.
There's a big difference between OK and best practices/recommended practices.
 

ikecomp22
Dabbler · Joined Nov 2, 2017 · Messages: 11

Thanks for posting this. I just finished reading it, and it looks like I'll be going with an 8-drive RAIDZ2 config. One other thing that caught my eye was the mention of ZFS fragmentation, which I would like to avoid as much as possible. I was always curious why people don't recommend going beyond 80% pool utilization, and now I know. Is there a best-practices post out there for minimizing fragmentation, other than smaller vdevs and keeping utilization below 80%?
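For monitoring that later, ZFS reports both numbers directly; a minimal sketch, assuming a pool named "tank" (on FreeNAS the pool itself would normally be created through the GUI, and the device names below are just examples):

    # An 8-drive RAIDZ2 vdev, roughly what the GUI builds for you
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

    # Check allocation, free space, fragmentation, and capacity; keep capacity under ~80%
    zpool list -o name,size,allocated,free,fragmentation,capacity tank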
 

Evertb1
Guru · Joined May 31, 2016 · Messages: 700
Isn't DDR3 less expensive right now? I have not been keeping track of the prices, but someone told me that DDR4 RAM had really gone up.
I think it depends on the local market. Here in the Netherlands, ECC DDR4 is around 10% more expensive (as of today, and depending on the size), but that varies a lot by retailer. More importantly, ECC DDR3 is getting scarce and is often out of stock with the retailers. If I were building a new server today, DDR3 would no longer be my first choice.
 

ikecomp22
Dabbler · Joined Nov 2, 2017 · Messages: 11
Update 11/16

The case, motherboard, CPU, RAM, and hard drives are in. I tested the main parts on top of the motherboard box and everything booted fine, so last night I put everything in the case and launched memtest to validate the RAM; it has been running for about 14 hours with multiple successful passes and no issues. Next, I plan to put a drive in each hot-swap bay and test them all to make sure they work (I don't want to find out years down the line that I have bad bays when I go to fill them up).
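A common way to do that is a SMART long self-test on every drive once they are all seated; a minimal sketch, assuming smartmontools is available and da0 through da11 are the drives (device names will vary):

    # Start a long self-test on each drive; the tests run on the drives themselves
    for i in $(seq 0 11); do
        smartctl -t long /dev/da$i
    done

    # Hours later, check each drive's result and the usual trouble counters
    smartctl -a /dev/da0 | grep -E "result|Reallocated|Pending"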

One thing I did notice is that when I put a few hard drives in, one of the drives was seemingly picked at random and its light went solid blue. I even moved that drive to another bay, and it still stays solid blue while the others merely blink from time to time. I'm assuming this is normal. The chassis manual says a solid blue light on the drive caddy/tray means "SAS/NVMe drive installed". Does anyone know if this is normal with the Supermicro case and motherboard I have? Why aren't all the lights solid blue? I haven't flashed the LSI controller to IT mode yet, if that makes a difference.
 

Chris Moore
Hall of Famer · Joined May 2, 2015 · Messages: 10,080
I haven't flashed the LSI controller to IT mode yet, if that makes a difference.

If you are running FreeNAS, yes it makes a difference.
 

ikecomp22
Dabbler · Joined Nov 2, 2017 · Messages: 11
If you are running FreeNAS, yes it makes a difference.
Sorry, let me clarify. I have not loaded any OS on the machine yet. When I mentioned that I haven't flashed the controller to IT mode, it was strictly in the context of only one light on my drive trays staying solid blue, wondering whether that's because the controller is still in IR mode instead of IT mode. It stays like this even when I'm just sitting at the "An operating system wasn't found" screen.
 

Chris Moore
Hall of Famer · Joined May 2, 2015 · Messages: 10,080
Probably an artifact from the firmware. My system only flashes the lights on activity.

 