BUILD Advice and comment welcome on my proposed FreeNAS build

Status
Not open for further replies.

trionic

Explorer
Joined
May 1, 2014
Messages
98
Build’s Name: TBD
Operating System/Storage Platform: FreeNAS
CPU: Intel Xeon E3-1230 V3 BX80646E31230V3 3.30GHz
Motherboard: Supermicro X10SLL-F Socket 1150
Chassis: X-Case RM 424 Pro with SAS Expander & SGPIO Backplane
Drives: 12x Western Digital 3TB WD30EFRX Red (already have these) + 12x Western Digital 4TB WD40EFRX Red
RAM: 32GB ECC unbuffered/unregistered, but brand currently UNKNOWN (sourcing Supermicro HCL RAM in the UK is not straightforward)
Add-in Cards: 1x IBM M1015
Power Supply: Corsair Professional Series AX1200 High Performance 1200W Modular '80 Plus Gold' (from an existing server)
Other Bits: 2x 3ware 8087-8087 Multilane cable
Usage Profile: NAS media and backup server.

I will be building a rack-mount NAS media server. The general theme is similar to many of the projects I have read about on these forums. However, building one of these machines is unfamiliar to me despite having built plenty of PCs in the past. I was hoping that some of the experts here would cast their eye over the draft spec and highlight any problems or improvements.

I *think* that I have picked components that are well matched but as always the devil's in the detail. I am really looking forward to building and using this server and want to avoid show-stopper mistakes.

Of particular interest is the choice of RAM, the SAS backplane/M1015 combination and whether I should in fact use the newer M1115s instead.

Thanks for your time :)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Your parts look good. To be honest, if you can't find Supermicro HCL RAM, most name brands that meet the unbuffered/unregistered requirement and have ECC will work. This isn't ideal, and HCL RAM is definitely recommended, but sometimes you have to settle for a little less than you want.

I'm an M1015 guy. I see nothing that makes me want to get an M1115 unless it's significantly cheaper at buying time than the M1015. ;)

Supermicro cases are king with stuff like this. Unfortunately, the 24 drive cases are often $1000+. But, on ebay you can often find a used server for like $200-300. You can buy it, yank out the crappy CPU/motherboard/RAM and put your stuff in it. I don't know about you, but I'll take a used Supermicro case over any other brand's "new" case. Might be a little beat-up physically, but they are just amazing cases nonetheless. Plus, it might be cheaper!

I do have one concern though.. your RAM limit.

You're limited to 32GB of RAM on that board. You may be maxing it out right now, but you are about to drop 84TB of raw disk space in it. That's not ideal for the 1GB of RAM per TB of disk space rule of thumb. If this is for home use and you don't plan to do things like run tons of torrents while streaming 5 movies over Plex and run VMs on your box, you will probably be fine.
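To put rough numbers on that rule of thumb (just the drive counts from the original post):

[PRE]
12 x 3TB + 12 x 4TB = 36TB + 48TB = 84TB raw
1GB of RAM per 1TB of disk  ->  ~84GB of RAM suggested
Board maximum               ->  32GB
[/PRE]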

In the future you will almost certainly get bigger disks, so you might not be doing yourself much of a favor with a limit of 32GB of RAM. If you aren't concerned about this eventuality, you may be okay with 32GB of RAM. But, it will suck if you build this thing and find out that 32GB of RAM won't cut it. You'll be forced to go back and buy a new CPU, RAM, and motherboard. So you may want to examine some of the Socket 2011 boards as they support far more than 32GB of RAM.

Just something to think about...
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It's typically cheaper to buy the X10SL7-F, which already includes an LSI 2308 controller (the PCI-e 3.0 version of the LSI 2008 used in the M1015), than a "regular" motherboard plus an M1015. If you end up going for an LGA 2011 motherboard, the Supermicro X9SRH-7F (Gigabit LAN) and X9SRH-7TF (10Gb LAN) seem to be the closest things in that segment.
 

trionic

Explorer
Joined
May 1, 2014
Messages
98
Thanks for the advice everyone :)
Supermicro cases are king with stuff like this. Unfortunately, the 24 drive cases are often $1000+. But, on ebay you can often find a used server for like $200-300. You can buy it, yank out the crappy CPU/motherboard/RAM and put your stuff in it. I don't know about you, but I'll take a used Supermicro case over any other brand's "new" case. Might be a little beat-up physically, but they are just amazing cases nonetheless. Plus, it might be cheaper!
I'll look into used Supermicro chassis. New however, they're £250 ($420) more than the XCase chassis which is well regarded.

I do have one concern though.. your RAM limit. You're limited to 32GB of RAM on that board. You may be maxing it out right now, but you are about to drop 84TB of raw disk space in it. That's not ideal for the 1GB of RAM per TB of disk space rule of thumb. If this is for home use and you don't plan to do things like run tons of torrents while streaming 5 movies over Plex and run VMs on your box, you will probably be fine.
Well, the server might run Transmission from a jail, or at least be on the receiving end of torrents downloaded by a different server on a 100Mbit/sec line. It'll certainly be streaming HD video, although probably only one stream at a time. It could be handling multiple simultaneous inbound backups plus a CrashPlan outbound backup. I guess that's enough workload to demand more memory.

In the future I will either replace the 3TB disks with 4TB units or (more likely) build up, as required, a JBOD enclosure with 24x 4TB disks. That could eventually amount to some serious capacity, and 32GB just won't cut it.

This build is actually my second attempt at a NAS server. The first was made from consumer parts in a tower chassis and used FlexRAID (whoops). After the build I quickly realised that I had gone in completely the wrong direction, and I am keen to avoid such mistakes. Like you say, I don't want to find in a year's time that I must replace the motherboard and memory with the stuff that I should have bought first time around.

The old server will be re-built into a 4U case and run Windows 7 with medium workload.

So, more than 32GB it is. I'll start with a single 32GB RDIMM and add more as I add drives and the bank balance recovers!

If you end up going for an LGA 2011 motherboard, the Supermicro X9SRH-7F (Gigabit LAN) and X9SRH-7TF (10Gb LAN) seem to be the closest things in that segment.
Nice pickup on the X9SRH-7F. I was working my way through the LGA 2011 'boards and hadn't yet got to that one. 256GB RAM support with RDIMMS, IPMI, on-board 8-port SAS and supported by FreeBSD 9.1 (not 9.2).

So, revised spec:
Operating System/Storage Platform: FreeNAS
CPU: Intel Xeon E5-2609 S2011 Sandy Bridge Quad Core 2.4GHz
Motherboard: Supermicro X9SRH-7F LGA 2011
Chassis: X-Case RM 424 Pro with SAS Expander & SGPIO Backplane
Drives: 12x Western Digital 3TB WD30EFRX Red + 12x Western Digital 4TB WD40EFRX Red
RAM: Initially 1x 32GB DDR3-1066 RDIMM, with upgrades as required. Brand unknown
Add-in Cards: none
Power Supply: Corsair 1200w (from an existing server)
Other Bits: 2x 3ware 8087-8087 Multilane cable

A few further questions:
  • Is the CPU excessive? The E5-2605 runs at 1.8GHz and costs about £70 less
  • The 8x SAS2 (LSI 2308, 6Gb/s) ports eliminate any need for an IBM M1015, right? 'Cos that saves £100 right there. If so, I think I can still use the 3ware 8087-8087 multilane cables...?
  • Sourcing this stuff in the UK is not easy. I can get the motherboard (just) but not the memory... dare I buy used memory? o_O
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Maybe 16GB RDIMMS are large enough to allow for any required growth (dunno about RDIMM pricing, though).

I can't find a Xeon E5-2605 at Intel's website, are you sure you got that right? As for the 2609, are you looking at the original (Sandy Bridge) or the v2 (Ivy Bridge)? If the price difference isn't excessive, Ivy Bridge should cut power consumption and slightly improve performance. Since you were already looking at an E3-1230v3, the E5-2609s should be good enough (somewhat slower though, lower clocks but larger L3 cache, same number of cores).

As for the LSI2308, yes, it replaces the M1015. The M1015 uses the older version, the LSI2008. The difference is that the 2308 adds PCI-e 3.0 support (I guess it removes a bottleneck in some cases). Both are very well supported (same driver in fact, I believe), just be sure to flash to IT mode. Unfortunately, you need the reverse breakout cables, since Supermicro decided to expose the individual channels. Someone else will have to tell you exactly which cables you need, I'm something of an SAS noob.
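A quick way to sanity-check the result once you've flashed (a minimal sketch, assuming LSI's sas2flash utility is to hand, e.g. on a DOS/EFI boot stick or in the FreeNAS shell):

[PRE]
# List every LSI SAS2 controller found, with its firmware type and version.
# The flashed card should report an IT (initiator-target) image rather than IR (RAID).
sas2flash -listall
[/PRE]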
 

trionic

Explorer
Joined
May 1, 2014
Messages
98
Maybe 16GB RDIMMS are large enough to allow for any required growth (dunno about RDIMM pricing, though).
16GB RDIMMS x8 = 128GB. If I expand this thing out to a 24x 4TB JBOD enclosure then by the 1GB RAM/1TB disk rule I need at least 180GB. Not sure it'll ever get that mental though...
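(For what it's worth, that 180GB figure comes straight out of the same arithmetic: the 84TB already planned plus a 24x 4TB JBOD enclosure is 84 + 96 = 180TB raw, so roughly 180GB of RAM by the 1GB/TB rule.)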

I can't find a Xeon E5-2605 at Intel's website, are you sure you got that right? As for the 2609, are you looking at the original (Sandy Bridge) or the v2 (Ivy Bridge)? If the price difference isn't excessive, Ivy Bridge should cut power consumption and slightly improve performance. Since you were already looking at an E3-1230v3, the E5-2609s should be good enough (somewhat slower though, lower clocks but larger L3 cache, same number of cores).
My mistake - I meant the 2603:
http://www.scan.co.uk/products/inte...d-core-18ghz-64gt-s-qpi-10mb-cache-80w-retail

I was indeed looking at the original Sandy Bridge Xeon when I should have listed the v2 Ivy Bridge. The price difference is a princely £0.91 ($1.54) :D

As for the LSI2308, yes, it replaces the M1015....Unfortunately, you need the reverse breakout cables, since Supermicro decided to expose the individual channels.
Good news (M1015 replaced)... bad news (cluttered breakout cables). I liked the two-cable solution provided by the SAS backplane and M1015.

Hmmmm... so the 8 ports on the board are SATA ports? That means I'd need an M1015 anyway in order to control the rest of the drives. In which case I may as well use the M1015 to control them all and benefit from tidier cabling. Unless I have that all wrong...

I'm something of an SAS noob.
Me too! :)

Every time I think I have this SAS thing understood I get confused again, even when I read excellent threads like this.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
16GB RDIMMS x8 = 128GB. If I expand this thing out to a 24x 4TB JBOD enclosure then by the 1GB RAM/1TB disk rule I need at least 180GB. Not sure it'll ever get that mental though...

Don't worry about following that rule to the letter, it's mostly for smaller capacities. At those amounts of RAM, the experts here will certainly be able to push you in the right direction (More RAM vs L2ARC).

As for the processor, it's slower than the E3 Xeon originally considered, but it should be enough.

Hmmmm... so the 8 ports on the board are SATA ports? That means I'd need an M1015 anyway in order to control the rest of the drives. In which case I may as well use the M1015 to control them all and benefit from tidier cabling. Unless I have that all wrong...
1 connector, though. Supermicro opted to expose all 8 channels. All you have to do is use a reverse breakout cable that connects four SAS channels on the motherboard to 1 SFF-8087 socket (this one carries 4 channels). M1015s use two SFF-8087 sockets instead of 8 traditional SATA sockets. Connect the individual channels to the expander(s) as necessary and you're done (Theoretically, at least. SAS gets really confusing at times...).
 

panz

Guru
Joined
May 24, 2013
Messages
556
Thanks for the advice everyone :)

So, more than 32GB it is. I'll start with a single 32GB RDIMM and add more as I add drives and the bank balance recovers!

I have 32 GB of RAM and it's plenty for Plex media server transcoding for the whole family, an audio server, backups of documents, etc.


Intel Xeon 1230 V2.
Supermicro X9SCM-F.
32 GB of Kingston KVR16E11/8 ECC RAM.
Gooxi RM4024-660-BX 4U Rackmount Chassis with 6Gb SGPIO backplanes and three 120mm, temperature-controlled, hot-swap PWM fans with warning function support; the SGPIO Mini SAS backplane requires 6 SFF-8087 cables to connect all 24 drives.
Corsair HX650 80 Plus Gold.
IBM M1015 SAS/SATA Controller card (flashed to IT mode).
Intel RES2SV240 SAS/SATA Expander.
APC SMART 1000 (serial cable).
LSI CBL-SFF8087SB-05M Mini SAS cables (with sideband).
Corsair Voyager USB Flash drive CMFUSB2.0-8GB.
12x Western Digital 3TB WD30EFRX Red disks.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I have the RM 424 Pro without the expander backplane. With a Supermicro motherboard it works well, because of the 120 mm temp-controlled fans: they run silent at 1250 rpm, with plenty of airflow.

Supermicro cases have 80 mm fans: they're too loud :)

There's also a SERIOUS tradeoff in sucking power between 80mm fans and 120mm fans. Read my thread where I was overheating my hard drives *because* I was using 120mm fans. There's a reason why the expensive case has 80mm fans and the cheap one has 120mm fans. ;)
 

trionic

Explorer
Joined
May 1, 2014
Messages
98
I'll get in touch with XCase and find out what the mmH2O figure is for the 120mm fans that they use. They look quite thick, but almost certainly not thick enough for a 120mm fan to generate the required pressure drop.

For those looking for cyberjock's fan thread, it's here.

Revised spec for CPU and memory.
Operating System/Storage Platform: FreeNAS
CPU: Intel Xeon E5-2609v2 4 core BX80635E52609V2 2.5GHz
Motherboard: Supermicro X9SRH-7F LGA 2011
Chassis: X-Case RM 424 Pro with SAS Expander & SGPIO Backplane
Drives: 12x Western Digital 3TB WD30EFRX Red + 12x Western Digital 4TB WD40EFRX Red
RAM: Initially 2x Hynix Server Memory 16GB DDR3 PC3-12800 (1600) - HMT42GR7MFR4C-PB
Add-in Cards: none
Power Supply: Corsair 1200w (from an existing server)
Other Bits: Cables currently unknown

And I can get all these bits in the UK :)

So... do we think we have a final system spec? (Pending clarification on the point below)

All you have to do is use a reverse breakout cable that connects four SAS channels on the motherboard to 1 SFF-8087 socket (this one carries 4 channels). M1015s use two SFF-8087 sockets instead of 8 traditional SATA sockets. Connect the individual channels to the expander(s) as necessary and you're done
<sas-n00b>
If the ports on the Supermicro motherboard are SATA ports then just one device can be connected to each of them. So the 8 ports will control 8 devices, leaving (in a 24 drive enclosure) 16 devices requiring some other form of connection... such as an M1015. Is it correct to say that I'll still have to buy at least one M1015?
</sas-n00b>
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
1 connector, though. Supermicro opted to expose all 8 channels. All you have to do is use a reverse breakout cable that connects four SAS channels on the motherboard to 1 SFF-8087 socket (this one carries 4 channels). M1015s use two SFF-8087 sockets instead of 8 traditional SATA sockets. Connect the individual channels to the expander(s) as necessary and you're done (Theoretically, at least. SAS gets really confusing at times...).
Keep in mind to use a reverse breakout cable and not a forward breakout cable.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
<sas-n00b>
If the ports on the Supermicro motherboard are SATA ports then just one device can be connected to each of them. So the 8 ports will control 8 devices, leaving (in a 24 drive enclosure) 16 devices requiring some other form of connection... such as an M1015. Is it correct to say that I'll still have to buy at least one M1015?
</sas-n00b>

If the enclosure has expanders, you just connect it "normally" to the motherboard (to four connectors instead of a single big one, with a big one on the enclosure end). If not, you will need additional controllers or provide your own expander. Since yours provides an expander, here's what your setup would look like:

This is a formal warning that bad ASCII Art lies ahead.


[PRE]
************************************
*       Supermicro X9SRH-7F        *
*                                  *
*  SATA ports (from PCH):          *
*   S  S  S  S  S  S               *
*                                  *
*  SAS ports (from LSI SAS2308):   *
*   S------------------------------*---+
*   S------------------------------*---+   4 channels   ****************    n channels        __________
*   S------------------------------*---+----------------* SAS expander *---------------------|          |
*   S------------------------------*---+                **************** (semi-arbitrary n)  |  n HDDs  |
*                                  *                                                         |__________|
************************************
[/PRE]

Note that the LSI 2308's ports are SAS. The single-drive connector is identical between SATA and SAS.

Some more complicated topologies exist. You might get two expanders (and can thus use 8 channels instead of only 4, adding bandwidth - not a real issue with mechanical drives).
Your case implies you get pretty much what my drawing says.
There's also the cheaper version that doesn't have the expander. In that one, you'd need to provide your own expander or two additional LSI 2008/2308s.

While we're at it, can anyone give me a logical explanation of why reverse and forward breakout cables are physically different?
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Mystery solved, thank you!

Seems like they could've easily avoided this when creating the SAS specification. Oh well, one thing to keep in mind.
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
Mystery solved, thank you!
No problem, happy to help. Before you asked, I didn't know why, just knew that there were two types of cables. Glad to have the push to look up the reason.
 

panz

Guru
Joined
May 24, 2013
Messages
556
There's also a SERIOUS tradeoff in sucking power between 80mm fans and 120mm fans. Read my thread where I was overheating my hard drives *because* I was using 120mm fans. There's a reason why the expensive case has 80mm fans and the cheap one has 120mm fans. ;)

I don't agree. The RM4024 has these fans:

http://www.chenghome.com.tw/exec/product.php?mod=show&cid=9&pid=CHD12012&lg=E

Here are some good photos:

http://hardforum.com/showthread.php?t=1745790&page=2

A friend of mine has the Supermicro SC846BA-R920B: it is damn loud and my hard drives run 2° C hotter in his case than in mine.

I live in the South Mediterranean and temperatures and humidity here reach high levels. I tested the RM 424 in a small closet with 4 kW vented heating pushing hot air. Ambient temp was 38° C during a 9-day stress test, with my hard disks (WD Red 3TB) running this test 24/7:

badblocks -svw -b 4096 -t 0xFF -t 0x00 -t 0xFF /dev/daX
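(For reference: -w makes it a destructive write test, -b 4096 uses 4KiB blocks, the three -t options write and verify the 0xFF, 0x00 and 0xFF patterns in turn, -sv shows progress, and daX is each disk in turn.)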

HDs' temperature never reached 40° C. The daughterboard fan controller works well with Supermicro's BIOS, regulating the fan speed. They're not quiet, but not as loud as Supermicro's.
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
Panz, can you hear your friend's SM case in other rooms? Curious about the noise as I'm looking at case options for a 12 drive setup.
 

panz

Guru
Joined
May 24, 2013
Messages
556
Panz, can you hear your friend's SM case in other rooms? Curious about the noise as I'm looking at case options for a 12 drive setup.

I can give you an idea of the difference with these two videos. I know it's a crude way to compare them, but it gives a feel for the real difference when you're in front of them.

Supermicro

http://youtu.be/Ym0uHos09Cc


RM 424 Pro

http://youtu.be/nyb8tIQ8duo


I live in a little house under the roof: the RM424 server is near my bedroom. I can't hear it. My friend with the Supermicro chassis had to place it in the basement ;)
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
Thanks Panz. The SM sounds like the HP ProLiant DL180 G6 in my basement... I can hear it in every room of my house, lol. I have other servers and network gear in the rack that make noise, but not enough to hear outside the basement area. I like the quietness of your chassis. Might get that to put my new SM motherboard in and migrate my drives over.
 