BUILD 24 Drive Build - Looking for feedback and suggestions


pmccabe

Dabbler
Joined
Feb 18, 2013
Messages
18
Hello folks,

So my current NAS solution is nearing capacity, and I'm looking to build something that will last me well into the future. I've settled on FreeNAS as the OS, and I've put together a build that I'm hoping to get some feedback on. The primary uses for this server are storing lots of personal HD videos, pics, etc.; storing and streaming HD content to several set-top boxes; acting as a backup server for several PCs; and running the occasional virtual machine for testing.

Here is what is currently on my list of components.
1. Case
NORCO RPC-4224 4U Rackmount Server Case (http://www.newegg.ca/Product/Product.aspx?Item=N82E16811219038)
This has lots of drive bays, which should leave plenty of room for future upgrades.

2. Motherboard
GIGABYTE GA-X79-UP4 LGA 2011 Intel X79 SATA 6Gb/s (http://www.newegg.ca/Product/Product.aspx?Item=N82E16813128562)
Again, with expandability in mind, I'm looking for a motherboard that can support at least 64GB of RAM (going by the 1GB of RAM per 1TB of storage rule — rough math below).

3. CPU
Intel Core i7-3820 Sandy Bridge-E 3.6GHz (3.8GHz Turbo Boost) LGA 2011 130W Quad-Core (http://www.newegg.ca/Product/Product.aspx?Item=N82E16819115229)
There aren't really many choices for the LGA 2011 socket.

4. Hard Drives
6x Western Digital Caviar Green 3TB SATA3 3.5" 64MB Cache
6x Western Digital Caviar Green 2TB SATA3 IntelliPower 64MB Cache

Note, I already have 3 WD 2TB Green drives, so instead of wasting them I figured I would get 3 more.

5. Power Supply
CORSAIR HX Series HX850 850W ATX12V 2.3 / EPS12V 2.91 SLI Ready CrossFire Ready 80 PLUS GOLD (http://www.newegg.ca/Product/Product.aspx?Item=N82E16817139011)

6. RAM
G.SKILL Ares Series 32GB (4 x 8GB) 240-Pin DDR3 SDRAM DDR3 1600 (http://www.newegg.ca/Product/Product.aspx?Item=N82E16820231559)
This leaves me with 4 free DIMM slots to upgrade to 64GB in the future.
===================================================================
For the FreeNAS configuration, my plan is to create 2 vdevs of 6 drives each in RAIDZ2 and add them to a single zpool. The plan is to eventually add 2 more vdevs of similar configuration to the zpool as the need arises.
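To sanity-check the layout (and the 1GB per 1TB rule above), here is the rough back-of-the-envelope I'm working from. It's just a sketch in Python that assumes RAIDZ2 costs two drives of parity per vdev and ignores metadata/slop overhead and the TB vs. TiB difference:

# Rough capacity / RAM estimate for the planned pool layout.
# Assumption: RAIDZ2 loses two drives per vdev to parity; other overheads ignored.
vdevs = [
    {"drives": 6, "size_tb": 3},  # 6x WD Green 3TB
    {"drives": 6, "size_tb": 2},  # 6x WD Green 2TB
]

raw_tb = sum(v["drives"] * v["size_tb"] for v in vdevs)
usable_tb = sum((v["drives"] - 2) * v["size_tb"] for v in vdevs)

print(f"Raw capacity:       {raw_tb} TB")    # 30 TB
print(f"Usable (approx):    {usable_tb} TB")    # 20 TB
print(f"RAM by 1GB/TB rule: {raw_tb} GB")    # ~32GB now; 64GB leaves headroom

Adding two more similar vdevs later would push raw capacity somewhere into the 50-65TB range depending on drive size, which is where the 64GB RAM ceiling starts to matter.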

Now, for my questions.
1. Is there any concern with the hardware in terms of compatibility with the motherboard? Has anyone successfully set up FreeNAS with these components?
2. I understand the Norco case comes with backplanes that have SFF-8087 connectors. Is there a converter cable or something that I can run from the SATA ports on the MB to the mini-SAS ports on the backplanes? The only cables I was able to find were mini-SAS to 4x SATA, which is not what I need. Or would I be better off purchasing an add-in card such as this instead? http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112
3. Is there any substantial advantage to using the WD Red drives over the Greens?
4. Is the power supply enough to support 24 drives in the future?

Any other suggestions or advice is appreciated.

Thanks so much.

Pat
 

Phlox

Cadet
Joined
Mar 5, 2013
Messages
3
Wow, this is pretty close to my upcoming build, except I was going with an Asus mobo and Corsair RAM, with a 4220 case and a 3930K six-core.
To answer some of your questions:
2) You'll need a controller card for the SFF-8087s. HighPoint makes cheap non-RAID ones, but they supposedly don't have FreeNAS drivers yet (I haven't checked myself).
3) Reds supposedly do 24/7, so I suspect they use better bearings, closer to enterprise class. The standard low-duty-cycle bearings in the Greens can spin out their lubricant over time (but it'll seep back in after they sit, to trick you).
4) The 4224 uses 5 Molex connectors to power the backplanes; any 750W should do it. Dual 12V rails or a mid-to-high-end single rail is recommended, and enable staggered spin-up if you can to handle the voltage drawdown.
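Just to illustrate why staggered spin-up matters, here's a very rough worst-case estimate. The per-drive figures are my guesses at typical 3.5" numbers, not datasheet values, so check the actual WD specs:

# Very rough PSU headroom estimate for a 24-drive chassis.
# The per-drive figures below are assumptions, not measured values.
drives = 24
spinup_w = 25        # assumed ~2A on the 12V rail per drive during spin-up
idle_w = 5           # assumed idle draw for a 5400RPM "green" class drive
system_base_w = 100  # assumed CPU/board/RAM/fan budget

all_at_once = system_base_w + drives * spinup_w
staggered = system_base_w + (drives - 1) * idle_w + spinup_w  # last drive spins up while the rest idle

print(f"All drives spinning up at once: ~{all_at_once} W")  # ~700 W
print(f"With staggered spin-up:         ~{staggered} W")    # ~240 W

Either way, a quality 750W unit with a strong 12V side has plenty of margin.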

You might want to add a real NIC (an Intel one).
It's worth noting that this is my first foray into servers; my background is desktops and workstations.
-Mike
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm fairly close to what you are asking. I have that case, those drives, and that power supply. Here's my advice:

1. Get a smaller power supply. 850W is still a hell of a lot for 24 drives. I have 18 WD 2TB Green drives at the moment, and the whole system runs under 200W at the wall at idle and under 300W during a scrub.
2. Those 24 drive cases are awesome. Do yourself a favor and never ever put any hard drives in it that are 7200RPM or faster. You'll cook the drives fairly quickly unless each hard drive has empty slots above and below. This means using the top and bottom rows would be bad.
3. You need a SATA to SFF-8087 "reverse breakout" cable. They look the same as regular (forward) breakout cables, but don't mix them up; one won't work in place of the other. Here's the one I bought: http://www.amazon.com/dp/B002MK7F0Y/?tag=ozlp-20
4. In theory, WD Red should (keyword: should) be better than WD Green in a NAS environment. The rumor is that the only difference is that the Red has different firmware (aside from the Red having 1 more year on its warranty). My 18 Green drives have been running 24x7 since 2010, and I had 1 flaky drive about 2 months ago that I swapped out. Google and read up on the wdidle.exe tool and use it on those Green drives. Mine are set to 300 seconds. If you get confused as to what it does and whether you should disable it, just use it and set them to 300 seconds. Otherwise, go find my lengthy post in the forum somewhere as to why you'd better use it for NASes. ;)
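If you want to see whether the head parking is actually hurting you before and after running wdidle, something like this rough sketch will pull the Load_Cycle_Count attribute for each drive. It assumes smartmontools is installed and that your drives show up as /dev/ada0, /dev/ada1, etc. (adjust the device names to your system):

# Rough sketch: print Load_Cycle_Count (SMART attribute 193) for each drive.
# Assumes smartmontools is installed; device names are FreeBSD-style adaX.
import subprocess

def load_cycle_count(device):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == "193":  # 193 = Load_Cycle_Count
            return int(fields[-1])         # raw value is the last column
    return None

for i in range(6):  # adjust for however many drives you have
    dev = f"/dev/ada{i}"
    print(dev, load_cycle_count(dev))

If that number is climbing by thousands a day, the idle timer is parking the heads way too aggressively.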
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi insolent,

I would imagine they are going with socket 2011 because of the memory capacity. Socket 2011 boards seem to top out at 256GB of RAM if you can afford 32GB DIMMs.

That said, guys, why aren't you looking at the Supermicro 2011 boards? I would think the built-in dual Intel i350 gigabit NICs would be reason enough, but you should also get a better slot layout for all the controllers you will wind up using.

-Will
 

insolent

Cadet
Joined
Jun 8, 2011
Messages
7
I should have said X79 instead of 2011, but survive is catching my drift. For what you're putting together, it looks like it might not be a bad idea to check out the server-grade hardware.

While it can be slightly intimidating with ECC registered memory and whatnot, you may be able to put together something more desirable since you aren't paying enthusiast-grade prices. That was the direction I was considering, but then I wussed out because I knew I'd probably never buy 24 drives at 100 bucks a pop... at least not all at once.

I woke up with a brutal headache, so if anything looks like I'm cracked out as I write this... I apologize.
 

Phlox

Cadet
Joined
Mar 5, 2013
Messages
3
My reason for not going with a Supermicro or Tyan is they tend to be, as I understand it, finicky about the CPU and RAM choices, and appear to do some weird things mounting- and heatsink-wise. An X9SRA would be great, but I don't trust products that ignore industry specs (ATX and the Intel 2011 socket).
Also, if I wanted the same processor as an E5 to work with a proper server board, it'd be double the price, and I might have to make modifications to my case. The enthusiast-level boards also have more/larger/wider-spaced PCIe slots, giving you more flexibility in HBA or NIC choices. Some of them, like the Asus I plan on using, even have an Intel NIC (not a great one, but still).
ECC and remote management tech would be nice, but not worth the added $1500, time requirement, and pain of incompatibilities to me.

-Mike
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi Phlox,

You might want to re-think things or at least not discount Supermicro & other server-grade gear so quickly.

Let's take things in order:

Yes, there is a memory compatibility list, just like Asus & everyone else has for their boards... this is just so vendors & users can refer to a list of memory that really, really should work. Memory gets a bit more specific with servers because... well, people want to know that the darn thing is going to work. It could very well work just fine with some G.Skill super-OC sticks, but if it's on the list you know that it will.

You really only see the weird CPU heat sinks when you are dealing with 1U & 2U cases where it's a bit tougher to manage the heat or a retail box HSF simply can't fit. If you are going with something like a 4U case, a retail HSF should fit just fine.

If you look at the X9SRA page on Supermicro's site, it says that it will even "Support Intel Core i7 Extreme / Performance LGA 2011 processors with Non-ECC UDIMM only", so you might actually give up some of the desired memory density if you want to run the non-Xeon chips.

Funny thing is, the X9SRA is only $30 more than the OP's chosen Gigabyte.

-Will
 

Phlox

Cadet
Joined
Mar 5, 2013
Messages
3
Thanks for responding.
The X9SRA (I mention it because I just found it, and it's a serious competitor for my build now) seems to have a proprietary Supermicro heatsink mount on the board. I'm trying to determine if this is true. A number of the (few) comments or reviews mention a proprietary mount, but I'm not sure if this is just people not recognizing the narrow 2011 mount or if it actually is proprietary.

The SRA is actually $30 less than my chosen board, which helps with the increased RAM cost to go ECC. Of course, the processor price difference depends on the outcome of my 'how fast' and 'speed vs. cores' debates. There aren't many 40-60TB machines out there as examples, and it's a serious possibility to reach that territory in the long term.

Sorry pmccabe for hijacking your thread.

-Mike
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I just wanna pipe up and say that I understand your concerns, Phlox, but those are a little incorrect. I will agree that if you've never used the hardware, it's hard to understand what some writers are trying to convey when they write about their problems, but if you've used the hardware you will often understand that their "issues" aren't "issues" in all circumstances. By far, I recommend server-grade hardware if you want exceptional reliability and compatibility with FreeNAS/FreeBSD. After seeing 2 or 3 people lose data from bad RAM, I'll never build a FreeNAS production server without ECC RAM ever again.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I would imagine they are going with socket 2011 because of the memory capacity. Socket 2011 boards seem to top out at 256GB of RAM if you can afford 32GB DIMMs.

X9DR7-TF+... 768GB, and it has 10Gb Ethernet too. With two procs, it will do up to 256GB of relatively inexpensive 16GB 1600MHz or 384GB of cheap 16GB 1066MHz memory. Or 768GB if you have cash to burn on 32GB modules.

I don't know about the X9SRA specifically, but the easy solution would seem to be to buy the Supermicro heat sink. They're generally well designed. As with many "server" build issues, the manufacturers typically try to stick with standards, but if there's a compelling reason to bend or break a rule, it can and will be done, and in return you usually get something out of it. For example, the board I list above has the extra memory slots to go to 768GB, but if you use them, you lose some memory speed (and there are other complex rules as well). Many other manufacturers don't support that; even Supermicro doesn't on most of their boards.
 