Use current hardware or shoot for new?


miercoles13

Cadet
Joined
Jun 16, 2012
Messages
6
Hello Everyone,

I'm a somewhat new FreeNAS user; outside of limited VM testing and some sub-par hardware implementations, I haven't really gone all out with it. I've been doing a lot of research,
but I'm stuck at a point where I definitely need some third-party opinions. I currently have the following hardware running file services for VMware through iSCSI,
but I would really like to move everything into a new Norco RPC-3216 3U case with a redundant power supply:

4Core1600P35-WiFi+
http://www.asrock.com/mb/Intel/4Core1600P35-WiFi+/

Intel Core2 Quad Processor Q6600
(8M Cache, 2.40 GHz, 1066 MHz FSB)
http://ark.intel.com/products/29765...cessor-Q6600-(8M-Cache-2_40-GHz-1066-MHz-FSB)

RocketRAID 2340
http://www.highpoint-tech.cn/USA/rr2340.htm

8GB G.Skill DDR2 1066

4x 1TB drives
6x 750GB drives
2x 300GB 10K RPM drives

This motherboard already has a failed NIC and appears to be limited to 16GB of RAM. I would like to have two or three gigabit NICs
in LACP mode, with another NIC dedicated to management (there's a rough sketch of what I'm picturing at the end of this post). Long story short, I'm considering a complete replacement of the hardware with the following items:

Quad-port Intel server NIC (PCI-E 2.0) http://www.newegg.com/Product/Product.aspx?Item=N82E16833106050
Intel S1200BTL ATX server motherboard, LGA 1155 socket http://www.newegg.com/Product/Product.aspx?Item=N82E16813121525
Intel Xeon E3-1225 Sandy Bridge 3.1GHz http://www.newegg.com/Product/Product.aspx?Item=N82E16819115088
Crucial 8GB 240-pin DDR3 1333 memory http://www.newegg.com/Product/Product.aspx?Item=N82E16820148436

I would keep the RocketRAID card along with the hard drives, and think about one or two SSDs for future expansion, but that decision can probably wait until I figure out which direction to go with the hardware. I'd appreciate any comments and/or recommendations.
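For reference, here's a rough sketch of the LACP setup I'm picturing on the FreeBSD side (interface names and the address are just examples, and the switch ports need to be configured for LACP as well):

    # bundle two gigabit ports into a single LACP lagg interface
    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport em0 laggport em1
    ifconfig lagg0 inet 192.168.1.10 netmask 255.255.255.0 up
    # a third port (em2 here) stays out of the lagg as the dedicated management interface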
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi miercoles13,

Here are my thoughts:

If you can swing a complete replacement of the hardware, do it.

I currently run an AMD Phenom II 940 (4 cores, 3.0GHz) in my filer and had the chance to swap it for a Supermicro board with a Q6600 in it (a server board, so it used DDR2-667 RAM). I thought it would be great, at least as fast as the 940. I was wrong. I don't know if it was the 600MHz-per-core clock speed decrease or the cut in memory speed, but it cost me close to 100MB/s in internal disk speed (as measured by the standard "dd" test) and ~50MB/s in CIFS speed. The 940 was quickly swapped back in.
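In case you haven't run it, the "standard dd test" is just something like this from the FreeNAS shell (the path and size are examples; make the file at least a couple of times bigger than your RAM so caching doesn't inflate the numbers):

    # write ~50GB of zeros to the pool, then read it back
    dd if=/dev/zero of=/mnt/tank/ddfile bs=2048k count=25000
    dd if=/mnt/tank/ddfile of=/dev/null bs=2048k
    rm /mnt/tank/ddfile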

As for your new hardware:

Quad port NIC: seems a bit of overkill. The Intel board you selected has 2 ports that presumably use Intel chips, so why not add a $30 Intel "CT" card for the third interface? You could also head over to serversupply.com and pick up an HP NC360T (a rebranded Intel "PT" dual-port card) for about $50.00 if you need more gig-e ports.

Motherboard: I see no problems with the board. That said, if you are so inclined, take a look at the Supermicro socket 1155 boards here:

http://www.newegg.com/Product/Produ...50001655 600136967&IsNodeId=1&name=SuperMicro

I got an MBD-X9SCL+-F to use in my ESXi box and I gotta say, I adore that board.

Processor: That's a lot of proc for a filer. I use an i3-2100 in my Supermicro board and couldn't be happier. Since I'm running it on a Cougar Point server platform, the i3 supports ECC memory.

Memory: Are you planning on getting 2 sticks? I would encourage you to, so you will have dual-channel memory. You can get 4x 4GB ECC sticks for about $120.00.

Personally, I would toss the RocketRAID card and get a proper LSI SAS HBA. You can find IBM M1015 cards for around $100 that you can "dumb down" with a firmware flash. Get 2 and you have enough ports to support all 16 drive bays in the Norco. Use the 2 6Gb/s SATA ports on the board to drive any SSDs you get. Pick up some SFF-8087 to SFF-8087 cables from monoprice.com:

http://www.monoprice.com/products/subdepartment.asp?c_id=102&cp_id=10254&cs_id=1025410

if you get the M1015s; it will make a really clean install.

-Will
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
Does the board support PCIe 1.1 or 2.0? Just wondering why you chose a PCIe 2.0 NIC but only a PCIe 1.0 SATA card.
 

miercoles13

Cadet
Joined
Jun 16, 2012
Messages
6
@Survive,

I have reviewed your recommendations and begun exploring Supermicro options, and I'm liking a lot of what's available. I'm a bit concerned about their memory support, but I have made a few changes to my list that dropped my expected cost by a little over $400. Going from the quad NIC to a dual NIC and changing the Xeon chip to an i3 were a couple of good ideas that don't compromise expected performance. I do have some questions about the M1015; it seems like an awesome piece of hardware. Some of the specs say it can support up to 16 drives, but it only has 2 mini-SAS connectors visible. I believe the NORCO case I'm envisioning would require 2 cards running 2 mini-SAS connectors each, as you mentioned. The firmware flash to IT mode seems interesting; I'm guessing it lets the card pass connected drives straight through to the OS without needing JBOD. I'll have to consider that one and find a good place to compare prices.

@Joshua Parker,

I'm guessing you're referring to the list I posted of what I was looking into. Yes, the board supports PCIe 2.0, but the RocketRAID card is something old I had lingering around.

The new list I'm looking at is as follows:

Norco RPC-3216 3U http://www.newegg.com/Product/Product.aspx?Item=N82E16811219034
Rail kit http://www.newegg.com/Product/Product.aspx?Item=N82E16811997301
NC360T - PCI EXP Dual Port Gigabit Server Adapter http://www.serversupply.com/NETWORKING/NETWORK INTERFACE CARD (NIC)/2 PORT/HP/NC360T.htm
SFF-8087 to SFF-8087 Internal Multilane SAS Cable http://www.ipcdirect.net/servlet/Detail?no=216
Mini Redundant Power Supply Zippy 500w MRW-5500 V4V http://www.ipcdirect.net/servlet/Detail?no=272
2x AVEXIR Server Series 8GB DDR3 1333 http://www.newegg.com/Product/Product.aspx?Item=9SIA0ST09A9867
Supermicro MBD-X9SCM-F-O LGA 1155 Intel C204 Micro ATX http://www.newegg.com/Product/Product.aspx?Item=N82E16813182253
Intel Core i3-2100 Sandy Bridge 3.1GHz http://www.newegg.com/Product/Product.aspx?Item=N82E16819115078

Yes, it's a lot of crap, but for my environment, my ESX servers, and my needs, I'm willing to spend this much.
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi miercoles13,

If you go with a Supermicro board, I would strongly suggest you get memory off their "Tested Memory List," available here:

http://www.supermicro.com/support/resources/mem.cfm

I really like this place for RAM:

http://www.superbiiz.com/query.php?...rom+current+results&ob=r&myanchor=#displaytop

There's a good 4-part series on the M1015 available here:

http://www.servethehome.com/ibm-m1015-part-1-started-lsi-92208i/

Flashing to "IT" mode is covered in part 4. You will want to do this to make the card a plain old "dumb" HBA that gives all the control of the drives over to FreeNAS.

That card will support a ton of drives if you use what's called a "SAS expander", but the Norco case you picked out doesn't have an expander; it has a backplane that takes an SFF-8087 connection on one side and splits it into 4 SAS/SATA ports on the other. Nothing wrong with that, it's far better than a rat's nest of SATA cables! Serversupply has them in stock starting at $75.00; just be sure the card you order has a bracket.

-Will
 

miercoles13

Cadet
Joined
Jun 16, 2012
Messages
6
Oh, I did a search on that RAM but did not find it at such a price; great link, you have saved me another valuable dollar, my friend. Funny, I have been reading that M1015 site most of this morning along with a few other examples; it seems to be a popular card indeed. I'm also excited about the possibility of using it with a SAS expander for future projects, but right now I will face what is in front of me. I really think I'm set with the items I plan to buy, expecting $1,525 or so plus tax and shipping costs. I'm glad I did my comparison versus buying pre-built solutions like Synology and such. Now the most important part is to review everything and prepare my justification to the wife on why this is important and I must have it ;)
 

miercoles13

Cadet
Joined
Jun 16, 2012
Messages
6
Well, the hardware arrived yesterday and I put most of the pieces together. I just need to go through flashing the M1015 cards to IT mode. I'll be running benchmarks to tweak my performance over the next few days, if not the week.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
Well, the hardware arrived yesterday and I put most of the pieces together. I just need to go through flashing the M1015 cards to IT mode. I'll be running benchmarks to tweak my performance over the next few days, if not the week.
Let us know how it turns out. I'm looking to pick up a M1015 myself sometime, so I'm particularly interested.
 

miercoles13

Cadet
Joined
Jun 16, 2012
Messages
6
Well, I got the system up and running and have been testing different scenarios. Setting up the M1015s went great thanks to the link survive posted. I found out the hard way that I could not flash their firmware with them in my motherboard; apparently this is a known issue with some boards. I switched them one at a time into another PC and flashed them there, and got all drives hooked up and recognized by the system/OS. I'm playing around with different ideas on how to share out some drives to VMware and CIFS the rest to my domain.
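For anyone curious, confirming the drives were visible was just a matter of running this from the FreeNAS shell (output will obviously differ per system):

    # list every device CAM sees; drives behind the flashed M1015s show up via the mps(4) driver
    camcontrol devlist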
 

miercoles13

Cadet
Joined
Jun 16, 2012
Messages
6
Now that the whole system is running, I currently have 12 drives as follows:
2x 30GB SSDs as cache (internal, non hot-swappable)
6x 750GB + 4x 1TB drives as RAIDZ2, which I thought would be the most logical at the time; but since I have 10 extra hot-swappable bays available, I'm investigating some options. I have also seen a lot of folks talk about making several smaller RAIDZ groups part of a larger volume, for quicker recovery and faster IOPS.

I'm in the process of examining the costs associated with what I would like to do. Currently I'm looking at 12 of the WD 1TB RE4 drives and considering allocating them in the following way:

4x 1TB RAIDZ1, 3TB usable (already owned)
4x 1TB RAIDZ1, 3TB usable (to be purchased)
4x 1TB RAIDZ1, 3TB usable (to be purchased)
4x 1TB RAIDZ1, 3TB usable (to be purchased)
4x 750GB RAIDZ1, 2.2TB usable (already owned)

I would pull 2 of the 750GB drives to be repurposed in another future build, and set myself up so I could upgrade that last group to 1TB drives in the future. All of these groups would make up one volume of, I'm guessing, 14.2TB (4x 3TB + 2.2TB). My reasoning behind this is to increase IOPS and get extra redundancy while having the ability to easily expand the volume; roughly speaking, it would come together like the sketch below.
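FreeNAS would build this through the GUI, but from the command line the layout above would boil down to something like this (pool and device names are just placeholders):

    # one pool made of five raidz1 groups: four groups of 1TB drives, one of 750GB drives
    zpool create tank \
      raidz1 da0 da1 da2 da3 \
      raidz1 da4 da5 da6 da7 \
      raidz1 da8 da9 da10 da11 \
      raidz1 da12 da13 da14 da15 \
      raidz1 da16 da17 da18 da19
    # the two internal SSDs as L2ARC cache devices
    zpool add tank cache ada0 ada1
    # later, the 750GB group can be grown by swapping its drives for
    # 1TB ones and resilvering after each swap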
 