BUILD SM X10SL7 32TB build, comments?

Status
Not open for further replies.

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
Having some second thoughts about certain aspects of this build.

1. Rather than using RAIDZ2 with one cold spare, I'm thinking of keeping that spare warm so I can resilver remotely. Or should I simply go RAIDZ3? How much does performance suffer?

2. Any way of getting 11 drives in a Define R4 (is the 2x5.25" bay section big enough for 3x3.5" drives?) or must I get something like the R2 XL monster?

3. Are there any issues with splitting drive manufacturers three ways, WD/Seagate/Hitachi? (The latter are more expensive, but the Backblaze report suggests they /might/ be better.)

i
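
For what it's worth, a quick back-of-the-envelope comparison of the two layouts (a minimal sketch in Python; the 4 TB drive size is an assumption read off the 32 TB title, and the figures are raw marketing TB with no ZFS metadata or slop space accounted for):

[CODE]
# Usable-capacity comparison for an 11-bay build, assuming 4 TB drives.
DRIVE_TB = 4

def usable_tb(total_drives, parity, spares=0):
    """Usable capacity of a single RAIDZ vdev after parity and spares."""
    return (total_drives - spares - parity) * DRIVE_TB

# Option A: 10-wide RAIDZ2 vdev plus one warm spare in the 11th bay.
print("RAIDZ2 + spare:", usable_tb(11, parity=2, spares=1), "TB usable")
# Option B: 11-wide RAIDZ3, no spare.
print("RAIDZ3:        ", usable_tb(11, parity=3), "TB usable")
[/CODE]

Both options come out at 32 TB usable, so capacity doesn't decide it; the real question is resilver exposure (Z2 plus spare must survive a resilver after a second failure) versus the small parity overhead of Z3.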
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
Having some second thoughts about certain aspects of this build.

1. Rather than using RAIDZ2 with one cold spare, I'm thinking of keeping that spare warm so I can resilver remotely. Or should I simply go RAIDZ3? How much does performance suffer?

2. Any way of getting 11 drives in a Define R4 (is the 2x5.25" bay section big enough for 3x3.5" drives?) or must I get something like the R2 XL monster?

3. Are there any issues with splitting drive manufacturers three ways, WD/Seagate/Hitachi? (The latter are more expensive, but the Backblaze report suggests they /might/ be better.)

i
I just changed my case to the ARC XL. My plan is to add two 3x3.5" cages in the future for 14 drives total: 2x7 vdevs in RAIDZ3. I have not found RAIDZ3 to be a problem for what I'm trying to do, which is mostly streaming.
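
As a rough sketch of that layout's trade-off (same hypothetical 4 TB drives as in the sketch above; a single wide vdev gives more space, but random IOPS scale roughly with the number of vdevs):

[CODE]
# Capacity cost of splitting 14 bays into two RAIDZ3 vdevs, assuming 4 TB drives.
DRIVE_TB = 4

def pool_tb(vdevs, width, parity):
    # Each vdev loses `parity` drives to redundancy; usable space sums across vdevs.
    return vdevs * (width - parity) * DRIVE_TB

print("1 x 14-wide RAIDZ3:", pool_tb(1, 14, 3), "TB usable, 1 vdev of IOPS")
print("2 x 7-wide RAIDZ3: ", pool_tb(2, 7, 3), "TB usable, 2 vdevs of IOPS")
[/CODE]

For sequential streaming either layout is plenty; the narrower vdevs mainly buy shorter resilvers and more random IOPS at the cost of 12 TB of space in this example.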
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Having some second thoughts about certain aspects of this build.

1. Rather than using RAIDZ2 with one cold spare, I'm thinking of keeping that spare warm so I can resilver remotely. Or should I simply go RAIDZ3? How much does performance suffer?

2. Any way of getting 11 drives in a Define R4 (is the 2x5.25" bay section big enough for 3x3.5" drives?) or must I get something like the R2 XL monster?

3. Are there any issues with splitting drive manufacturers three ways, WD/Seagate/Hitachi? (The latter are more expensive, but the Backblaze report suggests they /might/ be better.)

i

1) The most likely reason not to go RAIDZ3 is simply performance; it's a bit lower in Z3.

3) I view a heterogeneous array as best practice to protect against certain types of failures (a bad manufacturing batch, etc.). Make sure the characteristics are closely matched, though; don't mix 5400 and 7200 RPM drives unless you like pessimistic behaviours.
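
To make the "pessimistic" part concrete: in a RAIDZ vdev a full-stripe read or write touches every member, so the slowest disk paces the whole vdev. A toy illustration (the MB/s figures are invented for the example, not measurements):

[CODE]
# Toy model: RAIDZ sequential throughput ~ (data disks) x (slowest member).
def vdev_stream_mb_s(per_disk_mb_s, parity):
    data_disks = len(per_disk_mb_s) - parity
    return data_disks * min(per_disk_mb_s)

all_7200 = [160] * 11             # eleven hypothetical 7200 RPM drives
mixed    = [160] * 8 + [110] * 3  # three hypothetical 5400 RPM drives mixed in

print("all 7200 RPM RAIDZ3:", vdev_stream_mb_s(all_7200, 3), "MB/s (ballpark)")
print("mixed 5400/7200:    ", vdev_stream_mb_s(mixed, 3), "MB/s (ballpark)")
[/CODE]

In this toy model three slow drives drag the vdev from ~1280 to ~880 MB/s, which is the sense in which the array runs no faster than its slowest member.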
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
I just changed my case to the ARC XL. My plan is to add two 3x3.5" cages in the future for 14 drives total: 2x7 vdevs in RAIDZ3. I have not found RAIDZ3 to be a problem for what I'm trying to do, which is mostly streaming.


What are you using as your 3x3.5" to 2x5.25" adapter? I'm looking for something like this, I guess:

http://www.genesysgroup.com.tw/images/2525to3hdbk-z.jpg

but all I'm finding through the regular outlets are $60+ front access docking cages.

I'd rather keep the smaller case if I can but not at that price.

i
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
1) The most likely reason not to go RAIDZ3 is simply performance; it's a bit lower in Z3.

3) I view a heterogeneous array as best practice to protect against certain types of failures (a bad manufacturing batch, etc.). Make sure the characteristics are closely matched, though; don't mix 5400 and 7200 RPM drives unless you like pessimistic behaviours.


What do you mean by "pessimistic behaviours"? Simply that the array won't run any faster than the slowest drive, or something more serious?

In this situation, where I might not be able to physically replace a failing drive rapidly, would you recommend a warm spare or RAIDZ3?

i
 

Mguilicutty

Explorer
Joined
Aug 21, 2013
Messages
52
The 1220 also did not support VT-d in the v2 variant, where the 1230 did. I'm not sure whether that carried over to the v3s, and it's not really applicable to FreeNAS (maybe jails?), but if you repurpose the box down the road for virtualization you'll want it.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
What do you mean by "pessimistic behaviours"? Simply that the array won't run any faster than the slowest drive, or something more serious?

That's the issue that occurs to me, however it may not be the only issue.

Fortunately, a lot of the old pessimistic behaviours that could surface on a parallel SCSI bus, where differing firmware and features created havoc, just don't appear on point-to-point SAS or SATA systems.

Basically if the drives perform similarly then ZFS will not know that they are from different manufacturers.

In this situation, where I might not be able to physically replace a failing drive rapidly, would you recommend a warm spare or RAIDZ3?

Z3 if your goal is simply overall reliability, or warm spare if you're OCD about ZFS being degraded and are sure you can initiate a replacement within a reasonable timeframe.

Some of us do Z3 and a warm spare. And a cold spare.
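
For the remote-replacement case, the workflow is roughly: notice the pool is degraded, then `zpool replace` the failed disk with the warm spare. A minimal polling sketch of the "noticing" half (the pool name "tank" and the device names are placeholders; on FreeNAS you would normally lean on the built-in email alerting instead of rolling your own):

[CODE]
# Minimal health-poll sketch; assumes the `zpool` CLI is on the PATH.
import subprocess
import time

def pool_healthy():
    # `zpool status -x` prints "all pools are healthy" when nothing is wrong.
    out = subprocess.run(["zpool", "status", "-x"],
                         capture_output=True, text=True).stdout
    return "all pools are healthy" in out

while True:
    if not pool_healthy():
        # Alert yourself here; then, after identifying the failed device,
        # run something like:  zpool replace tank <failed-dev> <spare-dev>
        print("pool degraded -- time to start a remote replace")
        break
    time.sleep(600)  # re-check every 10 minutes
[/CODE]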
 

_Adrian_

Dabbler
Joined
Oct 7, 2011
Messages
41
Can I ask you guys something?

Why not just run some retired servers instead of forking around building on consumer boards that were never meant to run 24/7, with zero redundancy?
I mean, I picked up my NAS (DL370 G5 with dual X5450s) and 16GB of DDR2 ECC for $225 + shipping!
The server was only missing the hard drives; I ended up getting the drive sleds and adding some Chinese SSDs (quad-SD-to-SATA RAID adapters with 16GB SD cards), then dropped an OS on them in RAID5.
Also, to serve me better, I replaced the Smart Array P400 with a P800 and added a second controller, as the chassis supports two drive cages, which in turn can house 8 SFF (2.5") SAS/SATA drives. (Yes, you can use whatever drives you want; HP doesn't limit you to branded drives, and I know this first-hand, as I've tried random SATA and SAS drives.) The main reason for the second Smart Array card was the drive array I wanted to add: an MDS600, which is really nothing more than a huge JBOD chassis that takes LFF (3.5") DP SAS and SATA drives. The beauty of this enclosure is that it supports up to 70 drives, and you can daisy-chain 4 of these monstrous chassis together on a single SAS link...
You do the math!!
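
Taking that invitation literally, here is the math using only the figures from the post above:

[CODE]
# 70 LFF drives per MDS600, up to 4 chassis daisy-chained on one SAS link.
drives_per_mds600 = 70
chassis_per_link = 4
print(drives_per_mds600 * chassis_per_link, "LFF drives on a single SAS link")
# => 280 drives
[/CODE]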
 

Mguilicutty

Explorer
Joined
Aug 21, 2013
Messages
52
Power.

Noise.
 

Mguilicutty

Explorer
Joined
Aug 21, 2013
Messages
52
My setup:
Firewall: HP DL380 G5 - single L5240, 8GB PC2-5300F ECC, Smart Array P400, 146GB 10K SP SAS, HP 520T 1GbE card, Myricom 10GbE card, Mellanox InfiniHost III HBA

NAS: HP ML370 G5 - dual X5450, 16GB PC2-5300F ECC, dual Smart Array P800 w/512MB BBWC, 4x 36GB 15K SP SAS (OS), Myricom 10GbE card, Mellanox InfiniHost III HBA
- HP MDS600
Drawer 1: 14x Seagate Barracuda 2TB, 8x Seagate Barracuda 1TB
Drawer 2: 3x WD Caviar 1TB, 5x WD Caviar 2TB, 8x WD Caviar 3TB

Web servers: 2x HP DL580 G4 - quad 7130M, 32GB PC2-3200 ECC, Smart Array P400 w/512MB BBWC, 3x 72.6GB 10K SP SAS (OS), Mellanox InfiniHost III HBA

Switches:
Main - Woven Systems TRX100 - 48x 1GbE, 4x 10GbE
Management/iLO - HP ProCurve 2626 - 24x 10/100, 2x dual-personality 1GbE
InfiniBand - TopSpin 120 / Cisco SFS7000 - 24-port 10Gb or 8-port 30Gb
KVM - HP AF600A

UPS:
2x HP R5500XR for ML370 G5 and MDS600
2x HP R3000XR for DL580 G4
1x HP R3000XR for switches

Just out of curiosity, how much are you sucking from the wall with all of that powered up and running?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Because we do not recommend consumer boards, and because most prebuilt servers are not suitable for ZFS due to the inclusion of RAID cards etc. See the sticky in the hardware forum.
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
What are you using as your 3x3.5" to 2x5.25" adapter? I'm looking for something like this, I guess:

http://www.genesysgroup.com.tw/images/2525to3hdbk-z.jpg

but all I'm finding through the regular outlets are $60+ front access docking cages.

I'd rather keep the smaller case if I can but not at that price.

i
I have not decided on an adapter, yet. Currently I am running 7 disks at about 20% utilization. It is going to take a few years before I add another 7 disks :)
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
The 1220 also did not support VT-d in the v2 variant, where the 1230 did. I'm not sure whether that carried over to the v3s, and it's not really applicable to FreeNAS (maybe jails?), but if you repurpose the box down the road for virtualization you'll want it.


I'm not sure where you got that information from, but regardless of v2 or v3, the E3-1220 has all the features the E3-1230 provides, including VT-d. The only exception is hyper-threading.
E3-1220V2: http://ark.intel.com/products/65734/
E3-1230V2: http://ark.intel.com/products/65732/
compare: http://ark.intel.com/compare/65732,65734

and the v3:
E3-1220V3: http://ark.intel.com/products/75052/
E3-1230V3: http://ark.intel.com/products/75054/
compare: http://ark.intel.com/compare/75052,75054
 

Mguilicutty

Explorer
Joined
Aug 21, 2013
Messages
52
After some extensive ARKing, I don't know what the hell I'm talking about either. I'm sure we had a project in the past where the 1220s were missing a feature and we had to go 1230, and I was sure it was because of VT-d. Maybe my mind is a bit off on this one; this is going to bother me until I figure it out...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You may well be thinking of the Sandys, where the 1220 was slightly compute-reduced:

http://ark.intel.com/compare/52271,52269

There are other oddities in the original E3 lineup. I'm pretty sure I know what you're talking about, but am too lazy to go figure out how to get it to give me a full feature matrix for all the E3 Sandys...
 

Mguilicutty

Explorer
Joined
Aug 21, 2013
Messages
52
Guys I don't remember what it was, I even talked to another guy who was on the project and he can't remember either. He also thought it was VT-d. Either the ARK has an error :eek: or we've all lost our minds.
 
