Revised Parts List: Opinions?

Status
Not open for further replies.

Thronesmelt

Cadet
Joined
Sep 12, 2014
Messages
8
Hello Everyone,

I have changed my CPU and motherboard. I am still considering my original picks, but I will most likely go with the new ones posted below. What are your opinions on a 9- or 12-disk single-vdev RAIDZ3 setup, with the option to upgrade the case and vdev count later? Am I correct that those two quantities are the "optimal" configurations for ZFS RAIDZ3? I am leaning toward 9 for less chance of total data loss (and of losing the zpool) due to drive failures during resilvering. I am just keeping my options open and would like recommendations.
With all that said, here are my potential parts:

Case:
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

NEW
Motherboard:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182822

NEW
CPU:
http://www.newegg.com/Product/Product.aspx?Item=N82E16819117315

RAM:
2 x http://www.newegg.com/Product/Product.aspx?Item=N82E16820148770

Power Supply Unit: (Currently Own)
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

5.25" Expander for more 3.5" Drive Space:
http://www.newegg.com/Product/Produ..._25_to_3_5_bay_adapter-_-17-994-152-_-Product

FreeNAS OS USB Stick:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820211668

HBA:
IBM M1015

Fans to cool the case:
7 of these http://www.newegg.com/Product/Product.aspx?Item=N82E168...

2 Of these top mount http://www.newegg.com/Product/Product.aspx?Item=N82E168...

HDDs:

5 or 7 of these http://www.newegg.com/Product/Product.aspx?Item=9SIA5EM...

4 or 5 of these http://www.newegg.com/Product/Product.aspx?Item=N82E16822236350

The hard drive quantities above are not the vdev breakdown, just the physical quantities of each model. I will not be purchasing all the drives from a single retailer; I will use many different online sellers, but it is easier to link to Newegg so you all have a clear picture of my parts list.

(Secondary Considerations:
Motherboard:
http://www.newegg.com/Product/Product.aspx?Item=N82E168...
CPU:
http://www.newegg.com/Product/Product.aspx?Item=N82E168...)


Thank you all for your help and advice. I greatly appreciate any who take the time to read and reply.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Do you own an M1015? If not, an X10SL7-F will be quite a bit cheaper than separate motherboard and HBA.

Other than that, you may have trouble cooling 7200RPM drives, but otherwise everything looks ok.
 

Thronesmelt

Cadet
Joined
Sep 12, 2014
Messages
8
I don't want to purchase that board (X10SL7-F). It does not have the feature set I want; I can get very picky and specific about things on a motherboard. I don't want to cross-flash its SAS RAID chip to work in IT mode. I would much rather have either of the other two boards and multiple HBAs.

As for cooling the drives, I doubt that will be a problem. First off, these are NOT 10-15k RPM drives; those might be a concern. 7200 RPM is the industry standard for average drives. According to both Seagate's (Here) and WD's (Here) data sheets, the drives' maximum operating temperature is 55°C. Also, according to Tom's Hardware's enterprise HDD temperature testing chart (Here), the specific drives I am planning on using run, in a RAID environment, at an average of 41-42°C. I do not know what kind of drive cooling solution they used in that testing, but it cannot be worse than mine.

To quote
"Google published a very interesting study (PDF) about hard drive health and lifespan, based on data collected from their systems (many thousands of hard drives). That study says that:
Overall our experiments can confirm previously reported temperature effects only for the high end of our temperature range and especially for older drives. In the lower and middle temperature ranges, higher temperatures are not associated with higher failure rates. This is a fairly surprising result, which could indicate that datacenter or server designers have more freedom than previously thought when setting operating temperatures for equipment that contains disk drives.

Their graph shows that failure rate does not go up until drive temperature goes past 45 degrees."

The design of the case plus the fans I have selected gives me: two 3000 RPM, 158.5 CFM fans (the highest-airflow consumer fans short of Delta-branded server-farm fans) front-mounted, blowing fresh air directly on the drives; another two at the same speed/CFM pulling the hot air directly off the drive cages; another two at the same speed/CFM bottom-mounted, blowing fresh air directly up at the drives; one more at the same speed/CFM rear-mounted, exhausting the heat from the case; and finally two 166.2 CFM, 1300 RPM 200mm fans top-mounted, removing hot air as well. Less importantly, the aftermarket CPU heatsink will also pull hot air away from the drives with its two 140mm fans.

Do you think that is enough air movement, or do you think I will still have issues with cooling? Also, any thoughts on which quantity of drives I should use in the RAIDZ3 setup?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You do realize that the M1015's LSI 2008 needs flashing as well, right? Once the FreeNAS driver gets updated, it'll have to be flashed again to match the new driver.

Compared to your pick, the board only lacks two USB 3.0 ports, and four of its SATA ports are 3Gb/s instead of 6Gb/s. The "missing" slot is just routed to the onboard LSI 2308.
If you can justify the extra expense, it's your money - but you're unlikely to benefit from the changes.

As for cooling, it's hard to say what will happen unless the setup has been previously studied. What is known is that more fans are better (you have that covered) and that the extra performance of 7200RPM drives is wasted over GbE.
The typical recommendation is to keep drives from exceeding 40°C.

Feel free to use whatever number of drives you want, but 5, 7 and 11 are optimal.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
You'll be able to cool the 7200s fine; that case flows well. You don't really gain anything by running enterprise drives in a large ZFS array. That's why we don't do it... not because we're cheap. Cheap SSDs blow away enterprise drives for IOPS and seek times, and ZFS is designed to utilize them and cache like crazy, not to mention it's transactional. Why you'd spend $200+ on fans for a NAS is mind-blowing. The drives should run cool. You throw that kind of cooling firepower at overclocked quad-SLI rigs, not a mostly-idling NAS.

You are dropping money all over the place for no significant benefit. If it is just for the sake of hot-rodding... awesome. Otherwise, the recommendations given around here are pretty much the best bang for the buck you can get, and still maintain server quality. I have yet to see ericloewe, cyberjock, or jgreco, make a hardware recommendation that wasn't well thought out in terms of features and value.

Yes, I have a buttload of 7200 RPM drives. Years ago they made sense. Now they are pretty much considered hot, power-hungry, useless beasts.

It's your money. You really aren't going to run into problems spending extra for no performance bump. You'll just get a few puzzled looks ;).

Have fun, we don't all need the same rig. (See sig for details.)
 

Thronesmelt

Cadet
Joined
Sep 12, 2014
Messages
8
First off, I want to thank you for all your input on my posts, ericloewe. I am now confused by your last statement: "Feel free to use whatever number of drives you want, but 5, 7 and 11 are optimal." When I reference RAIDZ Configuration Requirements and Recommendations, it says "Start a triple-parity RAIDZ (raidz3) configuration at 9 disks (6+3)." That is where my confusion stems from.
Also, the motherboard I have selected is currently cheaper than the X10SL7-F, in my region at least. I am spending less and getting something closer to what I want. Yes, I realize that if and when I update the FreeNAS driver I will have to re-flash the M1015. I hate flashing boards; it is just a pain to me. Hey, everyone has a personal preference, right?

Now, mjws00, to address your points. Yes, SSDs blow any spinning-platter disk out of the water. Show me ANY SSD in a capacity larger than 4TB AND at a cost anywhere near the $0.07 per GB that enterprise platter drives cost. The only one I could find was an outrageous $7.25 per gigabyte for a 4TB SSD. I also understand the failure modes and current size limitations of SSDs, so don't go into that. Now, why am I running enterprise drives? Because I am getting a longer warranty period, higher MTBF, better non-recoverable read errors per bits read, a better AFR percentage, AND it just makes sense: if I am building a server, then I should use the appropriate materials. I don't see where I am "dropping money all over the place." The fans are there to ensure a cool case and cool drives, to reduce failures. You may not want to spend the money on excellent-quality fans, but hell, I do.

How did 7200 RPM drives make sense previously? By your own admission they are "considered hot, power hungry, useless beasts." Going by your logic, your older drives made no more sense when you bought them than they do now. 7200 RPM drives have not gotten worse in those respects; in fact, I would say they have gotten better as technology has improved, so the drives you are running are worse than the ones I will be.
 
Last edited:

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
The excess costs come from the additional HBA; the onboard controller is cheaper and the exact same thing. The additional fans are nice, but if you are well within the lower end of the operating specs without them, they are a waste. Heat issues come from small cases or designs like the 4224 that don't have much room to move air. Your case selection is better.

I used to buy 7200 RPM and faster drives for seek times and throughput. They were also of higher quality, and we didn't consider power. I like "Blacks" and your Constellations. However, increased density has negated the throughput benefits significantly. On a large pool of "Reds" versus the Constellations, performance will be very close, likely limited by network speeds and certainly by Z3. If we want more performance we mirror, add RAM, SLOGs, and L2ARC. In terms of reliability... much of the benefit is marketing. The whole point of Z3 is that both enterprise drives and NAS drives FAIL. We design for it, and therefore can use drives with a much, much better cost per GB and significantly decrease TCO. The long-term warranty etc. is just marketing... storage changes so fast that it is obsolete long before that matters, IMHO.

I've got servers with Blacks and Constellations; they run hardware RAID, and they were the "best fit" for my workloads, space requirements, and client budget. They also get swapped out for SSDs frequently. It is OLD SCHOOL. My collection of fast spinning rust is from the days when it was the most cost-effective way to address that bottleneck. However, they add little benefit to a large Z3 pool.

I love high-end gear, but I also go into it knowing that someone has likely spent 2/3 as much for the same performance. :) If there were an upside to Constellations in a 12-drive array, I'd be there in a second. I'm about the least price-sensitive guy you are ever going to meet.

Good luck. Make sure to rip into those who take the time to answer you.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Well, the "optimum" rule (which may or may not be really significant) is (2^n) + p drives, with n = 1, 2, or 3 and p the number of parity drives. For RAIDZ3 (p = 3), this gives 5, 7, or 11 drives.
The one you linked is a rather odd recommendation, seemingly not based on any real math. Of course, this "optimum" setup may gain you little to nothing in practice, especially with compression enabled, which is the default.
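The rule above can be sketched in a few lines (a quick illustration only; `optimal_widths` is a hypothetical helper name, and as noted, the real-world benefit of these widths is debatable):

```python
# Rule-of-thumb "optimal" RAIDZ vdev widths: (2^n) data disks + p parity disks,
# for n = 1, 2, 3. Illustrative sketch, not a ZFS requirement.
def optimal_widths(parity, max_n=3):
    """Return the rule-of-thumb vdev widths (2^n) + parity for n = 1..max_n."""
    return [(2 ** n) + parity for n in range(1, max_n + 1)]

print(optimal_widths(1))  # RAIDZ1: [3, 5, 9]
print(optimal_widths(2))  # RAIDZ2: [4, 6, 10]
print(optimal_widths(3))  # RAIDZ3: [5, 7, 11]
```

For RAIDZ3 this reproduces the 5, 7, and 11 figures; the 9-disk (6+3) layout from the other guide does not fit the 2^n pattern, which is the source of the disagreement.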
 

Thronesmelt

Cadet
Joined
Sep 12, 2014
Messages
8
I am still having a little trouble understanding where that formula came from. I get that n relates to the number of data drives and p is the number of parity drives; I just don't understand where n = 1, n = 2, or n = 3 comes from. Also, in the equation 2^(n+p), the 2 is raised to the power of the quantity n+p, correct? I get that p = 3 for RAIDZ3 because that is the number of parity drives. I am just trying to figure this out clearly, that is all. Sorry.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Also in the equation 2^(n+p) the 2 is to the power of that quantity of n+p correct?

No, it's (2^n) + p: 2^n data drives plus p parity drives. That's how one comes up with 5, 7, or 11 drives for RAIDZ3.
 