So you want some hardware suggestions.


cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That math from that 1998 article is a joke. There's also a problem with how you extrapolated it.

As you double the amount of RAM, the error rate doesn't double. This shocked scientists because it doesn't seem to make sense. Also, with each die shrink (and there have been quite a few since 1998), error rates were expected to slightly more than double, but instead they increased very little. Again, this wasn't expected by silicon scientists.

You should read up on the Intel paper on ECC RAM. I'd trust that a little more.

Personally, RAM failure in my experience is very rare. I have only seen six bad sticks of RAM in more than 20 years in computers. Strangely, they are heavily skewed toward recent hardware: the first bad stick was 2005-ish, the second was 2008, and the other four have been this year. I have no explanation for it. The four that were bad were from different machines and different brands; some were online, some unplugged and in a corner for more than two years. I can't explain it, but I am a little skeptical of RAM right now considering how many bad sticks I've seen this year.

The problem with bad RAM is that there aren't necessarily any warning signs. No SMART, no failure to boot, nothing. If you don't shut down FreeNAS and run a RAM test, you may never find out you have an error until it's too late. Since ZFS is geared to finding silent corruption, it is imperative that you be able to identify all sources of it. For bad RAM, that is only possible with ECC RAM. ECC RAM can repair single-bit errors and can identify, but not repair, multi-bit errors. For most systems with ECC, a multi-bit error results in a system shutdown and a message on your screen from your BIOS listing the error location(s).

Sun/Oracle's "requirement" for ECC RAM isn't enforced by anything; if you want to roll the dice, you are welcome to. Unfortunately, it seems that all three failures we've seen on the forums involving RAM involved complete corruption of the zpool and its data. So Sun/Oracle's "requirement" for ECC shouldn't be taken lightly; using non-ECC is a gamble. Frankly, if I were building a system, even one for home use, I'd always use ECC now. My first FreeNAS system uses non-ECC, but the next time I have to do work on it, it will be upgraded to ECC RAM. I see no point in spending big bucks to protect my data and then deliberately leaving out something like ECC RAM because of the relatively small cost increase.
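To make the "corrects single-bit errors, detects but can't fix multi-bit errors" behavior concrete, here's a toy Python sketch of a SECDED code over 4 data bits. It's purely illustrative: real ECC DIMMs do roughly the same thing in hardware over 64-bit words with 8 check bits, and the memory controller, not software, does the work.

Code:
def encode(data4):
    # data4 is [d1, d2, d3, d4]; build a Hamming(7,4) word plus an overall parity bit
    d1, d2, d3, d4 = data4
    p1 = d1 ^ d2 ^ d4                      # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                      # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                      # covers codeword positions 4, 5, 6, 7
    word = [p1, p2, d1, p3, d2, d3, d4]    # positions 1..7
    p0 = 0
    for b in word:                         # overall parity turns SEC into SECDED
        p0 ^= b
    return [p0] + word

def decode(code8):
    p0, word = code8[0], list(code8[1:])
    syndrome = 0
    for pos in range(1, 8):
        if word[pos - 1]:
            syndrome ^= pos                # XOR of the positions holding a 1
    overall = p0
    for b in word:
        overall ^= b                       # 0 if total parity still checks out
    if syndrome == 0 and overall == 0:
        status = "no error"
    elif overall == 1:                     # overall parity broken: one bit flipped, fixable
        if syndrome:
            word[syndrome - 1] ^= 1
        status = "corrected single-bit error"
    else:                                  # syndrome set but parity OK: two bits flipped
        return "uncorrectable multi-bit error", None
    return status, [word[2], word[4], word[5], word[6]]

cw = encode([1, 0, 1, 1])
cw[5] ^= 1                                 # flip one bit: gets corrected
print(decode(cw))
cw = encode([1, 0, 1, 1])
cw[2] ^= 1; cw[6] ^= 1                     # flip two bits: detected, not corrected
print(decode(cw))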
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I know that some SUN manual says ecc is part of the data protection strategy of zfs but if dram errors never occur, having error correction code is just extra hardware along for the ride.

I would like to know which it is.

Of course it is extra hardware along for the ride. The seatbelts in your car are just extra hardware along for the ride, too. You don't need them, and maybe don't even appreciate them, until the one day they stop you from being propelled through the windscreen and onto the pavement...

In general, DRAM errors appear to be rare. If you can run memtest for an extended period without seeing errors, a stick is probably good, though over time it is possible for new problems to show up. Weeding out bad units up front goes a long way to reducing memory errors. But ECC gets you that next tier of assurance. And really, the cost differential isn't that great. Why screw around, if your data is important to you?
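For what it's worth, the core idea of a memory test is dead simple; here's a toy Python version of the write-pattern/read-back loop. A userspace script like this only touches whatever memory the OS hands it, which is exactly why the real tools (MemTest86+ and friends) boot on bare metal and walk all of physical RAM, so treat it as illustration only.

Code:
def pattern_test(size_mb=256, patterns=(0x00, 0xFF, 0x55, 0xAA)):
    # Write each pattern across a buffer, then read back and compare.
    size = size_mb * 1024 * 1024
    buf = bytearray(size)
    for pattern in patterns:
        buf[:] = bytes([pattern]) * size          # fill the whole buffer
        for offset in range(0, size, 4096):       # spot-check one byte per 4 KiB page
            if buf[offset] != pattern:
                print(f"mismatch at offset {offset:#x}: wrote {pattern:#04x}, "
                      f"read {buf[offset]:#04x}")
                return False
    print(f"{size_mb} MB x {len(patterns)} patterns: no mismatches seen")
    return True

if __name__ == "__main__":
    pattern_test()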
 

wash

Cadet
Joined
Apr 25, 2013
Messages
8
Actually, my extrapolation is correct; the only question is whether his single-bit error number is close to reality. Garbage in, garbage out.

Your claim that error rates are expected to increase with die shrinks seems to be refuted by the excellent yield of dram fabs and wide availability of quality inexpensive DDR3 dram. If it was error prone, yield would suck and prices would be very high or everything would have ecc to deal with the errors and it would just be the price of doing business.

I understand the server mentality, where ticking the ecc box is only a single drop in the bucket compared to the cost of hosting, administration, and downtime, but so far I haven't seen real numbers that justify scrapping a motherboard and ram, or motherboard, ram, and CPU, to replace them with hardware that supports ecc.

I'm going to roll the dice, because my box will mainly be used to serve video whose original Blu-ray discs will be retained, and it will be periodically backed up to rotated external USB disks.

I'm also at the point where money has been spent so I'm just not going to upgrade unless it looks like failure is guaranteed. I just need hard data to get to that point.

Please link to that Intel ecc paper.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Your claim that error rates are expected to increase with die shrinks seems to be refuted by the excellent yield of dram fabs and wide availability of quality inexpensive DDR3 dram. If it was error prone, yield would suck and prices would be very high or everything would have ecc to deal with the errors and it would just be the price of doing business.

Please link to that Intel ecc paper.

My argument wasn't that RAM error rates ARE expected to climb. My comment was that back in the 1990s there was a lot of concern that error rates WERE due to climb to epic proportions, potentially making existing ECC RAM inadequate and requiring more advanced ECC algorithms. Scientists were shocked when technology continued to shrink and the error rates didn't increase as fast as was expected based on die size, background radiation levels, etc. After all, as the number of electrons in a "bit" is halved, you'd expect that even lower levels of radiation would cause bit flips. I used to work in the nuclear industry, so radiation interaction with living tissue and electronic components is near and dear to my heart. :)

You do know why desktop RAM doesn't have ECC today? There's actually a story behind it. All RAM was ECC up until Apple appeared with their first computer. They got rid of ECC as a cost-saving measure because they considered RAM reliability secondary to saving money. In an effort to compete with Apple, IBM (and the clones) did the same thing.

What's interesting to me is that ECC is used all over the place: CPU on-chip caches use ECC, hard drives use ECC internally, many communication methods such as TCP/IP and UDP use error checking, HDMI uses ECC, heck, even the optical audio out on your computer uses ECC! If ECC were so "useless", it wouldn't be this widespread throughout computing. So before you consider ECC over the top, you should recognize that ECC is used throughout your hardware; you just don't know it unless you read up on it. Trying to make the argument that ECC RAM is extra hardware is really a sign that you are pretty ignorant of how all this hardware works internally (no offense). If it weren't for Apple trying to save a few bucks over 30 years ago, there would likely be only one type of RAM today: ECC. And we wouldn't be calling it "ECC", we'd be calling it "RAM", because there would be no such thing as non-ECC RAM.

As for the Intel paper, I'll let you Google it. I'm on slow internet this weekend. :(
 

wash

Cadet
Joined
Apr 25, 2013
Messages
8
I never said useless, but I did a little more reading and heard of a fairly serious-sounding server having three recorded ecc corrections in one year of use.

Not useless, not unused but very infrequently used in that case.

Speaking of saving money, I bet the people designing ecc ram don't pay as much attention to things that cause single bit errors, because they have a safety net and catching a few errors makes it look like it's doing something.

I'm not sure about everywhere ecc is used, but I think a few you mentioned are interface implementations rather than at the bit level in memory (TCP/IP and HDMI?).

I have valid experience too: high level as a computer user and low level working in the industry. The middle is mostly a blind spot for me, because it's nearly impossible to be enough of a generalist to grasp the whole picture.

At the level I work on, there are either errors or there are not. Intermittent or rare ones go into the error category, because things happen so fast that they just appear broken.

When you see something intermittent on a time scale of minutes or more, the problem is almost never down at the transistor level in silicon (unless you're talking about flash, which, like you said, has error correction circuitry).

I guess I've got to find that Intel paper and dig through the stats.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Speaking of saving money, I bet the people designing ecc ram don't pay as much attention to things that cause single bit errors, because they have a safety net and catching a few errors makes it look like it's doing something.

This is just silly. You're proposing that memory designers make ECC RAM so that it has errors?

No. First off, the thing that's most likely to generate errors is the silicon itself, which is identical between ECC and non-ECC. ECC is built with extra chip(s), but they're the exact same ones used to build non-ECC. As a matter of fact, on many non-ECC modules the manufacturer uses the exact same PCB and just omits the extra DRAM and support chips on the non-ECC versions:

[Image: memory module built on an ECC-capable PCB, with an unpopulated spot in the middle where the extra DRAM chip would sit]

See that gap in the middle? But seriously, it is not at all unusual for people to have a "no-fault" level of tolerance and to take the existence of logged ECC issues as time to replace the memory module in question. Manufacturers have every reason to have the lowest failure rates, because ECC does command a small price premium, and there should be more profit to be had in ECC - so you want people to be buying yours, and liking them. If a memory manufacturer was going to screw anyone, it'd be the non-ECC people, because they have no always-on tool to be checking and double-checking.
 

wash

Cadet
Joined
Apr 25, 2013
Messages
8
OK, that shows one of my blind spots; I thought ecc was implemented in the dram chip.

Now it pisses me off that ecc commands such a premium when it should be more like 15-20%.

Still, try not to confuse intent with reality: if reported ecc events serve to justify buying ecc ram, and a reported ecc event generates a sale, there is very little motivation to build a stick of ecc ram that never has reportable errors. I'm sure they don't intend to generate errors; they just work on anything else that is motivated by profit. That's just the way things work.

I paid $160 for 32 GB. An extra $32 for the ram and maybe a few more for motherboard support would be entirely justified, but when I bought, the price difference was about $300, and at the time I was not aware that it was even an option on lga1155, so I am only looking in hindsight.

At this point it would be a $500+ upgrade for me, and that's hard to justify without numbers; it also feels like a ripoff, since it should have just been a $40 option when I bought in the first place.

I'll be more careful next time, but it's a shame that ecc isn't widely adopted in consumer gear. It's a shame that ecc reliability data is hard to find, and it's a shame that Intel omits the feature from its top-of-the-line consumer products.

If it were a $40 option, I'm sure I would have known about it, the data would be out there for people choosing their ram, and I wouldn't still be wondering if ecc is truly justified.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Kingston KVR16E11K4/32 4x 8GB ECC is about $275.

Kingston KHX16C9K4/32X 4x 8GB non-ECC is about $269.

What fscking price premium are we talking about here? You can't afford ten bucks?

Yes, I know you can find generic RAM for super cheap. But even the generic stuff is like $55 a stick for 8GB, so at $220 ... why would you spend $220 on generic non-ECC potentially troublesome memory, if you can get quality ECC for $275?

So yes, it's not a $40 option. It's a $10 option if we're being fair. It's a $55 option if we're not being fair. Either way, I don't see the point in spending a thousand bucks to build a ZFS storage server to provide redundancy and data integrity verification, but then stab ZFS in the heart by cheaping out over ten bucks.

The Supermicro Xeon server motherboards are actually cheaper than a lot of the prosumer 1155 boards people seem to want to use. The ECC memory costs about the same as the non-ECC. The Xeon CPUs are admittedly a little more pricey, but you get so much more potential. Buy server grade, and then you never have to wonder whether you've wasted money.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
there is very little motivation to build a stick of ecc ram that never has reportable errors. I'm sure they don't intend to generate errors; they just work on anything else that is motivated by profit. That's just the way things work.

Hahaha.

Oh, and the way things REALLY work? We buy memory with a lifetime warranty. So if there's a problem, RMA. RMA means free replacement plus mfr eating the cost of shipping too. Memory manufacturers run a RISK with ECC that they'll run up against a finicky IT department that doesn't tolerate bit errors ... a risk that generally doesn't exist with consumers. It doesn't generate new sales of memory for the manufacturer if they sell you a bad module.

So *my* theory is that they take the best graded chips that they can, and stick them on ECC/server or other "premier" memory brands. Those cheap generic modules are made from the remnants that maybe barely passed testing, or maybe from better stuff if there just happened to be a lot of good modules that day.

Rewinding to the beginning of this thread: if you do what I said and buy ECC memory using a manufacturer's memory selection tool, you'll probably end up with memory from a reputable manufacturer, and you're even likely to get a lifetime warranty along with it. That advice wasn't trite, and it had some thorough reasoning behind it, even if I didn't spell it out in massive detail. I put tons of good advice in there.

it's a shame that Intel omits the feature from its top-of-the-line consumer products

Welllllhuh? Um, you do realize that Intel has generally considered the low-to-mid end of the Xeon family to be its "workstation" class lineup, which is another way of saying "top-of-the-line consumer products"?

Dell, IBM, Supermicro, etc., all sell Xeon-based workstations, and I suspect they can all do ECC.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm not sure about everywhere ecc is used, but I think a few you mentioned are interface implementations rather than at the bit level in memory (TCP/IP and HDMI?).

I have no idea what the heck you are trying to say with "interface implementations" and not "bit level". The whole reason for ECC is to correct bit-level errors. Even when transferring data through your motherboard, you can have bit-level errors. RAM doesn't have to be bad, but poorly designed motherboards can cause their own interference.

So I have no idea where you are getting the idea that ECC is an interface implementation, but it's not. It's a straight bit-level fix. Don't be fooled by the BS.

Some hardware chooses not to use ECC and instead uses DC balance to minimize errors. Look up the spec for SATA: they use a 10-bit encoding for every 8 bits of data.
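That 10-bits-for-8 scheme is an 8b/10b-style line code, and the "DC balance" part comes from the encoder tracking running disparity. Here's a rough Python sketch of just that mechanism; the 10-bit symbols in the table are made up for illustration and are NOT the real 8b/10b code tables.

Code:
# Toy running-disparity encoder. Real 8b/10b has full 256-entry tables plus control
# symbols; this only shows how the encoder picks between two complementary forms of
# a symbol to keep the ones/zeros count on the wire balanced.

TOY_TABLE = {
    # byte: (form used while running disparity is negative, form used while positive)
    0x00: ("1110110010", "0001001101"),    # 6 ones / 4 ones
    0x01: ("1011010010", "1011010010"),    # already balanced: same form both ways
    0x02: ("0110101110", "1001010001"),    # 6 ones / 4 ones
}

def disparity(symbol):
    return symbol.count("1") - symbol.count("0")

def encode_stream(data):
    running = -1                           # start at negative running disparity
    out = []
    for byte in data:
        neg_form, pos_form = TOY_TABLE[byte]
        symbol = neg_form if running < 0 else pos_form
        out.append(symbol)
        running += disparity(symbol)       # stays bounded near zero by construction
    return out, running

symbols, final = encode_stream([0x00, 0x01, 0x02, 0x00])
print(symbols, "final disparity:", final)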

Anyway, the point I'm making is that ECC, error detection methods, and error minimization tools are found all throughout computing. Even your cell phone has features to prevent errors, though not ECC RAM directly, if I remember correctly. Thinking that ECC isn't necessary is an exercise in gambling. You can gamble and win (woohoo, you saved yourself some cash). But you can also lose, and when you lose, expect to kiss your zpool goodbye. The whole point of ZFS was to minimize how many single points of vulnerability you have in software. That philosophy is exactly why ZFS is so resilient. But you must set limits on how much corruption you can fix on your own. RAM is one of the things that is excluded, hence the requirement for ECC from Sun/Oracle.

You want to gamble? Great. I know I won't after seeing how fast it can ruin a zpool. I'm using a 3-year-old Xeon; they're dirt cheap on eBay but support ECC. So even upgrading can be cheap, depending on the hardware you already own. Just check the CPU compatibility matrix for your motherboard. I originally didn't have a Xeon or ECC RAM, but after buying both I dropped them in and my system was instantly ECC-protected.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
So much for the lesson on non-ECC vs. ECC RAM. I am in favor of running MemTest86+ overnight (at least 3 complete passes) on memory after it's been installed into a system, and running Prime95 [or similar] for 2 hours to validate that your CPU can handle the stress.
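If you just want to see what a torture test is doing under the hood, here's a minimal Python sketch: load every core with the same deterministic computation and check the answers, so a marginal CPU or overclock shows up as a wrong result rather than just heat. Purely illustrative; Prime95/mprime and stress-ng hit the FPU and caches far harder than this.

Code:
import multiprocessing as mp
import time

EXPECTED = 333283335000                    # known-correct sum of i*i for i in range(10000)

def worker(seconds, results):
    deadline = time.time() + seconds
    loops = 0
    while time.time() < deadline:
        checksum = sum(i * i for i in range(10000))   # deterministic busy-work
        if checksum != EXPECTED:
            results.put(("ERROR: bad result", checksum))
            return
        loops += 1
    results.put(("OK", loops))

if __name__ == "__main__":
    duration = 60                          # seconds; crank this way up for a real soak
    results = mp.Queue()
    procs = [mp.Process(target=worker, args=(duration, results))
             for _ in range(mp.cpu_count())]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    while not results.empty():
        print(results.get())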

I really like the #1 post listing all those parts you recommended. There are a few things I felt were left out and they can be subjective...

CPU Heatsink/Fan -- Most CPUs come with a stock heatsink/fan, but at times some manufacturers have not included one in the "Boxed" product. Also, what if you bought a non-boxed "tray" CPU, which definitely does not come with a heatsink/fan? I know recommending a heatsink/fan can be a little tricky because there are several factors to consider, such as whether it will fit inside your case, how loud it will be, and whether it is going to keep the CPU nice and cool. I'm not going to recommend any specific product because of those factors, but I do prefer a beefy heatsink/fan unit (90mm fan or larger) that creates virtually no noise. I will recommend that you use a high-quality heatsink compound like Arctic Silver 5 and follow the instructions for using it.

Case -- Pick your other parts (MB, PS, RAM, Drives, etc...) and then figure out what size of case you need to fit everything in. If you find out the case size is larger than you want then you might need to change your parts a little bit. The main things about a case that you should be concerned about are:
1) Good ventilation (lots of incoming air and circulated within the cabinet).
2) Hard drives are mounted on bushings/grommets (vibration shared among hard drives can cause premature failure).
3) Depending on your environment you might want air filters on your intake fans.

Note: For item #2, the WD Red series drives are able to sense the vibrations and adjust to compensate, but that's not the best solution; grommets are the better way to go.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Overnight? Two hours? Geez. We qualify hardware a lot more rigorously around here... ;-)

There are a few things I felt were left out and they can be subjective...

I agree with the general sentiment, but feel that it is such a subjective area that it would be exceedingly difficult to make specific purchase recommendations. There is a lot of room for creativity. From a hardware build perspective, we have in-house capabilities that make more involved solutions practical, such as a well-stocked electronics shop that is totally capable of making wiring harnesses.

One of my silly hot buttons is making the finer points work correctly. That means things like the IPMI/BMC need to be able to monitor and control the fans, which leads to 4-wire PWM fans. Then vSphere can monitor and alert on failures, and if the server starts to get busy or warm, the fans automatically kick into higher gear, but the rest of the time they're on low speed, which means low power and low noise. This means that the motherboard support for monitored and controlled fans has got to be done properly. I suspect that most motherboards these days will do better with this than they used to, especially with the need for something to actually be monitoring the tach and temps and feeding the PWM output - I remember consumer boards with 3-pin fans that did nothing at all with the tach and just fed 12V to the fans.

So I know it is tempting to get a small, sweet, ***y little case that is just barely big enough. Get a big case, one with plenty of airflow. A case that's big means less worry about cables restricting the airflow. A case that takes 120mm fans can be made quieter, because 120mm fans run at lower RPM than 80mm fans for the same air movement requirements.

Get a massive heatsink for your CPU. As large as space will reasonably allow. Remember that fans fail, so the crappy little heatsink that might have come with your boxed CPU is not as good as a unit you spec specifically for the task.

So for example, when we moved to Socket 1155, I spent a lot of time qualifying hardware before we made the jump. The ProLiant DL365 virtualization hosts we were replacing were power-hungry 200-300W noisy 1U units that are great at the data center for density, but not so great back in the office. (We had at one point ~40-50 real servers and ~8 kVA of redundant UPS power and that's going to be down to maybe half a dozen physical servers when all is said and done.)

After careful consideration and experimentation with both ESXi and FreeNAS for compatibility, we ended up with the Supermicro X9SCL+-F because of the dual ESXi compatible ethernets and the Xeon E3-1230 because it provided a hell of a bang at a reasonable price. Since we have rack space to spare, going to a 4U chassis was well within the realm of possibilities, and we ended up recycling some legacy Antec IPC3480B's, because they could take dual 120mm fans (and they were already in inventory, so no cost!). I wanted something like Delta AFB1212HHE-TP02, but we ended up not being able to wait around for them, and ordered some Cooler Master Excaliburs, which are admittedly very quiet, but we've already seen them fail.

A Xigmatek Loki SD963 cooler with the fan set on the back side to suck results in a redundant fan scenario, since the 3480 fans blow towards it as well, and the TDP of the Loki is 130W, much greater than the 80W TDP of the E3-1230. Since our ESXi nodes typically do not run at 100% CPU, the actual thermal situation is MUCH better, and there is enough cooling that one fan failure is not a problem at all, even two might be survivable in the short term. You can probably see that there's a pattern of going a bit further than strictly necessary...

Anyways, for a desktop or other non-rackmount chassis build, grommet mounting of the hard drives is probably a good idea. I personally prefer rackmount and sleds, which generally eliminates the option.
 

lraymond

Dabbler
Joined
May 2, 2013
Messages
14
OK, adding to this as I am new to FreeNAS and lost, but under the gun as my current high-end EMC is getting old and I don't want to spend hundreds on one hard drive! So I looked a bit at the OP and came up with the following:

A 1U barebone server (as I am short on storage space) with a Supermicro motherboard, PSU, and 4 hot-swap bays
- http://www.newegg.com/Product/Product.aspx?Item=N82E16816101314
CPU
- http://www.newegg.com/Product/Product.aspx?Item=N82E16819117286
16GB RAM
- http://www.newegg.com/Product/Product.aspx?Item=N82E16820148637
(4) 2TB internal drives
- http://www.newegg.com/Product/Product.aspx?Item=N82E16822236343

So I will be around $1500. I read up on the benefits of ZFS and will go that route, hopefully booting off a USB stick. Looking to see if I am on the right path here. Still trying to watch the videos and the crash course as I need to place the order ASAP, then wait while I read more on the OS.

Thanks.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Overnight? Two hours? Geez. We qualify hardware a lot more rigorously around here... ;-)
I would say my estimates are fair for running those two specific tests. Running Prime95 for any length of time will shorten the life span of your CPU because it runs your CPU at 100% continuously, which isn't typically how a CPU operates in the real world. This test will give you an idea of how stable your CPU is while under stress. As for MemTest86+, yes, overnight should be fine unless you have a very slow system; just make sure you get 3 complete PASS runs. Running these two tests should give you confidence that your system is stable from a hardware perspective.

Now when it comes to software and hardware compatibility, that is another type of testing. In my field, before we ship out any software to the submarine community, the software is tested on proven hardware and in the same environment (running 24 hours x 3 days with an interactive interface) to see what pops out. Just yesterday we found something that was not easy to spot unless you did a 3-day test. It was minor and really cosmetic, but shit would have hit the fan if it stayed in there. The fix was easy, and normally we don't see these things, but this is why we run the testing. If we find a major problem, then we rerun the 3-day test all over again, because a code change in one location may affect something else.

So as for testing whether FreeNAS is compatible, well, I'd think several weeks of real-time testing would be required to validate that a system is compatible.

Peace Baby Peace!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
OK, adding to this as I am new to FreeNAS and lost, but under the gun as my current high-end EMC is getting old and I don't want to spend hundreds on one hard drive! So I looked a bit at the OP and came up with the following:

A 1U barebone server (as I am short on storage space) with a Supermicro motherboard, PSU, and 4 hot-swap bays
- http://www.newegg.com/Product/Product.aspx?Item=N82E16816101314
CPU
- http://www.newegg.com/Product/Product.aspx?Item=N82E16819117286

I realize that Xeon part numbering is a bit inscrutable. These two are incompatible: 1366 is "old technology", while 1155 and 2011 are current technology. For smaller servers pick 1155; 32GB max RAM is probably enough for a 4-drive system. You probably want something out of this block of servers here. The 5017C-MTF has the added bonus of being $120 cheaper at NewEgg, which is often not even the cheapest place to buy Supermicro (another $50 cheaper elsewhere). Then you probably want memory that's 1600 instead of 1333, and certified to work with that board, such as the Kingston KVR16E11/8 ... in any case, it ends up being an insanely nice storage server for the price.

But also consider: why would you not spend just $100 more in total to get 3TB drives (50% more space!), or more to get 4TB drives and double the space? While the 4TB drives carry a price premium, they were going for $159 on sale a few weeks ago - but the reality is, the driving cost in a NAS isn't the drives, it is the hardware platform. Which of these is more rational:

$500 - 5017C-MTF
$235 - E3-1230 v2
$150 - KVR16E11/8 x 2
$480 - WD20EFRX x 4
-----
$1365 for 8TB raw storage, but you probably use that as 6TB of RAIDZ1 (that's $227 per TB) or 4TB of RAIDZ2 (that's $341 per TB)

Now compare:

$500 - 5017C-MTF
$235 - E3-1230 v2
$300 - KVR16E11/8 x 4
$760 - ST4000DM000 x 4
-----
$1795 for 16TB raw storage, but you probably use that as 12TB of RAIDZ1 (that's $149 per TB) or 8TB of RAIDZ2 (that's $224 per TB)

The point here is that on a 4-drive system the cost per TB is dominated by the cost of the hardware platform, not so much by the actual drives, so there's not a lot of point in going small unless you are positive that space is unnecessary.
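If you want to sanity-check those $/TB numbers yourself, here's the arithmetic as a small Python snippet, using the prices quoted above (per-drive prices are just the quoted drive totals divided by four):

Code:
def cost_per_tb(platform, drive_each, drives, tb_each, parity):
    total = platform + drive_each * drives
    usable = (drives - parity) * tb_each            # RAIDZ1 loses 1 drive, RAIDZ2 loses 2
    return total, usable, total / usable

builds = [
    ("2TB WD Red build", 500 + 235 + 150, 120, 2),  # 5017C-MTF + E3-1230 v2 + 2x KVR16E11/8
    ("4TB Seagate build", 500 + 235 + 300, 190, 4), # same platform, 4x KVR16E11/8
]
for name, platform, drive_each, tb_each in builds:
    for parity, layout in ((1, "RAIDZ1"), (2, "RAIDZ2")):
        total, usable, per_tb = cost_per_tb(platform, drive_each, 4, tb_each, parity)
        print(f"{name}, {layout}: ${total} for {usable} TB usable -> ${per_tb:.2f}/TB")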

Remember, once you've "settled" for a lower amount of storage, you've spent the money, and you'd have to spend a lot more to get those larger drives later. The incremental cost to get larger drives when you're building the server may be a better deal, even as drive prices fall over time, and avoiding the hassle of needing to migrate your data is worth something too. Also, if you didn't notice, I was sneaky and stuffed an extra 16GB of RAM into that second server, so if you pull that out and watch for a sale on the 4TB'ers you can probably get that system for less than the $1500 you were originally expecting to spend.
 

lraymond

Dabbler
Joined
May 2, 2013
Messages
14
The point here is that on a 4-drive system the cost per TB is dominated by the cost of the hardware platform, not so much by the actual drives, so there's not a lot of point in going small unless you are positive that space is unnecessary.

Remember, once you've "settled" for a lower amount of storage, you've spent the money, and you'd have to spend a lot more to get those larger drives later. The incremental cost to get larger drives when you're building the server may be a better deal, even as drive prices fall over time, and avoiding the hassle of needing to migrate your data is worth something too. Also, if you didn't notice, I was sneaky and stuffed an extra 16GB of RAM into that second server, so if you pull that out and watch for a sale on the 4TB'ers you can probably get that system for less than the $1500 you were originally expecting to spend.

Love the idea. Since I do want to keep these (I am getting two), I want to make sure I have room for growth, and I appreciate you taking the time to reply with great detail, as you have for many others (and yes, I saw the extra RAM snuck in there). As I read more on ZFS, it does seem to eat RAM up, and for that cost it's well worth it!

So ... I am finalizing today, trying to see if I can get a similar case with dual power supplies ... but thanks again. My next post 'should' be pre/post-install questions :)
 

Wyl

Explorer
Joined
Jun 7, 2013
Messages
68
I keep seeing people come in asking about system builds and making what appear to be irrational choices.

Thank you! This is a great guide to at least set a budget and not overspend.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Irrational choices: someone who has selected a $109 socket 1155 motherboard with a Realtek NIC and refers to the above-suggested $150 board as "robust" and "cost too much". Because it's like $109 + $38 is only $147, and by George, that last $3 to get to an actual server board with server-grade ECC that's known to work very well is going to break the bank. Things that make you want to go o_O hmmm.

Thank you! This is a great guide to at least set a budget and not overspend.

So, like, yeah, happy to spend a little time talking about stuff to people who are willing to listen.
 

ikjadoon

Cadet
Joined
Jun 11, 2013
Messages
1
Hard drives:

Of course, you want to use cheap hard drives. But consider that you're building a system for hundreds of dollars. Add up the total cost of the system you propose with those "cheap" 2TB or 3TB drives, then divide by the number of usable TB you get. Then add up the total cost of a system built with 4TB drives, divide again. Shocked? The 4TB is often the less expensive choice per delivered TB, despite the drives being a bit of a price premium.

In Windows, small files and random access are bottlenecks and SSDs excel there. In FreeNAS/ZFS, I presume that's not true. So, what's the performance bottleneck?

I don't need a lot of storage (~100GB, and that's in 10 years, haha). Will ZFS/FreeNAS work faster with SSDs? I'm looking to create a photo backup/sharing system for our home. Sequentially, it won't matter, as GbE tops out at ~100MB/s and most hard drives hit that. But for random reads, writes, and small 4K files (pictures are at least 4MB each, though): do I stand to gain any benefit from splurging on an SSD?

Example: two 500GB HDDs in RAID 1 vs. two 128GB SSDs in RAID 1, assuming everything else is constant. Should I expect any difference in accessing files, adding or copying files, etc.? I don't want to open a CIFS share with 5GB of photos over a ~200Mbps wireless connection and wait for the pictures to load/thumbnails to cache.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
All file systems suffer with small files and random access. It's not so much a file system issue as it is the laws of physics. The same hard drive head can't read multiple places on the drive at the same time. SSDs alleviate that issue by removing the 3+ms seek time of hard drives in favor of microsecond access times.

I wouldn't run SSDs in a RAID 1 without a backup, since you'd literally be doing 1:1 writes to both drives; in theory, both drives would reach end of life at roughly the same time. What you could do is a RAID 1 of two SSDs with a nightly backup to a regular platter-based hard drive. Maybe do ZFS snapshots to the platter-based drive.
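For what that nightly job could look like, here's a rough sketch driven from cron. The pool/dataset names (ssdpool/photos, backup/photos) are placeholders for whatever you actually create; zfs snapshot / zfs send / zfs receive are the commands doing the real work, and after the first full send you'd switch to incremental sends.

Code:
import subprocess
from datetime import date

SRC = "ssdpool/photos"     # dataset on the SSD mirror (placeholder name)
DST = "backup/photos"      # dataset on the platter-based backup drive (placeholder name)

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

snap = f"{SRC}@nightly-{date.today():%Y%m%d}"
run(f"zfs snapshot {snap}")

# Full send shown here; for later runs, send only the delta since the last snapshot:
#   zfs send -i <previous-snapshot> <new-snapshot> | zfs receive backup/photos
run(f"zfs send {snap} | zfs receive -F {DST}")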
 