So you want some hardware suggestions.


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Mod note: This thread is jgreco's original hardware recommendations guide, from back in the day. General advice still holds, but some specifics have changed.

The current Hardware Recommendations Guide can be found in the Resources section, at the following link:
https://forums.freenas.org/index.php?resources/hardware-recommendations-guide.12/



I keep seeing people come in asking about system builds and making what appear to be irrational choices.

The following is some guidance on how to select high performance hardware suitable for heavy-duty home or small office use. The goal isn't necessarily to pick the smallest, cheapest things that can possibly be used to get the job done, but rather how to throw a reasonable amount of money at the problem and get a solution that won't have to be totally replaced in a year because you underestimated everything. ZFS is piggy! But vendor-based NAS devices can be slow and are usually expensive. This is probably cheaper and faster. If you do want a vendor-based NAS device, see our friends at iXsystems, developers of FreeNAS who make the commercial TrueNAS product.

By the way, all these clicky links? Want more info? USE THEM. The intention is that you can use this to research stuff yourself...

ZFS is designed to protect your data, but cannot save you from poor hardware choices. ZFS works great with ECC (see cyberjock's sticky about why you really want ECC). ECC is not horribly expensive. ECC is well-supported with a Xeon CPU and a server-grade motherboard. ECC may not be well-supported on random prosumer grade boards and Pentium CPU's. If you are going to spend money on a system that's holding important data, get the right tool for the job. If your system is only dealing with backups, okay, fine, then go non-ECC if you really must.

Chassis:

For a smaller FreeNAS system, Fractal Design makes a bunch of products that I've never seen but many users rave about. As such, this isn't a suggestion so much as a recommendation that you check out the forum to see what people have said. It seems to me that the Node 304 could be ideal for a small 6-drive FreeNAS system. The Define R4 is attractive for up to 10 drives.

However, I'm simply not a fan of a chassis where you need to open it up to swap drives. The convenience of accessible drive trays is fairly large to me. That usually moves us up into rack-mount land, and most of these come with integrated boards.

ASUS makes the RS300-E7/PS4 which is an interesting 1U socket 1155 unit for around $550. Haven't played with one but heard it works well.

Supermicro has lots of options.

The SuperServer 5017C-URF is pricey at around $850, but offers redundant platinum-certified power supplies with the Intel C216 chipset. The SuperServer 5017C-MTF is probably the definitive prebuilt 4-drive, 1U small FreeNAS server based on C202, for around $525. The SC813MTQ-350CB chassis is available as a separate part if you prefer a different board. Note that Supermicro's chassis assume a Supermicro motherboard; ATX manufacturers have a variety of different placement strategies for their ports. Supermicro also offers a ton of 1U socket 2011 options if you need to go north of 32GB of RAM, redundant supplies, 10GbT, up to 768GB RAM, etc.

Supermicro falters a bit in the 2U, 12-drive department. They don't actually offer any single-socket 1155 or 2011 prebuilts there, which seems shortsighted, since a NAS can reasonably get by with a single CPU. But if you need dual sockets, the 6027R-E1R12T or 6027R-E1R12N would seem to be great choices. You would probably be better off, though, taking the CSE-826BE16-R920LPB chassis and integrating the board of your choice. Note that the "BE16" variant uses a backplane with an SFF-8087 connector for easy cabling and rapid assembly.

A lot of users have used Norco cases. These allow you to pick your own components a bit more easily, but by the time you end up swapping out the fan tray for the 120mm one, and you put in a decent power supply, and you deal with the headache of a bad backplane, pricewise you're within striking range of a Supermicro chassis, and the build quality of the Norco isn't as good.

Mainboards:

The cheap consumer boards and even many of the prosumer boards are ... well, "designed for Microsoft." Yes, your nifty prosumer board might sport USB3 and 8 SATA ports, but it also has a Realtek ethernet that really (makes you want to) scream, and capacitors that might have been bought in a back alley in Taiwan, and it wasn't truly designed to run 24/7. You do not want to overclock your system. That just adds stresses and increases the chance of failure. PCIe x16 is a waste (and might only support a video card) and you will have a hard time fitting an expansion HBA into a PCIe x1. Both are indicators to "look elsewhere."

So instead of that ECS Golden Z77H2-AX(1.0) LGA 1155 Intel Z77 for $319, or that ASUS Maximus VI Extreme for $400, both of which would admittedly make for an awesome PC, look at some server-grade boards. You're making a server. Why use a gaming enthusiast's hardware? We've been having great luck with Supermicro's X9SC lineup of 1155 boards, but Tyan, HP, IBM, Dell, and others make usable hardware too. Unsure if it's a server board? Good server boards typically lack audio ports.

The Supermicro X9SCM ($154-$170) has 4 SATA2 and 2 SATA3 ports, dual Intel gigabit ethernets, a bunch of USB2 ports, and can be put in a nice 1U unit if you wish. It'll support ECC error correction and log ECC events.

The Supermicro X9SCA ($170) has both PCI and PCIe slots for more flexibility with any legacy stuff you might have.

The Supermicro boards have variations with IPMI, which means that you can access the console video and keyboard remotely, attach USB devices remotely, and basically never have to plug anything except power and network into the unit. They're also intended to run 24/7, and the Supermicro people who provide support know that a "server" isn't the waiter in a restaurant. Bonus: you get (at least) dual Intel server ethernets, instead of learning later that your Realtek is garbage and needing to fork out $35 for an Intel add-on desktop ethernet card. That effectively reduces the cost of your board by $35 (or even $70 if you count the second interface).

CPU's:

The Xeon E3-1230v2 is around $220. FOUR cores. Hyperthreading. 3.3GHz, 3.7GHz turbo for Samba zippiness. 8MB of cache. 69 watt TDP, and much lower consumption when idle. Up to 32GB ECC 1600MHz. AES-NI support for disk encryption fun. Even Intel VT-d in case you decide to play with virtualization.

Okay, so real low power? The Xeon E3-1220Lv2 is around $180. TWO cores, 2.3GHz with 3.5GHz turbo. 3MB of cache, 17 watt TDP. I suspect the 1230v2 idles at around the same level and may be a better choice due to its ability to bring lots of cycles to bear if needed, but the 1220Lv2 is an interesting option.

So, still too expensive? If cost is the all-controlling factor for your build, then at least pick a CPU with ECC support, because ZFS is designed assuming that the system isn't likely to randomly corrupt bits. I've heard good reports about the Intel Pentium G2020, around $60. Two cores, 2.9GHz, no turbo. 55 watt TDP. But you lose on AES-NI and other features that could be helpful. And I haven't actually tried one of these on a Supermicro board.

Memory:

Please don't guess at compatible memory for your system. Use a manufacturer's memory selection tool. This is your data that you're handling, after all. Do you really want to risk it?

Buy ECC. ZFS is designed with ECC in mind as part of the protection strategy.

Make sure you don't buy Registered memory if your system requires Unbuffered. Patrick Kennedy explains it all with pictures.

Don't buy low density and fill all your slots unless that is the only option available cost-wise. For example, don't buy 4GB sticks for socket 1155 systems with 4 slots. A 4GB KVR16E11/4 is $35. An 8GB KVR16E11/8I is $70. Given that pricing, there is NO REASON to buy 4GB modules. If you populate all your slots unnecessarily, then you lose out if you later decide you need more RAM. It is okay to buy two 8GB modules and leave two slots open.
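If you want to sanity-check that arithmetic yourself, here's a rough Python sketch using the prices above; the helper below is purely illustrative, not anything official.

Code:
def strategy(sticks, size_gb, price_each, total_slots=4, max_gb=32):
    # Returns (cost, installed GB, GB still addable without pulling DIMMs).
    installed = sticks * size_gb
    free_slots = total_slots - sticks
    # Largest stick considered here is 8GB, capped by the 32GB platform limit.
    headroom = min(free_slots * 8, max_gb - installed)
    return sticks * price_each, installed, headroom

print(strategy(4, 4, 35))   # (140, 16, 0)  - 16GB now, no room left to grow
print(strategy(2, 8, 70))   # (140, 16, 16) - same money, same 16GB, 16GB of headroom

Same money, same capacity today, but only one of those builds can be doubled later without throwing DIMMs away.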

ZFS loves memory. If you are shocked at the idea of a NAS needing 8GB of memory, get over it. Buy 16GB at a minimum. It's 2013. Memory is cheap. FreeNAS and ZFS require a minimum of 8GB. You are not special. You are not the exception to that. You must outfit your box with at least 8GB. But seriously, you are strongly encouraged to go with 16GB or more.

Power supply:

Don't buy a cheap power supply. It's your data, don't let a crummy power supply eat your system and take your data with it.

Do consider that a 24/7 supply that is less efficient costs a lot of money in the long run. Do yourself a favor and buy an 80Plus Gold rated supply, at least. You probably will never recover the cost differential of an 80Plus Platinum supply though.

Sizing a power supply is kind of a multidimensional problem. Remember that drive spin-up takes a LOT of current.

1) In terms of watts, a supply that is running at about 30-50% of its rated load is likely to be within its peak efficiency window, but is also likely to be relatively unstressed, leading to a longer life and reduced likelihood of premature failure.

2) You also need to estimate the 12V load (by adding up the start currents for all the drives, usually around 2 amps each, and then estimating other 12V loads such as fans, CPU's, etc.) and verify that it will remain under the rated capacity of the power supply. You can CHEAT at this by hooking up the system without powering on the drives, letting it run memtest86, and checking power consumption. Take the watts, divide by 12, and that's a deliberately pessimistic overestimate of the maximum 12V amps that the base system takes. Then you add the peak amps for all the drives, maybe add 10% more, check against the power supply rating, and there you have an easily derived pass-or-fail.

But you need BOTH these tests to pass and be reasonable, or you need to adjust your game plan (mandatory staggered spin-up, etc.). And for a NAS or other server with lots of 3.5" disks, this typically means selecting a power supply that runs at something closer to 30% of its rating, in my experience.
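If you'd rather script the two checks than do them on a napkin, here is a rough Python sketch. The drive spin-up current, the base-system wattage and the example PSU numbers are placeholders, not measurements; plug in your own.

Code:
def psu_check(psu_watts, psu_12v_amps, base_watts_memtest, n_drives,
              spinup_amps_per_drive=2.0, running_watts=None, margin=1.10):
    # Test 1: steady-state load should sit around 30-50% of the watt rating.
    if running_watts is not None:
        load_frac = running_watts / psu_watts
        print(f"steady-state load: {load_frac:.0%} of rating "
              f"({'ok' if 0.30 <= load_frac <= 0.50 else 'outside 30-50% window'})")

    # Test 2: worst-case 12V amps at spin-up. Dividing the memtest86 wall
    # watts by 12 deliberately overestimates the base system's 12V draw.
    base_12v_amps = base_watts_memtest / 12.0
    spinup_amps = (base_12v_amps + n_drives * spinup_amps_per_drive) * margin
    ok = spinup_amps <= psu_12v_amps
    print(f"12V at spin-up: {spinup_amps:.1f}A vs {psu_12v_amps}A rail rating "
          f"({'pass' if ok else 'FAIL - stagger spin-up or resize the PSU'})")

# Example: 520W supply with a 40A 12V rail, 60W base system, eight drives.
psu_check(psu_watts=520, psu_12v_amps=40, base_watts_memtest=60,
          n_drives=8, running_watts=180)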

Additional SATA connectivity:

The IBM ServeRAID M1015, crossflashed to IT mode, is probably one of the best choices available if you need a bunch of extra SATA ports. It does take an extra ~12 watts however. Don't buy it new. They're available cheap on eBay all the time.

Random RAID controllers that are operating in RAID mode (showing virtual or logical devices to FreeNAS) are a very bad idea. If your controller costs more than a few hundred dollars, it may not be a good choice for FreeNAS.

That Adaptec RAID controller? Probably won't work. Neither will that SATA PCIe controller that you got in the 99c bin at the corner computer shop. Well, it might "work" but be cautious, it may like to drop bits. In general, ZFS works great if it has direct communication with a disk. It does not need to be fancy or expensive communication, but it does need to be reliable.

Do not obsess over SATA3. You do not need SATA3 ports, except to get the most out of an SSD. The average hard drive these days can read at about 150MBytes/sec, while 3Gbit/s SATA2 still works out to roughly 300MBytes/sec of usable bandwidth (375MBytes/sec raw, less encoding overhead).

Avoid the LSI SAS/SATA 3Gbps HBA/RAID controllers. They have a 2TB size limit. Ick.

Boot device:

Use a USB flash drive that you trust. I know the FreeNAS specs say 2GB. Most of the ones that claim to be 2GB are a bit less. Get three nice name-brand 4GB USB flash drives, not out of the 99c bin. SSD is totally awesome for a boot device, much faster, but eats a SATA port, is quite expensive (in comparison), and isn't necessarily any more reliable. Three USB flash drives means one for current use, one to install an upgrade, and one for backup/oops.

Hard drives:

Of course, you want to use cheap hard drives. But consider that you're building a system for hundreds of dollars. Add up the total cost of the system you propose with those "cheap" 2TB or 3TB drives, then divide by the number of usable TB you get. Then add up the total cost of a system built with 4TB drives, divide again. Shocked? The 4TB is often the less expensive choice per delivered TB, despite the drives carrying a bit of a price premium.
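To make the comparison concrete, here's a quick sketch. The prices, the $800 base system and the six-drive RAIDZ2 layout are made-up examples, not quotes.

Code:
def cost_per_usable_tb(base_system_cost, n_drives, drive_tb, drive_cost, parity_drives=2):
    usable_tb = (n_drives - parity_drives) * drive_tb   # RAIDZ2: two drives of parity
    total = base_system_cost + n_drives * drive_cost
    return total / usable_tb

# Same $800 base system, six drives, RAIDZ2:
print(cost_per_usable_tb(800, 6, 2, 100))   # 2TB drives: $175.00 per usable TB
print(cost_per_usable_tb(800, 6, 4, 180))   # 4TB drives: $117.50 per usable TB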
 
Last edited by a moderator:

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Excellent!!! I hope the mods can make it a sticky.

Could you add more emphasis (red, bold, etc.) to the memory component? Even though you've said 16GB, it seems like folks still want to start with 4 or 8GB on a new build with 4x4TB+ of storage. And then they complain about poor performance.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Nice post.

Remember that drive spin-up takes a LOT of current.

Just a little more detail on current use. Below is taken from the specs of the Western Digital RED disks. It lists max current use at 1.73 Amps, not a lot, but a bit more than it uses for read/writes.

[Attachment: WD Red power specification table - spin-up current in amps, read/write and idle power in watts]



If you look closely, the top row with the 1.73 is listed in amps, but the next rows are in watts. To convert between the two, use the formulas below.

P=I*E
I=P/E

So amps for the WD Red during read/write would be 4.4/12, or ~0.37 amps.

Most newer motherboards have a feature that will let you "stagger spin up". So for example if you had 6x of these drives WITHOUT using the staggered spinup, that would be 6x1.73, or 10.38 Amps.

To convert 10.38 Amps to Watts you'd do:

10.38x12 = ~125 Watts

If you stagger the spinup, you'll get closer to the individual power of each drive which is:

1.73x12 = ~21 Watts

To calculate power use for all the drives during read/write (like a ZFS scrub), you multiply the number of disks by 4.4 watts.

6x4.4 = 26.4 Watts or 26.4/12 = ~2.2 Amps
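Collecting that arithmetic into one place, here's a small Python sketch; the 1.73A and 4.4W figures are the WD Red numbers from the spec table above.

Code:
SPINUP_AMPS_12V = 1.73    # per drive, spin-up current on the 12V rail
RW_WATTS = 4.4            # per drive, read/write power
N_DRIVES = 6
V12 = 12.0

spinup_amps_all = N_DRIVES * SPINUP_AMPS_12V          # 10.38 A if all spin at once
spinup_watts_all = spinup_amps_all * V12              # ~125 W
spinup_watts_staggered = SPINUP_AMPS_12V * V12        # ~21 W at a time, one drive after another
rw_watts_all = N_DRIVES * RW_WATTS                    # 26.4 W during a scrub
rw_amps_all = rw_watts_all / V12                      # ~2.2 A

print(f"spin-up, no stagger : {spinup_amps_all:.2f} A = {spinup_watts_all:.0f} W")
print(f"spin-up, staggered  : {spinup_watts_staggered:.0f} W at a time")
print(f"read/write (scrub)  : {rw_watts_all:.1f} W = {rw_amps_all:.1f} A")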
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
To convert 10.38 Amps to Watts you'd do:

The electronics guy in me kind of recoils in horror at this.

Your typical power supply is rated in watts, yes. However, power supplies supply multiple voltages and each of these is typically limited to a certain number of amps, and usually none of those V*A individually hit the W rating. This is significant because the startup current for the drives represents an abnormal load in addition to already-existing loads such as the CPU, mainboard, fans, controllers, etc., so it strikes me as dangerous to convert the startup amps number to watts. You can lose sight of the maximum amp load on the 12V rail(s) if you convert it, and THAT limit is really the thing that matters anyways. I expect that you understand this, but for the sake of the audience, it is important to be clear so that the non-EE crowd doesn't go frying their 24-drive systems with a too-small supply.
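To make that concrete, here's a tiny sketch with made-up label numbers: a supply whose total watt rating looks comfortable while its 12V rail is already over its amp rating.

Code:
PSU_TOTAL_W = 600      # hypothetical label rating
PSU_12V_AMPS = 30      # hypothetical 12V rail limit (only 360W of that 600W)

base_12v_amps = 8.0                  # CPU, board, fans (estimated)
drive_spinup_amps = 12 * 2.0         # twelve drives at ~2A each at spin-up
total_12v_amps = base_12v_amps + drive_spinup_amps   # 32A

print(f"as watts : {total_12v_amps * 12:.0f} W of {PSU_TOTAL_W} W rated  -> looks fine")
print(f"12V amps : {total_12v_amps:.0f} A of {PSU_12V_AMPS} A rated -> over the rail limit")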

{snip, moved to Power Supply section at top}

Keep the suggestions rolling in guys, thanks for the input.
 

SoonerLater

Explorer
Joined
Mar 7, 2013
Messages
80
FWIW... I started an Amazon Wish List (here) which contains everything that jgreco mentioned. You still have to add an enclosure, but it's an easy jumping-off point. Feel free to point out any equipment errors.

Note: I'm sure that many/most items can be purchased elsewhere a little cheaper. I'm not married to Amazon, but over the years, I've come to regret buying from some vendors. I don't mind paying a little extra to know that if anything is not right, Amazon will make it so. YMMV.
 

Majo

Dabbler
Joined
Apr 23, 2013
Messages
14
My main reason to register just now was to say my humble thanks to jgreco and the thread he started. Thanks a lot! :)

After reading the thread so far, I thought I could build an even better system, hehe.

I started with the CPU. I want to encrypt my volumes, so I'm going for hardware AES support. A quick visit to Intel ARK's advanced search (http://ark.intel.com/search/advanced/) left the Xeon E3 1200 family as the ideal CPUs, namely the 1230, 1220 and 1220L. :) Ah, well...

Next, the mainboard: I have very good experience with Supermicro, too, albeit in larger servers and firewalls (15 years of IT security background). So I looked for Supermicro boards with LGA1155 support (Xeon E3) and found - the X9SC series. Pfff...

Over the next hours I looked for RAM, case, etc. and found - my perfect FreeNAS box. Just like this.

Over here in Germany, Thomas Krenn is a well-known server reseller. It is widely known that they just rebrand Supermicro products - like most of the industry does. ;) Product quality and service are very good.

While searching for my components I learned that they (shameless ad) have a "Silent Server" line, which is basically a Supermicro X9SCM with IPMI, an E3 1200 CPU (customizable), ECC 1600 RAM and a Supermicro case, starting with the SC731 (four 3.5" drive bays).

Here's the link (german): http://www.thomas-krenn.com/de/produkte/server-systeme/silent-server.html

Sorry for posting an ad with my first post - these are exactly the systems jgreco recommends. Prices are OK, neither cheap nor expensive. Quite ordinary German prices.

Might save German FreeNAS noobs (like myself) a fair bit of work...
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
FWIW... I started an Amazon Wish List (here) which contains everything that jgreco mentioned. You still have to add an enclosure, but it's an easy jumping-off point. Feel free to point out any equipment errors.

An 850W PSU is overkill for this setup. Despite being 80Plus Gold, it will run very inefficiently at sub-100W loads. This is a much better (and cheaper :)) match: http://www.seasonicusa.com/G-series-360.htm. (This PSU would be even better, if you manage to buy it somewhere: http://www.fsp-group.com.tw/report/FSP250-60PFK_Report.pdf.) X9SCL-F + E3-1220Lv2 + 2 memory sticks + 4 WD30EFRX consume about 45W when idle and 30W with the disks spun down (using the mentioned Seasonic G-360).
Also, the 80Plus Bronze 300W Supermicro PWS-303-PQ, which ships with the Supermicro 731i-300B chassis, is surprisingly efficient at low loads (comparable with the G-360).
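As a rough illustration of the efficiency point: the percentages below are assumptions for the sake of the example (80Plus only certifies the 20/50/100% load points, and efficiency usually falls off quickly below 20% load), not measured figures for these units.

Code:
def wall_power(dc_load_w, efficiency):
    return dc_load_w / efficiency

idle_dc_load = 45  # watts of DC load, in the ballpark of the idle figure above

# 850W Gold unit at ~5% load (assume ~70% efficient down there) vs a
# 360W Gold unit at ~12% load (assume ~85% efficient):
print(f"850W PSU: ~{wall_power(idle_dc_load, 0.70):.0f} W at the wall")
print(f"360W PSU: ~{wall_power(idle_dc_load, 0.85):.0f} W at the wall")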
 

Scareh

Contributor
Joined
Jul 31, 2012
Messages
182
Any suggestions on non-server-grade hardware: mobos/cases/PSUs/...?

I only see hints at server-grade hardware, but since not everyone is willing to spend *that* much money, some other up-to-date hardware suggestions would be fun to read and maybe invest some money in.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
If you're not willing to spend *that* much money, then you won't be getting decent non-server-grade hardware either. You'll end up with a hodgepodge AMD system that takes a max of 8GB and has a nifty Realtek, or some other set of inane compromises.

Basically any decent prosumer 1155 motherboard is anywhere from $100 to $200. Even if you settle on a $100 board (usually some significant compromises there), by the time you add $35 on for an Intel Pro/1000 CT, you can get a server board for $20 more than that - and the board's much nicer. The place you might "save" is on the CPU, by not going Xeon, but then you probably lose ECC support and other qualities as well. And it is really the only component where there is a price premium; if you add up the entire cost of a system build, the difference is marginal at best.

Basically it is stupid to spend a thousand dollars on a NAS and then try to cheap out on spending that last fifty bucks for a decent server grade system, losing ECC data protection along the way.
 

Scareh

Contributor
Joined
Jul 31, 2012
Messages
182
I never said how much *that* was, but no worries.

Problem is, most webshops over here don't even sell that server-grade stuff, so I have to start looking at eBay, which in my opinion is a lottery.

So I'll go ahead and list the recommended hardware:

The Supermicro X9SCM: price on eBay: $112 (http://www.ebay.com/itm/SUPERMICRO-...466675369?pt=Motherboards&hash=item5d3dac86a9)

+ shipping to Europe gives me about $125.


Which is a nice price. To me that would be well-spent money. Three problems with that:
1) It's used. So what's the exact state? No one knows.
2) It might work as described, and then it's shipped. We all know how packages get handled... Will it survive?
3) It's used, so how long will it live? No guarantees on anything.

(Of course I looked at the cheapest one on the list :p)

They range from $112 to a whopping $250 for a new one (12 items listed over here). Even if I buy the new one, I have absolutely no guarantees whatsoever.

However, if I were to take a consumer-grade motherboard and order it over here at a webshop, or even a store...
I pay on average €100-€200 (yes, Euros :cool:) BUT I get in return:

1) Two years of warranty, as in pick-up-and-return...
2) I can go pick it up if I choose to, or have it delivered to my home. Sure, it might get damaged in delivery, but no worries there; I've got two years of pick-up-and-return warranty, remember?

This is just an example of what I'm talking about. I *might* find all the server-grade stuff cheaper or a tad more expensive than the consumer-grade versions, but nothing guarantees me that I'll find everything. I mean, if I buy everything on eBay and in the end I'm missing my CPU, that kinda means I'm screwed xD.

Excuse the ranting about this, but it ticks me off a bit that you guys only look at server-grade hardware and say *that is the best you can get and anything else is crap*. I understand that ECC RAM is beneficial to the ZFS filesystem, and that consumer-grade hardware is, relatively speaking, inferior to server-grade, but not everyone has the luxury of getting the complete uber-setup you guys envision in an ideal world.

At the moment I'm running an old Pentium 4 at 2.8GHz with 2 gigs of RAM. It has a total hard disk capacity of 3.7TB. I know I need more RAM and a better system (it's around 10 years old ^^), but I wanted to test the FreeNAS possibilities before I invested in it.
Now that I'm aware of the possibilities and drawbacks of FreeNAS, I'm willing to upgrade. And that's why I came looking here.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Just my 2 cents:

I guess it depends on how much you value your data; there's always some trade-off. If you don't want to use ECC, the options are to run a RAM test as often as you are comfortable with, do a scrub immediately after, followed by a backup. We've already seen a number of users who have completely lost their data because of bad RAM. If that's a risk you're willing to take with your data, there should be plenty of cheap hardware you can choose from. I find it pretty unbelievable that eBay is the only choice for getting reasonably priced hardware in Europe. Maybe you could make a post in one of the other forum sections asking some of the other European users for advice on where to get components. It sounds like your warranty system is a lot different from what we have in the U.S. If there's a 2-year warranty here, it means sending it back to the manufacturer and waiting for them to send some replacement, which is most likely refurbished, so perhaps part of the extra cost is your "enhanced" warranty.

I've used an Atom mini-itx motherboard without ECC for about 3 years now, and "knock on wood", it hasn't had any problems. I should probably run a RAM test now ;)
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
If server hardware is too expensive, or too hard to find where you are, then really, any decent consumer system will do.

My primary NAS is a consumer 1155 board with an i5-3570 and 32 gigs of non-ECC RAM. I'm running 10 Seagate 3TB consumer drives.

The mainboard had a Realtek NIC on it, so I added an Intel PCIe card. Also added the infamous IBM M1015 RAID card to support my drives.

I probably could have gotten away with a cheaper CPU, and a bit less RAM, but the difference in CPU price and memory price didn't make it worthwhile.
 

Majo

Dabbler
Joined
Apr 23, 2013
Messages
14
Hmm, hardware recommendations really depend on the expectations for a NAS. I'm with jgreco on this: I might be new to FreeNAS and to this forum, but I've worked in the IT field for 20 years now, the last 10 years as a self-employed networking and IT security consultant. In that time I've seen lots of failed computer components: mainboards and PSUs with blown capacitors, lots and lots of broken hard disks (failed electronics as well as head crashes), blown CPUs because of fan failures (mostly broken ball bearings)... I even have a broken ECC RAM module. I keep it as a warning... ;)

So, for me data safety comes first, price second and performance last.

Therefore I use reliable components where it matters: server-grade mainboard, CPU, RAM and PSU. I expect my NAS to run about 5-6 years 24/7, so price is relative. Instead, I'll save money on the non-critical, redundant components: the hard disks. I plan to run RAIDZ2 with five 2TB disks. A 7200 rpm WD Red costs about 160 Euros apiece, a standard consumer WD20EARX about 80. With 5 disks that's 400 Euros I can save. And with 2 redundant disks, the chances of experiencing a fatal crash are marginal. Sure, MTBF is much lower, but in my experience disks crash in the first 6 months (-> warranty) or only if they cycle up and down a lot. Running 24/7, crashes are not as common, and since disks are highly redundant in RAIDZ2, there's not much risk in using consumer disks. With only one redundant disk I'd use server hardware, too, just to be sure.
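As a quick check of the numbers behind that choice, a small sketch using the figures above:

Code:
n_disks, disk_tb = 5, 2
price_red, price_green = 160, 80   # euros, per the figures above

savings = n_disks * (price_red - price_green)   # 400 euros
usable_tb = (n_disks - 2) * disk_tb             # RAIDZ2 burns 2 disks on parity -> 6 TB
print(f"savings with consumer disks: {savings} EUR")
print(f"usable RAIDZ2 capacity: {usable_tb} TB (survives any two disk failures)")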

Oh, and I plan to run FreeNAS in a KVM environment, knowing full well that I'll lose up to 20% in disk performance, even with the SATA controllers on PCI passthrough (n.b.: you need VT-d in the CPU and mainboard for PCI passthrough - I don't know if consumer hardware supports it; server hardware does). This way I can run a full-blown server in another (virtual) machine, using the same reliable server hardware. Of course I'll sacrifice some performance.

If you value performance or price over data safety, you can make different hardware choices: either slower server hardware, sacrificing performance, or cheaper consumer hardware, sacrificing data safety. All a matter of priorities.

But backups aside (you back up your data, don't you? ;) ) - if you're like me, your photo/video folder is about 70 GB by now. Wedding, birth and first years of two daughters, etc. My wife threatened to kill me if I lose these photos in a disk crash. So I'd better not take risks. My priorities are very clear here... ;)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Excuse the ranting about this, but it ticks me off a bit that you guys only look at server-grade hardware and say *that is the best you can get and anything else is crap*. I understand that ECC RAM is beneficial to the ZFS filesystem, and that consumer-grade hardware is, relatively speaking, inferior to server-grade, but not everyone has the luxury of getting the complete uber-setup you guys envision in an ideal world.

That's nice and all, but it's not an uber-setup and we're not saying everything else is crap.

Uber setup? I have a really nice Supermicro SC846BE26 with an X9DR7-TF+, Xeon E5 capabilities, 10GbE, up to 768GB RAM, and 24 drive bays begging for 4TB drives. There ya go.

I think it is fairly defensible to say that I should not make suggestions that I am not comfortable with. The hardware that's being suggested here, is simply good, solid hardware. It isn't the best you can get, but it is among the best that you can get for the average user's needs at a reasonable cost. It isn't likely to have unusual problems ("My Realtek doesn't work!", "My capacitors are bulging!", "Why won't ACPI work?", "My M1015 won't work in the PCI-e slot and the manufacturer says it is only for video cards!", etc). The very nature of asking for suggestions implies that one is open to listening to the opinion of another; if you don't like that, well, then by all means, DON'T LISTEN TO THE SUGGESTIONS. I'll happily let you buy whatever you want and do whatever you want. I'm not going to force you to buy my suggestions. Whether or not the hardware I've suggested is available in your area is not a factor in my suggestions. As a matter of fact, you're free to make your own suggestions and post them. People can evaluate my suggestions on the basis of the information and links I provided, and prefer yours if that works for them. It's a free world.

I haven't mentioned quality hardware such as the Supermicro Atom D525 board (dual Intel ethernets! nearly unheard-of in Atomland.. and 8GB RAM supported!) because in general the Atoms are underpowered for FreeNAS and do not support ECC, and from a power consumption point of view, the Xeon board eats only maybe 10(?) watts more at idle - while having much greater capabilities. For the right application, it is probably a good choice, but generally speaking, I would hesitate to recommend it, especially with the advent of encryption, etc.

You also have the HP MicroServer series, which suffers a bit from various issues, but is a solid low-end prebuilt NAS platform. Loaded with 16GB, we have one suffering little unit doing backups (kind of slowly). But that's a prebuilt, and has been endorsed on these forums many times.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Oh, and I plan to run FreeNAS in a KVM environment, knowing full well that I'll lose up to 20% in disk performance, even with the SATA controllers on PCI passthrough (n.b.: you need VT-d in the CPU and mainboard for PCI passthrough - I don't know if consumer hardware supports it; server hardware does). This way I can run a full-blown server in another (virtual) machine, using the same reliable server hardware. Of course I'll sacrifice some performance.

I'm going to preface this by saying, if you can walk in here and comfortably make that statement, you have a better-than-average chance of success at virtualizing FreeNAS. That having been said, the prevailing sentiment around here is that virtualizing FreeNAS is bad. I tried, with one of the other members, to get a VT-d setup running on his Gigabyte board; it appeared to work for a little while, then went to bits. So definitely distrust consumer/prosumer boards on that front, but even server boards sometimes screw up VT-d. So, please, please, test, test, and then more test, and make sure you do NOTHING that would strand your data if you had to ditch the virtualization layer. We have some ESXi boxes here that run FreeNAS VM's (VT-d passthru). We stuffed in a RAID controller and datastore to hold ESXi and the FreeNAS VM config, but we can still boot FreeNAS on the bare metal with a USB key in moments, have copies of the FreeNAS config, etc. The real goal is shutting off hosts by shedding load onto NAS machines that had to be running anyways... yay virtualization. Giving thought to the failure modes is a grand idea though.

If you value performance or price over data safety, you can make different hardware choices: either slower server hardware, sacrificing performance, or cheaper consumer hardware, sacrificing data safety. All a matter of priorities.

But backups aside (you back up your data, don't you? ;) ) - if you're like me, your photo/video folder is about 70 GB by now. Wedding, birth and first years of two daughters, etc. My wife threatened to kill me if I lose these photos in a disk crash. So I'd better not take risks. My priorities are very clear here... ;)

Yeah. Sigh.
 

Majo

Dabbler
Joined
Apr 23, 2013
Messages
14
So, please, please, test, test, and then more test, and make sure you do NOTHING that would strand your data if you had to ditch the virtualization layer...... yay virtualization. Giving thought to the failure modes is a grand idea though.
Uh, that's a valid point that I hadn't thought of. Right - I should be able to access my data even if virtualization fails. OK, back to the drawing board...
 

Scareh

Contributor
Joined
Jul 31, 2012
Messages
182
Yay, I got some responses :cool:
Exactly what I wanted, thanks a LOT.
It's just what I'm missing on these forums a bit:



I haven't mentioned quality hardware such as the Supermicro Atom D525 board (dual Intel ethernets! nearly unheard-of in Atomland.. and 8GB RAM supported!) because in general the Atoms are underpowered for FreeNAS and do not support ECC, and from a power consumption point of view, the Xeon board eats only maybe 10(?) watts more at idle - while having much greater capabilities. For the right application, it is probably a good choice, but generally speaking, I would hesitate to recommend it, especially with the advent of encryption, etc.




See what you did there? You gave advice on some hardware you are comfortable with. Supermicro Atom D525 board = bad, why => underpowered.
That teaches me something.
Basically what I *want/like* is for guys like you to post their setups and their advantages/disadvantages, maybe even performance and what they generally use the NAS for. I'm sure there are people on this forum who have non-server-grade hardware and are having a blast with their NAS.
It's hard to have a clear picture (for me anyway) of exactly what I need as hardware. I mean, I know I should get server-grade and such, but what if I can't? What's the next best thing? You could post different setups of non-server-grade hardware that you are familiar with, that work for your purpose.
Then I can judge for myself: hmm, that sounds a lot like what I want to do with my FreeNAS; hmm, that performance isn't all I would want, guess I could buy 32 gigs instead of this guy's 16 gigs of RAM and still see an improvement.

I envision something along the lines of this (as in my case):

Pentium 4, 2.8GHz, 2 gigs of RAM, generic old stuff basically.
I have 3 disks: 3TB, 500GB and 80GB.
Entire network is gigabit with Cat6 cabling.

General uses of the NAS:
1) downloading torrents
2) streaming to my HD TV in 1080p (preferably)
3) Samba streaming to a laptop for watching TV shows that I downloaded
4) sharing stuff with people through AjaXplorer

Performance:
Torrents download at a max of about 6.5 Mb/s (= maxing out my connection).
Streaming to my HD TV goes without hitches or buffering.
Samba streaming as well, although if I move files around from my Windows PC to the NAS I tend to top out at around 30-40mb/s (2 gigs of RAM, I know).
The AjaXplorer thing is recent (ty for that, FreeNAS forum :p), but I generally tend to upload at around 250kb/s max.


It gives you an idea of what hardware I'm using and what performance I'm getting.
Say I want to get 100Mb/s on Samba; I just look at the next guy posting his hardware/performance specs and see what he's getting. (I know I'm a bad example because of the RAM amounts; it's just easier to make an example with my own hardware.)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
See what you did there? You gave advice on some hardware you are comfortable with. Supermicro Atom D525 board = bad, why => underpowered.

No, I think you don't see what I did there. The topic of the thread is hardware suggestions, and the Atom is something I do NOT suggest. If you want me to start a thread of "things I don't suggest", I won't have enough hours in the day to be comprehensive about that, because really, I don't suggest about 99% of what's available out there.

I mean I know i should get server-graded and such, but what if I can't, whats the next best thing?

There are hundreds of posts people have made to the forums describing just such setups. The inevitable issue with most of them is that they try to run on consumer-grade boards, which may work fine, or may not, depending on how hard someone is hammering on the "let me see if I can make this out of the cheapest, dodgiest components known to Man" button. So if you want "the next best thing", take note of that bit in italics there and DON'T DO THAT. Use name-brand components, and by that I mean not RAM that fell off the back of some truck in Shenzhen or mainboards without a reputable manufacturer's name on it. Do some research using the search feature on the forums to see if your hardware has been successfully used by anyone else.

Either your data is important to you or it isn't. If it is, follow my suggestions and you have excellent odds of winding up with an awesome NAS running ZFS. No promises, just statistically much more likely to all work out. If you want to cheap out, don't expect everyone else to do your homework for you. And don't be shocked when that $15 bargain bin power supply pops a cap and takes your NAS with it. And don't be shocked - or blame FreeNAS - when your data's gone.

In the meantime, I hope you're not planning to run ZFS on that Pentium 4 system you outlined. You can run UFS/FFS on it just fine, though, I would expect.
 

wash

Cadet
Joined
Apr 25, 2013
Messages
8
Either your data is important to you or it isn't. If it is, follow my suggestions and you have excellent odds of winding up with an awesome NAS running ZFS. No promises, just statistically much more likely to all work out. If you want to cheap out, don't expect everyone else to do your homework for you. And don't be shocked when that $15 bargain bin power supply pops a cap and takes your NAS with it. And don't be shocked - or blame FreeNAS - when your data's gone.

Speaking of statistics, I read this when searching for "ecc ram statistical reliability":


The article also suggested that single-bit errors in parity RAM cause crashes. It was last updated in 1998, so he is talking about the bad old days, but if you re-frame his statements with what we have seen with non-ECC RAM in the last 15 years, it will tell us what RAM reliability is like today.

My computer has 32 GB of PC1600 DDR3 DRAM, which is 256 times the example case, and I think that runs at 100 MHz quad-pumped, which is 40 times the example case, which means there should be 1024 single-bit errors per second if reliability has not changed.

I don't remember computers crashing every ten seconds back in 1998 so I think his reliability estimate was pretty pessimistic.

I think ECC has its place in mission-critical, fully populated servers stuffed in a crowded rack in a hot server room.

When you move to a roomy tower case with good airflow, less-than-constant load and smaller budgets, I doubt that ECC has a very big statistical advantage. At some point the reliability is so high that the only errors are caused by cosmic rays and other weird cases that aren't related to the quality of the RAM cells.

Anyway, I totally support buying quality parts with a good reputation, but I hesitate to take "trust me, it's statistics" on faith when there aren't even numbers.

I would love to see real numbers, because I think ECC in servers might just be what they do because no-name RAM in crummy motherboards with questionable assembly gives standard RAM a less-than-perfect record.

ECC might be greatly justified too; I just haven't seen the numbers either way.

I know that some Sun manual says ECC is part of the data protection strategy of ZFS, but if DRAM errors never occur, having error-correcting code is just extra hardware along for the ride.

I would like to know which it is.
 