
Guide: How much will a proper home FreeNAS setup cost me?

Joined
Mar 25, 2012
Messages
19,151
Thanks
1,861
#21
Nope. Not any more.
 
Joined
May 23, 2014
Messages
6
Thanks
0
#22
I figured I'd write up a low-cost parts list of my own for a budget build in the home. I would not recommend this for a production environment because of the used parts, but I feel comfortable with my personal data being on it.

Motherboard: $80 shipped: Foxconn dual LGA 1366 motherboard with 18 RAM slots (9 per socket). I'm taking a bit of a gamble with this board because of the apparent lack of documentation, but I have plenty of other boards lying around if it doesn't work out, and I can find another use for it otherwise. It appears to have IPMI, but I'll comment back when I know more. I could likely have sent the guy a $70 offer, but I didn't want to waste my time.

CPU: $19 shipped: Intel Xeon L5630 - This is one of the lowest-wattage LGA 1366 chips available at 40 W. You can save ~2-5 watts by not getting a quad core and likely wouldn't see a performance difference, but I felt the extra couple of watts was worth having a quad core. I'm only populating one of the CPU sockets to keep power down, but I like having the option down the road to drop in a second one and use the additional RAM slots if the need arose.
Case: I've got a lot of cases floating around my shop, but if I had to buy one for this purpose it would be a cheap case as mentioned above - I've seen several floating around eBay or locally for as little as ~$40-50.

RAM: $90 shipped: Hynix 6x4GB DDR3 ECC 1333MHz - Allows for 24GB with 3 open slots (without using the second CPU), so it's easy to upgrade to 36GB later. I could do even more with 8GB sticks, but it's a lot cheaper with 4GB sticks.

I'm figuring that after a case and PSU I'll have ~$300 in the hardware. Not bad. I'll have 6x4TB drives in it. It looks like if I want to add more drives down the road I'll have to get a storage controller (which wouldn't be a bad idea anyway).

I originally wanted to go with the A1SRM, but starting at $250, plus having only 4 RAM slots and being restricted to UDIMMs, was a hard pill to swallow. The A1SRM would definitely use less power, but I don't mind spending the extra ~30 W on the Xeon to have more horsepower and, more importantly, more RAM slots.
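For anyone weighing that same trade-off, here's a rough back-of-the-envelope sketch of what the extra ~30 W works out to over a year. The $0.12/kWh electricity rate is just my assumption - plug in your own:

```python
# Back-of-the-envelope: yearly cost of an extra ~30 W running 24/7.
# The electricity rate is an assumption - substitute your local price.
extra_watts = 30            # rough extra draw of the Xeon build vs. the Avoton board
rate_per_kwh = 0.12         # assumed price in $/kWh
hours_per_year = 24 * 365

kwh_per_year = extra_watts * hours_per_year / 1000
cost_per_year = kwh_per_year * rate_per_kwh
print(f"{kwh_per_year:.0f} kWh/year -> ${cost_per_year:.2f}/year")  # ~263 kWh -> ~$31.54
```

So roughly $30 a year at that rate, which is the number I'm weighing against the board's sticker price.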
 
Joined
Jul 15, 2015
Messages
10
Thanks
0
#23
I thought the ASRock C2550D4I would be great value for money as well. It has 4x SATA-300 and 6x SATA-600 ports, dual Ethernet, and the Atom C2550. It runs for about $300 on Newegg, and about €315 in the Netherlands. What's your (everyone's) opinion on this board? Or would you rate the Supermicro A1SAi-2550F higher -- which runs for about the same but has only 6 SATA ports and 4 Ethernet ports instead -- and add a PCIe SATA card later on if necessary? Or do you think the C2550 boards are not worth it at all (in terms of power/performance ratio compared to the next best option)?
 

Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
16,047
Thanks
3,894
#24
I thought the ASRock C2550D4I would be great value for money as well. It has 4x SATA-300 and 6x SATA-600 ports, dual Ethernet, and the Atom C2550. It runs for about $300 on Newegg, and about €315 in the Netherlands. What's your (everyone's) opinion on this board?
Only the Intel SATA ports are reliable. The Marvell ones aren't. The Supermicro board is the better option, usually.
 
Joined
Jul 15, 2015
Messages
10
Thanks
0
#25
I know that Marvell SATA ports commonly have less throughput than Intel's. But other than that, what exactly do you mean by the Marvell ports being less reliable? Do they tend to break down over time?
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Thanks
1,090
#26
I know that Marvell SATA ports commonly have less throughput than Intel's. But other than that, what exactly do you mean by the Marvell ports being less reliable? Do they tend to break down over time?
The situation is fluid.

The FreeBSD drivers for the Marvell ports, for a variety of complex commercial and sociological reasons, are not up to the standard of the Intel drivers. Thus, there are occasionally odd effects, sometimes including reduced throughput, and other general weirdness, that some users report experiencing on the FreeBSD platform. While progress has been made on this, it is not clear that all of the issues are resolved.

Thus, we typically steer users away from, or at least warn them about, the Marvell ports on boards that offer extended SATA capabilities beyond the usual chipset.

Some users are using them with no problems. Other users are not. I would be lying if I said we understood everything about this.

You will only catch us recommending things that we are 100% certain about in this forum; none of the most active support guys and subject matter experts are 100% certain about the Marvell ports. You would use them at some risk. Maybe the risk is small. Maybe not. As I said, the situation is fluid, changing as we speak, and at some point I would expect the issues to be cleared.
 
Joined
Jul 15, 2015
Messages
10
Thanks
0
#27
Thank you very much DrKK. I was not aware of this. I haven't built my FreeNAS system yet, but am doing the preliminary research work before proceeding to build. So glad to know that's another subject to take into account.
 

Linkman

FreeNAS Experienced
Joined
Feb 19, 2015
Messages
211
Thanks
53
#28
I will throw my data point into the ring: $1427.94. That's a little high for a home FreeNAS box, but I splurged on the X10SL7 mobo and the CPU, hoping to future-proof to some extent.

(Prices include shipping and/or sales tax where applicable)

Motherboard: SuperMicro MBD-X10SL7-F-O (Amazon.com)....................$ 246.59
CPU: Intel Xeon E3-1241 V3 Haswell (Microcenter).......................$ 245.03
RAM: Crucial 16GB (2x8GB) ECC RAM CT2KIT102472BD160B (Newegg.com)......$ 152.99
Case: CoolerMaster RC-692-KKN2 CM690 II Advanced......................(RECYCLED)
Power Supply: PC Power & Cooling Silencer Mk II 500W (80Plus Cert.)...(RECYCLED)
Boot flash drives: 2 x SanDisk Cruzer Fit CZ33 32GB (Amazon.com).......$ 28.40
Drives: 3 x WD Green WD30EZRX 3TB (Newegg.com) 3 x $89.99 =............$ 269.97
Drives: 1 x WD Red WD30EFRX 3TB (Newegg.com) 1 x $109.99 =.............$ 109.99
Drives: 2 x WD Red WD30EFRX 3TB (Microcenter) 2 x $99.99 =.............$ 214.98
UPS: APC BR1500G Back-UPS Pro (Newegg.com).............................$ 159.99
===================================================================================
TOTAL: $1427.94
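(If anyone wants to sanity-check the math or adapt the list for their own build, a quick sketch that just re-adds the line items above - recycled parts count as $0:)

```python
# Re-adding the line items above (recycled case and PSU count as $0).
items = {
    "Motherboard (X10SL7-F-O)":        246.59,
    "CPU (Xeon E3-1241 V3)":           245.03,
    "RAM (2x8GB ECC)":                 152.99,
    "Boot flash (2x Cruzer Fit 32GB)":  28.40,
    "3x WD Green 3TB":                 269.97,
    "1x WD Red 3TB":                   109.99,
    "2x WD Red 3TB":                   214.98,
    "UPS (APC BR1500G)":               159.99,
}
print(f"TOTAL: ${sum(items.values()):.2f}")  # -> TOTAL: $1427.94
```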
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Thanks
1,090
#29
Hope you wdidle3.exe those greens, or you'll be sorry.

Also, I don't know why you bought that CPU. What are you planning on doing, transcoding twelve 1080p streams? LOL.
 

Linkman

FreeNAS Experienced
Joined
Feb 19, 2015
Messages
211
Thanks
53
#30
Yes, I used wdidle3.exe on the Greens, setting them to 300 secs, as well as on one of the Reds that came from the factory set at 8 seconds.
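If anyone wants to verify that the timer change actually stopped the aggressive head parking, here's a rough sketch that reads the Load_Cycle_Count SMART attribute via smartctl. It assumes smartmontools is installed, and /dev/ada0 is just a placeholder device name; take two readings a few hours apart and the count should barely move:

```python
import subprocess

def load_cycle_count(device: str):
    """Return the raw Load_Cycle_Count value reported by smartctl, or None."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Load_Cycle_Count" in line:
            return int(line.split()[-1])  # raw value is the last column
    return None

# Example: compare readings taken a few hours apart; with the idle timer
# set to 300 s the count should climb very slowly.
print(load_cycle_count("/dev/ada0"))  # /dev/ada0 is a placeholder device name
```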

I paid the same price at Microcenter for the E3-1241 as Amazon and Newegg were charging for the E3-1230/1 at the time, so I chose to pay the same for more, rather than less for less ;)
 

RegularJoe

FreeNAS Experienced
Joined
Aug 19, 2013
Messages
206
Thanks
4
#31
This is very good info.

This makes the $995 Mini very attractive for a cut-and-dried, 100% compatible setup: http://www.ixsystems.com/freenas-mini/

If you're an enterprise person, you know ways to get off-lease servers at a very good price. I have seen some that are quite capable for $650, and they come with 72 GB of DDR3 RAM. ;)

Thanks,
Joe
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Thanks
1,090
#32
This makes the $995 Mini very attractive for a cut-and-dried, 100% compatible setup: http://www.ixsystems.com/freenas-mini/
Of course, at $995 *without drives*, it's more expensive than doing it yourself. But it's certainly convenient: for the cost of the drives, you get an already set up, FreeNAS-perfect system in a nice case.
 

mattlach

FreeNAS Experienced
Joined
Oct 14, 2012
Messages
280
Thanks
8
#33
For what it is worth, by cruising eBay you can get some tremendous deals on server pulls.

As long as you run stability tests / RAM tests prior to setting everything up, reliability is not an issue, and they save you SO much money.

I built mine out of an unused old-stock Supermicro dual-socket Westmere-era motherboard, 96GB of used registered ECC RAM, and two low-power 6-core Xeons.

In the end I spent what one would have spent on consumer-grade hardware, but got something much, much better.

My build looked something like this, but it came together over time - I salvaged parts here and there rather than buying them all at once. This was about a year ago, so it's probably cheaper now:

CPUs: Pair of Xeon L5640s: $120 for the matched pair, used, on eBay
Motherboard: Supermicro X8DTE: $150 new on eBay (no I/O plate)
I/O Plate: $8 on eBay
RAM: 12x 8GB Hynix low-voltage DDR3 registered ECC sticks: $40 per stick from a seller in the Hardforums FS/FT section
SAS Controllers: 2x IBM M1015 for ~$100 each on eBay
Case: Norco RPC-4216: $320 on Amazon
SAS Cables: 4x for $39 on Monoprice
PSU: Antec Earthwatts 550W 80 Plus Platinum: $95 on Newegg
Adapter cable for second EPS power (PCIe to EPS): $6
CPU Coolers: 2x SNK-P0040AP4: $48 for both on Amazon

Subtotal: $1,466

So, as you can see, the only place I really splurged was on the 16-bay Norco server case; everything else I got pretty fantastic deals on.

Of course, I spent almost $3000 on drives (12x WD Reds) on top of the above, but you have to spend on drives either way. You can't get away from that if you need storage.
 

mattlach

FreeNAS Experienced
Joined
Oct 14, 2012
Messages
280
Thanks
8
#34
If you're an enterprise person, you know ways to get off-lease servers at a very good price. I have seen some that are quite capable for $650, and they come with 72 GB of DDR3 RAM. ;)
Be careful with these. I bought an HP ProLiant DL180 G6 with the intent of using it as a storage server, but had some issues.

Firstly, performance was compromised due to a hard-wired SAS expander built into the server's backplane, which would only take one SAS cable.

Secondly, they squeezed A LOT of 3.5" bays (12) into a 2U server, which requires a lot of airflow to keep cool, which leads to the final problem.

The noise level. This thing got the nickname the HP DL180 Dreamliner.

It was loud, but tolerable once placed in the basement out of earshot in its default configuration.

Once I pulled out the HP SAS RAID card and replaced it with an M1015 (because what we really need for ZFS is an HBA, not a RAID card), the HP BIOS threw a fit. Apparently HP servers have temperature sensors everywhere, and the BIOS expects them all to report back so it can adjust fan speeds.

Put even a single expansion card in this server that it does not recognize and get a thermal signal from, and it goes into "overabundance of caution" mode, maxing out all 8 redundant 80mm 12,000rpm fans and putting out sound levels that call for hearing protection. I could hear it in my bedroom, two stories up from the basement, with all the doors shut...

On top of that, the fans used so much power that the draw from the wall was unacceptable.

That's when I built my current server, harvesting as many parts from the DL180 as I could.

The shell (server case, motherboard, backplane trays, power supply etc.) is still in my basement unused. If anyone wants it, I'll give it to you for free, if you pay shipping or come pick it up :p Just make sure you have someplace out of earshot to run it.

If you DO want a used enterprise server (and really, they can be great - lots of stuff at low, off-lease used prices), I would recommend going with a 4U server, not the lower 1U, 2U, or 3U systems, as you can usually fit bigger, quieter fans in them and you won't have to deal with datacenter-like noise levels.
 

Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
16,047
Thanks
3,894
#35
12 bays in 2U is only marginally worse than 24 in 4U, which is pretty common.

Now imagine those 96 drive 4U JBODs :p
 

RegularJoe

FreeNAS Experienced
Joined
Aug 19, 2013
Messages
206
Thanks
4
#38
It uses DDR3, so it is a newer system; each drive is at least 10 watts of power. The system is old enough that it only supports 128GB of DDR3 RAM, and it is one of the last FSB systems before Intel finally had to concede and design CPU memory buses like AMD's. ;)

As usual with Sun machines, I bet the fans are as loud as heck. When they originally came up with the idea it was called Thumper. This was the machine where they found out that drives sitting too close to a fan had a higher bit error rate; moving the drive to another slot made the errors go away. Not sure if new drives would be better or worse, but I am sure there are a few engineers at Sun/Oracle who still know what the issue was and how to re-create it. ;)
 
Joined
Aug 1, 2014
Messages
37
Thanks
4
#39
It's possible to put the system part of this together for a few hundred dollars even using new gear. An ULTRA-low-cost build could use:
Gigabyte GA-990FXA-UD3
AMD CPU of your choice, ranging from $80 to $200 depending on needs
Dell PERC H310 or H200
Intel PCIe 1Gb NIC
NZXT Source 210 case (holds more than 8 3.5" drives)
You will need a cheap video card.
And a good power supply.
ECC non-registered RAM (this stuff is kinda pricey, but it's still a lower-cost build overall)
This is around $500 (depending on CPU and amount of RAM; you want at least 8GB, and 12GB or 16GB is better).


But even going the AMD G34 server route and using a Supermicro board, you can get a complete system with enough horsepower to be a VM host and still come in well under $1000.
I.e., swap the above motherboard for an H8SGL, a G34 CPU, and ECC registered RAM (the RAM is dirt cheap - get 32GB or more) and suddenly you have a beast of a rig. This stuff is still in production today even though it is a few generations old, so its cost has been driven way down, but you will not over-utilize it even in a server environment.


When money is freed up, I personally prefer the Chenbro cases, then Supermicro, and only if I found one for nearly free would I use a Norco case again. However, given the choice, I would probably build in an NZXT Source 210 (this case will actually fit sideways in a rack; use a rack shelf or get creative with a universal rail kit) over some of the low-end Norco cases.

You do not have to burn money to use FreeNAS, contrary to some of the 'wisdom' on this board.
 

mattlach

FreeNAS Experienced
Joined
Oct 14, 2012
Messages
280
Thanks
8
#40
It's possible to put the system part of this together for a few hundred dollars even using new gear. An ULTRA-low-cost build could use:
Gigabyte GA-990FXA-UD3
AMD CPU of your choice, ranging from $80 to $200 depending on needs
Dell PERC H310 or H200
Intel PCIe 1Gb NIC
NZXT Source 210 case (holds more than 8 3.5" drives)
You will need a cheap video card.
And a good power supply.
ECC non-registered RAM (this stuff is kinda pricey, but it's still a lower-cost build overall)
This is around $500 (depending on CPU and amount of RAM; you want at least 8GB, and 12GB or 16GB is better).


But even going the AMD G34 server route and using a Supermicro board, you can get a complete system with enough horsepower to be a VM host and still come in well under $1000.
I.e., swap the above motherboard for an H8SGL, a G34 CPU, and ECC registered RAM (the RAM is dirt cheap - get 32GB or more) and suddenly you have a beast of a rig. This stuff is still in production today even though it is a few generations old, so its cost has been driven way down, but you will not over-utilize it even in a server environment.


When money is freed up, I personally prefer the Chenbro cases, then Supermicro, and only if I found one for nearly free would I use a Norco case again. However, given the choice, I would probably build in an NZXT Source 210 (this case will actually fit sideways in a rack; use a rack shelf or get creative with a universal rail kit) over some of the low-end Norco cases.

You do not have to burn money to use FreeNAS, contrary to some of the 'wisdom' on this board.
Careful there.

I used to run my system on that exact motherboard.

While AMD's chipset technically supports it, Gigabyte support WILL NOT guarantee that ECC works. I did a lot of research into this back when I used this board for my server, and in the end I determined that I could not confirm working ECC.

The same thing goes for most other consumer AMD AND Intel boards.

ECC is a tricky beast, as there is no way to really confirm that it is ACTUALLY working (as mentioned previously when it comes to i3-xxxx chips, last page of this thread). The memory controllers on both Intel's and AMD's recent desktop chips have ECC support, but it also requires the motherboard and BIOS to play nice. This is further exacerbated by the fact that many motherboard BIOSes will detect and use unbuffered ECC DIMMs but just not enable the ECC part (this is the category I believe the 990FXA-UD3 falls into).

In a way this is very similar to VT-d/IOMMU support. All recent non-K Intel CPUs and all recent AMD CPUs support it, but it also relies on the motherboard chipset and BIOS supporting it. The difference, however, is that you can easily test for VT-d, whereas there is no straightforward test that can be run to confirm that ECC is working.

With proper server motherboards (Supermicro/Tyan) or OEM servers you can feel certain you are getting functioning ECC, but with anything consumer, either Intel or AMD, all bets are off.

Neither Intel, AMD, nor the motherboard partners will confirm that it works, and just because you see ECC DIMMs detected in the BIOS DOES NOT mean that ECC is actually being used.

There ARE some consumer board/CPU combinations that DO wind up fully supporting ECC. Usually having a BIOS option to enable/disable ECC is a telltale sign that you have one. The only problem is that they are few and far between, and there is no way to confirm that they are actually working.
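For what it's worth, the closest thing to a quick check I know of is asking the firmware what it thinks it's doing, e.g. with dmidecode (available on Linux and in the FreeBSD ports tree). A rough sketch below; note that it only reports what the DMI tables claim, which is exactly the problem - it proves nothing about errors actually being corrected:

```python
import subprocess

# Ask the DMI tables whether the platform claims ECC support.
# Requires dmidecode and root; this reports the firmware's claim only,
# not that single-bit errors are actually being corrected.
out = subprocess.run(["dmidecode", "--type", "memory"],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    if "Error Correction Type" in line:
        print(line.strip())  # e.g. "Error Correction Type: Single-bit ECC" or "... None"
```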

Now, all that being said, I actually ran an FX-8350 on a Gigabyte GA-990FXA-UD3 with regular non-ECC desktop RAM (with 8 WD drives in the NZXT Source case you mention, but with an IBM M1015 controller) as my VMware ESXi / FreeNAS server for years and never had any problems. Maybe I was just lucky, but there is still significant ongoing debate about whether ECC is truly necessary for ZFS.

Some suggest that it is absolutely crucial, while others suggest that yes, there is a risk of data corruption in RAM, but it is no worse than it would be with any other file system. I'm honestly not sure who to believe anymore, but I have 96GB of registered ECC DIMMs in my current server, and if I were to upgrade it or build another, I'd go with ECC again, just to be on the safe side. I also run a small ZFS mirror in my workstation, which has 64GB of non-ECC regular desktop RAM.
 