BUILD Hardware for 6-8 disk NAS

Status
Not open for further replies.

Kosta

Contributor
Joined
May 9, 2013
Messages
106
Hello,

I just joined here yesterday, and since I am planning on building my own NAS, I think this is the best place to ask.

The system will be in the cellar tech room, which isn't very warm (it's not cold either), so I have no worries about noise.

The current setup I have is a D-Link DNS-323 NAS on a CAT6 gigabit network, an RT-N66U router behind the cable modem, rooms connected with single cables, and a Cisco SE2500 in front of the main computer (where I will mainly be using the NAS). You can guess that I'm not really happy with the speed of the DNS-323, and this is the reason for the change: I want speed, reliability and storage space with expandability options.

My idea after reading a LOT around is a DIY NAS based on RAID-Z2 configuration using FreeNAS.

What I want:
- 110 MB/s transfer speeds, read and write (note that the computer transfers from a Samsung 840 PRO SSD, so speed isn't the issue on the computer side)
- storage space of about 8TB in the beginning (4x 2TB), ability to expand to 12TB later on, and even more if and when 4TB disks come out, maybe even through adding vdevs if needed
- redundancy, I don't want to care much about the data loss
- lowest possible power usage without hardware overkill!
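A rough sanity check of those capacity numbers (my own sketch, counting raw space only; real ZFS reserves extra room for metadata and slop, so usable space will be somewhat lower):

```python
# Raw-capacity estimate for a single RAID-Z2 vdev: two disks' worth
# of space goes to parity. Ignores ZFS metadata/slop overhead.
def raidz2_usable_tb(disks: int, disk_tb: int) -> int:
    return (disks - 2) * disk_tb

print(raidz2_usable_tb(6, 2))  # 6x 2TB -> 8 TB raw, the starting goal
print(raidz2_usable_tb(8, 2))  # 8x 2TB -> 12 TB raw, the expansion goal
```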

I also considered link aggregation, however I think that without two managed switches and dual connections to the rooms it wouldn't make much sense. Only one computer will be accessing the NAS at high speed; the other clients are completely OK with somewhat lower speeds.

The current setup on the main computer is OS X on a PC; the NIC is a LogiLink PCIe 1Gbit 8111F and the mainboard an ASUS P8Z77-V PRO (the reason for the LogiLink card is that WOL doesn't work with the Intel onboard NIC under OS X).

The reason for FreeNAS is also the need for all three protocols. Samba has a high overhead, so I intend to go AFP or NFS with OS X, and my media player really likes NFS better than Samba. I have other Apple devices in the house, so AFP would come in handy.

My current plan is to get following:
Supermicro X9SCL-F-O
Intel G2020
Kingston 16GB 1333 CL9 ECC Registered - KVR1333D3D4R9SK2
be quiet 400W E9 (80+ Gold certified)
Fractal Design Arc Mini (ATX, 6 disk slots, two 5.25" bays)
APC Back-UPS ES 325VA BE325-GR++

I reckon the motherboard is a good choice, but in combination with that CPU I'm asking myself whether it isn't overkill for the given usage. It's important to note that the server will be used mainly for file serving, BitTorrent and maybe SABnzbd/SickBeard/CouchPotato (I haven't researched this well yet). I am very keen on keeping my power costs as low as possible. The initial cost can be a little higher, but not so high that it won't pay off in the long run. I also want this server to be relatively future-proof, meaning that for at least a couple of years I can just leave it on 24/7 without really thinking about it.

Main worries are:
- can I somehow achieve 50W max idle usage, and 20W when the HDDs sleep? Will this CPU make that possible?
- are there alternatives, for example a dual-core 1.8GHz Atom? I notice most high-end Synology units use the Atom D2700 and their power usage is not very high
- how do I fully saturate the gigabit network I have at home (transfer speeds of 100-120 MB/s)?

Many thanks.
Kosta
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Atoms are probably the only way you're going to get 20W when the HDDs are asleep. Atoms also aren't very powerful, so getting above about 40MB/sec across the network is extremely rare. There's no chance you'll get anything above 50MB/sec though.

If you are expecting 110MB/sec regularly, you'd better start looking at 10Gb LAN hardware. While it's not unheard of to get 120MB/sec when transferring data, for large file transfers you can typically expect it to average 80-90MB/sec.

Make sure you read my guide so you understand the limitations with trying to expand a zpool later. It's not as simple as adding a single disk.
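To make that concrete, here's a toy model (my own illustration, raw capacity only): a RAID-Z2 pool grows by adding a whole new vdev, never by tacking one disk onto an existing vdev.

```python
# Toy model: a zpool is a list of vdevs. A RAID-Z2 vdev cannot gain
# disks after creation, so capacity grows in whole-vdev steps.
def raidz2_vdev_tb(disks, disk_tb, parity=2):
    return (disks - parity) * disk_tb

pool_vdevs = [raidz2_vdev_tb(6, 2)]    # one 6x 2TB RAID-Z2 vdev
print(sum(pool_vdevs))                 # 8 TB raw

# Expanding means adding another complete vdev (six more disks),
# not a single 7th disk:
pool_vdevs.append(raidz2_vdev_tb(6, 2))
print(sum(pool_vdevs))                 # 16 TB raw
```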

There are potential problems if you try to share the same files with more than 1 protocol at the same time. In short, you can corrupt your files due to improper file locking.

For the wattage you are hoping for, 400W is overkill. Typically you want loading to be 40-60% of the power supply's maximum for peak efficiency.

It's kind of funny, everything you want is what a lot of other people ask for. Not sure if you've done any searching, but everyone wants a cheap, expandable, low-power, high-performance server.
 

Kosta

Contributor
Joined
May 9, 2013
Messages
106
Hi cyberjock,

I already went through your pptx and understand the limitations of the zpool and how to expand it with vdevs. And this is also one of my problems: I wish to back up some important data externally, like documents, work, photos, some videos, and for this I'll use a single 2TB disk, which is more than enough. I guess leaving that disk in the NAS and configuring it as a single disk is senseless; the best would probably be a hot-swap bay in the main computer, and to just transfer over the network what is important (incremental backup), right?

10Gb network is a no-go. Too expensive to explain to my wife :]

Darn, if what you say is right and sharing files over multiple protocols is bad, that is really weird, as my DNS-323 can even do that, and I have never had any file loss. It has an SMB share active, but I generally use NFS. I was really hoping to use AFP and NFS mixed, but I guess I could separate them by folders. CIFS will only be used from Windows, so that is not really a problem.

Concerning the PSU: there is hardly a 150W PSU out there. I read about picoPSUs; I have to research those a little more. Would a 150W picoPSU be a better solution?

I did a little search, to be honest. I will do more.
 

Kosta

Contributor
Joined
May 9, 2013
Messages
106
After reading this here:
http://forums.freenas.org/showthread.php?12276-So-you-want-some-hardware-suggestions

I see I am pretty much in line with the suggestions; the only big discrepancies are the CPU and the PSU. I still have to look at the memory though, Kingston is not on the tested list.

How does the G2020 compare to the E3-1220Lv2 in power consumption and CPU performance? I mean, it's more than double the price! Of course the E3 will have more oomph than the G2020, and considering the E3 is only 17W TDP, what will the system idle at with the HDDs off? I have no need for AES-NI though, so no real need for the quad E3. And will the transfer speeds also differ between the two CPUs?

Also one very open question: do I want encryption??
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
After reading this here:
http://forums.freenas.org/showthread.php?12276-So-you-want-some-hardware-suggestions

I see I am pretty much in line with the suggestions; the only big discrepancies are the CPU and the PSU. I still have to look at the memory though, Kingston is not on the tested list.

How does the G2020 compare to the E3-1220Lv2 in power consumption and CPU performance? I mean, it's more than double the price! Of course the E3 will have more oomph than the G2020, and considering the E3 is only 17W TDP, what will the system idle at with the HDDs off? I have no need for AES-NI though, so no real need for the quad E3. And will the transfer speeds also differ between the two CPUs?

Also one very open question: do I want encryption??

The E3-1220Lv2 only has two cores.

http://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E3-1220L+@+2.20GHz&id=1197

note: E3-1220L, they don't seem to have the v2.

http://www.cpubenchmark.net/cpu.php?cpu=Intel+Pentium+G2020+@+2.90GHz

Note that the G2020's video support should be unnecessary given the onboard server video provided by the Supermicro X9S boards.

The presence of ECC is totally awesome.

I'm guessing you picked the SCL over the SCM hoping to get slightly lower power utilization? I don't have a head-to-head comparison available but will note that I've seen an SCL+ with E3-1230 idling around 45w, which is pretty amazing.

As far as

Kingston 16GB 1333 CL9 ECC Registered - KVR1333D3D4R9SK2

Please note that there's a very good reason the hardware suggestions sticky says "Please don't guess at compatible memory for your system. Use a manufacturer's memory selection tool." It's because ... you've selected incompatible memory. The proper Kingston 1333 part is "KVR1333D3E9S/8G" for a single module or "KVR1333D3E9SK2/16G" for a two-module 16GB kit. However, you should check pricing on the 1600 parts and get those instead if possible; parts include "KVR16E11/8" for a single module or "KVR16E11K4/32" for a 32GB slot-filler kit. Not only will this offer you extra flexibility if you ever decide to upgrade the CPU, but it may also result in (admittedly very minor) energy savings when run at the lower speed on the G2020.

You can also go the opposite direction and find memory off Supermicro's compatibility list. That may end up being less expensive than deciding on a memory supplier and seeing what they have.
 

Kosta

Contributor
Joined
May 9, 2013
Messages
106
Hi jgreco,

I saw those benchmarks, however I really have no idea what the requirement is to achieve the full *possible* gigabit transfer speed (I see people get max. 110 MB/s). I am now considering putting in the 1230v2, as I reckon this CPU will do just fine for SABnzbd and SickBeard(?), meaning I could use my main computer even less for such stuff. My computer tends to run for hours and days for torrents, so I could really save some power by running those things on the FreeNAS box and using my main computer only for the stuff I really need power for: gaming, surfing, general internet stuff.

So, should the 1230Lv2 have lower idle consumption than the G2020? I picked the SCL because it's 20€ cheaper than the SCM, and it has 6x SATA2 and no SATA3, which is unnecessary anyway. But now I've picked the SCM, as I don't need the PCI slots on the SCL, and the SCM has 3x PCIe, and that is a good enough reason for 20€ more.

I wrote the initial post before reading your hardware recommendation post, and in the meantime I have found compatible RAM, yes. Depending on the board, I will check the compatibility list. Kingston is by now the cheapest available; the site says it's compatible, and luckily it's also available here for a reasonable price (around 68€ per 8GB stick). It's in fact more expensive on the Supermicro homepage.

I have to admit the price of the whole build has gone up and up, but I hope this build can last at least 5 years, if not longer. The point of going with these boards is that they have PCIe, so if I ever want more speed, I can add a 10Gb NIC. Expanding disks: no problem, SATA card(s)... if it doesn't die, that is.

Okay, a couple of questions:
What power consumption can I expect with the 1230v2 and either of these boards at idle (without disks)?
Would you recommend Supermicro or Intel? I am looking at these two now: X9SCM-F or S1200BTL.
With RAID-Z2, 4+2, does that mean the 4 data disks are basically striped, meaning higher performance?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Think you're mixing up part numbers. There's the E3-1220Lv2 low power (17W?), 3M cache and two cores, and the E3-1230v2 (69W?), 8M cache and four cores. Probably my fault for injecting further parts for purposes of discussion. There is no E3-1230Lv2 (sigh).

Now as tempted as I've occasionally been to pick up a 1220Lv2, there's just no realistic use case for it here right now. So I tend to focus on what we do use, and on the low end that's 1230 and 1230v2, the low-end fully-featured Xeon. The 1220Lv2 is a crippled version with substantially lower TDP.

That having been said... any CPU sold today can reach "full *possible* gigabit transfer speed". Stick an SSD in an Atom with Intel network cards and large-packet traffic and you have gigabit speed.

The problem is that we add more layers in, and they eat CPU, and cause latency, which causes things to be slower. How much slower? Possibly a lot. And there are other factors, like the speed of storage. So it is very difficult to guarantee any particular speed from a given system. In general, throwing more resources at it will tend to make it go faster, throwing better resources at it will tend to make it go faster, etc.

A X9SCL+-F system with an E3-1230 (not v2) with carefully picked parts has been seen to idle at 45 watts here. When pushed, the system was still running less than 90 watts. It's about a 40-45 watt swing in utilization. This leads me to guess that the CPU alone at max is perhaps 70 watts, idling down around 20-25 watts, and the rest of the system is about 20 watts. Just a guess though.

If that's actually the case, then yes, a Xeon E3-1220Lv2 with its substantially lower max TDP is going to save power. However, it is also worth noting that the E3-1230 (v2 or not, doesn't matter) has a *ton* more capacity, and is much more likely to have the legs to handle a large load thrown at it suddenly. If I was running things other than fileservice on a platform, this would weigh heavily in the decision-making process.

I haven't had a reason to check out the consumption of the E3-1230v2 on a bare system but I expect it'd be similar to the E3-1230.

We've had pretty good luck with the Supermicro boards in the past, can't say quite the same for at least a few Intel boards, but they're all pretty solid technology. Looks like users over at NewEgg didn't like the Intel too much, but the X9SCL is well-liked.
 

raidflex

Guru
Joined
Mar 14, 2012
Messages
531
I think the processor choice depends heavily on whether you are using any plugins, and which ones. I make use of many plugins, and the Xeon E3-1225v2 was a nice boost over the Intel G520 I had before it. It especially helps when performing any compression/decompression. I can push 100MB/sec+ to my desktop, which has an SSD. So your 110MB/sec goal should be almost, if not fully, achievable with good hardware. Intel NICs are great for FreeNAS and provide great performance.
 

Kosta

Contributor
Joined
May 9, 2013
Messages
106
Think you're mixing up part numbers. There's the E3-1220Lv2 low power (17W?), 3M cache and two cores, and the E3-1230v2 (69W?), 8M cache and four cores. Probably my fault for injecting further parts for purposes of discussion. There is no E3-1230Lv2 (sigh).

Indeed I was. However it's clear there is a 1230v2 and a 1220Lv2 out there.

Now as tempted as I've occasionally been to pick up a 1220Lv2, there's just no realistic use case for it here right now. So I tend to focus on what we do use, and on the low end that's 1230 and 1230v2, the low-end fully-featured Xeon. The 1220Lv2 is a crippled version with substantially lower TDP.

I actually read all around that the idle power usage of the 1230v2 is the same as the 1220Lv2's, just that the 1230v2 should have more headroom and be able to finish a task faster than the 1220Lv2. If that is so, then it suits me. Especially since I would have to buy the 1220Lv2 as a tray CPU for 180€, plus about 15€ for a cheap cooler, or just get a boxed 1230v2 for 205€. We are talking 10€ for a quad core. So if the idle is about the same (about 41W idle was reported for the 1220Lv2), it's a clear decision. However, I found no real power consumption measurements for the 1230v2.

That having been said... any CPU sold today can reach "full *possible* gigabit transfer speed". Stick an SSD in an Atom with Intel network cards and large-packet traffic and you have gigabit speed.

I read that with an Atom I would get max. 50 MB/s, and that a Xeon would achieve those 90 MB/s. So which is right?

The problem is that we add more layers in, and they eat CPU, and cause latency, which causes things to be slower. How much slower? Possibly a lot. And there are other factors, like the speed of storage. So it is very difficult to guarantee any particular speed from a given system. In general, throwing more resources at it will tend to make it go faster, throwing better resources at it will tend to make it go faster, etc.

What do you mean by layers? I am aware that for apps like SABnzbd (repairing binaries, unpacking and all) the 1230v2 will come in handy, but does the disk count matter too? Do 6 disks in RAID-Z2 need a more powerful CPU than a single disk?

A X9SCL+-F system with an E3-1230 (not v2) with carefully picked parts has been seen to idle at 45 watts here. When pushed, the system was still running less than 90 watts. It's about a 40-45 watt swing in utilization. This leads me to guess that the CPU alone at max is perhaps 70 watts, idling down around 20-25 watts, and the rest of the system is about 20 watts. Just a guess though.

That is OK usage; however, with 6 disks added I will probably idle at 70W. But honestly, the difference between going Atom or Xeon is 20W.

If that's actually the case, then yes, a Xeon E3-1220Lv2 with its substantially lower max TDP is going to save power. However, it is also worth noting that the E3-1230 (v2 or not, doesn't matter) has a *ton* more capacity, and is much more likely to have the legs to handle a large load thrown at it suddenly. If I was running things other than fileservice on a platform, this would weigh heavily in the decision-making process.

Exactly my thoughts too.

I haven't had a reason to check out the consumption of the E3-1230v2 on a bare system but I expect it'd be similar to the E3-1230.

That is what I hope for too.

We've had pretty good luck with the Supermicro boards in the past, can't say quite the same for at least a few Intel boards, but they're all pretty solid technology. Looks like users over at NewEgg didn't like the Intel too much, but the X9SCL is well-liked.

OK, I guess I'll go Supermicro, the SCM, since it has 3x PCIEx, and no unnecessary PCI.


But what about the PSU? I looked here at different solutions, for example:
A 400W PSU with 80+ Gold certification: cyberjock says this is overkill. I read that spin-up of the HDDs needs quite a bit of headroom on the PSU, but how much? What PSU wattage should I be looking at? I've also seen solutions with an external PSU, like a picoPSU, however I question whether it would pay for itself. A picoPSU costs some 45€ alone, plus a good external brick at about 70€, while a good internal PSU with Gold certification is about 60€. When would those extra 60€ come back through the power-usage "win" of the external PSU? The question is still how many watts: going by his post, I'd say 180W or a little more, maybe 200W; in that case the external PSU option costs about 110€.
There is an FSP Fortron/Source FSP180-50LE 180W for 40€, but its efficiency is something like 70%; I'd reckon the be quiet! with Gold certification would then be the better solution...?

- - - Updated - - -

I think the processor choice depends heavily on whether you are using any plugins, and which ones.

Indeed, I totally omitted that if I decide to run those download tools on the fileserver, I also need a reasonably potent CPU, otherwise things will take centuries.

It especially helps when performing any compression/decompression. I can push 100MB/sec+ to my desktop, which has an SSD. So your 110MB/sec goal should be almost, if not fully, achievable with good hardware. Intel NICs are great for FreeNAS and provide great performance.

So a Xeon combined with an Intel NIC on the Supermicro board should in fact guarantee me these speeds, as long as the writes on my desktop can keep up?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I read that with an Atom I would get max. 50 MB/s, and that a Xeon would achieve those 90 MB/s. So which is right?

If you buy a Ford Escort, it has a max speed of 110 miles per hour.

If you put four fat guys into it, however, it'll have problems getting up past 70.

If you try to also haul a camper with it, you might get 30.

If you buy a Ford F-150, it also has a max speed of ~110 miles per hour.

However, you can cram four fat guys and haul a camper with it and still hit that 110, because the engine is sufficient.

This is very much the case with CPUs too. CPUs became capable of filling a gigabit pipe more than ten years ago. Today's Atoms are similar in performance to 2005's Opteron 240, a CPU we used for years as a high-powered fileserver.

The problem is, any fileserver can be slagged by a bad load. FreeNAS itself is a pretty heavy load, it's big and beefy and made out of a bunch of scripts and stuff cleverly packaged up together to do all sorts of cool stuff. Then you have ZFS, a filesystem with large amounts of overhead and massive system requirements. Handling multiple hard drives in RAIDZ is a further amount of overhead. And then there's what you're storing. Our old Opterons could serve up large files off of FFS at gigabit speeds but didn't have the seek bandwidth to handle searching thousands of small files quickly. ZFS can actually help with that, but it does so by consuming even more resources.

What do you mean by layers? I am aware that for apps like SABnzbd (repairing binaries, unpacking and all) the 1230v2 will come in handy, but does the disk count matter too? Do 6 disks in RAID-Z2 need a more powerful CPU than a single disk?

Someone has to do all the XOR logic. Sun's theory was that CPU time is less expensive than custom silicon.
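A minimal sketch of the single-parity XOR idea (RAID-Z2 actually keeps two parity blocks, the second computed with a more complex scheme, but the CPU-does-the-work point is the same):

```python
from functools import reduce

# XOR all data blocks together, column by column, to get the parity block.
def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # three "data disks"
parity = xor_blocks(data)            # the "parity disk", computed by the CPU

# Lose one disk: XOR of the survivors plus parity reconstructs it.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
print("recovered:", recovered)       # recovered: b'BBBB'
```

This is why parity RAID costs CPU cycles on every write, and why more disks in the stripe mean more bytes for the CPU to churn through.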

That is OK usage; however, with 6 disks added I will probably idle at 70W. But honestly, the difference between going Atom or Xeon is 20W.

That's my conclusion too. So if you think that there's a possibility that you could make use of the speed at some point, then the Xeon becomes the desirable selection.

A 400W PSU with 80+ Gold certification: cyberjock says this is overkill. I read that spin-up of the HDDs needs quite a bit of headroom on the PSU, but how much? What PSU wattage should I be looking at? I've also seen solutions with an external PSU, like a picoPSU, however I question whether it would pay for itself. A picoPSU costs some 45€ alone, plus a good external brick at about 70€, while a good internal PSU with Gold certification is about 60€. When would those extra 60€ come back through the power-usage "win" of the external PSU? The question is still how many watts: going by his post, I'd say 180W or a little more, maybe 200W; in that case the external PSU option costs about 110€.

Going a bit larger is safer than going a bit smaller.

To figure out your minimum acceptable supply, figure out the power consumption of all the parts you select. Some are easily found, like the CPU, you can use the TDP watt rating. Some, like the motherboard, are not easily found, but you can say 30 watts for the motherboard and be fairly certain not to be underestimating. Memory, add-in cards, fans, and hard drives too. Add it all up. Now comes the important part. You need to find the starting current for the hard drives. It'll be on the spec sheet for the drives. This is a separate calculation and will be around 2 amps at 12 volts per 3.5" drive (and if you calculate things the way I suggest, just using 2 amps per drive ought to be safe). So now you've figured out your system requires maybe 180 watts max plus 16 amps at 12v starting current.

First, double the watts. You should be looking at around 360 watts. 350 is fine. 400 is fine. A bit higher is fine. 750 is probably not. Modern "80 Plus" certified supplies are inefficient at 5% loading and not that great at 10%, but at 20% they're in the neighborhood of their efficiency rating. At 50% they're doing really well, then they droop a bit as they get past 80%. Look at this Kingwin STR-500 report (more on that in a sec). Sadly it only samples three load percentages, sigh.

But anyways, so now you've got a supply that's rated at least double what your system actually needs, and that's going to be likely to be in the supply's sweet spot, where it isn't totally stressed and baking itself to death, but it also isn't wastefully and pointlessly large. So look at the specs and see if your additional start current requirement can be met by the power supply. For this Kingwin, rated at 41.5A on the 12V, 180 watts is about 36% of 500 watts, so 36% of 41.5A is 15A, so there's probably something like 25A at 12V free. That's definitely enough headroom for 16 amps worth of start current.
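Written out for a 6-drive build like yours (all figures are the rough estimates from this thread, not measurements):

```python
# PSU sizing sketch using the rules of thumb above (all numbers rough).
cpu_tdp_w     = 69    # E3-1230v2 TDP
motherboard_w = 30    # generous guess, per the post
ram_fans_w    = 15    # memory, fans, misc (estimate)
drives        = 6
drive_run_w   = 6     # per 3.5" drive while spinning (rough)

running_w = cpu_tdp_w + motherboard_w + ram_fans_w + drives * drive_run_w
print("max running estimate:", running_w, "W")   # 150 W

# Double it so the supply sits near its efficiency sweet spot:
print("target PSU size:", 2 * running_w, "W")    # 300 W -> a 350-400W unit

# Separate spin-up check: ~2 A at 12 V of starting current per drive.
start_a = drives * 2
# Pessimistically charge all running watts to the 12 V rail of a
# 41.5 A supply (the STR-500 figure above):
free_a = 41.5 - running_w / 12
print("start current needed:", start_a, "A; headroom:", round(free_a, 1), "A")
```

With roughly 29 A free against 12 A of start current, the spin-up requirement is comfortably met.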

We selected that STR-500 for ESXi nodes because at the time, nothing else was available that met our requirements. The ESXi nodes run around 60 or 70 watts so we're barely in the 10-15% utilization range for those supplies, but they are MUCH more efficient than a standard power supply. So don't be afraid to go a little larger.

In the end, my theory is that it probably doesn't pay off to go for the cheapest and smallest stuff. More CPU gives you more legroom to "go fast" and "do things." More power supply means "runs cooler" and hopefully "lasts longer." More memory means "ZFS happiness." Larger hard drives means "more storage fun." etc.
 

Kosta

Contributor
Joined
May 9, 2013
Messages
106
Alright so Xeon 1230v2 is a go.

PSU: will do that calculation as soon as I am able to decide which HDDs to put in.

I am currently weighing four drives: WD Green 2TB, WD Red 2TB, Seagate Barracuda 7200.14 2TB and Seagate Constellation CS 2TB. They range from 80€ apiece up to 110€ for the Constellation. My intention is to have 6 drives, and I already have 2 Greens. Two are in the D-Link NAS as JBOD and have been working fine for 2.5 years now.
I've also read that with FreeNAS's software RAID in ZFS, one could just as well go for 6x Green. However, the price difference between Green and Red is really only 5€. Can I mix the two, 2x Green + 4x Red? It would be nice to avoid same-batch failures, for instance by making one Red and one Green the redundancy drives...

However, reading a bunch of reviews on the net, I can't decide.
The Green, they say, is bad in RAID configurations. The Red, they say, is good, but pointless if its power usage really is higher than the tech specs report. The Barracuda has high consumption but is a good disk. The Constellation is an enterprise disk, but it is also priced accordingly.

Any thoughts on this?
 