BUILD Newbie 15 x 4TB HDD configuration, doubts

Status
Not open for further replies.

diedura

Dabbler
Joined
Mar 31, 2013
Messages
12
Hi all.

I've wanted a NAS for a long time; I'm a bit of a digital Diogenes (I hoard everything).

It will be primary storage for my digital content (video, software), and if I could also use it for my VMware box, that would be great.

Also, I often go to LAN parties, where raw throughput will be appreciated.

So, my first approach was a Synology DS2413 plus twelve 4TB hard drives. That solution is easy, but it's a closed box from the day I buy it, and the performance is only about 200MB/s (LACP over two gigabit ports).
The pros are the simplicity: no configuration to think about, reduced power consumption, and plug in and rock & roll.

Then a friend of mine told me I should consider building my own NAS.

The first thing I saw is that I can't just throw 12 HDDs together, so it's either 8+2 in RAIDZ2 or 12+3 in RAIDZ3. I'll go for the 15-disk configuration.

The chosen hard drive will be the ST4000DM000, which I can get at a great price (160€ per unit).

Then, three 5-in-3 backplanes in a 9-bay case, for a total of 15 available slots (Sharkoon T9 and Icy Dock IB-555SK).

For the SATA controllers, two flashed IBM M1015s will do the job, plus the motherboard's free ports for an SSD cache if I choose to add one.

Connectivity will be supplied by an IBM-branded Intel PRO quad-port gigabit card in LACP.

But my problem comes when I have to choose the motherboard + processor + memory.

My intention was to go with an Intel Pentium G2130 or i3, or an AM3+ Opteron, but the problem is the amount of RAM allowed, which tops out at 32GB.

The other option is to go with a low-end Xeon E5-1620 on a Supermicro X9SRL plus 64GB of RAM, but the price is high.

So, I post a few questions.

1. Is it insane to think about a pool of that size with 32GB of RAM?

2. Will the processor have enough power?

3. For that amount of storage, what other build/solution would you suggest?

4. It would be great if answers came with the expected throughput for the proposal.

Thanks a lot!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Do be aware that if you're doing something like RAIDZ2 or RAIDZ3, there is a certain amount of overhead, and performance may not be as great as you would like. Also, 12 plus 3 is not going to provide optimal performance, RAIDZ works best with a small power-of-two number of data disks plus the parity disks (that's how to figure it, not how it works), so if you want RAIDZ2, that's two-data-plus-two-parity=four disks, 4+2=6, 8+2=10. For RAIDZ3, 4+3=7, 8+3=11. There is also a general consensus that vdev width should probably be limited, but less consensus as to what that ought to be, though it seems to be about a dozen. There are varying opinions as to how much of a difference all of this makes in reality, and you can absolutely do other layouts, but be aware of it. Your proposed configuration is not entirely unreasonable, but may fall into the "performs poorly" category.
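To make the width rule concrete, here is a rough sketch in Python of how the preferred layouts work out for the OP's 4TB drives. This is just the arithmetic implied above, not an official formula; real usable space is lower once metadata, slop space, and TB-vs-TiB are accounted for.

```python
# Sketch of the width rule: preferred RAIDZ vdevs use a power-of-two
# number of data disks plus the parity disks. Capacities are raw TB.

DISK_TB = 4  # ST4000DM000

def raidz_usable_tb(total_disks, parity):
    """Raw data capacity of a single RAIDZ vdev (very rough)."""
    return (total_disks - parity) * DISK_TB

def is_preferred_width(total_disks, parity):
    """True when the data-disk count is a power of two (2, 4, 8, ...)."""
    data = total_disks - parity
    return data >= 2 and (data & (data - 1)) == 0

for disks, parity in [(6, 2), (10, 2), (7, 3), (11, 3), (15, 3)]:
    print(disks, parity, raidz_usable_tb(disks, parity),
          is_preferred_width(disks, parity))
# The proposed 15-disk RAIDZ3 has 12 data disks -> not a power of two.
```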

1) The ZFS sizing rules for FreeNAS suggest something like 6GB plus 1GB per 1TB, so I expect your question is, do you really need 54GB of RAM. The answer is conditionally no. The sizing rule is just to give people an idea of what a well-performing, properly-sized system might look like. 32GB is likely to be just fine, and unless your system is very busy, my guess would be that you wouldn't notice much of a difference. If you were creating a busy departmental fileserver, then yes, 48GB or 64GB and don't cheap out.
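A quick sketch of the sizing arithmetic above, assuming the 6GB-plus-1GB-per-TB rule of thumb and counting the roughly 48TB of data space that 15 drives with three disks of parity would give:

```python
# FreeNAS rule of thumb quoted above: ~6GB base plus 1GB of RAM per 1TB
# of storage. A guideline for a well-performing system, not a hard minimum.

def suggested_ram_gb(storage_tb, base_gb=6):
    return base_gb + storage_tb

# 15 x 4TB with 3 disks of parity leaves ~48TB of data space:
print(suggested_ram_gb(48))  # -> 54, the figure discussed above
```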

The thing is, ZFS uses massive amounts of RAM to hold lots of stuff that has been used, and may be used again soon, and a lot of what it is doing isn't strictly required, but is needed for good performance. There isn't an exact GB number, you can easily create a ZFS box with 16TB of disk space that ought to have 32GB of RAM (key: big working set), and on the flip side, a box with less RAM than recommended can be successful, albeit likely with lower performance. The FreeNAS developers just don't want people coming in and saying "well I got this 12 4TB drive box and 8GB of RAM, why does this work like suck?" because in reality you could probably get that to work, given sufficient tuning, but it probably won't perform all that well. But for some workloads, it actually might be just fine... but it's 2013 and RAM is relatively cheap. It was worth agonizing about in 2006, not today.

2) Can I suggest that you go with a Xeon processor with a server class motherboard? There's great stuff like the E3-1230v2 (the Xeon version of your CPU but with four cores and all the features) and Supermicro makes a bunch of nice 1155 boards like the X9SAE-V, the X9SCM, and the X9SCL+-F. The increased price may be offset somewhat by the inclusion of two quality Intel ethernet controllers onboard. ZFS is a bit processor-piggy (it comes from Sun Microsystems after all), and CIFS is also a bit processor-piggy. If you then also want to do things like compression, you can run out of CPU real quick, and when you've spent a fair bit of money on a NAS, that can be frustrating.

3) No particular suggestions. You don't appear to be making significant mistakes.

4) Without someone who has actually built a similar system, it is very difficult to give expected throughput.
 

diedura

Dabbler
Joined
Mar 31, 2013
Messages
12
jgreco, thanks for your answer.

For the first point, my second thought is to have two volumes, one 8+2 RAIDZ2 and one 4+1 RAIDZ1, so the same number of data disks and parity disks. Better or worse?
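A rough comparison of the two options, using raw capacity arithmetic only. The "worst case" line counts the smallest number of disk failures that could destroy the pool, if they all land in the least-protected vdev; that is the trade-off hiding in the 4+1 RAIDZ1:

```python
# Same total disks (15) and parity disks (3), different risk profiles.
DISK_TB = 4

layouts = {
    "single 15-wide RAIDZ3": [(15, 3)],
    "8+2 RAIDZ2 + 4+1 RAIDZ1": [(10, 2), (5, 1)],
}

for name, vdevs in layouts.items():
    usable = sum((d - p) * DISK_TB for d, p in vdevs)
    # Pool dies when any one vdev loses more disks than its parity count:
    worst_case = min(p for _, p in vdevs) + 1
    print(f"{name}: {usable}TB usable, "
          f"{worst_case} unlucky failures can kill the pool")
```

Same 48TB usable either way, but two failures in the RAIDZ1 vdev lose everything, versus any three with RAIDZ3; in exchange, two vdevs should give better random I/O.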

About the amount of RAM, your answer is a relief.

If I go for an 1155 platform, the maximum RAM is 32GB, with no way to get more. So, following your advice, my idea for an 1155 server-class build is:

Mobo: X9SCM-F - 204€ (8€ more than the X9SCM, so I spend a little extra to get IPMI)
http://www.lambda-tek.com/MBD-X9SCM...SATA-LAN-IPMI-Motherboard-Retail~csES/B666946

Proc: E3-1230v2 - 227€
http://www.lambda-tek.com/CM8062307...et-OEM~csES/index.pl?region=ES&prodID=2481887

Mem: 4x KVR13E9/8HM - 276€
http://www.lambda-tek.com/componentshop/index.pl?region=ES&searchString=KVR13E9/8HM+&go=Go

707€ just for the brains; the budget for the "box" is heading toward 1800€, plus 2400€ in hard drives :S (4200€ vs 3000€ for the Synology alternative).

I'll have to sleep on it tonight.

Again, thanks a lot for your answer; I'll be glad to read your feedback whenever you can. More opinions are welcome!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Maximum speed and maximum storage are always at odds with each other. Maximum storage is generally achieved by tossing as many disks together in a RAIDZn group as you can. Maximum speed typically comes from striping multiple small vdevs together, and is affected very heavily by many design and implementation decisions. In many ways, the best thing to do is to *try* potential configurations and see how it works. Sorry, I know that answer sucks. Second best is finding someone else who did something similar and looking at their results.
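One common rule of thumb behind the speed-vs-storage trade-off (an assumption here, not a measurement): for random I/O, each RAIDZ vdev performs roughly like a single member disk, which is why striping several small vdevs is faster. A sketch with an assumed per-disk IOPS figure:

```python
# Rule-of-thumb only: a RAIDZ vdev's random IOPS ~ one member disk's IOPS,
# so the pool scales with the number of striped vdevs, not disks.

DISK_IOPS = 75  # assumed figure for a 5900rpm desktop drive

def pool_random_iops(num_vdevs, per_disk_iops=DISK_IOPS):
    return num_vdevs * per_disk_iops

print(pool_random_iops(1))  # one 15-wide RAIDZ3 vdev
print(pool_random_iops(3))  # three 4+1 RAIDZ1 vdevs striped
```

Sequential streaming scales differently (more with data disks), which is part of why trying the actual configurations is the only reliable answer.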
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
If you want maximum speed, when/if you build this system, give it a shot using ZFS pools first. If you are not getting the throughput you need then reformat the drives to UFS and create a RAID5 or RAID6, this will give you much better performance but does lack the ZFS data corruption checking.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's a good point, but that blade has two edges. ZFS is piggy. FFS/UFS is very lean and efficient, and performance is more predictable. However, FFS/UFS can not take advantage of ARC/L2ARC to the extent that ZFS can; a FFS/UFS system will tend to have a given set of performance characteristics tightly coupled to the hardware/config in use, while ZFS performance for at least some workloads can be varied significantly through architecture of the storage subsystem, tuning and configuration, etc.

If you have a LAN party where you're heavily reading 50GB of storage, for example, a 32GB system cannot fit that all into memory. ZFS will optimize for it "better" than UFS. But if you have a 120GB L2ARC device, the potential exists for the entire 50GB of stuff to wind up in ARC (for the heavily accessed) and L2ARC (for the less-heavily accessed) leaving the pool much less busy than the FFS/UFS equivalent system.
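The caching argument can be sketched as a simple fits-or-doesn't check, using the 50GB working set and 120GB L2ARC figures from the example; the fraction of RAM usable as ARC is my assumption:

```python
# Does a hot working set fit in ARC (RAM) plus L2ARC (SSD)? Sizes in GB.
# The usable-ARC fraction is an assumed ballpark, not a ZFS constant.

def working_set_cached(working_set_gb, ram_gb, l2arc_gb, arc_fraction=0.8):
    cache_gb = ram_gb * arc_fraction + l2arc_gb
    return working_set_gb <= cache_gb

print(working_set_cached(50, 32, 0))    # RAM alone: ~25GB ARC, doesn't fit
print(working_set_cached(50, 32, 120))  # with a 120GB L2ARC: fits
```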

So you have this odd situation where ZFS is too piggy on the low end of the spectrum, and FFS/UFS beats it for the most part, but then also moderate workloads where it gets more complicated to predict, but then the heavy workload on big iron with lots of storage and lots of stuff going on, where it is ZFS for the win almost every time. But then you can still create situations where FFS/UFS on an SSD is just a better choice ... because FFS/UFS just lack a lot of the overhead, and SSD is so fast.

No guarantees that YOUR situation will necessarily work out exactly like any of this, of course, just trying to highlight the differences.
 

JaimieV

Guru
Joined
Oct 12, 2012
Messages
742
Also, I usually go to LAN parties

Aside from all the more technical aspects discussed already, are you really intending to build a 15-disk server that'll be thrown in the back of the car and taken to LAN parties? Because if you are, you really need to consider practical physical aspects and damage limitation for when it gets dropped - which it will.

With the waste heat of this build, going the simple route of building it into a normal flight case will require some serious fans and airflow.

Depending on the details of your situation I'd recommend building one machine for home, and one much smaller "disposable" machine for travel - perhaps with just two disks and an SSD L2ARC.
 

diedura

Dabbler
Joined
Mar 31, 2013
Messages
12
Thanks all for the help.

I've started buying the parts.

Two of the three backplanes arrived today.

Would you find it useful if I upload photos and progress of the build?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It would seem to be your decision whether or not to invest time in documenting (or showing off) for others to see.
 

diedura

Dabbler
Joined
Mar 31, 2013
Messages
12
I haven't forgotten this thread, but I'm having trouble getting the mobo, CPU, and RAM.

I have the three backplanes and the tower.

The two controllers are expected to arrive tomorrow.
 

9C1 Newbee

Patron
Joined
Oct 9, 2012
Messages
485
Good luck getting it all together. Such a great feeling when she fires up.
 

Caesar

Contributor
Joined
Feb 22, 2013
Messages
114
Wait didn't you say you had pics??? :)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Wait didn't you say you had pics??? :)

No kidding! I want pics, smell-o-vision files of the "new hardware scent", etc!
 

diedura

Dabbler
Joined
Mar 31, 2013
Messages
12
Built! I have a lot of photos and experiences to share with all of you, but right now the urgent thing is to configure the NAS.


Any advice on how to test the performance?

My idea is to build a single volume with all the disks and three disks of redundancy.

Any other configurations I should test?

Thanks!
 

diedura

Dabbler
Joined
Mar 31, 2013
Messages
12
Pool of 15 hard disks, ZFS:

iozone -a -+u -b output.xls
 

Attachments

  • output-cpu.rar
    46.6 KB · Views: 330

diedura

Dabbler
Joined
Mar 31, 2013
Messages
12
Still building. As recommended, I'm going to add an SSD for FreeNAS: a SanDisk 64GB SATA3 SSD.

I've ordered quad-port Intel PRO/1000 cards (one for the NAS and another for the computer I use).

Also, I think I should add a couple of SSDs, one for the ZIL and another for the L2ARC, but I don't know which SSDs to choose, or whether they would actually boost the performance of the NAS.

Besides, I now have data on the NAS, so every operation from here on must be non-destructive (not a perfect situation, but I don't have another option).
 

Sir.Robin

Guru
Joined
Apr 14, 2012
Messages
554
I want pics too! And I'm feeling an upgrade itch... :confused:
 