BUILD C226, i3, 10 drive build


Richman

Patron
Joined
Dec 12, 2013
Messages
233
Yeah, it was just hypothetical.

Oh sorry, I wasn't following along closely enough. I thought you were making a recommendation to the OP or something.
If you could add drives to a vdev, starting with a 4 drive RaidZ2 would make a lot of sense (for me at least).
That was actually what I was hoping to do as well. It would probably make a lot of sense for a lot of people. But the first post I read and researched was http://forums.freenas.org/threads/adding-disk-to-raid.8730/
and I thought that FreeNAS was, or should be, as advanced as Drobo and other NAS appliances that let you add a disk to an existing array. Then I realized the others don't use ZFS, which is where the limitation lies. I came to the realization that this one thing I had hoped to do is a ZFS trade-off for all of the other things ZFS offers that the others don't have.
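
To make the trade-off concrete: a ZFS pool grows by adding whole vdevs, not single disks, so usable space jumps in vdev-sized steps. A minimal sketch of that arithmetic (not the actual ZFS tooling; the sizes are just examples):
Code:
def raidz2_usable(disks_per_vdev, disk_tb):
    # RaidZ2 spends two disks' worth of space on parity per vdev
    return (disks_per_vdev - 2) * disk_tb

pool_tb = 0
for vdevs in range(1, 4):
    pool_tb += raidz2_usable(4, 2)  # grow by another 4x 2TB RaidZ2 vdev
    print(f"{vdevs} vdev(s): {pool_tb} TB usable")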

ANYONE: Since the above thread is locked and I didn't get a chance to ask one other question, can anyone tell me:
Is it possible, in FreeNAS, to add a disk to an array in a RAID 5 or 6 built on another file system like UFS? I was thinking of having a more traditional RAID array in the same FreeNAS box alongside a ZFS storage pool, for two different purposes,
i.e. a 3 disk RAID 5 and a 5-6 disk ZFS pool.
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
You might want to consider the Supermicro X10SL7-F-O instead of the AsRock.
I looked into the board you recommended and saw that it has a direct PCIe 3.0 x8 connection between the SAS/SATA controller and the CPU.
That is actually pretty cool and really sets it apart imo.

My current build is focused on a lower initial cost, with the option to expand at a later time:
* RaidZ2, 10 disks
* AES encryption
* LZ4 compression
* ECC memory

MB SuperMicro X10SL7-F
CPU Intel E3-1220 V3
RAM Kingston 8GB ECC
CASE Fractal Define R4
HDD Western Digital Red 2TB
PSU Enermax 500W (which I already own)

Future upgrades:
* 2x 8GB memory
* Better PSU
* Bigger hard drives (via autoexpand replacement)
* 10Gbit NIC
* UPS
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
@indy

That looks like a very similar build to the one I'm ordering. I'm going with the 1225v3, and also 8GB RAM, but I'm not sure if that'll be enough. RAM prices at the moment though...

I thought about using 2 or 3TB drives but decided in the end that whilst I /could/ replace them all with 4 or 5TB drives in the future, I'd

1. then have a bunch of 2TB drives I'd have no use for
2. have to replace /all/ the drives to increase storage

so I am biting the bullet and going with 4TB from the outset.

BTW, Seagate's 4TB NAS drives are a little cheaper than the WD Reds, at least on Amazon, so you might like to consider 5+5?

Why replace the PSU? If it can supply 12V at >20A you should be OK, no?

i
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
A 10 disk 4TB setup would give me 32TB of usable space, which is way beyond my current needs.
I feel that I would have a lot of unused space, while the mostly empty hard disks lose value.
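
(The usable figure is just simple arithmetic, ignoring ZFS metadata overhead and the TB-vs-TiB difference; a minimal sketch:)
Code:
disks, parity, disk_tb = 10, 2, 4   # 10 disk RaidZ2 with 4TB drives
print((disks - parity) * disk_tb, "TB usable")  # 32 TB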

The Seagate NAS disks have even better properties than the WD regarding power consumption and noise.
I will look into them, thanks for the heads-up.

Any reason for going with the 1225?
The only advantage I see is the slightly higher clocks, since most server boards do not use the IGP anyway.
The 1220 seems more than powerful enough to me; I mean, it is better than my current desktop CPU.

Regarding the PSU, I recently bought a passive Seasonic Platinum for my desktop PC and I love that thing.
It's silent, highly efficient even at low loads, and its electrical properties were very well reviewed.
Since the Enermax is only rated at Bronze efficiency (and probably performs a lot worse below 20% load), a swap might even pay off after a couple of years.
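
(A rough payback sketch; every number here is an assumption for illustration, not a measurement:)
Code:
load_w       = 60     # assumed average draw of the NAS
eff_bronze   = 0.78   # assumed Enermax efficiency at low load
eff_platinum = 0.92   # assumed Seasonic efficiency at low load
eur_per_kwh  = 0.28   # assumed electricity price

diff_w = load_w / eff_bronze - load_w / eff_platinum
print(f"~{diff_w * 24 * 365 / 1000 * eur_per_kwh:.0f} EUR saved per year")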
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
The slightly higher clocks should help Samba performance if you need that.

Without the IGP, can you still use IPMI to view the console remotely?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The slightly higher clocks should help Samba performance if you need that.

Without the IGP, can you still use IPMI to view the console remotely?

You'd have to check the manual for your motherboard. Normally, yes.
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
According to this site https://calomel.org/zfs_raid_speed_capacity.html, under 'All SATA controllers are NOT created equal',
the 6Gb/s Intel SATA ports seem to perform pretty badly on the tested mainboards.
There does not seem to be a hard cap on maximum throughput (like DMI with multiple disks); the controller is just slower overall.

Should I expect bad performance from the 2x 6Gb/s Intel ports on the X10SL7-F as well?
Having 8 disks on the fast LSI controller and 2 disks on the (maybe) slower Intel controller, dragging the whole RAID down, seems like a bad idea.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Your Ethernet limits you to 125MB/sec. That works out to about 16MB/sec per drive. Guessing: not an issue to get OCD about.
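
(The per-drive figure is just the gigabit link divided across the data disks; a minimal sketch:)
Code:
link_mb_s  = 1000 / 8   # 1 Gbit/s Ethernet is roughly 125 MB/s
data_disks = 10 - 2     # a 10 disk RaidZ2 leaves 8 data disks
print(f"{link_mb_s / data_disks:.1f} MB/s per drive")  # ~15.6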
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
I was thinking more about a future upgrade to 10Gbit Ethernet.
Even the super expensive Intel X520 cards go for about 150€ used...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
But, still, what's your point? Your options seem to be to try it, or to assume it sucks and get a second controller. So why not just try it? Worst case you decide you need another controller, but if not, $$$ and watts saved.

p.s. the calomel stuff is often interesting but not always "right."
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
Surely someone must have already tried a C22x / WD Red combination?
Another option would be to go with 8 or 6 drives, if the Intel controller cannot handle even 2 drives well.
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
Actually I am really liking that 8 drive RaidZ2 idea with 3TB drives.
It fits the controller well, fits the case well, is more reliable than 10x 2TB, consumes less power than 10x 2TB, and costs as much as 10x 2TB while offering 2TB more space.

Will terrible things happen because it violates the 2^n+p rule?
There are some posts saying that the 128KB write blocks end up as a 128/6 = 21.333KB-per-disk mess.
Another article mentioned that 2^n data drives would be nice, but that it would not really matter.
My guess: if ZFS were to use 96KB records (automatically, or maybe via 'set recordsize') everything would divide evenly?
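
(The per-disk figures fall out of dividing the record across the data disks; a minimal sketch, ignoring sector rounding:)
Code:
def per_disk_kb(total_disks, parity=2, recordsize_kb=128):
    # a full record is striped across the data disks
    return recordsize_kb / (total_disks - parity)

for n in (6, 8, 10, 11):
    print(f"{n} disks: {per_disk_kb(n):.3f} KB per data disk")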

Also, referring to calomel again, the read/write speeds seem to scale pretty linearly regardless of non-ideal configurations:
Code:
 6x 2TB  raid6, raidz2        7.1 terabytes ( w=425MB/s , rw=171MB/s , r=424MB/s )
10x 2TB  raid6, raidz2       14   terabytes ( w=467MB/s , rw=212MB/s , r=513MB/s )
12x 2TB  raid6, raidz2       17   terabytes ( w=507MB/s , rw=256MB/s , r=660MB/s ) <--- not ideal
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There are some inefficiencies, both in space and performance, that may result from nonoptimal vdev sizing. They're probably offset by the convenience of having more storage.
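
(To put a rough number on the space side, here is a toy model of RaidZ2 allocation for one 128KiB record with 4KiB sectors: parity is added per stripe row, then the allocation is padded to a multiple of parity+1 sectors. The exact behavior depends on ashift and ZFS version, so treat this as a sketch:)
Code:
import math

SECTOR_KB, RECORD_KB, P = 4, 128, 2   # assumed ashift=12, RaidZ2

def allocated_sectors(total_disks):
    data = RECORD_KB // SECTOR_KB                 # 32 data sectors
    rows = math.ceil(data / (total_disks - P))    # stripe rows needed
    total = data + rows * P                       # parity per row
    return math.ceil(total / (P + 1)) * (P + 1)   # pad to multiple of p+1

for n in (6, 8, 10):
    s = allocated_sectors(n)
    print(f"{n} disks: {s} sectors allocated for 32 of data ({s / 32:.2f}x)")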
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
[Image: zfs_scaling.png, RaidZ2 read/write throughput vs. number of disks]

I made a graph of some of the data from calomel.org.
Basically it shows how the performance of a RaidZ2 setup scales with the number of disks.
Might be interesting for other people still thinking about their setup.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
And I get 3 things from this...

1. That whole "never go above 11 disks" rings true in the performance arena too.
2. Once again, someone thinks that they can do some benchmarking and that it means something to everyone (hint: it doesn't).
3. Just by tweaking 1 or 2 ZFS parameters I can make those numbers change significantly. And if you check out FreeNAS versus FreeBSD versus some of the others, each one has different values. So don't dismiss this item as "that never changes", because it does. They have changed with every FreeNAS release I've checked!
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
No need to poop on my cool graph :)
I guess if you expect those exact numbers to pop up on your screen you might be disappointed.
However, what I get from it is a coarse sense of the diminishing returns as you add more drives.
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
I made a graph of some of the data from calomel.org.

Nice graph. I don't know how or if you can format it, but try making the numbers the same color as the lines they represent, and format it so the numbers are not on top of each other.
And I get 3 things from this...

1. That whole "never go above 11 disks" rings true in the performance arena too.

That graph shows that you still get a noticeable performance gain at 12, so maybe it should be "never go above 12 disks".
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That graph shows that you still get a noticeable performance gain at 12, so maybe it should be "never go above 12 disks".

Except that those claimed performance gains are strictly for all-read and all-write workloads. Far from a typical load for ZFS.

The 11 disk rule has a mathematical basis: it marks the point at which the pool is so wide that small writes take excessively long to perform.
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
Does a RaidZ2 configuration (for example) not just use 3 disks for small writes?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No. Transaction groups are an aggregate of the write requests, though the disks impacted by a single small write are indeed three (data, parity, parity). But the main case where only three blocks would be written is when that small write is the entire transaction group.
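
(A toy model of the aggregation, assuming a 10 disk RaidZ2 and, very crudely, one block per disk; illustration only:)
Code:
DATA_DISKS, PARITY = 8, 2   # 10 disk RaidZ2: 8 data + 2 parity

def disks_touched(small_writes):
    # a lone small write touches 3 disks; a batched txg spreads wider
    return min(small_writes, DATA_DISKS) + PARITY

for n in (1, 4, 8):
    print(f"{n} small write(s) in one txg: ~{disks_touched(n)} disks")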

I'm not clear on what cj's point is either, but I'm kinda tired and foggy.
 