Ideal vdev configuration for 4-6 disks, mirrors vs RAIDZ2

Joined
Sep 1, 2014
Messages
9
Hi folks,

I've recently built my first FreeNAS machine in a used Supermicro 6025B-T server I picked up for a good price.

The thing has six drive bays, and I currently have four 3TB Seagate Barracuda 7200rpm drives in it, configured as a RAIDZ2. I did that without having done extensive research on the subject and now realize that two sets of dynamically striped mirrors would be faster.

How much faster would it be? Is it worth wiping my pool and starting from scratch? I have a spare machine that can back everything up (in fact, it's doing that right now), so destroying the pool is still an option at this point, as the data hasn't grown beyond what the other rig can hold. There would also be the option of setting up a single RAIDZ1 vdev at first (3x 3TB), buying extra drives, and eventually expanding to two striped RAIDZ1 vdevs. And if there isn't much to be gained, I can leave it as it is for now and, when the time comes, expand with a mirrored vdev to fill out the remaining drive bays...

At the moment I am seeing around 60-80 MB/s on large file transfers over my network with bursts up to 95 MB/s, so it seems I'm already not too far from maxing out my Gigabit network.
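For reference, a rough sketch of where the Gigabit ceiling sits (the protocol-overhead figure is an assumption, not a measurement):

# A single Gigabit link is 125 MB/s on the wire; Ethernet/IP/TCP/SMB framing
# eats a few percent of that, so ~110-118 MB/s is about the practical ceiling.
line_rate_MBps = 1_000_000_000 / 8 / 1_000_000    # 125.0
assumed_overhead = 0.06                           # assumed ~6% protocol overhead
print(f"practical ceiling ~{line_rate_MBps * (1 - assumed_overhead):.0f} MB/s")  # ~118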

Thanks!
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
In most cases (for home users) the comparison of speed between mirrored and RAIDZ2 vdevs is moot, because your network will bottleneck before your pool does. If you have a set budget, your money may be better spent on more RAM than on extra disks for mirrors.
 
Joined
Sep 1, 2014
Messages
9
I tried it. I wiped my pool and recreated it as two striped two-way mirrors (2x 2x 3TB) instead of putting all the disks in a RAIDZ2. I don't have a dedicated log or cache device at this point. The mirrors seem significantly faster for me: sustained transfers of large files mostly capped out around 60-70 MB/s with the RAIDZ2, and now I'm consistently seeing 100 MB/s with occasional bursts above 120 MB/s. So RAID 10 does appear to be faster for my configuration.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
How much RAM do you have? Sounds like you have something like 8GB....
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Well, 16GB oughtta be enough.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, I saw your CPU has an FSB. Pretty much all CPUs with an FSB have two problems:

1. They are ugly on the power bill.
2. They are bottlenecked by the FSB.

Mirrors don't have the processing needs that RAIDZ2 does, and with a CPU that old I can imagine it's very likely bottlenecked.
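To give a feel for what "processing needs" means: every RAIDZ2 stripe needs RAID-6 style P/Q parity computed over GF(2^8), while a mirror just writes the same block twice. Here's a toy sketch of that parity math (illustrative only, not ZFS's actual code):

# Toy dual-parity (P/Q) calculation over GF(2^8), the same style of math a
# RAIDZ2 write has to do for every stripe. Illustrative only -- not ZFS code.

def gf_mul2(x):
    """Multiply by 2 in GF(2^8) using the 0x11d reduction polynomial."""
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
    return x & 0xff

def pq_parity(data_blocks):
    """P is a plain XOR; Q weights each block by a power of 2 in GF(2^8)."""
    p = bytearray(len(data_blocks[0]))
    q = bytearray(len(data_blocks[0]))
    for block in reversed(data_blocks):          # Horner's scheme: q = 2*q + d
        for i, byte in enumerate(block):
            p[i] ^= byte
            q[i] = gf_mul2(q[i]) ^ byte
    return p, q

# A 4-disk RAIDZ2 stripe is 2 data + 2 parity; mirrors skip all of this.
d0 = bytes([1, 2, 3, 4])
d1 = bytes([5, 6, 7, 8])
p, q = pq_parity([d0, d1])
print(p.hex(), q.hex())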
 
Joined
Sep 1, 2014
Messages
9
Mirrors don't have the processing needs that RAIDZ2 does, and with a CPU that old I can imagine it's very likely bottlenecked.

I'm not seeing heavy CPU use at all though? It's very rarely above 30% and I've seen it peak around 75% under very heavy use. I saw no noticeable difference in CPU usage between raidz2 and the mirrors. Does ZFS have parameters that I can tune to better accommodate such a CPU?

I'm already doing the obvious and not using encryption on my pool, but that's it.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I'm not seeing heavy CPU use at all though? It's very rarely above 30% and I've seen it peak around 75% under very heavy use. I saw no noticeable difference in CPU usage between raidz2 and the mirrors. Does ZFS have parameters that I can tune to better accommodate such a CPU?

I'm already doing the obvious and not using encryption on my pool, but that's it.

Intel quad-cores before Nehalem were actually two dual-cores duct-taped together by the FSB, which was a major bottleneck whenever one half needed something from the other half's cache. The processor shows up as idle while it waits for data. To make things worse, the memory controller was on the Northbridge, so every single memory transaction clogged up the FSB, unless the data happened to be in L2 by some miracle.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm not seeing heavy CPU use at all though? It's very rarely above 30% and I've seen it peak around 75% under very heavy use. I saw no noticeable difference in CPU usage between raidz2 and the mirrors. Does ZFS have parameters that I can tune to better accommodate such a CPU?

Ericloewe explained it very nicely. CPU % is not the factor; the FSB bottleneck is. Flash back to 2006 and 2007: AMD was pwning Intel. What were the two main reasons? Intel was still using an FSB and AMD wasn't, and Intel totally screwed up with their extremely deep pipeline. AMD CPUs were basically "highways with 3000 lanes of traffic to every major component" while Intel's were "3 lanes of traffic shared by all of the major components". Clearly one can push more cars through, while the Intel CPU sits idle waiting for data to show up on the FSB.

I've always stuck with Intel, with only a few exceptions, and every time I went AMD I was horribly disappointed. But one thing I noticed back when AMD was kicking Intel's butt was that AMD systems were "more responsive". When I bought my first Intel CPU without an FSB (a Nehalem CPU, ordered the day it was released) I quickly found that my Intel system had the same kind of responsiveness the AMDs had had for a while.

Anyway, there's much, much more going on than looking at CPU% and calling it "good" because it's not at 100%. You've basically proven that yourself with your first few posts comparing RAIDZ2 and mirrors. I've got a system here that I've benchmarked; it has 12GB of RAM and, for all intents and purposes, should run FreeNAS very well. It does, as long as you don't expect too much, since it has an FSB.

Intel's fastest FSB was 400 MHz (quad-pumped, so effectively 1.6 GHz). At maximum theoretical throughput you got 12.8 GB/sec. That was total throughput, shared among all devices... RAM, hard drive controllers, the PCI bus, etc.

Intel's first non-FSB chips, even at the slower QPI clock, ran at 4.8 GT/sec (gigatransfers per second). The total theoretical throughput was about 19.2 GB/sec.

Now that doesn't sound like a huge jump, but consider that RAM is suddenly NOT forced through the FSB: it's attached directly to the CPU, and all of your other devices kind-of-but-not-really share the 19.2 GB/sec among themselves. So the performance gains were nothing short of amazing.
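If you want the arithmetic behind those two numbers (theoretical peaks only):

# Theoretical peak bandwidth: fastest FSB vs. first-generation QPI.
fsb_MTs   = 400 * 4      # 400 MHz base clock, quad-pumped -> 1600 MT/s
fsb_bytes = 8            # 64-bit wide front-side bus
print(f"FSB: {fsb_MTs * fsb_bytes / 1000:.1f} GB/sec")       # 12.8, shared by everything

qpi_GTs   = 4.8          # slowest launch QPI speed
qpi_bytes = 2            # 16 data bits per direction
print(f"QPI: {qpi_GTs * qpi_bytes * 2:.1f} GB/sec")          # 19.2 total, both directions

And RAM traffic isn't even part of that 19.2 GB/sec anymore, because the memory controller moved onto the CPU.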

If you had bought an i7 back then, you very well might still be using it too. My HTPC is my old i7-920. It works great and, to be honest, I'm actually wondering how much longer I'm going to have that system around; it's still so overpowered for what I use it for that I'm wondering when the thing will break. I've had it for almost 6 years now and I'm convinced I'll probably have it for at least 2-3 more. Before that CPU I'd never kept one more than 2 years. Even today, one of those Nehalem-based Xeons, even the slowest (I own one myself... the E5606), still *smokes* with FreeNAS. Over its 10Gb LAN card I can do over 600 MB/sec to my RAIDZ3 array.

In case you don't know this: the day Nehalem CPUs hit the market in Nov 2008, the slowest quad-core Nehalem-based Intel CPU sold was faster than the fastest quad-core AMD CPU on the market.

Looking back at history, there is a very clear and distinct line in the proverbial sand marking when FSBs exited the market. Just about any CPU Intel has made that doesn't have an FSB is probably powerful enough to do pretty decent things with FreeNAS. But as soon as you go back to an FSB, you're talking only the highest-end CPUs being able to keep up with the work ZFS needs to do (and you've demonstrated that very well with your posts).
 
Joined
Sep 1, 2014
Messages
9
Thank you, I very much appreciate the in-depth explanation.

Guess I'll stick with mirrors for the foreseeable future then. With my current setup it's already more or less maxing out my Gigabit network. Am I correct in assuming that raidz1 would be affected by the same FSB bottleneck?

I haven't added a log or cache device yet. I have a single 60GB SSD available at the moment... Considering I'm already seeing pretty good throughput, am I likely to gain much from using it as an L2ARC (it's a media library / streaming box)? My girlfriend's laptop just turned up with a dead HDD, so I'm a bit torn between using the SSD for her or for the FreeNAS server, even though it was originally intended for the server.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You wouldn't benefit from an L2ARC with only 16GB of RAM.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You should read my noobie guide.

L2ARC shouldn't exceed 5x your ARC, which for you is about 40-50GB. 600GB is just ludicrous.
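As a rough sketch of that rule of thumb (the ARC figure is an assumption for a 16GB box, since a few GB go to the OS and everything else):

# Rule-of-thumb L2ARC sizing: the L2ARC shouldn't exceed ~5x the ARC.
arc_GB = 9               # assumed: roughly what a 16GB FreeNAS box ends up with as ARC
max_l2arc_GB = 5 * arc_GB
print(f"L2ARC cap ~{max_l2arc_GB} GB")   # ~45 GB, i.e. the 40-50GB above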

Not to mention the L2ARC isn't useful for household loads, so adding it won't gain you much. Same goes for a log device... for household loads you'll never see the difference. A log device is only useful for sync writes, like NFS does, and nothing else. I'd bet you aren't using NFS, so you'd see no benefit from a log. ;)

That's why my noobie guide says something like "if you don't know if you need one or not, you don't".
 
Joined
Sep 1, 2014
Messages
9
I've actually read most of your guide, thanks :p. It's very helpful.

Alright then. I will fix up my girlfriend's laptop with that SSD.
 
