Question on Overhead of RAID Configurations

Status
Not open for further replies.

kid908

Cadet
Joined
Oct 8, 2018
Messages
2
I've been testing different RAID configurations to optimize performance. I set up various configurations and tested both read and write performance on 1-4 drives (3TB each) with the dd command, running the benchmark 5 times per configuration for both read and write. In my testing, I noticed that the RAID10 (mirror 2x2x3TB) configuration with 4 drives suffered a read performance penalty I didn't expect (it doesn't scale the way I thought it would).
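Roughly, each configuration was benchmarked with five passes of a large sequential write followed by a sequential read; a sketch of one such loop is below (the path is a placeholder, and the exact dd commands are further down in the thread):

Code:
# five passes per configuration: sequential write, then sequential read
for run in 1 2 3 4 5; do
    dd if=/dev/zero of="/mnt/Test/tmp.dat" bs=2048k count=50k    # write ~100 GiB
    dd if="/mnt/Test/tmp.dat" of=/dev/null bs=2048k count=50k    # read it back
done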

In the plots, RAID0 refers to a stripe setup and Nx refers to the number of drives in the configuration (RAID10 refers to a stripe of mirrors). Error bars denote 1 standard deviation.

Read Performance.png


Write Performance.png


The following plots are scaled based on how I would expect performance to scale (a quick numeric sketch of these expectations follows the list).
Basically:
RAID0 read and write scale linearly with the number of drives.
RAID10 read scales linearly with the number of drives; write scales as N/2.
RAIDZ read scales as (N-1); write scales as (N-1)/(some parity overhead factor).
RAIDZ2 read scales as (N-2); write scales as (N-2)/(some parity overhead factor).
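For concreteness, here is a rough sketch of those expectations, assuming a purely hypothetical single-drive throughput of 150 MB/s and four drives (both numbers are placeholders, and the parity overhead for RAIDZ/RAIDZ2 writes is left as an unknown factor):

Code:
# expected aggregate throughput; 150 MB/s per drive is an assumed placeholder
SINGLE=150   # MB/s for one drive (hypothetical)
N=4          # number of drives

echo "RAID0  (stripe):            read $((SINGLE * N)) MB/s, write $((SINGLE * N)) MB/s"
echo "RAID10 (stripe of mirrors): read $((SINGLE * N)) MB/s, write $((SINGLE * N / 2)) MB/s"
echo "RAIDZ:                      read ~$((SINGLE * (N - 1))) MB/s, write somewhat less (parity overhead)"
echo "RAIDZ2:                     read ~$((SINGLE * (N - 2))) MB/s, write somewhat less (parity overhead)"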

Read Performance per Disk.png


Write Performance per Disk.png


As you can see, write performance per disk is more or less what I would expect: about the performance of a single drive.
Read is mostly what I would expect as well, except for the RAID10 configuration, which has a much lower per-disk performance than a single drive.

I was wondering: did I do something wrong, or is there some overhead involved with mirrors that causes a loss of nearly 30% in read performance?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I am pretty sure that this kind of testing has been done before. Were you just trying to get numbers with your specific hardware?
You are talking about a number of RAID configurations that don't exist in FreeNAS, so I don't really know for sure what you are talking about. Are you comparing some hardware RAID controller to the results that you get with ZFS?
Perhaps you should review these resources, as they will probably help with your questions and your terminology:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

Why not to use RAID-5 or RAIDz1
https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Testing methodology is a magic art. In some cases, single-threaded tests are relevant (I'm looking at you, Samba!). But unless your tests included several dd commands running at the same time, you may have results that are not appropriate for your use case.
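For example, a quick way to get a multi-stream number is to run several dd writers (or readers) at once and add up the throughput they report (paths and sizes below are just placeholders):

Code:
# four concurrent ~10 GiB writers; adjust paths and sizes to taste
for i in 1 2 3 4; do
    dd if=/dev/zero of="/mnt/Test/tmp$i.dat" bs=2048k count=5k &
done
wait   # block until all dd processes have finished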

What is your use case?
 

kid908

Cadet
Joined
Oct 8, 2018
Messages
2
Every RAID configuration is done through ZFS with no hardware RAID.
I guess in ZFS terminology, RAID10 is a stripe of mirrors. I tried to create it as follows:
RAID10.png

RAID10b.png

Both yielded similar results.
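For reference, the layout I was aiming for is a pool made of two mirror vdevs, which from the command line would look something like this (device names are placeholders; I actually built it through the GUI):

Code:
# stripe of mirrors: two 2-way mirror vdevs in one pool
zpool create Test mirror da0 da1 mirror da2 da3

# zpool status should show the pool with mirror-0 and mirror-1 vdevs
zpool status Test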

As far as the use case goes, it'll be serving and receiving large files from one client, maybe two at most, over a 10GbE network (think of it as a large external hard drive).

I'm pretty sure that a single dd run with a 100GByte file is close enough to the use case. I also tested copying a 10GByte file to and from the NAS.
Code:
# write test: 50k x 2 MiB blocks = ~100 GiB of zeros
dd if=/dev/zero of="/mnt/Test/tmp.dat" bs=2048k count=50k

# read test: stream the same file back, discarding the data
dd if="/mnt/Test/tmp.dat" of=/dev/null bs=2048k count=50k


As for RAIDZ1, from my calculations it doesn't make much difference, depending on array size: the chance of the rebuild encountering a URE after a drive dies is pretty high if the array is too large. Eight 4TB drives in RAIDZ2 have a ~75% chance of failing a rebuild.
RAID Rebuild Failure.png
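For reference, a common back-of-envelope model treats every bit re-read during the rebuild as an independent chance of hitting a URE, i.e. P(failure) is roughly 1 - (1 - p)^bits_read. The exact percentage depends on the assumed URE rate, how much data actually has to be re-read, and whether the array can tolerate a URE; the rate and sizes in the sketch below are assumptions:

Code:
# P(at least one URE during rebuild) = 1 - (1 - p)^bits,
# approximated here as 1 - exp(-bits * p)
awk 'BEGIN {
    p = 1e-14                   # assumed URE rate: 1 per 10^14 bits
    drives = 7                  # surviving drives read during the rebuild
    bits = drives * 4e12 * 8    # 4 TB per drive, converted to bits
    printf "P(rebuild hits a URE) ~ %.0f%%\n", (1 - exp(-bits * p)) * 100
}'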
 