New FreeNAS server - Mirrored disks only performing like a single disk


kevin00xxl

Cadet
Joined
Sep 15, 2017
Messages
4
Hi,

I have an HP ML10v2 with the following specs:
  • Intel Xeon E3-1230 v3
  • 16GB Kingston ECC
  • Onboard SATA controller in AHCI mode
  • 2x 2TB WD RE4
  • 8GB Kingston USB 2.0 stick (FreeNAS boot)
I installed FreeNAS 11 on the server (after applying the latest SPP) and set up both 2TB drives in a mirror.
I then tested the performance with the following commands:

Read
dd of=/dev/null if=/mnt/mirror/dataset/test.dat bs=2048k count=10000
Write
dd if=/dev/zero of=/mnt/mirror/dataset/test.dat bs=2048k count=10000

I achieved read speeds of 107MB/s and write speeds of 105MB/s.
However, I expected that with the drives set up in a mirror the write speed would match a single drive, while the read speed would be roughly that of (in my case) two drives, because reads can be served from both drives in parallel.
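
(To check whether both drives are actually being read in parallel, something like the following can be run in a second shell while the dd read test is going; the pool name "mirror" is taken from the mount path above:)

zpool iostat -v mirror 1    # per-vdev / per-disk throughput, refreshed every second
gstat -p                    # per-physical-disk busy % and MB/s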

Am I doing something wrong here, or is this just how FreeNAS handles a small two-disk mirror?

I also tested a striped setup and got about 200MB/s for both read and write, so the problem does not seem to be my hardware.


I appreciate any help I can get; I have been trying to solve this problem for some time already and can't figure it out.


Thanks in advance
 

sunrunner20

Cadet
Joined
Mar 13, 2014
Messages
8
You're not wrong; a ZFS mirror's read speed should be approximately the sum of its n drives, with n being the depth of the mirror. Can you offer more insight into your setup, particularly the controller? There are several that are known to have issues with FreeBSD, and thus FreeNAS.
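
For example, the output of something like this (standard FreeBSD tools, so they should be available on FreeNAS) would show which controller and disks the OS actually sees:

pciconf -lv | grep -B 3 -i sata    # identify the SATA/RAID controller as seen by the OS
camcontrol devlist                 # list the attached disks and the driver/bus they hang off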
 

kevin00xxl

Cadet
Joined
Sep 15, 2017
Messages
4
First of all, thanks for your quick response.

The "controller" in the HP ML10v2 is a HP Dynamic Smart Array B120i which is embedded on the motherboard.

The "controller" has no dedicated cache. it has a internal PCIe 2.0 x4 bus.

This B120i is actually just a Intel Lynx Point sata controller with some HP RAID software embedded into the bios (like intel "HW" RAID).

In the bios of the HP ML10v2 is an option to disable the B120i RAID function and to enable just plain AHCI. Which i have done. So the disks are presented 1:1 to FreeNAS without any type of raid in between.
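
If it helps, I believe the boot messages can confirm that the disks attach through the generic AHCI driver rather than the B120i firmware RAID; something like this (ada0/ada1 is what I would expect the two SATA disks to show up as):

dmesg | grep -i ahci          # should show the Lynx Point AHCI controller
dmesg | grep -E 'ada[0-9]'    # should show both WD RE4 drives attached as ada devices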
 

dtom10

Explorer
Joined
Oct 16, 2014
Messages
81
Post the dataset settings. If you're doing something exotic within the dataset, like gzip compression, you'll spin the CPU on a bunch of /dev/zero and get poor throughput numbers.

Post the pool settings and dataset settings. You should see better read speeds than write speeds, as you expect.
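
Something like the following should cover it (replace mirror/dataset with your actual pool and dataset names):

zpool status mirror                                         # pool layout: vdevs and mirror members
zpool list mirror                                           # pool size, allocation, fragmentation
zfs get compression,atime,dedup,recordsize mirror/dataset   # the dataset properties that matter here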
 

kevin00xxl

Cadet
Joined
Sep 15, 2017
Messages
4
Hi,

The dataset settings are as follows:
  • Compression level: off
  • Share type : Unix
  • Enable atime : Inherit (on)
  • ZFS Deduplication: Inherit (off)
I didn't know where to find the pool settings; I think this must be it:

[Attachment: Pool_Settings.png]


Thanks for all the responses so far, guys!
 

kevin00xxl

Cadet
Joined
Sep 15, 2017
Messages
4
Today I borrowed an IBM M1015 (flashed to IT mode) from a friend and installed it into my HP ML10v2,
then ran the exact same commands to test the read and write speeds... and got almost the same read/write results.

Does anybody have an idea how I can solve these low read speeds?
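
If it helps narrow things down, I could also benchmark the raw disks outside of ZFS; as far as I know diskinfo -t only performs read tests, and ada0/ada1 is an assumption about my device names:

diskinfo -t /dev/ada0    # naive seek/transfer benchmark of the first disk (reads only)
diskinfo -t /dev/ada1    # same for the second disk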
 

dtom10

Explorer
Joined
Oct 16, 2014
Messages
81
What do you get with lz4 compression enabled on the dataset?

lz4 is recommended at all times, for performance reasons and for drive wear as well; it has little impact on the CPU. I would also test with random data in addition to /dev/zero.
Also, the test should be run 3-5 times so the numbers can be compared.

The info I was asking for is provided by the zfs list and zpool list commands.

Post those, plus the numbers for the commands below, each run 5 times:
dd if=/dev/zero of=/mnt/mirror/dataset/test.dat bs=2048k count=10000
dd of=/dev/null if=/mnt/mirror/dataset/test.dat bs=2048k count=10000

dd if=/dev/urandom of=/mnt/mirror/dataset/test.dat bs=2048k count=10000
dd of=/dev/null if=/mnt/mirror/dataset/test.dat bs=2048k count=10000

The first pair tests using zeros, the second pair using random data.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
I would try time cat /mnt/mirror/dataset/test.dat > /dev/null and ls -l /mnt/mirror/dataset/test.dat to see if the read performance seems to change. Post the results.
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
kevin00xxl said: "Am I doing something wrong here, or is this just how FreeNAS handles a small two-disk mirror?"

A stripe set will increase performance with each drive added. A dd to/from a stripe will get n× performance, even for synchronous I/O.

A mirror can increase read performance if the OS and drivers are written correctly. On writes it will always act at the speed of one drive, plus a little extra latency because the platters are not in exactly the same position on both drives. It is best to think of a mirror as giving the same speed for reads and writes. On writes you only get 1× performance; this is true for both synchronous and asynchronous I/O. For synchronous reads you will also only get 1× performance. For asynchronous I/O, with the right OS and drivers, you can get n× performance.

If you want performance, go with a stripe set, but realize you will not get redundancy. You can stripe mirrors or mirror stripes to get performance plus redundancy (e.g. RAID 10).
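
As a rough sketch (the pool and device names here are just placeholders, and on FreeNAS you would normally build this through the GUI rather than the command line), a striped mirror in ZFS is simply a pool with more than one mirror vdev:

zpool create tank mirror ada0 ada1 mirror ada2 ada3    # two 2-way mirrors striped together (RAID 10 equivalent), needs four drives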

Here is more information: https://en.wikipedia.org/wiki/Standard_RAID_levels#Performance_2

n = number of disk drives.
 

Pezo

Explorer
Joined
Jan 17, 2015
Messages
60
dd if=/dev/urandom of=/mnt/mirror/dataset/test.dat bs=2048k count=10000
This is not a good idea; it is almost certainly limited by the CPU, not by the disks. Just try it with if=/dev/urandom of=/dev/null and see.
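
Something like this shows how fast /dev/urandom itself can be read on the box, with no disks involved at all:

dd if=/dev/urandom of=/dev/null bs=2048k count=1000    # raw /dev/urandom throughput, no disk I/O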
 