iSCSI performance: slower reads than writes

Maxlink

Hello all,
I am new to the forums and I have been reading up on a lot of older material over the past two weeks. However, I can't seem to improve my current situation, so I would like to ask for your help and thoughts.

The setup below exists twice: I have two identical FreeNAS systems.

The hardware is the following:
Code:
1 CPU: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
128 GB ECC DDR4 RAM
Storage disks: 14x 4 TB, 7200 rpm SAS - HGST 726040ALS210
Cache disks: 2x 240 GB - INTEL SSDSC2BB24
Boot disk: USB-stick 32 GB (USB3.0a)
HBA: LSI SAS3008
1x 10 Gb SM231, Intel chipset 82599


I have a single pool configured, including spares and LOG:
Code:
6x mirror - 4 TB
2x spare - 4 TB
2x LOG (mirror) - 240 GB


I have tested the raw disk speed with various configurations, and the layout above gave me the best balance of performance and protection. These are the results from dd:
Code:
Performance test using dd - no compression - 200 GB - sync: standard
RAID type                  Disk count   VDEV count   Space   Read MB/s   Write MB/s
Stripe of mirrors (SoM)        12            6       21 TB     876.45      971.61

* I did many tests, including sync=always, which I think performs fine (roughly 270 MB/s write)
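The dd runs were along these lines (a sketch only; the dataset path is an example, with compression disabled on that dataset):
Code:
# write test: stream ~200 GB of zeroes into a file on the pool
dd if=/dev/zero of=/mnt/tank/ddtest/tmp.dat bs=1M count=200000
# read test: read the same file back and discard the output
dd if=/mnt/tank/ddtest/tmp.dat of=/dev/null bs=1M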


I have used iperf3 to test my network speed, and it seems fine: I am getting 9.02 Gbit/s (roughly 1.1 GB/s) when running iperf3 in server mode on host1 and in client mode on host2. Results are below:

Code:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.5 GBytes  9.02 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  10.5 GBytes  9.02 Gbits/sec                  receiver
iperf Done.
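For completeness, these are essentially the commands behind that output (the IP address is just a placeholder for host1's 10 GbE interface):
Code:
# on host1: run iperf3 as the server
iperf3 -s
# on host2: run a default 10-second test against host1
iperf3 -c 192.168.10.1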


I have a 5 TiB zvol on the pool, and this zvol is used by ESXi through iSCSI.
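For reference, that zvol is roughly equivalent to something like this on the command line (pool and zvol names are just examples):
Code:
# 5 TiB block volume, shared to ESXi as an iSCSI extent
zfs create -V 5T tank/esxi-zvol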

Now my issue is that I can only get up to roughly 500 MB/s read speed in various scenarios.
But my write speed can actually reach the array's maximum of about 970 MB/s.


I have been testing with synchronisation jobs from host1 to host2 and with VM storage migrations from host1 to host2 from within ESXi, but neither goes above 500 MB/s throughput.
When using CrystalDiskMark from within an ESXi VM stored on host1, I actually get the expected 970 MB/s write speed, while my read speed gets stuck at 500 MB/s again. I can also see on host1 that the 10 GbE NIC is getting 1.17 GB/s of throughput, and during the same test I only see half that speed during the read portion.
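(For anyone wondering how I'm watching the NIC on host1, a live per-interface view like the one below is one way to do it.)
Code:
# live per-interface throughput on the FreeNAS host, refreshed every second
systat -ifstat 1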

Am I overlooking something or doing something wrong?
I was expecting to be able to migrate VMs from host1 to host2 at roughly 880 MB/s or more, since that is the read speed of the array and the write speed is even higher, so I assumed the read speed would be the limiting factor. Have I perhaps made a rookie mistake?
 