Slow read speed compared to adequate write speed

Morehatz

Dabbler
Joined
Nov 10, 2019
Messages
13
Heya,
I'm fairly new to FreeNAS and recently put together a system.
The issue is that my read speeds are (at least in my opinion) not comparable to my write speeds (tested with a 90 GB file, so no RAM buffering).
I get ~950 MB/s write, but only 250 to 300 MB/s read (yes, I am aware of the difference between MB/s and Mb/s).

My system specs:

AMD Ryzen 7 2700X
ASUS Prime x370-Pro
FreeNAS-11.2-U6
LSI 9220-8i HBA (flashed to IT mode)
Intel RES2SV240 24-port SAS expander (connected with 8 lanes to the HBA)
13x WD Red 8 TB (all connected to the expander)
32 GB RAM
Intel X520-DA2 NIC

I was going to run 12 disks as one RAID-Z2 pool (with one hot spare).

My attempts at disabling compression, increasing the record size, and switching to a striped layout were unsuccessful.
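
For reference, the dataset property changes I tried look roughly like this (pool/dataset name is a placeholder, adjust to your own):
Code:
# turn off transparent compression on the dataset
zfs set compression=off poolname/dataset
# raise the record size to 1 MiB (only affects newly written data)
zfs set recordsize=1M poolname/dataset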

I am thankful for any suggestions and/or tips.
Cheers!

- Morehatz
 

Morehatz

Dabbler
Joined
Nov 10, 2019
Messages
13
I did my tests via the SMB share (general file transfers and the "NAS performance tester" tool).

Results of internal read/write tests:
Code:
dd if=/dev/zero of=/mnt/test/test.dat bs=2048k count=10000
    10000+0 records in
    10000+0 records out
    20971520000 bytes transferred in 6.564939 secs (3194473131 bytes/sec)

dd of=/dev/null if=/mnt/test/test.dat bs=2048k count=10000
    10000+0 records in
    10000+0 records out
    20971520000 bytes transferred in 2.228325 secs (9411338907 bytes/sec)



Output of ifconfig:
Code:
ix0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
        options=e407bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 00:25:90:91:64:88
        hwaddr 00:25:90:91:64:88
        inet 192.168.2.222 netmask 0xffffff00 broadcast 192.168.2.255
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet autoselect (10Gbase-SR <full-duplex,rxpause,txpause>)
        status: active
ix1: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=e407bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 00:25:90:91:64:89
        hwaddr 00:25:90:91:64:89
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet autoselect
        status: no carrier
igb0: flags=8c02<BROADCAST,OACTIVE,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 04:92:26:d9:2f:bf
        hwaddr 04:92:26:d9:2f:bf
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet autoselect
        status: no carrier
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x4
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        groups: lo


Thank you for your support =)
 

Morehatz

Dabbler
Joined
Nov 10, 2019
Messages
13
I apologise for not being able to answer in a timely manner; the weekends are the only time I have for such projects.
I'd really appreciate some guidance.

also -bump-
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
Those speed tests indicate 3 GB/s write and over 9 GB/s read. Did you turn compression off on the dataset before testing? Otherwise your results will be skewed.
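
A quick way to check (and disable) compression before re-running the dd tests, assuming your pool/dataset is the "test" one from your paths, would be something like:
Code:
# show whether compression is enabled and how much it is actually compressing
zfs get compression,compressratio test
# disable compression for the duration of the benchmark
zfs set compression=off test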
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
You are getting 9.4 GB/s read off 13 WD Red disks with compression turned off? The best-case scenario for that should be around 2 GB/s, and even that would shock me. I must be missing something. Maybe your test file is too small? I see you are using 20 GB. I usually use 100 GB on a system with 64 GB of memory to mitigate the impact of ARC.
 

Morehatz

Dabbler
Joined
Nov 10, 2019
Messages
13
From my understanding of the dd command, this should have repeated the test with 100 GB, yes?

Code:
dd if=/dev/zero of=/mnt/test/test.dat bs=2048k count=50000
    50000+0 records in
    50000+0 records out
    104857600000 bytes transferred in 75.221908 secs (1393976878 bytes/sec)
dd of=/dev/null if=/mnt/test/test.dat bs=2048 count=50000
    50000+0 records in
    50000+0 records out
    102400000 bytes transferred in 0.246072 secs (416138921 bytes/sec)
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
The first one looks good, and that gives you 1.3 GB/s. I think you are missing the k after bs=2048 in the second one, the read test.

I use dd of=/dev/null if=test.dat bs=2048k count=50000 for my read test.
 


Morehatz

Dabbler
Joined
Nov 10, 2019
Messages
13
Darn, you're right. I repeated the test; the results seem to be all over the place:
Code:
dd if=/dev/zero of=/mnt/test/test.dat bs=2048k count=50000
    50000+0 records in
    50000+0 records out
    104857600000 bytes transferred in 63.899300 secs (1640981982 bytes/sec)
dd of=/dev/null if=/mnt/test/test.dat bs=2048k count=50000
    50000+0 records in
    50000+0 records out
    104857600000 bytes transferred in 91.768284 secs (1142634416 bytes/sec)
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
Cool, looks reasonable. I would expect those results to be reversed - read might be about that much faster than write, but not the other way around.

However, 1.2 to 1.6 GB/s is pretty good either way. At this point I'd look at gstat for drive utilization while performing the tests to see if there's a bottleneck there, and I'd try to reduce or eliminate any other usage of the pool while testing to make sure the numbers are accurate.
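
A minimal way to watch the drives during a dd run (flags as I use them; check the gstat man page) is something like:
Code:
# refresh once per second, physical providers only (whole disks, no partitions)
gstat -p -I 1s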

I like to look at netdata to see the trend for the test speed too. Is it slower in the beginning or faster? Same throughout? I've made the mistake of running read tests right after write tests complete and not giving the drives enough time to settle, which skews the end result. dd is just reporting an average after all.

I did find that record size mattered for my tests; I went with 1 MB for all datasets storing media.

You should be able to get close to 1 GB/s via SMB out of that, though, since I think that was your original test. Have you done iperf testing to make sure the network is OK? Did you set jumbo frames on the client as well?
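
If you haven't already, a basic single-stream test (assuming iperf is available on both ends, and using the NAS IP from your ifconfig output) would be along these lines:
Code:
# on the NAS (server side)
iperf -s
# on the Windows client, 30-second run towards the NAS
iperf -c 192.168.2.222 -t 30

Then swap the roles to test the other direction as well.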
 

Morehatz

Dabbler
Joined
Nov 10, 2019
Messages
13
- Throughput seems to be consistent (at least that's what Windows shows); the speed drops by maybe 30 MB/s overall.
- Record size is already set to 1 MB.

- When I tried iperf it seemed fine at ~9.8 Gbit/s. Now here comes the culprit:
with the NAS as client and the PC as server I only get up to 4 Gbit/s. That would correlate with the low read speeds, yes? Or am I getting the sides mixed up?
Also, I can't seem to improve that speed; testing and swapping cables and transceivers yielded no results, and eliminating the switch from the equation didn't either.
 

Morehatz

Dabbler
Joined
Nov 10, 2019
Messages
13
Now I swapped the NIC with another one I had tested beforehand and was able to achieve 10 Gbit/s full duplex.
As soon as it is in the NAS, it only reaches 4 Gbit/s (NAS as client, PC as server). Is it worth checking another PCIe slot?
I suspect a setting within FreeNAS is limiting the transfer speed, but I have no clue where to look.
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
With iperf and Windows, for the direction NAS --> PC I am usually capped between 4 and 6 Gbit/s unless I specify multiple parallel streams (-P 3, for example) or increase the window size; I've had some luck with -w 1MB. This often doesn't actually affect my SMB transfer speeds though; in other words, iperf with a single stream may be a lot slower than my actual SMB transfer.
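
For example, with iperf -s running on whichever machine is acting as the server:
Code:
# three parallel streams, 1 MB TCP window, 30-second run (replace server-ip accordingly)
iperf -c server-ip -P 3 -w 1M -t 30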

What card are you using in the PC?

Also, I think I tried an X520 in FreeNAS once and had trouble with it. X550 (copper) and Chelsio (SFP+) cards have worked for me no problem in FreeNAS.

If you search around I think there have been reports of people struggling with speed issues and the X520 in the past.
 

Morehatz

Dabbler
Joined
Nov 10, 2019
Messages
13
I also use the X520 in the PC.
I tried your suggested iperf flags:
- With -P 3 I'm able to saturate the line at ~9 Gbit/s.
- The additional -w 1MB flag does nothing for the speed, but adds a random number of retries (ranging from 1 into the hundreds).

I read that you can tune the network with tunables, so I did that (no positive impact on performance):
[screenshot: list of network tunables]
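
For reference, the kind of values involved can be checked from the shell; these are generic examples of commonly cited 10 GbE socket-buffer tunables, not necessarily the exact ones in my screenshot:
Code:
# show the current socket buffer limits relevant for 10 GbE tuning
sysctl kern.ipc.maxsockbuf net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max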

At this point I'd look at gstat for drive utilization while performing
[screenshot: gstat output during the test]


I like to look at netdata
What column/graph exactly? (Netdata is completely new to me :O)
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
The suggestions for gstat/netdata were more to check if the bottleneck is at the disks. If you are using iperf to test the network it's not as relevant.

I'm not understanding exactly where you are at right now. Are you able to saturate 10 Gbit/s in both directions now? Which NIC is in the FreeNAS box and which in your PC?

If you are able to get above 9 Gbit/s with iperf, and your disks can handle over 1 GB/s read/write to the pool with dd, then SMB transfers should be decent as well. If not, something else requires tweaking. You might need to give more detail about how you set the share up and how exactly you are doing your testing.
 

Morehatz

Dabbler
Joined
Nov 10, 2019
Messages
13
I'm not understanding exactly where you are at right now. Are you able to saturate 10gb in both directions now? Which NIC is in FreeNAS and which your PC?
To clear things up:
- Both the NAS and my PC have Intel X520-DA2 NICs.
- I am able to achieve ~9 Gbit/s, tested with iperf and the parallel flag.
- I am still unable to get over 350 MB/s read from my SMB share.
You might need to detail more about how you set the share up
I create a dataset (basically everything on defaults):
[screenshot: dataset creation dialog]

Then I create a new SMB share; the only thing I change here is the "Hosts Allow" section.

how exactly you are doing your testing
One test is simply dragging and dropping my 90 GB test file onto the NAS, then copying it back to my disk (the disk's speed should be able to keep up with 1 GB/s transfers).
The second test is with a program called "NAS-Tester":
[screenshot: NAS performance tester results]


Whilst repeating the tests for this post I reviewed gstat and stared at the ops/s column. Could it be that I'm somehow maxing out the IOPS of my drives? Is the ops/s shown there even the same as the listed IOPS?
 

Morehatz

Dabbler
Joined
Nov 10, 2019
Messages
13
As a last test for today I played with different disk layouts; I striped my pool over all 13 disks with no redundancy:
- 900 MB/s write
- still only 260 MB/s read
gstat shows my disks are barely doing anything (screenshot taken during a read); something that is neither the disks nor the network is definitely limiting the speed.
[screenshot: gstat output during the read test]
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
Yeah, that makes it clear it's not your disks. I got the program you are using, and at file size 8000 (which is 8 GB) it's small enough to fit in ARC, so on reads your disks probably aren't very active anyway. I had no problems getting over 1 GB/s read with it. This seems to be a networking issue, possibly on your Windows client.

Have you tried doing multiple small file transfers at once from the NAS? Like three transfers of 2 GB files (so that it doesn't stress the disks too much)? An SMB transfer seems to be single-threaded, and if you only reach 9 Gbit/s with the -P flag in iperf (i.e., with multiple streams), then you might be running up against the limit of what a single SMB thread can do on your Windows machine. This could be a CPU limit if it's older, or it could be the NIC drivers. I assume you are running at least Windows 10 1809 or something so SMB3 is being taken advantage of.
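
One rough way to try that from the Windows side is robocopy's multi-threaded copy (share name and paths below are just placeholders, not from this thread):
Code:
REM pull a test folder from the NAS share with 4 copy threads, no per-file progress output
robocopy \\192.168.2.222\sharename\testfolder D:\testfolder /MT:4 /NP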

Not sure if you confirmed 9k frames in Windows as well. That made a huge difference for me, despite the recommendations from others not to bother with jumbo frames. If so, check that 9k frames are working correctly with ping -f -l 8000 freenasip.
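
For reference, netsh interface ipv4 show subinterfaces lists the MTU per interface on Windows, and the largest don't-fragment ping that fits a 9000-byte MTU would be:
Code:
REM 8972 bytes of payload + 28 bytes of IP/ICMP headers = a full 9000-byte packet; 8000 also works, it just doesn't test the limit
ping -f -l 8972 192.168.2.222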

Otherwise I'm out of ideas on this one.
 

Morehatz

Dabbler
Joined
Nov 10, 2019
Messages
13
This could be a CPU limit if it's older
Quite possible; the CPU is ~8 years old (an i7 980X).
Not sure if you confirmed 9k frames in Windows as well
Confirming it now: jumbo frames are enabled on both sides.
I assume you are running at least Windows 10 1809 or something and SMB3 is being taken advantage of
I am not! I'm still on Windows 7 -> SMB 2.1 -> no SMB multichannel! Maybe this was the issue all along?
I'm going to set up a Windows 10 instance on another PC and report back.
 