Odd 10Gb behavior

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
I am running out of things to test, and I was hoping that someone here could offer some suggestions.

I have a FreeNAS box running the most recent stable version (11.1-U5).
I am currently running:
X10SDV-6c+ motherboard
Xeon D-1528 (6 cores / 12 threads)
24GB DDR4 ECC memory

I have been using one of the 10Gb ports tied directly into my hypervisor, which runs on the same motherboard and setup.

I had been getting around 200-400 MB/s and was happy with that, but recently I noticed transfers struggling to go above ~140 MB/s, sometimes dropping as low as 4-8 MB/s. Oddly, my 1Gb ports don't have this issue; they are rocking a stable 112-115 MB/s.

iPerf tests are also capping out around ~130-140 MB/s, but I can increase this to about ~300 MB/s using 4 or more parallel streams. I'm not quite sure what's going on, and I can't figure out what would have caused this change. I don't recall any major upgrades taking place.
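
For reference, the tests look roughly like this (iperf3 syntax here; the address is a placeholder for my FreeNAS box):
Code:
# on FreeNAS (server side)
iperf3 -s

# from the client, single stream (caps at ~130-140 MB/s)
iperf3 -c 10.0.0.10

# 4 or more parallel streams (gets me to ~300 MB/s)
iperf3 -c 10.0.0.10 -P 4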
 
Joined
Dec 29, 2014
Messages
1,135
What kind of 10G NIC are you using on each side? What hypervisor and version? When you do iperf tests, are you testing from the hypervisor OS or a guest OS?
 

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
Thanks for the post!

Both servers are identical builds. The 10Gb NICs are built into the motherboard:
Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T

I run iperf with FreeNAS as the server and test from a Windows VM. However, an rsync with --progress from the hypervisor host itself (CentOS 7) gets about the same numbers.
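
The rsync test is essentially this, run from the CentOS 7 host (source and destination paths are placeholders):
Code:
# copy a large test file to the FreeNAS mount; paths are placeholders
rsync --progress /var/tmp/testfile /mnt/freenas/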

I am running oVirt 4.2.3 as the hypervisor, for what it's worth.
 
Joined
Dec 29, 2014
Messages
1,135
rsync isn't the fastest thing, so I would never expect it to fully utilize the link. I can't help but wonder if some updates on the hypervisor or client side had an impact. It is not uncommon to see a small TCP window size (the amount of unacknowledged data in flight before the sender stops transmitting and waits for an ACK) stop you from utilizing the full bandwidth of a NIC.
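
As a rough back-of-the-envelope check (the RTT here is an assumption, not a measurement from your network), a single TCP stream tops out at roughly window size divided by round-trip time:
Code:
throughput ceiling ≈ TCP window / RTT
e.g. 64 KB window, 0.5 ms RTT: 65536 B / 0.0005 s ≈ 131 MB/s

That lands suspiciously close to your ~130-140 MB/s single-stream cap, and since each parallel stream gets its own window, it would also explain why 4 streams scale up.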
 

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
That's a good thought. I did some research and found:
https://slaptijack.com/system-administration/freebsd-tcp-performance-tuning/

I added these as sysctl tunables in FreeNAS:
Code:
net.inet.tcp.rfc1323=1          # enable TCP window scaling and timestamps
kern.ipc.maxsockbuf=16777216    # max socket buffer size (16 MB)
net.inet.tcp.sendspace=1048576  # default TCP send buffer (1 MB)
net.inet.tcp.recvspace=1048576  # default TCP receive buffer (1 MB)
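
For anyone following along, values can also be tested live from a FreeNAS shell before saving them as tunables (changes made this way revert at reboot):
Code:
# check the current value
sysctl net.inet.tcp.recvspace

# set it on the fly for testing
sysctl net.inet.tcp.recvspace=1048576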


However, I am getting about the same results. I had much smaller values before (around 130000, about 10x less), but the results are about the same either way.

I am curious, though. I have about 15 tunables automatically set in FreeNAS. Is it safe to remove these, or should I leave them alone?
 
Joined
Dec 29, 2014
Messages
1,135
However, I am getting about the same results. I had much smaller values before (around 130000, about 10x less), but the results are about the same either way.

It could also be the window size on the other end. You could try a different OS on the other side. Whichever side is sending the higher volume of data is the one whose TCP window size will affect the transfer the most.
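
If you want to see what each end is doing, something along these lines should work (from memory, so double-check on your versions):
Code:
# on the Windows VM: shows the receive window auto-tuning level
netsh interface tcp show global

# on the CentOS 7 host: min/default/max receive and send buffer sizes
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem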

I am curious, though. I have about 15 tunables automatically set in FreeNAS. Is it safe to remove these, or should I leave them alone?

If you have autotune enabled, it will put back anything it thinks belongs there at the next reboot.
 