Network speed issues over 1 Gbps link.

Status
Not open for further replies.

rodfantana

Dabbler
Joined
Jun 10, 2017
Messages
27
Hello, I initially posted this issue in the general section, but it sounds like it's a networking issue with FN rather than something storage-related.

PROBLEM: I have a test FN VM that I run iperf on, and I can't get past 750 Mbps. Other VMs on the same host sustain 940+ Mbps; the issue is only with FN VMs.

For full disclosure: the test VM only has 4 GB RAM and no storage attached. I realize that's below the "requirements", but since no storage is attached and I'm only using it for iperf testing, I think that should be fine for all intents and purposes. The FN VM that I use as a NAS exhibits the exact same issue and does meet the minimum requirements. If you think it's really necessary to beef up the test VM in order to troubleshoot this, let me know... but at this point I personally don't think it is.


Here are the tests I've tried with iperf.

iperf2 test from the Win7 machine (physical) to FN on ESX1:

Code:
Client connecting to 10.0.100.91, TCP port 5001
TCP window size:  208 KByte (default)
------------------------------------------------------------
[  3] local 10.0.100.151 port 57631 connected with 10.0.100.91 port 5001
[ ID] Interval	   Transfer	 Bandwidth
[  3]  0.0- 1.0 sec   106 MBytes   888 Mbits/sec
[  3]  1.0- 2.0 sec  94.2 MBytes   791 Mbits/sec
[  3]  2.0- 3.0 sec  89.8 MBytes   753 Mbits/sec
[  3]  3.0- 4.0 sec  90.8 MBytes   761 Mbits/sec
[  3]  4.0- 5.0 sec  90.5 MBytes   759 Mbits/sec
[  3]  5.0- 6.0 sec  90.9 MBytes   762 Mbits/sec
[  3]  6.0- 7.0 sec  89.5 MBytes   751 Mbits/sec
[  3]  7.0- 8.0 sec  91.0 MBytes   763 Mbits/sec
[  3]  8.0- 9.0 sec  90.8 MBytes   761 Mbits/sec
[  3]  9.0-10.0 sec  89.2 MBytes   749 Mbits/sec
[  3]  0.0-10.0 sec   922 MBytes   774 Mbits/sec

c:\tools\iperf-2.0.8b-win64>



iperf2 test from the Win7 machine (physical) to a Win2k16 VM on the same ESX host:

Code:
c:\tools\iperf-2.0.8b-win64>iperf.exe -c srv-backup1 -p 5001 -i 1
------------------------------------------------------------
Client connecting to srv-backup1, TCP port 5001
TCP window size:  208 KByte (default)
------------------------------------------------------------
[  3] local 10.0.100.151 port 57658 connected with 10.0.100.13 port 5001
[ ID] Interval	   Transfer	 Bandwidth
[  3]  0.0- 1.0 sec   112 MBytes   943 Mbits/sec
[  3]  1.0- 2.0 sec   112 MBytes   942 Mbits/sec
[  3]  2.0- 3.0 sec   112 MBytes   943 Mbits/sec
[  3]  3.0- 4.0 sec   112 MBytes   942 Mbits/sec
[  3]  4.0- 5.0 sec   112 MBytes   943 Mbits/sec
[  3]  5.0- 6.0 sec   112 MBytes   942 Mbits/sec
[  3]  6.0- 7.0 sec   112 MBytes   943 Mbits/sec
[  3]  7.0- 8.0 sec   112 MBytes   942 Mbits/sec
[  3]  8.0- 9.0 sec   112 MBytes   943 Mbits/sec
[  3]  9.0-10.0 sec   112 MBytes   942 Mbits/sec
[  3]  0.0-10.0 sec  1.10 GBytes   942 Mbits/sec

c:\tools\iperf-2.0.8b-win64>



I've tried both the vmxnet3 and E1000 drivers; the results are the same. I spun up an out-of-the-box vanilla FN 9.10-U2 VM just to make sure it's not one of my settings causing this, and the iperf stats are the same. MTU is set to 1500 on FN and on the vSwitch (all defaults). Do you know if anything needs to be tuned on the FN side to get to the 940 Mbit/sec mark?
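
For reference, the FN side in these runs is just a stock iperf2 listener started from the FreeNAS shell; iperf2's -w option can be used on either end to experiment with the socket buffer/window size (a sketch, not the exact commands used):

Code:
REM on the FreeNAS VM:  iperf -s -p 5001   (add -w 256K to try a larger window)
REM from the Win7 client, as in the output above:
c:\tools\iperf-2.0.8b-win64>iperf.exe -c 10.0.100.91 -p 5001 -i 1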

Thanks,

~Rod.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
So FreeNAS' virtual NIC drivers suck. Try real hardware, either with passthrough or on bare metal.
 

rodfantana

Dabbler
Joined
Jun 10, 2017
Messages
27
Unfortunately, I can't dedicate a NIC for passthrough on this host...

So are you saying that's what everyone using ESXi's VMXNET/E1000 paravirtual adapters with FreeNAS is getting?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Hello, I initially posted this issue in the general section, but it sounds like it's a networking issue with FN rather than something storage-related.

PROBLEM: I have a test FN VM that I run iperf on, and I can't get past 750 Mbps. Other VMs on the same host sustain 940+ Mbps; the issue is only with FN VMs.

[...]

Do you know if anything needs to be tuned on the FN side to get to the 940 Mbit/sec mark?
Forum members will be able to give you better advice if you post your full system specifications, per the forum rules.

Specifically, what brand of Ethernet NICs are you using?

There are well-known problems with Realtek devices running on FreeBSD, for example.
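
If you're not sure what NIC FreeBSD actually sees, something like this from the FreeNAS shell lists the detected network devices with their vendor/device strings (output will obviously vary per system):

Code:
# list PCI network devices as FreeBSD identifies them
pciconf -lv | grep -B4 network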
 

rodfantana

Dabbler
Joined
Jun 10, 2017
Messages
27
Sorry for not posting hardware specs.

The host runs on a Supermicro X10SDV-TLN4F board (Xeon D-1541) that has 2x Intel 10GbE (D520) and 2x Intel i350-AM2 ports. All of these are on the VMware hardware compatibility list.


UPDATE: added ifconfig output. I also tried loading VMware Tools' native vmxnet driver (vmxnet3.ko)... that actually yielded slightly slower results :(

Code:
[root@freenas] ~# ifconfig
vmx0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=60039b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,TSO6,RXCSUM_IPV6,TXCSUM_IPV6>
	ether 00:0c:29:40:e6:c9
	inet 10.0.100.91 netmask 0xffffff00 broadcast 10.0.100.255
	nd6 options=9<PERFORMNUD,IFDISABLED>
	media: Ethernet autoselect
	status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
	options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
	inet6 ::1 prefixlen 128
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
	inet 127.0.0.1 netmask 0xff000000
	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
[root@freenas] ~#
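
A quick sanity check on which vmxnet driver is actually in play (the in-kernel if_vmx that FN ships versus VMware Tools' vmxnet3.ko) is something like:

Code:
# lists any vmx-related kernel modules that are loaded (if_vmx.ko or vmxnet3.ko)
kldstat | grep -i vmx
# any per-interface settings/counters the active driver exposes
sysctl -a | grep -i vmx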
 

rodfantana

Dabbler
Joined
Jun 10, 2017
Messages
27
UPDATE: I've narrowed it down to the TCP receive window size defaulting to 64K on BSD. When I added a tunable, net.inet.tcp.recvspace=262144, to increase it to 256K, all transfers started running at 112 MB/sec, as expected.

Now, while it looks like that solved my problem, I'm curious: did I break anything by adjusting it, or is it safe to leave?
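
For anyone trying the same thing, the value can be tested at runtime with sysctl before committing it as a tunable in the GUI; new TCP connections pick up the new default (sketch):

Code:
# current default TCP receive buffer (65536 bytes, i.e. 64K, out of the box)
sysctl net.inet.tcp.recvspace
# try the larger value live; new connections use it immediately
sysctl net.inet.tcp.recvspace=262144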
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
UPDATE: I've narrowed it down to the TCP receive window size defaulting to 64K on BSD. When I added a tunable, net.inet.tcp.recvspace=262144, to increase it to 256K, all transfers started running at 112 MB/sec, as expected.

Now, while it looks like that solved my problem, I'm curious: did I break anything by adjusting it, or is it safe to leave?
This is cool! I've had some suspicions recently about BSD's default network settings.

People have had strange issues with wireless clients that get good performance to other hardwired clients but not to FreeNAS.

 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
UPDATE: I've narrowed it down to the TCP receive window size defaulting to 64K on BSD. When I added a tunable, net.inet.tcp.recvspace=262144, to increase it to 256K, all transfers started running at 112 MB/sec, as expected.

Now, while it looks like that solved my problem, I'm curious: did I break anything by adjusting it, or is it safe to leave?
Interesting that setting this tunable solves your problem, because all 3 of my FreeNAS systems have the default setting for net.inet.tcp.recvspace and yet I get line-rate speeds.

What version of FreeNAS are you installing?
 

rodfantana

Dabbler
Joined
Jun 10, 2017
Messages
27
Interesting that setting this tunable solves your problem, because all 3 of my FreeNAS systems have the default setting for net.inet.tcp.recvspace and yet I get line-rate speeds.

What version of FreeNAS are you installing?

FreeNAS-9.10.2-U3 (e1497f269)
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478

rodfantana

Dabbler
Joined
Jun 10, 2017
Messages
27
Yes it does. I've also tried E1000 as well as the official VMXNET3 driver from VMware; all had the same slow results. I went back to the Open VM Tools driver that ships with FN so it's consistent with everyone else's setup here... If you've heard that it's preferable to use the proprietary driver, just let me know...
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
UPDATE: I've narrowed it down to the TCP receive window size defaulting to 64K on BSD. When I added a tunable, net.inet.tcp.recvspace=262144, to increase it to 256K, all transfers started running at 112 MB/sec, as expected.

Now, while it looks like that solved my problem, I'm curious: did I break anything by adjusting it, or is it safe to leave?
You didn't break anything... I've been running with recvspace set to 'net.inet.tcp.recvspace=4194304' for years now. You may want to increase the max buffers as well. Here is a screenshot of my networking tunables to give you a point of reference. Keep in mind that I've tuned this system for 10G networking. I also don't like the default NewReno congestion control, so I run the htcp CC instead, but your mileage may vary.
Tunables.JPG
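
The screenshot itself doesn't carry over in text, but as a rough sketch (values illustrative, not a transcription of the attachment), sysctl-type tunables along these lines are what's being described for a 10G-tuned box:

Code:
# illustrative 10G-oriented sysctl tunables (example values, not the attachment)
kern.ipc.maxsockbuf=16777216        # raise the per-socket buffer ceiling
net.inet.tcp.recvspace=4194304      # default TCP receive buffer
net.inet.tcp.sendspace=4194304      # default TCP send buffer
net.inet.tcp.recvbuf_max=16777216   # cap for receive-buffer autoscaling
net.inet.tcp.sendbuf_max=16777216   # cap for send-buffer autoscaling
net.inet.tcp.cc.algorithm=htcp      # H-TCP instead of the default newreno
                                    # (needs the cc_htcp module loaded, e.g. loader tunable cc_htcp_load="YES")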
 

rodfantana

Dabbler
Joined
Jun 10, 2017
Messages
27
You didn't break anything... I've been running with recvspace set to 'net.inet.tcp.recvspace=4194304' for years now. You may want to increase the max buffers as well. Here is a screenshot of my networking tunables to give you a point of reference. Keep in mind that I've tuned this system for 10G networking. I also don't like the default NewReno congestion control, so I run the htcp CC instead, but your mileage may vary.
View attachment 19074


Perfect, thanks for sharing. I'll try some of these settings!
 