40GbE Performance

Joined Dec 29, 2014 · Messages: 1,135
No, but I was looking at something similar for my T580, which uses the T5 chipset. https://service.chelsio.com/beta/dr...0.1/Chelsio-UnifiedWire-FreeBSD-UserGuide.pdf
Based on my reading of that document and the T6 one, I think I should run the following commands.
Code:
# enable zero-copy transmit for offloaded (TOE) connections
sysctl dev.t5nex.0.toe.tx_zcopy=1
# enable direct data placement (DDP) on receive
sysctl dev.t5nex.0.toe.ddp=1
# turn on TCP offload for the interface
ifconfig cxl0 toe

I have no problem figuring out how to add tunables for the first two. How would I do the third one (the ifconfig option)? I guess that would be an option in the NIC config?
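
For the record, the first two are plain sysctl-type tunables; a minimal sketch of the equivalent /etc/sysctl.conf entries (on FreeNAS these would be entered under System → Tunables with type "sysctl" rather than by editing the file directly):
Code:
# persist the TOE sysctls across reboots; on FreeNAS, add each as a
# "sysctl" tunable in the GUI instead of editing /etc/sysctl.conf
dev.t5nex.0.toe.tx_zcopy=1
dev.t5nex.0.toe.ddp=1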
 
Joined Dec 29, 2014 · Messages: 1,135
This is really strange. I was looking through that T6 document and it referenced netserver and netperf commands that were new to me. These programs are installed on FreeNAS, so I ran them with a minor tweak (the -A option was not available) and got roughly 35Gb/s throughput between the two boxes.
Code:
root@freenas2:/nonexistent # netperf -cC -H 192.168.252.23 -D 10 -l 30 -- -a -m 512k -s 2M -S 2M
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.252.23 () port 0 AF_INET : histogram : interval : dirty data : demo
Interim result: 36437.44 10^6bits/s over 10.038 seconds ending at 1576637057.812
Interim result: 35866.36 10^6bits/s over 10.159 seconds ending at 1576637067.972
Interim result: 35556.28 10^6bits/s over 9.803 seconds ending at 1576637077.774
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % C      % C      us/KB   us/KB

2097152 2097152 512000    30.00      35955.29   9.86     13.31    0.180   0.243 
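
For completeness, netperf needs its companion listener running on the remote box before the test; a sketch (12865 is netperf's default control port):
Code:
# on the remote host (192.168.252.23 above), start the listener
# before running netperf from the other side
netserver -p 12865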

Why am I not getting anything close to this with iperf3?
 

ronclark · Dabbler · Joined Dec 5, 2017 · Messages: 40
I just ran across this: iperf3 at 40Gbps and above. Might give that a try. My other 40G system is down at the moment, and I can't push the workstation past 10G since its NIC is on the chipset PCIe lanes.
 

Rand · Guru · Joined Dec 30, 2013 · Messages: 906
The ifconfig option can be set in the NIC parameters in the GUI (where you put the other offloading options like TSO/LSO or jumbo frames).

iperf3 is single-threaded and needs multiple instances to perform at maximum capacity. Have you tried iperf2?

Edit: Just saw that the same point is made in the link @ronclark provided. ;)
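
Along those lines, a sketch of running multiple iperf3 instances in parallel to get past the single-thread limit (addresses and ports are illustrative):
Code:
# server side: one listener per port
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &
# client side: one stream per listener, run concurrently
iperf3 -c 192.168.252.23 -p 5201 -t 30 &
iperf3 -c 192.168.252.23 -p 5202 -t 30 &
wait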
 
Joined Dec 29, 2014 · Messages: 1,135
I just ran across this: iperf3 at 40Gbps and above. Might give that a try. My other 40G system is down at the moment, and I can't push the workstation past 10G since its NIC is on the chipset PCIe lanes.
Wow, thanks for that! I had a gut feeling that something in the hardware was limiting the throughput, and this seems to bear that out. The reason I had tweaked the firewall parameters was that I thought they might be causing some of the retries I was seeing in iperf3, but it looks like iperf3 is just outpacing the hardware. This was also borne out by the fact that my secondary FreeNAS, which used to have a slower CPU, could not push as much throughput. As a grey beard and a (now) network guy, I don't know that I ever thought I would see it this clearly: the CPU not being able to push a network to its maximum transport capacity. Maybe it is also because I woke up in the middle of the night, but wow! :)
Have you tried iperf2?
Yes, and it was terrible. My CPU seems to be able to push about 20.4Gb/s with iperf3, but iperf2 only gets to 145Mb/s.
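
One common cause of very low iperf2 numbers is a small default TCP window; a hedged sketch of retrying with an explicit window and parallel streams (peer address reused from the netperf run above):
Code:
# iperf2 client with a 2M window and 4 parallel streams for 30 seconds
iperf -c 192.168.252.23 -w 2M -P 4 -t 30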
 
Joined Dec 29, 2014 · Messages: 1,135
That might have been my messing about with some of the tuning parameters. I rebooted both FreeNAS boxes to clear all the CLI tuning I did for the tests. Since the culprit has been identified as the limit of what one of my CPU cores can push, I don't feel like I need to beat on it any longer.
 
Joined Dec 29, 2014 · Messages: 1,135
One last update. With TOE enabled on the interface and a TCP window size of 2M or greater, I can get pretty consistent 25Gb/s throughput from iperf3 between the two FreeNAS units. I think I am satisfied now. I also tuned my number of NFS worker threads down to 8, which matches the physical core count. I am actually reasonably happy, because I now have a plausible explanation for why things are the way they are.
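
A sketch of the kind of iperf3 invocation that matches those settings (peer address assumed from the earlier tests):
Code:
# 2M TCP window (socket buffer), 30-second run against the other box
iperf3 -c 192.168.252.23 -w 2M -t 30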
 

ronclark · Dabbler · Joined Dec 5, 2017 · Messages: 40
One last update. With TOE enabled on the interface and a TCP window size of 2M or greater, I can get pretty consistent 25Gb/s throughput from iperf3 between the two FreeNAS units. I think I am satisfied now. I also tuned my number of NFS worker threads down to 8, which matches the physical core count. I am actually reasonably happy, because I now have a plausible explanation for why things are the way they are.

How can you check whether TOE can be enabled on a Mellanox ConnectX-3?
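
One quick check on FreeBSD is to list the capabilities the driver advertises and look for TOE among them; a sketch, assuming the ConnectX-3 shows up as mlxen0 under the mlx4en driver:
Code:
# -m prints the full set of capabilities the driver supports;
# TOE will only be listed if the driver can offload TCP
ifconfig -m mlxen0 | grep -i capabilities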
 
Joined Dec 29, 2014 · Messages: 1,135
I discovered something interesting. It appears that TOE was causing me some NFS issues. Part of my manual backup strategy uses rsync, but I use snapshots for the machines I run all the time: I shut those VMs down, take a ZFS snapshot, and then bring them back up. I NFS-mount a dataset on the backup FreeNAS and manually copy from the snapshot directory. This was working just fine, but I was experiencing a number of "NFS server not responding" messages. NFS would come back after a while, but my transfers were taking forever. I disabled TOE on the Chelsio T580-LP-CR NICs in both FreeNAS boxes (both 11.2-U7), and now NFS is working correctly again. I thought I had discovered a magic bullet with TOE, but it appears not. I do still have some sysctl tuning in place for some of the T5 options.
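
For reference, disabling it is the negated flag on the same interface; a sketch using the interface name from earlier in the thread:
Code:
# clear the TCP offload flag on the Chelsio port
ifconfig cxl0 -toe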

Why would TOE cause NFS to stop working correctly?
 