Upgrade to 25GbE (Mellanox ConnectX-4)

core2duo

Cadet
Joined
Jul 7, 2023
Messages
1
I just bought 2 MCX4121A-ACAT cards and ran into a very similar problem. The difference is I use SFP28, so the NICs link at 25 Gbps, but I am not able to saturate the bandwidth between a Windows desktop and an Ubuntu NAS server.

My observations are:
  • I tried running an Ubuntu live image on my desktop, and it worked perfectly: a single-threaded iperf3 ran at 23.5 Gbps in both directions
  • Running Windows on my desktop, I can't use the full bandwidth. The iperf3 results are unstable, especially when Windows is receiving data from Ubuntu; sometimes it runs at only 6-7 Gbps. I suspect this is because Windows sets the TCP window scaling factor to only 4, so the receive window is only 256 KB. But I can't find a way to adjust that value anywhere, and it has nothing to do with the Linux sysctl parameters
  • Copying data to and from the NAS over Samba is limited to under 10G; the maximum speed I have ever seen is 1.18 GB/s. Meanwhile, Task Manager shows the NIC load at around 10.1 Gbps, then it soon drops below 10 Gbps. It gives me the feeling that Windows is deliberately limiting the speed to under 10 Gbps
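For what it's worth, a 256 KB receive window alone can explain numbers in that range: single-stream TCP throughput is capped at roughly window / RTT. A quick back-of-the-envelope sketch (the 0.3 ms RTT is an assumed figure for a local 25GbE hop, not a measurement from this setup):

```shell
# Max single-stream TCP throughput is bounded by receive_window / RTT.
# Assumed values: 256 KiB receive window and ~0.3 ms LAN round-trip time.
awk 'BEGIN {
  window_bytes = 262144      # 256 KiB receive window
  rtt_seconds  = 0.0003      # assumed round-trip time
  printf "%.1f Gbps\n", window_bytes * 8 / rtt_seconds / 1e9
}'
```

With those assumed numbers the ceiling comes out around 7 Gbps, which lines up suspiciously well with the 6-7 Gbps seen when Windows is the receiver.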
I even tried installing Windows Server 2022, no luck. Then I found a blog post saying that Windows has different TCP templates: Internet and Datacenter. I tried running the script mentioned in the article, still no luck.
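In case anyone else wants to poke at those templates, this is roughly how to inspect and override them from an elevated prompt. A sketch only, not a guaranteed fix; the port-range filter simply forces the Datacenter template onto all connections instead of the default Internet mapping:

```shell
# Show which TCP templates Windows has and their settings (elevated PowerShell).
netsh int tcp show supplemental
powershell -Command "Get-NetTCPSetting | Select-Object SettingName, AutoTuningLevelLocal, CongestionProvider"

# Make sure receive-window autotuning isn't capped globally.
netsh int tcp set global autotuninglevel=normal

# Force the Datacenter template for all TCP connections; by default most
# traffic maps to the Internet template. The port ranges here cover everything.
powershell -Command "New-NetTransportFilter -SettingName Datacenter -LocalPortStart 0 -LocalPortEnd 65535 -RemotePortStart 0 -RemotePortEnd 65535"
```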

And what confuses me the most: my Internet router has an MCX342 (10G NIC) connected to the NAS, so the NAS acts as a bridge between the desktop and the router here. When I test performance between my desktop and the router, it's only around 5 Gbps with single-threaded iperf3. No matter how much parallelism I set, the maximum bandwidth I can get when the desktop is sending to the router is less than 9 Gbps. And again, if I run Ubuntu on my desktop, a single-threaded iperf3 runs at 9.4 Gbps.
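A few iperf3 runs can at least separate a window problem from everything else. These are just example invocations ("nas" is a placeholder hostname); the interesting comparison is whether explicitly asking for a bigger socket buffer lifts the reverse-direction number:

```shell
# Baseline: single stream, default window, Windows sending.
iperf3 -c nas -t 10

# Reverse direction (Windows receiving), where a small receive window hurts most.
iperf3 -c nas -t 10 -R

# Request a larger socket buffer explicitly; if this lifts the -R result,
# the receive window is the bottleneck.
iperf3 -c nas -t 10 -R -w 4M

# Parallel streams each get their own window, which can also work around
# a per-connection window cap.
iperf3 -c nas -t 10 -P 4
```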

I have tried everything I can, but just no luck with Windows. I don't know whether it's the MCX4121's problem, Windows' problem, or whether I made some subtle mistake.
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
Yeah ... difficult at this point to troubleshoot. But I would assume if the cards are faulty .. the results at least would not be so consistent. Idk ..
 

heyitsjel

Dabbler
Joined
Sep 5, 2023
Messages
13
Hey guys; did you ever solve any of the above issues? I'm in the same boat, but just prior to purchase (ie. ConnectX-3 QSFP vs. ConnectX-4 SFP28).
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
Nope ... not yet.
 

SteveinSea

Cadet
Joined
Dec 8, 2023
Messages
3
It looks from your output like you don't have this problem, but I'll add this comment anyway.

My Ubiquiti switches (Pro Aggregation and Aggregation) cannot sustain near-line-rate 10 Gbps or 25 Gbps SMB transfers when the client and server are in different VLANs. The workaround is to not use VLANs with their product. The behavior I saw was line-rate transfers with iPerf, but SMB transfers always under-performed.
 