TrueNAS 22.12.1 on ESXi 8.0 with X540-AT2: big performance differences between VMXNET3, SR-IOV, and passthrough modes

BlueBenson

Cadet
Joined
Mar 21, 2023
Messages
2
My server is a Dell R730xd and the NIC is an Intel X540-AT2.

I did a fresh install of ESXi 8.0 and TrueNAS 22.12.1. At first I used a VMXNET3 NIC and tested the network speed with iperf3. Before I configured a pool for Apps, the speed looked good (9.3 Gbits/s), but after I set the pool for Apps, iperf3 showed only 5.1 Gbits/s. Even after I unset the Apps pool, the speed stayed at 5.x Gbits/s. I then added a second NIC in SR-IOV mode to the TrueNAS VM, so the VM had two IP addresses: iperf3 against the VMXNET3 address still showed 5.x Gbits/s, while the SR-IOV address showed 9.2 Gbits/s. Next I tested a passthrough NIC, and something strange happened: the iperf3 test of the passthrough NIC also showed only 5.x Gbits/s.

192.168.1.99 is the VMXNET3 NIC.
192.168.1.200 is the SR-IOV NIC.
Sorry, I didn't get a screenshot of the passthrough NIC test.
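
For reference, the numbers above come from plain iperf3 runs against each of those two addresses from another machine on the LAN, roughly like this (a sketch only; it assumes the iperf3 server side is the TrueNAS VM):

    # On the TrueNAS VM (it holds both addresses):
    iperf3 -s
    # From the other machine, test each interface in turn:
    iperf3 -c 192.168.1.99     # VMXNET3 interface
    iperf3 -c 192.168.1.200    # SR-IOV interface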
 

Attachments

  • Screenshot 2023-03-21 at 17.04.54.png (662.3 KB)

NickF

Guru
Joined
Jun 12, 2014
Messages
763
There are a couple of things here. First, you are running a test that is inherently single-threaded, which is going to limit your performance. If you run multiple iperf3 streams in parallel using the -P flag, you will likely see your speed increase.

Second, SR-IOV is inherently going to be faster than VMXNET3, which is why SR-IOV exists in the first place. VMware and NVIDIA are working together to address the limitations of passthrough right now, and you can test that in ESXi 8 with a BlueField-2. https://www.servethehome.com/using-...idia-bluefield-2-dpu-and-vmware-vsphere-demo/
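
That would look something like this (the target here is the VMXNET3 address from your first post; adjust the stream count and address as needed):

    # Run several parallel streams instead of a single one:
    iperf3 -c 192.168.1.99 -P 4
    # Add -R to also test the reverse direction.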
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
VMXNET, by its very nature and design, is going to be slower than bare metal. It's not the driver in SCALE; it's a limitation of the design of paravirtualized hardware: https://docs.vmware.com/en/VMware-v...UID-E2271E36-12BB-47CE-A765-5ECB5BBE7CC7.html

While it is absolutely faster than an emulated Intel E1000 or previous versions of VMXNET, it's never going to be as fast as native hardware, especially in single-threaded applications. That is why active development on DPUs exists, and Patrick's article above demonstrates it. SR-IOV is the current production-ready solution to the problem, but as I'm sure you know, it has its limitations.
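
If you want to try SR-IOV on the X540 itself, the host-side part is usually just a couple of commands. This is only a sketch: it assumes the ports use the native ixgben driver (check with the first command) and that 8 VFs per port is enough for your setup.

    # Confirm which driver the X540 ports are using:
    esxcli network nic list
    # Assuming ixgben: request 8 VFs on each of the two ports, then reboot the host:
    esxcli system module parameters set -m ixgben -p "max_vfs=8,8"
    # After the reboot, add a VF to the TrueNAS VM as an SR-IOV passthrough adapter in vSphere.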
 

BlueBenson

Cadet
Joined
Mar 21, 2023
Messages
2
VMXNET, by its very nature and design, is going to be slower than bare metal. It's not the driver in SCALE; it's a limitation of the design of paravirtualized hardware: https://docs.vmware.com/en/VMware-v...UID-E2271E36-12BB-47CE-A765-5ECB5BBE7CC7.html

While it is absolutely faster than an emulated Intel E1000 or previous versions of VMXNET, it's never going to be as fast as native hardware, especially in single-threaded applications. That is why active development on DPUs exists, and Patrick's article above demonstrates it. SR-IOV is the current production-ready solution to the problem, but as I'm sure you know, it has its limitations.

Thanks, I understand the difference between VMXNET3 and SR-IOV. Sorry for the wording in my last post; what I meant is that I still don't understand why, when I pass the X540-AT2 through to TrueNAS, the performance is worse than SR-IOV mode, only 5.x Gbits/s, whether the Apps pool is set or not.
 

RegularJoe

Patron
Joined
Aug 19, 2013
Messages
330
Thanks, I understand the difference between VMXNET3 and SR-IOV. Sorry for the wording in my last post; what I meant is that I still don't understand why, when I pass the X540-AT2 through to TrueNAS, the performance is worse than SR-IOV mode, only 5.x Gbits/s, whether the Apps pool is set or not.
PCI passthrough on VMware does not pass through ALL of the bus features, which is why it is not supported by many manufacturers. Take LTO tape drives, for example: they could be supported by HP with PCI passthrough.... but they are not. LTO tape drives have been around for decades, and the vendors know why, but they won't say why there is no support. :cool:
 