Interesting Tests with 10Gb on ESXi 6.0 U2


Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
So, I am doing some testing similar to JoeSchmuck's (and others') with running FreeNAS 9.10 as a VM on ESXi 6.0 U2. Things are pretty smooth, and I have a lot more testing/tweaking to do before I would even consider this "Production Ready".

Anyways, I decided to see what network speeds I could obtain between the FreeNAS VM and an SME Server VM (based on CentOS).

Keep in mind that on both VMs:
  • I am using the VMXNET3 (paravirtualized) NICs
  • The latest VMware Tools have been installed
    • FreeNAS is using the updated kernel driver as opposed to the one built into 9.10
  • MTU has not been changed from the defaults
  • Main HW is not changed on the VMs (with the exception of an actual 10Gb NIC - more on this later)
    • The two ports of the 10Gb NIC are connected to each other with a Cisco SFP-H10GB-CU3M 3-meter Twinax cable
  • FreeNAS is running as the iperf server and SME is connecting as the client
In all test cases I simply used the "iperf" command:
  • Example: iperf -p 5001 -c %DesiredNIC% -t 30 -w 512k (where %DesiredNIC% is the server's IP on the link under test)
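For reference, here is a sketch of both ends of that command (the server IP shown is just the one from Test Case #1 below):
  • Code:
    # FreeNAS VM - run iperf as the server, listening on TCP port 5001
    iperf -s -p 5001 -w 512k
    # SME VM - 30-second client run against the FreeNAS IP on the link under test
    iperf -c 10.20.10.14 -p 5001 -t 30 -w 512k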
Test Case #1 - Using "vSwitch0"
  • Simply leverages the Intel 1Gb NICs on the motherboard
  • Results: Not too shabby for a "1Gb" setup, since it is showing 2.22 Gbits/sec (both VMs are on the same vSwitch, so the traffic never actually leaves the host and is not capped at the physical 1Gb link speed)
  • Code:
    ------------------------------------------------------------
    Client connecting to 10.20.10.14, TCP port 5001
    TCP window size:   244 KByte (WARNING: requested   512 KByte)
    ------------------------------------------------------------
    [  3] local 10.20.10.23 port 40595 connected with 10.20.10.14 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-30.0 sec  7.74 GBytes  2.22 Gbits/sec
Test Case #2 - Using "vSwitch1"
  • Created a vSwitch backed by the Intel 10Gb dual-port NIC (see the esxcli sketch after the output below)
  • Added a VMXNET3 NIC on vSwitch1 to each of the VMs
  • Set static IPs, ensuring they were on a different subnet
  • Results: Basically the same as Test Case #1
  • Code:
    ------------------------------------------------------------
    Client connecting to 11.50.0.3, TCP port 5001
    TCP window size:   244 KByte (WARNING: requested   512 KByte)
    ------------------------------------------------------------
    [  3] local 11.50.0.4 port 47986 connected with 11.50.0.3 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-30.0 sec  7.92 GBytes  2.27 Gbits/sec
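For anyone following along, the same vSwitch setup can also be done from the ESXi shell; a rough sketch (the vSwitch1, vmnic4, and portgroup names are assumptions - check yours with "esxcli network nic list"):
  • Code:
    # List the physical NICs to find the 10Gb ports (names are host-specific)
    esxcli network nic list
    # Create the new standard vSwitch
    esxcli network vswitch standard add --vswitch-name=vSwitch1
    # Attach one 10Gb port as the uplink (vmnic4 is an assumption)
    esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch1
    # Add a portgroup for the VMs' second (VMXNET3) NICs
    esxcli network vswitch standard portgroup add --portgroup-name=10G-Test --vswitch-name=vSwitch1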
Test Case #3 - Using "Pass Through"
  • Deleted the vSwitch I created in Test Case #2
  • Deleted the NICs I created on the VMs in Test Case #2
  • In VMware, I passed one of the Intel 10Gb ports through to each VM (a quick in-guest sanity check is sketched after the output below)
    • FreeNAS got #0 and SME got #1
  • Set static IPs, ensuring they were on a different subnet
  • Results: At least twice what I received from the previous tests
  • Code:
    ------------------------------------------------------------
    Client connecting to 11.50.1.2, TCP port 5001
    TCP window size:   244 KByte (WARNING: requested   512 KByte)
    ------------------------------------------------------------
    [  3] local 11.50.1.3 port 36900 connected with 11.50.1.2 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-30.0 sec  16.7 GBytes  4.79 Gbits/sec
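For a sanity check of the passthrough from inside the FreeNAS guest, something like this should do (a sketch; "ix0" assumes the Intel 10Gb port attaches to FreeBSD's ixgbe driver):
  • Code:
    # Confirm the passed-through Intel 10Gb device is visible to FreeBSD
    pciconf -lv | grep -A3 ^ix
    # Check link state and negotiated speed on the 10Gb interface (ix0 is an assumption)
    ifconfig ix0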

It would appear to me that vSphere/ESXi 6.0 is limiting the actual speed... I wonder if this is due to it being the free version? Perhaps someone who owns the full version can elaborate on this.

I may try out some testing using the "E1000" NIC, but I feel that it would not really make a difference.

Edit: I am also thinking that it may be the actual CPU bottlenecking VMXNET3. The VM host is currently running dual hex-core Xeon L5639s @ 2.13 GHz. CPU shares are set to "High" for FreeNAS. I might swap them out for X5650 CPUs @ 2.67 GHz to see if that makes any difference, but I am not 100% sure...
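One way to check the CPU theory while a test is running (a sketch; esxtop is run from the ESXi host shell):
Code:
  # On the ESXi host, watch per-VM CPU and network counters during the iperf run
  esxtop    # press 'c' for the CPU view; %USED pegged near 100 on a world suggests a core limit
            # press 'n' for the network view to watch per-vNIC throughput and drops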
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
2 thoughts -
option -P (parallel threads) on iperf - note the capital P; lowercase -p sets the port
tweaking tunables on FreeNAS (compare the 10Gb thread)
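For the first one, something like this (the IP is just the passthrough address from Test Case #3):
Code:
  # Four parallel TCP streams for 30 seconds
  iperf -c 11.50.1.2 -p 5001 -t 30 -w 512k -P 4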

I can't say that I had the impression that ESXi limited the speed, but I didn't test that per se...
And I don't think it makes a difference whether you have an unlicensed (new install) ESXi vs. the free license vs. a regular license.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
True, I could use "-P" and achieve higher numbers, but to keep things consistent I just stuck to the same command in all test cases. Thanks for posting though, and for reminding me that I still have more tests to try out.
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
Another thing is the TCP window size.
512K is fairly small - larger sizes will most likely result in better throughput (unless you are CPU limited, which indeed might be an issue). Note that your output shows the requested 512 KByte was clamped to 244 KByte, so the OS socket-buffer ceilings are already getting in the way.
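A sketch of raising those ceilings on both ends first (values are just examples):
Code:
  # CentOS (SME) side - raise the kernel's max socket buffer sizes
  sysctl -w net.core.rmem_max=16777216
  sysctl -w net.core.wmem_max=16777216
  # FreeNAS/FreeBSD side - raise the socket buffer ceiling
  sysctl kern.ipc.maxsockbuf=16777216
  # Then retry with a larger window, e.g.
  iperf -c 11.50.1.2 -p 5001 -t 30 -w 1m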
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I'm going to ask a stupid question, but are both of your VMs on the same ESXi hardware? I ask because things like this are just not that obvious to me at times, even though you clearly state you are using a cable to connect the two. What do you get if you place both on the same vSwitch, using only the VMXNET3 drivers to connect them? That should be the fastest you could get.

I'm curious why you want to test passing traffic from one VM to another VM via a LAN cable. If I were to test 10Gb LAN, I would use two computers (FreeNAS and a workstation) to see what the throughput can be.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I'm going to ask a stupid question, but are both of your VMs on the same ESXi hardware?
Yes, same ESXi system. FreeNAS is a VM housed on SSDs (that are owned/controlled by ESXi) with an H200 passed through for FreeNAS to control/manage. SME (CentOS) is actually housed on the FreeNAS pool (connected to ESXi via NFS).

What do you get if you place both on the same vSwitch, using only the VMXNET3 drivers to connect them?
That is similar to what I did in Test #1 and Test #2; however, those vSwitches did have an actual physical NIC attached. I can for sure run a test with a vSwitch that has no physical uplink at all to see what speeds I get (a sketch of setting one up is below).
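Something like this should give an internal-only vSwitch (a sketch; the names are made up):
Code:
  # Create a vSwitch with no physical uplink - traffic stays in host memory
  esxcli network vswitch standard add --vswitch-name=vSwitchInternal
  # Portgroup to attach the test VMXNET3 NICs on both VMs
  esxcli network vswitch standard portgroup add --portgroup-name=InternalOnly --vswitch-name=vSwitchInternal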

I'm curious why you want to test passing traffic from one VM to another VM via a LAN cable. If I were to test 10Gb LAN, I would use two computers (FreeNAS and a workstation) to see what the throughput can be.
This was due to a couple of reasons:
  1. Simulating a scenario where everything is housed on a single box
    • The SME (CentOS) server has a backup routine that performs its backups to a CIFS/NFS share housed by FreeNAS
  2. Decided to try the physical interconnect when I wasn't seeing a whole lot of speed from the emulated NICs (not that ~2.25 Gb/s is bad)
  3. Was curious whether off-loading to a physical NIC would negate any possible ESXi or CPU issues/limitations (which produced a ~100% increase)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
It's been a few days, have you made any progress?
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Unfortunately not yet; I had to handle a few things for a customer. Took my oldest son with me since I am training him, which prolonged the process, but that is to be expected. Will carve out some time this weekend once I get done with the normal "honey do" stuff. ;)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I clearly hear you. FreeNAS and anything computer-related comes in a very distant second, third, or last for me too, because work (got to pay the bills) or home stuff comes first. I'm just taking a break before I walk the dogs around the area. We got a new dog and she needs to learn how to walk with us, not stop and make me drag her behind us. She's a very small dog and has the mindset of a cat: does what she wants, when she wants to, and just doesn't behave like a normal dog. I'll break her or she will break me; it's too early to tell right now. Well, off to the walk.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Update: Haven't forgotten this, but @jgreco just mentioned SR-IOV in another thread, and now I have more to learn and test. Never thought about it before, but it looks pretty darn sweet. It seems like my 10Gb card supports it, and so does the Dell C2100/FS12-TY, so I am looking forward to messing around with this... :)
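From what I have read so far, enabling VFs on an Intel card under ESXi looks roughly like this (untested sketch; the ixgbe module name and VF counts are assumptions for an X520-class card, and the BIOS needs VT-d/SR-IOV enabled):
Code:
  # Ask the ixgbe driver for 4 virtual functions per port (one value per port)
  esxcli system module parameters set --module=ixgbe --parameter-string="max_vfs=4,4"
  # Reboot the host, then confirm the VFs showed up
  reboot
  lspci | grep -i 'virtual function'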
 