FreeNAS iSCSI MPIO RR limited to 1Gb/sec

zeroluck

Dabbler
Joined
Feb 12, 2015
Messages
43
I have set up about 5 different FreeNAS boxes connected to ESXi by now and have followed the same procedure to set them up with MPIO, but no matter what I do I can only get 1Gb/s of throughput to any of the boxes.

Here's what I've done:
  • Set up two NICs on FreeNAS on different subnets with MTU 9000
  • Set up two NICs on ESXi on different subnets with MTU 9000 and iSCSI port binding
  • Enable Round Robin (Active I/O on both NICs)
  • Verify connectivity to both NICs
  • Verify both paths are active and working
  • Set RR IOPS to 1 (commands sketched below)
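
For reference, this is roughly what I run on the ESXi shell for the Round Robin and IOPS settings (a sketch; the naa ID is the device from my setup, substitute your own from esxcli storage nmp device list):

Code:
# Claim the device with the Round Robin path selection policy
esxcli storage nmp device set -d naa.6589cfc000000c3c181a4d82fe5a96b7 -P VMW_PSP_RR

# Drop the Round Robin IO operation limit from the default 1000 down to 1
esxcli storage nmp psp roundrobin deviceconfig set -d naa.6589cfc000000c3c181a4d82fe5a96b7 -t iops -I 1
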
Still I see only 1Gb/s or less across the two NICs at any given moment. If I disable RR and use VMware's Last Used Path or Fixed policy, I see it max out the single connection. I see that this problem is common on the forums, but I don't see a straightforward answer to it. My config:

ESXi:
iSCSI C: 10.0.3.41
iSCSI D: 10.0.4.41

FreeNAS:
iSCSI 1 igb0: 10.0.3.27
iSCSI 2 igb1: 10.0.4.27
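
To rule out MTU problems on these paths I also check jumbo frames end to end from the ESXi side (a sketch; vmk1/vmk2 are placeholders for whichever vmkernel ports are bound to the iSCSI adapter, and 8972 is the 9000-byte MTU minus IP/ICMP headers):

Code:
# Ping each FreeNAS portal with don't-fragment set and a near-MTU payload
vmkping -I vmk1 -d -s 8972 10.0.3.27
vmkping -I vmk2 -d -s 8972 10.0.4.27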

[Screenshots of the ESXi and FreeNAS iSCSI configuration attached]


Connections on ESXi:

Code:
[root@esxi-02:~] esxcli network ip connection list | grep 10.0.3
tcp         0       0  10.0.3.41:27551     10.0.3.27:3260       ESTABLISHED   3615380  newreno  vmm0:Zimbra_Archive
tcp         0       0  10.0.3.41:43698     10.0.3.25:3260       ESTABLISHED         0  newreno
tcp         0       0  10.0.3.41:427       0.0.0.0:0            LISTEN          34525  newreno
udp         0       0  10.0.3.41:123       0.0.0.0:0                            33892           ntpd
[root@esxi-02:~] esxcli network ip connection list | grep 10.0.4
tcp         0       0  10.0.4.41:10656     10.0.4.27:3260       ESTABLISHED         0  newreno
tcp         0       0  10.0.4.41:23905     10.0.4.25:3260       ESTABLISHED     32806  newreno  idle0
tcp         0       0  10.0.4.41:427       0.0.0.0:0            LISTEN          34525  newreno
udp         0       0  10.0.4.41:123       0.0.0.0:0                            33892           ntpd
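
The per-path and per-session state can be checked on ESXi as well (the naa device ID and vmhba adapter here are from my setup; substitute your own). Both paths should report an active state:

Code:
# List every path to the device and its state
esxcli storage core path list -d naa.6589cfc000000c3c181a4d82fe5a96b7

# List the iSCSI sessions behind those paths
esxcli iscsi session list -A vmhba38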


Network speed:
(at 12:35 I switched it from last used path to round robin)
[Screenshots of the per-NIC throughput graphs attached]


Are there settings I should be tweaking to make this go faster? Right now I'm just using vMotion to test the speed. I realize that in this example it wasn't maxing out the single connection before switching, so maybe this vMotion is just a bad example, but this has been the pattern so far. I will work on a better benchmark to show the difference here, but I'm still open to suggestions.
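
For a more controlled benchmark than vMotion, I'm thinking of running fio from a Linux guest that lives on the iSCSI datastore (a sketch; the file name, size, and queue depth are arbitrary, the point is to keep enough I/O outstanding that Round Robin has something to spread across both paths):

Code:
# Sequential reads with a deep queue; watch both igb interfaces on FreeNAS while it runs
fio --name=seqread --filename=/tmp/fio.test --rw=read --bs=1M --size=8G \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based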
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
On the vSphere CLI you need to set the IO switching (the Round Robin IOPS limit) to 1; there are a couple of forum posts regarding that topic.
Post the CLI output for the IO switching part and describe how you tested your setup.
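
That output can be pulled per device like this (a sketch; substitute the actual naa ID of the datastore device):

Code:
# Show the PSP and its device config (policy=iops, iops=...) for one device
esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx

# Or just the Round Robin settings
esxcli storage nmp psp roundrobin deviceconfig get -d naa.xxxxxxxxxxxxxxxx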
 

zeroluck

Dabbler
Joined
Feb 12, 2015
Messages
43
zambanini said: "On the vSphere CLI you need to set the IO switching (the Round Robin IOPS limit) to 1; there are a couple of forum posts regarding that topic."

I already did that:

Code:
naa.6589cfc000000c3c181a4d82fe5a96b7
   Device Display Name: Storage1-A
   Storage Array Type: VMW_SATP_ALUA
   Storage Array Type Device Config: {implicit_support=on;explicit_support=off; explicit_allow=on;alua_followover=on; action_OnRetryErrors=off; {TPG_id=1,TPG_state=AO}}
   Path Selection Policy: VMW_PSP_RR
   Path Selection Policy Device Config: {policy=iops,iops=1,bytes=10485760,useANO=0; lastPathIndex=0: NumIOsPending=0,numBytesPending=0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba38:C7:T12:L0, vmhba38:C2:T12:L0
   Is USB: false
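
In case it helps anyone else landing here: instead of setting this per device, a SATP claim rule can make Round Robin with iops=1 the default for newly discovered LUNs. A sketch (the vendor string "FreeNAS" is an assumption, check what your target actually reports with esxcli storage core device list; the rule only applies to devices claimed after it is added, e.g. after a rescan or reboot):

Code:
# Default devices reporting vendor "FreeNAS" to Round Robin with an IOPS limit of 1
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -P VMW_PSP_RR -O "iops=1" -V "FreeNAS" -e "FreeNAS iSCSI RR iops=1"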
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
I have had the same issue for a couple of years. I finally virtualized FreeNAS and switched to NFS with 10G vNICs on the same server. It's faster.
 