ESXi 6.5 iSCSI MPIO Load-balancing NICs


Kuhaku

Cadet
Joined
Sep 14, 2017
Messages
7
Hi Everyone,

I am doing the typical noob NAS load-balancing setup, since my NAS currently only has 1Gbps NICs. I have gotten the load balancing working with no issues, but there is one hassle: the connection is load balanced across both NICs, yet the iSCSI target only hits a max of ~110MB/s (1Gbps), with each NIC pushing about 500Mbps. It makes zero sense to me. I suspected a duplex mismatch at first, but if I disconnect one of the Ethernet cables in the live environment, all the data is routed over the remaining NIC at full speed. I'm wondering if anyone else here has experienced this, or if this is typical behavior for a single host attached to an iSCSI LUN. Reading through the forums, people either mention that they achieve full load balancing, or that they aggregate links and only increase total capacity. Or am I missing something stupid in my configuration? This is my first FreeNAS setup and my first time using FreeBSD, so my understanding of how everything works is zilch.

System Breakdown:
FreeNAS Host:
Code:
Build: FreeNAS-11.0-U3 (c5dcf4416)
Platform: Intel(R) Pentium(R) CPU G3258 @ 3.20GHz
Motherboard: ASUS H97M-E
Memory: 16GB DDR3 1600MHz Patriot non-ECC (please have mercy)
Boot Drive: 16GB flash drive
HDD 0: 4TB WD Red
HDD 1: 8TB Seagate
NIC 0: On-board Realtek 8111GR
NIC 1: PCIe expansion card with a Realtek 8168 chipset


ESXi Host:
Code:
ESXi 6.5 Update 1 (Build 5969303)
Server: HP ProLiant DL360p
2x CPUs: Intel Xeon E5-2643
RAM: 64GB Samsung DDR3 1600MHz ECC RDIMMs
RAID Controller: Smart Array P420i
LUN0: 4x 900GB 10k genuine HP SAS drives
LUN1: 1x 250GB Samsung 850 EVO SSD
4x NICs: Broadcom chipsets using the ntg3 driver


Here are my Reports:
[Attached reports: FreeNAS CPU, iSCSI, Memory, and Network graphs]


Now to explain my network configuration: there is no L2 switch in the middle; the FreeNAS box is directly attached to the server with two patch cables.
MTU size: 9000
NIC negotiation hard-set: 1000/Full
iSCSI Round Robin IOPS limit: 1 (for testing purposes)
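
As a sanity check (a rough sketch, not taken from this exact box; vmk1 and the naa device ID below are placeholders), the jumbo frames and Round Robin IOPS settings can be verified from the ESXi shell:
Code:
# Verify jumbo frames end-to-end: 8972-byte payload + IP/ICMP headers = 9000, with don't-fragment set
vmkping -I vmk1 -d -s 8972 172.16.10.3

# Set the Round Robin IOPS limit to 1 on the FreeNAS-backed device
# (find the naa ID with "esxcli storage nmp device list")
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxx --type=iops --iops=1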

FreeNAS:
NIC0: 172.16.10.3/29
NIC1: 172.16.10.11/29

ESXi Host:
NIC2: 172.16.10.2/29
NIC3: 172.16.10.10/29

The ESXi host configuration has vSwitch1 handling both NICs, with the default NIC teaming overridden so that each active iSCSI VMkernel port is bound to a single uplink, per ESXi's iSCSI port-binding compliance requirements.

[Attached screenshot: vmRR.PNG]
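
For reference, the port binding shown in that screenshot can also be inspected from the ESXi shell; vmhba33 below is a placeholder adapter name, so list the adapters first:
Code:
# Find the software iSCSI adapter name
esxcli iscsi adapter list

# Show which VMkernel ports are bound to it (vmhba33 is a placeholder)
esxcli iscsi networkportal list --adapter=vmhba33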

My netstat shows the expected results on FreeBSD, and so does the ESXi host:
Code:
tcp4    0    0    172.16.10.11.3260    172.16.10.10.53636    ESTABLISHED
tcp4    0    0    172.16.10.3.3260     172.16.10.2.48036     ESTABLISHED
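
The two sessions should also appear as two separate paths on the ESXi side; something like the following (the naa device ID is a placeholder) would show both paths and the active path selection policy:
Code:
# Show the SATP/PSP claiming each device
esxcli storage nmp device list

# List all paths for one device (naa.xxxx is a placeholder)
esxcli storage core path list --device=naa.xxxx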

I am thinking it might have something to do with my Storage Array Type Policy for the P420i controller; I know Dell uses a specific policy on their controllers to support MPIO.
[Attached screenshot: VM Policy.PNG]
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Kuhaku said:
I am thinking it might have something to do with my Storage Array Type Policy for the P420i controller; I know Dell uses a specific policy on their controllers to support MPIO.

Why would this have anything to do with it? You're using storage presented from FreeNAS (your ASUS box), not from the ESXi host (the HP box). The PSP and SATP you're using are correct for FreeNAS. You are also correct about Dell arrays using a different setup; for instance, an EqualLogic array uses this PSP: DELL_PSP_EQL_ROUTED, and this SATP: VMW_SATP_EQL. Another thing...iSCSI MPIO will not aggregate the link speed for a single stream, it will just give you more total throughput. For a single ESXi host, you'll see no speed increase at all. It seems you're running at max speed for your 1Gb network...and your disks (only two disks, and one is a 5400rpm WD Red...that'll get you about 110MB/s max).

Things I would look at:

- Your FreeNAS host is using Realtek NICs. These are known to be flaky and are not recommended for FreeNAS.
- Make sure you are NOT using VMkernel port binding on the ESXi side; that is not the supported method for connecting to FreeNAS. Your FreeNAS portal group should have both FreeNAS IPs associated with it via "extra portal IP", and ESXi should then point at both portal IPs as static or dynamic targets (see the sketch below). Port binding is for arrays like a Dell EqualLogic, where they use some smoke and mirrors to make network setup easier.
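
A rough sketch of that ESXi-side change, assuming the software iSCSI adapter is vmhba33 and the bound ports are vmk1/vmk2 (all placeholders for this setup):
Code:
# Remove the VMkernel port bindings (vmhba33, vmk1, vmk2 are placeholders)
esxcli iscsi networkportal remove --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal remove --adapter=vmhba33 --nic=vmk2

# Point ESXi at both FreeNAS portal IPs as dynamic (send) targets
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=172.16.10.3:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=172.16.10.11:3260

# Rescan so both paths get picked up
esxcli storage core adapter rescan --adapter=vmhba33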

Other than that, I think you're running at about your max performance with your current setup.

*If you want to use a custom ESXi SATP rule, this is what I use to claim FreeNAS disks. You have to reboot or disconnect from the storage for the disks to be claimed properly. I usually just reboot.

Code:
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "FreeNAS" -M "iSCSI Disk" -P "VMW_PSP_RR" -O "iops=3" -c "tpgs_on" -e "FreeNAS array"
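
After the reboot, a quick way to confirm the rule took effect (the grep filter is just illustrative):
Code:
# Confirm the custom claim rule is registered
esxcli storage nmp satp rule list | grep FreeNAS

# Each FreeNAS LUN should now show VMW_SATP_ALUA with VMW_PSP_RR and iops=3
esxcli storage nmp device list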
 

Kuhaku

Cadet
Joined
Sep 14, 2017
Messages
7
Nah, all good. I'm getting 200MB/s on my writes; it was just the hardware limitation of the two data disks. I have since upgraded the FreeNAS box to 32GB of RAM and installed an Intel NIC, since I had the components floating around. It's been running well :)
 