Real basic MPIO questions

AlexBAI

Cadet
Joined: Feb 12, 2020
Messages: 4
Hi,

My environment is VMware ESXi hosts with Dell EMC storage. We are about to add a FreeNAS box. Each ESXi host has six NICs; two are dedicated to storage in an MPIO config:

Storage NICs:
NIC 1: 192.168.0.0/24 (VLAN 100)
NIC 2: 192.168.1.0/24 (VLAN 200)

VMware iSCSI NIC port binding IS enabled (I am in the midst of confirming with VMware whether I even need port binding since I have separated the subnets... I'm a little confused about this). We are looking to add our FreeNAS server into the environment and I want to set up MPIO. The FreeNAS has a four-port NIC; one of those ports will be assigned to each respective subnet as described above.

Question 1:
I have been reading here in the forums that if I have iSCSI port binding set in ESXi, I cannot use MPIO with FreeNAS. I just want to confirm whether this is true. If so, I guess I should look at LACP with NFS instead of iSCSI?

Question 2:
From the ESXi server, both of those storage connections go to separate switches, and there is a link between the switches. Our network admin is worried that if our ESXi servers have connections to each switch, there is a link between the switches, and each switch also has a connection to the FreeNAS, we will run into a switching loop. I'm assuming the FreeNAS is treated as a singular "target" even though there are multiple connections to it, and that this mitigates the problem?

Question 3:
Is there a definitive guide to setting up MPIO for FreeNAS? I see references to configurations with two portal groups, and others with one portal group with multiple IPs added. What is the best way to go?

I imagine I'll get blasted for asking stupid questions but thanks anyway for any help/advice you can give.

Thanks!
Alex
 

jgreco

Resident Grinch
Joined: May 29, 2011
Messages: 18,680
Question 1:
I have been reading here in the forums that if I have iSCSI port binding set in ESXi, I cannot use MPIO with FreeNAS. I just want to confirm whether this is true. If so, I guess I should look at LACP with NFS instead of iSCSI?

Where'd ya read THAT? Lemme know so I can go thwack them over the head.

Question 2:
From the ESXi server, both of those storage connections go to separate switches, and there is a link between the switches. Our network admin is worried that if our ESXi servers have connections to each switch, there is a link between the switches, and each switch also has a connection to the FreeNAS, we will run into a switching loop. I'm assuming the FreeNAS is treated as a singular "target" even though there are multiple connections to it, and that this mitigates the problem?

... re-reads this several times trying to puzzle it out.

I don't know what you're talking about.

This feels like there's confusion about how switching works. Every nontrivial modern network I've seen or designed in the past ten or fifteen years involves multiple switches and vlans, and generally the non-stupid design is to toss all the vlans on the switches and let them "figure it out."

So do that.

Then:

Make sure 192.168.0.0/24 is PRIMARILY on switch0 and 192.168.1.0/24 is PRIMARILY on switch1.

With ESXi port binding enabled, within the scope of this example, there isn't really a point to having VLAN 200 exist on switch0 (or VLAN 100 exist on switch1), but there's also no harm in it as long as you place the interfaces properly.

If we loosen up the discussion into non-ESXi-port-binding generalities, you would normally want to build redundancy into the network by creating a failover lagg on FreeNAS, "lagg0", configured as {primary: em0, backup: em1}, and another, "lagg1", configured as {primary: em3, backup: em2}. Then, by connecting em0 and em2 to switch0 on vlans 100 and 200, and em1 and em3 to switch1 on vlans 100 and 200, you can tolerate the loss of either switch without losing access to the NAS. By creating redundancies in your switching environment and letting RSTP sort it all out, you create high availability, and the NAS never vanishes from either vlan 100 or 200 as long as one of the switches is working. See how that works?
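For anyone wanting to see that failover lagg layout spelled out, here's a minimal sketch in FreeBSD rc.conf syntax. This is purely illustrative; on FreeNAS you would build the laggs through the GUI (Network -> Link Aggregations) rather than editing rc.conf, and the interface names (em0-em3) and IP addresses (.10 on each subnet) are assumptions taken from or invented for this example:

```
# /etc/rc.conf fragment -- illustrative only; FreeNAS manages this via its GUI.
# em0-em3 and the .10 addresses are assumptions for this example.
ifconfig_em0="up"
ifconfig_em1="up"
ifconfig_em2="up"
ifconfig_em3="up"
cloned_interfaces="lagg0 lagg1"
# "failover" laggproto: traffic uses the first laggport listed (the primary)
# and falls back to the next one only when the primary loses link.
ifconfig_lagg0="laggproto failover laggport em0 laggport em1 192.168.0.10/24"
ifconfig_lagg1="laggproto failover laggport em3 laggport em2 192.168.1.10/24"
```

Note the deliberate criss-cross: lagg0's primary (em0) is on switch0 while lagg1's primary (em3) is on switch1, so in normal operation neither vlan's storage traffic has to cross the inter-switch link.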

Your network admin is "correct" in that you CAN create situations where the networking doesn't work optimally, but that's a design issue. You design to make sure that doesn't happen except in degraded scenarios. And in degraded scenarios, it's what saves your butt.

Question 3:
Is there a definitive guide to setting up MPIO for FreeNAS? I see references to configurations with two portal groups, and others with one portal group with multiple IPs added. What is the best way to go?

One portal group is probably easier.
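Under the hood, a single portal group with both IPs just means the target listens on both storage subnets, so discovery from either side reports both paths and the initiator builds multiple sessions. As a rough sketch of what that looks like in ctl.conf(5) terms (FreeNAS generates this for you from the GUI's Sharing -> Block (iSCSI) -> Portals page; the IPs here are assumptions from the earlier example):

```
# Illustrative ctl.conf fragment -- FreeNAS generates the real one from the GUI.
# The two listen addresses (assumed .10 on each subnet) put both storage
# subnets in one portal group, giving the initiator two paths for MPIO.
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 192.168.0.10:3260
    listen 192.168.1.10:3260
}
```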
 

AlexBAI

Hey jgreco,

Thanks for the information; I'll circle back with our network admin regarding the switching design.

One final question:

"If we loosen up the discussion into non-ESXi-port-binding generalities, you would normally want to build redundancy into the network by creating a failover lagg on FreeNAS, "lagg0", configured as {primary: em0, backup: em1}, and another, "lagg1", configured as {primary: em3, backup: em2}. Then, by connecting em0 and em2 to switch0 on vlans 100 and 200, and em1 and em3 to switch1 on vlans 100 and 200, you can tolerate the loss of either switch without losing access to the NAS. By creating redundancies in your switching environment and letting RSTP sort it all out, you create high availability, and the NAS never vanishes from either vlan 100 or 200 as long as one of the switches is working. See how that works?"


For those lagg interfaces, am I OK to use them as the two IP addresses in the MPIO portal group?

Thanks again,
Alex
 

AlexBAI

My end goal here is really to achieve some level of redundancy for the network connections to the FreeNAS. I've set up MPIO between our vSphere stack and our Dell EMC storage arrays and wanted to do a similar thing with the FreeNAS. I've also read that there are some performance benefits to using MPIO over LACP, but I'd definitely qualify as a noob in this realm. So if an MPIO config doesn't benefit me much over the LACP design you described above (or if stacking an MPIO portal group on top of lagg interfaces is a big no-no), then perhaps I could just set up the design you described, forget about MPIO, and be done with it.
 

jgreco

For those lagg interfaces, am I OK to use them as the two IP addresses in the MPIO portal group?

Yes, but it's pointless if you're using it exclusively for ESXi with port binding, so in that case don't bother with the lagg. You just set up the two networks, one on each switch, and make sure that the vlans do not need to traverse the link between the switches in order to get from the ESXi vmk bound port to the associated FreeNAS em or igb or whatever type of port. ESXi storage0 net -> switch0:vlan100 -> FreeNAS em0, and then storage1 -> switch1:vlan200 -> em1 for the other net, done.
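For reference, the ESXi side of that port-binding setup can be done from the CLI roughly like this. This is a sketch only: the vmkernel port names (vmk1/vmk2), software iSCSI adapter name (vmhba64), and target IP are assumptions, so check yours first with `esxcli iscsi adapter list` and `esxcfg-vmknic -l`:

```
# Bind each storage vmkernel port to the software iSCSI adapter
# (vmk1/vmk2/vmhba64 are assumed names -- substitute your own):
esxcli iscsi networkportal add --nic vmk1 --adapter vmhba64
esxcli iscsi networkportal add --nic vmk2 --adapter vmhba64

# Point discovery at one FreeNAS portal IP (assumed .10 here); with both
# FreeNAS IPs in a single portal group, send-targets discovery returns both
# paths and ESXi builds an MPIO session per bound vmkernel port:
esxcli iscsi adapter discovery sendtarget add --adapter vmhba64 --address 192.168.0.10
esxcli iscsi adapter rescan --adapter vmhba64
```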
 