iSCSI MPIO - 4 interfaces on FreeNAS, 2 interfaces on ESXi

5mall5nail5

Dabbler
Joined
Apr 29, 2020
Messages
14
Hey all -

I have an all-flash pool (Samsung PM863 960GB x 16) that can easily saturate 2 x 10 Gbps interfaces. I have (2) dual-port Chelsio T520-CR cards in the box. I'd like to do (4) x 10 Gbps MPIO off of the FreeNAS box and have each of the (3) ESXi hosts use (2) x 10 Gbps interfaces. However, FreeNAS requires each interface to be in a different subnet. So how does one do 4 interfaces on FreeNAS to (3) hosts with 2 interfaces each? Do I just add vmkernels on the existing 2 x interfaces but with different IPs?
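In other words, something like this on each host (the vmkernel names, portgroup name, vmhba number, and IPs below are just placeholders for illustration):

```shell
# Add a second iSCSI vmkernel on the existing uplinks, but in a new subnet
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=iSCSI-B
esxcli network ip interface ipv4 set --interface-name=vmk3 \
    --ipv4=10.10.2.21 --netmask=255.255.255.0 --type=static

# Bind the new vmkernel to the software iSCSI adapter so MPIO sees the path
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk3
```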

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Assuming I'm understanding your situation correctly, there are two options here. #1 is better but requires specific capabilities on your switches.

Option 1 - Route your iSCSI traffic
If you have L3 switches that can route at line-rate (usually, in hardware with good ASICs) then consider this.

Set each ethX on FreeNAS to its own subnet. Create static routes so that the FreeNAS:eth0/eth1 subnets can route to each other, and likewise eth2/eth3. Your Host1:eth0 will then be able to connect to FreeNAS:eth0/eth1, and Host1:eth1 can connect to FreeNAS:eth2/eth3. Configure a QoS policy similar to what you'd use for FCoE: bandwidth reservations, priority flow control, and make sure no-drop policies can be satisfied.
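A rough sketch of the routing pieces, with made-up subnets, gateways, and Chelsio interface names (on FreeNAS you'd set this up through the UI rather than by hand, but the effect is the same):

```shell
# FreeNAS side: four NICs, four subnets, one IP each
ifconfig cxl0 inet 10.10.1.10/24
ifconfig cxl1 inet 10.10.2.10/24
ifconfig cxl2 inet 10.10.3.10/24
ifconfig cxl3 inet 10.10.4.10/24

# ESXi side: Host1's first iSCSI vmkernel lives in 10.10.1.0/24; add a
# static route so it can also reach the FreeNAS portal in 10.10.2.0/24
# via the L3 switch
esxcli network ip route ipv4 add --network=10.10.2.0/24 --gateway=10.10.1.1
```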

Each host has four paths to storage (two direct, two routed) and FreeNAS sends and receives on all 4x10. Loads balanced.

Option 2 - LACP
Only needs switches that can do aggregation.

Create two LACP pairs between FreeNAS and your switches, and use a source-based load-balancing algorithm. eth0/eth1 and eth2/eth3 on FreeNAS go into aggregated interfaces to a pair of switches. Hopefully the load balancing means that when Host1:eth0 and Host2:eth0 both hit the first switch, Host1:eth0 gets balanced to FreeNAS:eth0 and Host2:eth0 goes to FreeNAS:eth1. You could fiddle with the LB algorithm until you get as close to balanced as possible given your odd number of hosts (or just add a fourth!).

Each host will have 2 paths to storage, and FreeNAS will send and receive on all 4x10, but under full load each LACP pair is imbalanced 2:1.
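Under the hood on FreeBSD, each pair would look something like the following (interface names and IPs are examples; in FreeNAS you'd use the Link Aggregation screen rather than raw ifconfig):

```shell
# First LACP pair: cxl0 + cxl1 to switch A, in its own subnet
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport cxl0 laggport cxl1 10.10.1.10/24 up

# Hash on L3/L4 headers so different host IP/port pairs can land on
# different member links
ifconfig lagg0 lagghash l3,l4

# Repeat as lagg1 = cxl2 + cxl3 to switch B, in a second subnet
```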

Option 3 is you give me some of those PM863s so that you can't break 20 Gbps anymore, and that solves the problem as well. ;)

Other array vendors have special plugins or do other shenanigans that let multiple IPs on the same subnet work with VMware MPIO, but as far as I've been able to determine, there's no way to trick FreeBSD into doing this. Doesn't mean I'm going to stop trying, though ... now where were those extra NICs I had?