iSCSI MPIO setup


rptl

Dabbler
Joined
Apr 2, 2015
Messages
17
Please consider the following setup for iSCSI MPIO:

FreeNAS - 2 NICs
iSCSI: 10.1.1.1/32 and 10.1.2.1/32
Portal1 mapped to both 10.1.1.1 and 10.1.2.1 (2 nics)
1 target named "target1" mapped to Portal1.

Server1:
iSCSI - 10.1.1.2 and 10.1.2.2 (2 nics), using 10.1.1.1 and 10.1.2.1 as MPIO round-robin target (target1)

Server2:
iSCSI 10.1.1.3 and 10.1.2.3 (2 nics), using 10.1.1.1 and 10.1.2.1 as MPIO round-robin target (target1)

Am I missing something? I know I am using 4 NICs (2 per server) to map to 2 NICs on FreeNAS. Is it possible to use this setup, or do I need 2 more NICs on FreeNAS for a 1-to-1 NIC mapping?

Please let me know if I was clear enough. Thanks.
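For reference, my understanding is that the portal/target layout above corresponds to roughly this in /etc/ctl.conf on the FreeNAS side. FreeNAS generates that file from the GUI settings, and the portal-group name, IQN and zvol path below are just placeholders:

Code:
portal-group pg1 {
        listen 10.1.1.1
        listen 10.1.2.1
}

target iqn.2005-10.org.freenas.ctl:target1 {
        portal-group pg1
        lun 0 {
                path /dev/zvol/tank/iscsi-vol
        }
}

The point is that both listen addresses hang off the same portal group, so an initiator logging in through either address reaches the same target.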
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes... so you have two separate networks for MPIO, endlessly better than any of the dumb hack solutions, because two networks just magically work right.

I'm not clear what you're asking. Is it "is this right?" If so, yes. If it's something more, please clarify.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
You're definitely doing it right. It's not strictly necessary, but are you also using two separate physical switches for the 10.1.1.x and 10.1.2.x networks?

What's your iSCSI initiator? If it's VMware, there are a few other tweaks you can apply that are best done before presenting any additional targets (setting the default PSP to round robin, and a single-IOP cycle between paths).
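On ESXi that would be something along these lines; the naa. device ID is a placeholder, and whether the SATP is VMW_SATP_ALUA or VMW_SATP_DEFAULT_AA depends on how the target gets claimed:

Code:
# make round robin the default PSP for devices claimed by this SATP
esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR

# switch paths after every single I/O for an already-presented device
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxxxxxxxxxx --type iops --iops 1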
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
And just so you don't get paranoid, while two separate physical switches are highly recommended, you can absolutely do it on a single switch with two vlans, and in theory it should even work on a single switch on the same vlan (single broadcast domain) though that's a horrible idea for a production network.
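If you do go the single-switch route, the separation is just two access VLANs. On a hypothetical Cisco-style switch it would look roughly like this (VLAN and port numbers are made up):

Code:
vlan 11
 name iSCSI-A
vlan 12
 name iSCSI-B
!
interface GigabitEthernet0/1
 description FreeNAS port for 10.1.1.1
 switchport mode access
 switchport access vlan 11
!
interface GigabitEthernet0/2
 description FreeNAS port for 10.1.2.1
 switchport mode access
 switchport access vlan 12

Each initiator port then goes into VLAN 11 or 12 to match the subnet it carries.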
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
And just so you don't get paranoid, while two separate physical switches are highly recommended, you can absolutely do it on a single switch with two vlans, and in theory it should even work on a single switch on the same vlan (single broadcast domain) though that's a horrible idea for a production network.

Agreed, didn't mean to scare OP into thinking that two physical switches were necessary. They should at least be separated by VLAN though.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Agreed, didn't mean to scare OP into thinking that two physical switches were necessary. They should at least be separated by VLAN though.

Well, yeah on both counts, but also not absolutely mandatory on either count. Just "the sanest/safest/best way."
 

rptl

Dabbler
Joined
Apr 2, 2015
Messages
17
The initiators are XenServer hosts, and yes, it's possible to use 2 switches; I just got a new one and can split the ports between them.

In the meantime, what would be the benefit of adding 2 new NICs to the FreeNAS server? Like this:
FreeNAS - 4 NICs
iSCSI: 10.1.1.1/32 - 10.1.2.1/32 - 10.1.3.1/32 - 10.1.4.1/32
Portal1 mapped to both 10.1.1.1 , 10.1.2.1, 10.1.3.1, 10.1.4.1 (4 nics)
1 target named "target1" mapped to Portal1.

Server1:
iSCSI - 10.1.1.2 and 10.1.2.2 (2 nics), using 10.1.1.1 and 10.1.2.1 as MPIO round-robin target (target1)

Server2:
iSCSI 10.1.3.2 and 10.1.4.2 (2 nics), using 10.1.3.1 and 10.1.4.1 as MPIO round-robin target (target1)

Will I see any performance improvement? Also, with this setup I do not have 1 switch per subnet, only 2 identical switches. Should I put 2 subnets on each switch?

Thanks for all your replies, I really appreciate it.
 

Josh2079

Cadet
Joined
Sep 6, 2014
Messages
5
Question on this: what exactly is the purpose of using a 32-bit mask? You are already using private IP space. Doing it this way requires a router to summarize the routes in order for them to be accessible.
 

rptl

Dabbler
Joined
Apr 2, 2015
Messages
17
Actually, the netmask is /24. I only used /32 in this thread to "illustrate" single IPs.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Actually, the netmask is /24. I only used /32 in this thread to "illustrate" single IPs.

FreeBSD IPv4 doesn't support /32 netmasks on Ethernet, so I think most readers interpreted it the way you meant.

In the meantime, what would be the benefit of adding 2 new NICs to the FreeNAS server? Like this:
FreeNAS - 4 NICs
iSCSI: 10.1.1.1/32 - 10.1.2.1/32 - 10.1.3.1/32 - 10.1.4.1/32
Portal1 mapped to both 10.1.1.1 , 10.1.2.1, 10.1.3.1, 10.1.4.1 (4 nics)
1 target named "target1" mapped to Portal1.

Server1:
iSCSI - 10.1.1.2 and 10.1.2.2 (2 nics), using 10.1.1.1 and 10.1.2.1 as MPIO round-robin target (target1)

Server2:
iSCSI 10.1.3.2 and 10.1.4.2 (2 nics), using 10.1.3.1 and 10.1.4.1 as MPIO round-robin target (target1)

You could just wire them directly and avoid the switches.

This is essentially four independent networks, and if you use a switch to support two networks, you should really do it with vlans in order to separate the broadcast domains.
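To make the "four independent networks" bit concrete, the FreeNAS side would carry addressing roughly like this (interface names are hypothetical, and in practice you'd set this in the FreeNAS GUI rather than with ifconfig):

Code:
ifconfig igb0 inet 10.1.1.1 netmask 255.255.255.0
ifconfig igb1 inet 10.1.2.1 netmask 255.255.255.0
ifconfig igb2 inet 10.1.3.1 netmask 255.255.255.0
ifconfig igb3 inet 10.1.4.1 netmask 255.255.255.0

Each cable becomes its own /24, with the matching .2 address on the server at the other end of it.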
 

rptl

Dabbler
Joined
Apr 2, 2015
Messages
17
Thanks for your inputs.

Any clue whether I need crossover cables with gigabit NICs, or will regular patch cords do the job? It's been 10+ years since I last wired computers directly, and crossover cables were needed back then... :)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Thanks for your inputs.

Any clue whether I need crossover cables with gigabit NICs, or will regular patch cords do the job? It's been 10+ years since I last wired computers directly, and crossover cables were needed back then... :)

The Gigabit spec includes mandatory support for auto-crossover. So, no, regular patch cables will be fine.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Auto-MDI/MDIX was supposedly mandatory in the gigE specification, and while there are a very few exceptions to that rule, hooking two Intel cards back to back should work 100% reliably.
 

rptl

Dabbler
Joined
Apr 2, 2015
Messages
17
Sorry to bump this thread, but my question is relevant to this topic.

I followed your advice and wired both Xen servers directly to FreeNAS. I was able to map iSCSI with MPIO from Xenserver1 and it worked fine (using 10.1.1.1 and 10.1.2.1).

When I tried to map the same LUN from Xenserver2 using 10.1.3.1 and 10.1.4.1 as targets, Xen complained, saying it is already mapped, and would not create a storage repository for the same LUN. That is the way Xen works: it will not let me map the same LUN again, even over different subnets, because that storage already belongs to the same resource pool.

I see 2 solutions for accessing the LUN from Xenserver2:
1 - Since I am wired directly to FreeNAS, on Xenserver2 I just need to create a route to 10.1.1.1 using 10.1.3.1 (nic1) as the gateway, and do the same for 10.1.2.1 (nic2) using 10.1.4.1 as the gateway.

All Xenserver2 <-> FreeNAS traffic will still flow over the 10.1.3.0 and 10.1.4.0 NICs as designed, but using 10.1.1.1 and 10.1.2.1 as targets (the same addresses used by Xenserver1 and already available in the Xen pool).
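On Xenserver2 that would be roughly the following (making the routes persist across reboots would be a separate step):

Code:
ip route add 10.1.1.1/32 via 10.1.3.1
ip route add 10.1.2.1/32 via 10.1.4.1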

2 - Remove the 10.1.3.0 and 10.1.4.0 networks by changing the 10.1.3.2 (nic1) / 10.1.4.2 (nic2) IPs to 10.1.1.3 / 10.1.2.3 on Xenserver2, and 10.1.1.4 / 10.1.2.4 on FreeNAS.

For option one, I would still have 4 subnets, 1 for each NIC, but I would end up using the same endpoints for both initiators.

With the second option, I would have only 2 subnets handling 4 iSCSI connections.

I believe the second option is pretty much what I asked for in the first post, just with more NICs (4 NICs on FreeNAS and 2 NICs on each server).

Any input will be really appreciated.
 