Need advice: SAN interface setup


Slu · Dabbler · Joined Jun 16, 2015 · Messages: 29
Hello all,

Let me start off by saying that this community has been incredibly helpful in my FreeNAS journey and I can't thank y'all enough.

Background:
- I have a FreeNAS SAN box set up right now with 6 mirrored vdevs.
- The FreeNAS box has 6 Ethernet ports.
- It is sharing out one 8 TB zvol via iSCSI (currently over just one port, as a test).
- I have 2 SAN switches
- 3 ESXi Hosts

Network set up:
- Plan to have 3 Ethernet connections going from the FreeNAS SAN to each SAN switch.
- Each SAN switch will have 2 connections to each ESXi host. For example, SAN switch 1 will have 2 connections each to ESXi-1, ESXi-2, and ESXi-3.

Questions:
I need advice on how to properly configure FreeNAS to handle this setup. I understand how to share an iSCSI zvol over one port to multiple ESXi hosts in VMware, but how do I set up the interfaces in FreeNAS to accommodate the network layout above?

I know I have to add all the interfaces in FreeNAS first, but do I need to add all of those interfaces as portals? Do I need to create specific initiators? Right now I just have the wildcard and ALL defaults set up. I'm just a bit confused. Please see the screenshots below for reference.

Also, I know LACP is not recommended for iSCSI, but would failover or load balancing improve performance?

Thank you all so much in advance!!

Interfaces:
[screenshot attached]

Portals:
[screenshot attached]

Initiators:
[screenshot attached]

Extents:
[screenshot attached]
 

mav@ · iXsystems · Joined Sep 29, 2011 · Messages: 1,428
I don't see any problem there. You can do it in three different ways:
1) Create an LACP lagg between each set of three FreeNAS ports and its switch, assign each lagg an IP from its own network, and add those two IPs to the iSCSI portal. In that case each VMware box will have two iSCSI connections and will dynamically balance them, while each lagg will statically balance the NAS ports.
2) Give each FreeNAS interface an IP from its own network and add those six IPs to the iSCSI portal. In that case each VMware box creates six iSCSI connections (three over each of its NICs, each connection from its own IP) and will balance them somehow. I suppose the result will be about the same as above, just with NAS port failover moved from LACP to the initiator software (rough sketch below).
3) With 6 SAN ports on the target and 2 ports on each of the 3 initiators, you could also just direct-wire the initiators to the target. Though if one of the target ports dies, the respective initiator will lose one connection, and obviously this won't scale.

I would probably prefer the first setup, with two LACP laggs, for its simplicity from the VMware point of view.
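For contrast, here is a minimal sketch of what option 2 would look like at the FreeBSD shell level, assuming the six NICs are igb0 through igb5 and using 10.20.1.0/24 through 10.20.6.0/24 as made-up subnets (in FreeNAS you would assign these addresses through the web UI so they persist, not from the shell):

# Option 2 sketch: no LACP, each NIC in its own subnet (names and addresses are illustrative).
ifconfig igb0 inet 10.20.1.1/24 up
ifconfig igb1 inet 10.20.2.1/24 up
ifconfig igb2 inet 10.20.3.1/24 up
ifconfig igb3 inet 10.20.4.1/24 up
ifconfig igb4 inet 10.20.5.1/24 up
ifconfig igb5 inet 10.20.6.1/24 up
# All six IPs would then be added to the iSCSI portal, and each ESXi initiator
# balances its connections across them with MPIO.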
 

mav@ · iXsystems · Joined Sep 29, 2011 · Messages: 1,428
From a performance point of view, 2x10GigE on the NAS would give much more than 6x1GigE in any possible configuration.
 
jpaetzel · Guest
I'd echo mav's reply. Typically you wouldn't use LACP; you'd just put each interface in its own subnet and let the iSCSI initiator balance the traffic using MPIO. However, that scheme generally wants the same number of ports on the SAN as on the VMware boxes.

What you'll want to do is put the three SAN ports going to each switch into an LACP lagg. Make sure you use a separate subnet for each lagg interface. For example, say you have igb0, igb1, and igb2 going to switch one, and igb3, igb4, and igb5 going to switch two.

Create an LACP lagg for igb0, igb1, and igb2. Give it a 10.10.1.1/24 IP.
Create an LACP lagg for igb3, igb4, and igb5. Give it a 10.10.2.1/24 IP.
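Under the hood those two laggs correspond to roughly the following FreeBSD commands (a sketch only; in FreeNAS you would create the laggs and assign the IPs through the web UI, e.g. under Network → Link Aggregations, so the configuration persists across reboots):

# Sketch of the two LACP laggs at the FreeBSD level; member NICs must be up and unconfigured.
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1 laggport igb2 10.10.1.1/24 up
ifconfig lagg1 create
ifconfig lagg1 laggproto lacp laggport igb3 laggport igb4 laggport igb5 10.10.2.1/24 up
# The corresponding ports on each SAN switch must also be configured as an LACP channel group.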

On the VMware side, plug one interface into each switch. Give each VMware box a 10.10.1.x/24 and a 10.10.2.x/24 IP (usually, to keep things sane, you'll make x the last octet of the management IP).
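If you prefer the command line over the vSphere client, the vmkernel ports for those two subnets can be created roughly like this on one host (the portgroup names, vSwitch names, and host addresses are just examples, and the assumption is that vSwitch1 and vSwitch2 already uplink to SAN switch 1 and SAN switch 2 respectively):

# Example for one ESXi host; names and addresses are placeholders.
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-A --vswitch-name=vSwitch1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.1.11 --netmask=255.255.255.0 --type=static

esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-B --vswitch-name=vSwitch2
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-B
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.2.11 --netmask=255.255.255.0 --type=static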

Ensure each VMware box can ping FreeNAS on both IPs: vmkping 10.10.1.1 and vmkping 10.10.2.1.

On the FreeNAS side, create a portal with 10.10.1.1 and 10.10.2.1 as the IPs. Initiators can be ALL:ALL.

On the VMware side, scan for targets on both IPs. Create your datastores and be sure to set the pathing policy to round robin. Because the round robin is active/passive, you'll want to create as many datastores as you have interfaces on the VMware boxes (so for your example, two). This will balance traffic across both links on the VMware side. For traffic to a given datastore you'll only ever see 100 MB/sec on the wire, and for traffic from any given host you'll only ever see 200 MB/sec on the wire. Because FreeNAS has hardware offload (VAAI), things like Storage vMotion and deploying templates will run at ZFS pool speed, not at network bandwidth speed.
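If you want to script the scan and the pathing policy, the equivalent esxcli commands look roughly like this (vmhba33 and the naa. identifier below are placeholders; list your own adapter and devices first):

# Add both portal IPs to the software iSCSI adapter's dynamic discovery, then rescan.
esxcli iscsi adapter list
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.1.1
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.2.1
esxcli storage core adapter rescan --adapter=vmhba33

# Set the path selection policy for each FreeNAS LUN to round robin.
esxcli storage nmp device list
esxcli storage nmp device set --device=naa.XXXXXXXXXXXXXXXX --psp=VMW_PSP_RR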
 

Slu · Dabbler · Joined Jun 16, 2015 · Messages: 29
Thank you very much for your replies! I really appreciate you guys taking the time to help me.
 