Port Channel vs Dedicated Interfaces


MorfiusX

Dabbler
Joined
Apr 18, 2012
Messages
12
I am working on rebuilding my VMware lab. My FreeNAS machine is getting a new SuperMicro board with dual Intel NICs. I will be using iSCSI for VMware shared storage.

The question is this: dedicate one interface to iSCSI and one to everything else, or combine them in a Port Channel / EtherChannel / LACP?

I am leaning towards a Port Channel with IP hashing to best utilize the links.
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
Never use bonding with iSCSI. If you use two independent NICs, iSCSI can run round-robin MPIO across both of them. If you use LACP, a single iSCSI session will cap at 1Gbps.
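
For reference, a rough sketch of what the round-robin side looks like from the ESXi CLI once both paths are up (the naa device ID is a placeholder for your own LUN, and the IOPS tweak is optional):

esxcli storage nmp device set --device naa.XXXXXXXXXXXXXXXX --psp VMW_PSP_RR
# optionally switch paths more often than the default of 1000 IOs per path
esxcli storage nmp psp roundrobin deviceconfig set --device naa.XXXXXXXXXXXXXXXX --type iops --iops 1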
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Never use bonding with iSCSI. If you use two independent NICs, iSCSI can run round-robin MPIO across both of them. If you use LACP, a single iSCSI session will cap at 1Gbps.

To add to this, put your two iSCSI interfaces on non-overlapping subnets, e.g. em0 192.168.1.1/24 and em1 192.168.2.1/24, to protect your network against itself and stop it from trying to route storage traffic.
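
Purely as an illustration of the idea (in FreeNAS you would set this through the GUI under Network -> Interfaces; em0/em1 and the addresses are just the example above), the FreeBSD-level equivalent is along these lines:

ifconfig em0 inet 192.168.1.1 netmask 255.255.255.0
ifconfig em1 inet 192.168.2.1 netmask 255.255.255.0
# no default gateway on either storage interface, so nothing ever tries to route iSCSI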
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Never use bonding with iSCSI. If you use two independent NICs, iSCSI can run round-robin MPIO across both of them. If you use LACP, a single iSCSI session will cap at 1Gbps.

To add to this, put your two iSCSI interfaces on non-overlapping subnets, e.g. em0 192.168.1.1/24 and em1 192.168.2.1/24, to protect your network against itself and stop it from trying to route storage traffic.

And for the OP... he should have known this. This is going to be a major time-sink for him because he probably doesn't understand these basic fundamentals. I wasn't even going to comment because I'm just tired of saying some of these things over and over.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
And for the OP... he should have known this. This is going to be a major time-sink for him because he probably doesn't understand these basic fundamentals. I wasn't even going to comment because I'm just tired of saying some of these things over and over.

To be fair, this is what comes with being the popular, easy option. No offense to OP intended, but you get the "TL;DR" crowd who wants to jump in headfirst rather than reading a couple hundred pages of forum posts/guidelines (and who can blame them there?)

iSCSI especially is very easy to set up, but also very easy to set up wrong. FreeNAS I feel falls under that umbrella as well.
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
iSCSI especially is very easy to set up, but also very easy to set up wrong. FreeNAS I feel falls under that umbrella as well.
Even highly supported, well documented, well known products like EMC are easy to set up wrong. I learned my lesson... RTFM first. :)
 

MorfiusX

Dabbler
Joined
Apr 18, 2012
Messages
12
I think you guys are missing the point. I know the correct way to do it when you have multiple interfaces dedicated to iSCSI. I've been designing and building data centers for years. This is for a simple home lab: 5(ish) VMs, a single vdev with 6 WD Red drives, 99.5% idle, 0.4% serving media, and 0.1% lab work. I can't saturate a single 1Gb link with that anyway. I have one nested ESXi machine. I said I was only going to use the two interfaces. iSCSI going down one interface meets my needs; I was simply asking for thoughts on the second interface.

Let's do a thought experiment. Hypothetically: replace the dual 1Gb interfaces with dual 10Gb interfaces. You can't add more networking, and you must use iSCSI plus file-mode protocols (CIFS, FTP, NFS, etc.). How do you configure them?

And to CyberJock: I get it, attack me personally, you need to do that for some reason, have fun.
 

Serverbaboon

Dabbler
Joined
Aug 12, 2013
Messages
45
There are people who are doing these things for educational and training purposes, and sometimes that means doing things that are not the most efficient or the easiest. I am constantly experimenting with ESXi at home because I can do things there that I cannot do at work, so I have configured things which do not make sense from a FreeNAS point of view but made more sense from a VMware point of view. These were not performance changes but functional changes.

You can use MPIO if you use different subnets on your multiple NICs, accepting that one of your FreeNAS interfaces will only be available to your ESXi box and not your desktop, but I think things will get messy; you really need a VLAN-capable switch.

So here is what I did for a while with my NL36, because I wanted to test MPIO and NFS for ESXi. It worked with no issues with an NFS datastore and an iSCSI datastore. I had trouble getting multiple targets to work, but I assume that was more a case of me not getting the iSCSI target configured properly. The only reason I am not doing this at the moment is that I have switched to an NL54 and am still deciding how I will implement iSCSI from a performance and space-usage point of view.

N.B. Try this at your own risk; it worked for me, but maybe I was lucky. I can't remember whether I hit issues, but I got it working. If you do not understand what is being done, then don't do it. Make sure you have console access in case you screw up or discover undocumented behaviour, such as the NL36 onboard NIC not being jumbo-frame capable. You will lose network access at some point, so make sure no one is streaming movies or you won't be popular.

My config used LACP, but it may also work with the non-switch-assisted methods.

You will need an L2 VLAN-capable switch (these are cheaper these days); mine is also LACP capable, so I could use switch-assisted LACP.

If you have the room on your switch, create your bonded port pair and leave your FreeNAS box plugged into a normal port; you may then be able to create the link aggregation from the web interface and swap the FreeNAS NICs over to the LACP ports you created.

I did this from scratch, so I created the lagg on the console while already plugged into the LACP ports.
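
Roughly, the raw FreeBSD commands underneath that console step look like this (a sketch only; in FreeNAS you would normally do it from the console setup menu or GUI so it persists across reboots, and em0/em1 are assumed NIC names):

ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport em0 laggport em1
ifconfig lagg0 inet 192.168.50.100 netmask 255.255.255.0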

I also created three more VLANs on my switch and added these, along with the default VLAN, to the FreeNAS and ESXi ports (VLANs 1, 20, 30, 40).

Once you have logged on to the web interface again, you should have a bonded interface on your main LAN. Now add two or three VLANs to the lagg0 interface: two for iSCSI (I did three, as I also wanted NFS).

I ended up with the following addressing (sketched in raw ifconfig form below):

Default VLAN - 192.168.50.100 - main management interface and CIFS access from my PC

ESXi only:
NFS - VLAN 20 - 10.10.20.100
iSCSI1 - VLAN 30 - 10.10.30.100
iSCSI2 - VLAN 40 - 10.10.40.100
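
For illustration, the raw ifconfig equivalent of those VLAN interfaces on top of lagg0 is roughly as follows (FreeNAS does this for you via Network -> VLANs; shown only so it's clear what ends up on the wire):

ifconfig vlan20 create vlan 20 vlandev lagg0
ifconfig vlan20 inet 10.10.20.100 netmask 255.255.255.0
ifconfig vlan30 create vlan 30 vlandev lagg0
ifconfig vlan30 inet 10.10.30.100 netmask 255.255.255.0
ifconfig vlan40 create vlan 40 vlandev lagg0
ifconfig vlan40 inet 10.10.40.100 netmask 255.255.255.0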

You can now create your NFS datastore share on one VLAN interface by using an authorised network, and set up your iSCSI with a portal on each of the two remaining VLANs.

The NFS authorised-network bit is not necessary; I just didn't want other PCs accessing the VM datastore. Also make sure you haven't bound NFS to the wrong IP addresses (I bound mine to 192.168.50.100 and 10.10.20.100).
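
For what it's worth, that authorised-network setting just ends up as a network restriction in the /etc/exports file FreeNAS generates, something roughly like this (the dataset path is a made-up placeholder, and FreeNAS adds its own extra options):

/mnt/tank/vm_nfs -network 10.10.20.0 -mask 255.255.255.0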

On your ESXi box, create the VMkernel port groups you require; in my case, one for NFS on 10.10.20.75 and two iSCSI VMkernel port groups (10.10.30.75 and 10.10.40.75).
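
If anyone wants the CLI version, the esxcli equivalent of that VMkernel setup is roughly as follows (vSwitch1, the vmk numbers, and the port group names are assumptions for this sketch; adjust for your host):

esxcli network vswitch standard portgroup add --portgroup-name=iSCSI1 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI1 --vlan-id=30
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI1
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.30.75 --netmask=255.255.255.0 --type=static
# same again for iSCSI2 (VLAN 40, 10.10.40.75) and NFS (VLAN 20, 10.10.20.75)

With the VMkernel ports in separate subnets like this, each iSCSI session should naturally leave via the matching vmk interface.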

This gave me:

Multipath iSCSI from an ESXi perspective, though it relied on LACP correctly balancing the two different MAC/IP address pairs coming from the ESXi server.
NFS accessible from my main VLAN and from the ESXi NFS VLAN, hopefully load-balanced across both interfaces.

Using the reporting page in FreeNAS, it looked like I was getting usage across all VLANs and NICs.

Blue touch paper lit.......

(oh I lied these are not my ip addresses)

Will just add that this is home-lab stuff, NOT production.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I know the correct way to do it when you have multiple interfaces dedicated to iSCSI.

We kind of start to worry when we hear people say things like that, because it is entirely too common for storage people to think that the "correct" way is to use a single subnet, and rely on their hardware to "do the right thing." And they'll be adamant about that being the "correct" way to do it. But a proper multipath iSCSI environment actually does involve separate switching and separate networks.

You didn't really provide much information and I think that this has happened often enough now that people have just started to assume the worst. I can see why cyberjock reached his conclusion.
 

MorfiusX

Dabbler
Joined
Apr 18, 2012
Messages
12
We kind of start to worry when we hear people say things like that, because it is entirely too common for storage people to think that the "correct" way is to use a single subnet, and rely on their hardware to "do the right thing." And they'll be adamant about that being the "correct" way to do it. But a proper multipath iSCSI environment actually does involve separate switching and separate networks.

You didn't really provide much information and I think that this has happened often enough now that people have just started to assume the worst. I can see why cyberjock reached his conclusion.

I architect using UCS and VNX/VMAX on a daily basis. I completely get the concept of dual fabrics, which is how I architect iSCSI as well. I get your point about the amount of detail provided. I thought it was a simple question, but I can see how some may ask that same question without understanding that you wouldn't want to run a production workload in that manner.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I think you guys are missing the point. I know the correct way to do it when you have multiple interfaces dedicated to iSCSI. I've been designing and building data centers for years. This is for a simple home lab: 5(ish) VMs, a single vdev with 6 WD Red drives, 99.5% idle, 0.4% serving media, and 0.1% lab work. I can't saturate a single 1Gb link with that anyway. I have one nested ESXi machine. I said I was only going to use the two interfaces. iSCSI going down one interface meets my needs; I was simply asking for thoughts on the second interface.

As jgreco pointed out, none of this was in the initial post, so you can see why we got the idea that you were trying to do Bad Things.

Let's do a thought experiment. Hypothetically: replace the dual 1Gb interfaces with dual 10Gb interfaces. You can't add more networking, and you must use iSCSI plus file-mode protocols (CIFS, FTP, NFS, etc.). How do you configure them?

iSCSI MPIO, with a fixed path in VMware bound to the first interface and the second as failover, and the file-mode protocols on the second link.
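
A rough sketch of that on the ESXi side (the device and path IDs are placeholders; list your actual paths with 'esxcli storage nmp path list' first):

esxcli storage nmp device set --device naa.XXXXXXXXXXXXXXXX --psp VMW_PSP_FIXED
esxcli storage nmp psp fixed deviceconfig set --device naa.XXXXXXXXXXXXXXXX --path vmhba33:C0:T0:L0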

And to CyberJock: I get it, attack me personally, you need to do that for some reason, have fun.

Don't take it personally. He barks himself hoarse going after people who see fit to run ZFS on their old Pentium 4 Dell with 2GB of cheap RAM and a replacement power supply they got at Billy Bob's Bait Shop and Computer Emporium, so when he sees a post that's light on information and leaning towards the side of poor practice (and as you yourself admitted, bonding iSCSI isn't the right way to do it at all) he snaps.

Maybe I haven't been here long enough to be jaded yet but I still try to be kind.

Unless you deliberately ignored the sigline about ECC, then Honey Badger don't care about your zpool. ;)
 