LACP with Cisco 3560 switch, laggproto lacp lagghash l2,l3,l4


RegularJoe
Hi All,

LACP should not be much of an issue, but it appears to be one on FreeNAS 9.3 / FreeBSD 9.3.

Does anyone know what ifconfig command I should use to match the Cisco src-dst-ip hash on a FreeNAS/FreeBSD lagg interface? I have tried "ifconfig lagg1 laggproto lacp lagghash l2".
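For reference, the closest match to the switch's src-dst-ip should be an IP-only (L3) hash on the FreeBSD side. Assuming the lagghash option on this build accepts the individual l2/l3/l4 flags the same way the output below reports them (a sketch; I have not verified this on 9.3), that would be:

ifconfig lagg1 lagghash l3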

From the FreeNAS box, here is the ifconfig output for lagg1:
ifconfig lagg1
lagg1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=4019b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,VLAN_HWTSO>
ether 00:15:17:92:83:a9
nd6 options=9<PERFORMNUD,IFDISABLED>
media: Ethernet autoselect
status: active
laggproto lacp lagghash l2,l3,l4
laggport: em3 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
laggport: em2 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
laggport: em1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
laggport: em0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>

=============================================

The Cisco 3560 I am testing with is currently set for "port-channel load-balance src-dst-ip".

This switch only supports these hashing options:
port-channel load-balance ?
dst-ip Dst IP Addr
dst-mac Dst Mac Addr
src-dst-ip Src XOR Dst IP Addr
src-dst-mac Src XOR Dst Mac Addr
src-ip Src IP Addr
src-mac Src Mac Addr


I could not find much on L2, L3, and L4 hashing with Google. I know some much larger switches support a lot more hashing options; for example, here are the hash options on my 3850:

Switch(config)#port-channel load-balance ?
dst-ip Dst IP Addr
dst-mac Dst Mac Addr
dst-mixed-ip-port Dst IP Addr and TCP/UDP Port
dst-port Dst TCP/UDP Port
extended Extended Load Balance Methods
src-dst-ip Src XOR Dst IP Addr
src-dst-mac Src XOR Dst Mac Addr
src-dst-mixed-ip-port Src XOR Dst IP Addr and TCP/UDP Port
src-dst-port Src XOR Dst TCP/UDP Port
src-ip Src IP Addr
src-mac Src Mac Addr
src-mixed-ip-port Src IP Addr and TCP/UDP Port
src-port Src TCP/UDP Port

Switch(config)#port-channel load-balance extended ?
dst-ip Dest IP
dst-mac Dest MAC
dst-port Dest Port
ipv6-label IPV6 Flow Label
l3-proto L3 Protocol
src-ip Src IP
src-mac Src MAC
src-port Src Port
<cr>

=============================================

As you can see here, VMware is round-robin load balancing the traffic to vlan991-994 very well. The physical interfaces are em0-3, and they are not looking good from a load-balancing standpoint (inbound traffic, for example, lands almost entirely on em0):

/0 /1 /2 /3 /4 /5 /6 /7 /8 /9 /10
Load Average |||||

Interface Traffic Peak Total
vlan990 in 0.000 KB/s 1.877 KB/s 60.038 KB
out 0.000 KB/s 1.985 KB/s 62.167 KB

vlan994 in 73.537 KB/s 42.501 MB/s 25.217 GB
out 13.319 MB/s 20.975 MB/s 23.355 GB

vlan993 in 65.365 KB/s 38.893 MB/s 26.081 GB
out 11.708 MB/s 23.106 MB/s 23.548 GB

vlan992 in 71.980 KB/s 45.410 MB/s 25.576 GB
out 12.867 MB/s 19.849 MB/s 23.528 GB

vlan991 in 76.026 KB/s 41.777 MB/s 25.863 GB
out 13.323 MB/s 21.099 MB/s 23.466 GB

lagg1 in 286.911 KB/s 117.013 MB/s 102.736 GB
out 51.217 MB/s 79.533 MB/s 93.896 GB

lagg0 in 0.116 KB/s 2.244 KB/s 3.833 MB
out 1.809 KB/s 8.894 KB/s 12.613 MB

lo0 in 0.000 KB/s 11.223 KB/s 3.697 MB
out 0.000 KB/s 11.223 KB/s 3.697 MB

em3 in 0.000 KB/s 0.120 KB/s 50.018 KB
out 25.027 MB/s 44.006 MB/s 33.380 GB

em2 in 0.000 KB/s 0.120 KB/s 38.422 KB
out 26.190 MB/s 40.267 MB/s 36.277 GB

em1 in 0.000 KB/s 0.120 KB/s 15.738 GB
out 0.000 KB/s 0.496 KB/s 15.880 GB

em0 in 287.032 KB/s 117.013 MB/s 86.998 GB
out 0.000 KB/s 2.482 KB/s 8.360 GB

igb1 in 0.000 KB/s 0.973 KB/s 657.866 KB
out 0.000 KB/s 0.000 KB/s 0.000 KB

igb0 in 0.116 KB/s 2.211 KB/s 3.791 MB
out 1.809 KB/s 8.894 KB/s 12.613 MB


================================================
The Cisco switch shows 0 bits and a load of 0x00 on every port. The load should be more than 0x00, since I am doing LACP with src-dst-ip hashing and the least significant bit of the IP addresses differs between the FreeNAS and VMware ends. Hashing does not look like it is working right.
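For what it's worth, here is a minimal sketch of how a src-dst-ip XOR hash is commonly described as picking a member link. The real ASIC algorithm is Cisco-proprietary, and the addresses below are made up, but it shows how easily this kind of hash polarizes:

import ipaddress

def member_link(src, dst, nports=4):
    # XOR the two IPv4 addresses and keep just enough low-order bits
    # to index one of the member ports (nports must be a power of two,
    # as on this 4-link bundle).
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    return (s ^ d) & (nports - 1)

# Hypothetical addressing: one FreeNAS IP and one VMware IP per VLAN.
# The per-VLAN octet is identical on both ends, so it cancels out in
# the XOR, and every pair lands on the same member link.
pairs = {991: ("10.99.1.10", "10.99.1.20"),
         992: ("10.99.2.10", "10.99.2.20"),
         993: ("10.99.3.10", "10.99.3.20"),
         994: ("10.99.4.10", "10.99.4.20")}
for vlan, (src, dst) in sorted(pairs.items()):
    print("vlan%d: member link %d" % (vlan, member_link(src, dst)))

If the low-order bits of every src/dst pair XOR to the same value, all flows ride one link, which would line up with the lopsided em0-em3 counters above.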

c3560SAN#sh etherchannel 3 det
Group state = L2
Ports: 4 Maxports = 16
Port-channels: 1 Max Port-channels = 16
Protocol: LACP
Minimum Links: 0


Ports in the group:
-------------------
Port: Gi0/5
------------

Port state = Up Mstr Assoc In-Bndl
Channel group = 3 Mode = Active Gcchange = -
Port-channel = Po3 GC = - Pseudo port-channel = Po3
Port index = 0 Load = 0x00 Protocol = LACP

Flags: S - Device is sending Slow LACPDUs F - Device is sending fast LACPDUs.
A - Device is in active mode. P - Device is in passive mode.

Local information:
LACP port Admin Oper Port Port
Port Flags State Priority Key Key Number State
Gi0/5 SA bndl 32768 0x3 0x3 0x106 0x3D

Partner's information:

LACP port Admin Oper Port Port
Port Flags Priority Dev ID Age key Key Number State
Gi0/5 SA 32768 0015.1792.83a9 22s 0x0 0x1EB 0x9 0x3D

Age of the port in the current state: 0d:00h:21m:21s

Port: Gi0/6
------------

Port state = Up Mstr Assoc In-Bndl
Channel group = 3 Mode = Active Gcchange = -
Port-channel = Po3 GC = - Pseudo port-channel = Po3
Port index = 0 Load = 0x00 Protocol = LACP

Flags: S - Device is sending Slow LACPDUs F - Device is sending fast LACPDUs.
A - Device is in active mode. P - Device is in passive mode.

Local information:
LACP port Admin Oper Port Port
Port Flags State Priority Key Key Number State
Gi0/6 SA bndl 32768 0x3 0x3 0x107 0x3D

Partner's information:

LACP port Admin Oper Port Port
Port Flags Priority Dev ID Age key Key Number State
Gi0/6 SA 32768 0015.1792.83a9 22s 0x0 0x1EB 0x8 0x3D

Age of the port in the current state: 0d:00h:21m:21s

Port: Gi0/7
------------

Port state = Up Mstr Assoc In-Bndl
Channel group = 3 Mode = Active Gcchange = -
Port-channel = Po3 GC = - Pseudo port-channel = Po3
Port index = 0 Load = 0x00 Protocol = LACP

Flags: S - Device is sending Slow LACPDUs F - Device is sending fast LACPDUs.
A - Device is in active mode. P - Device is in passive mode.

Local information:
LACP port Admin Oper Port Port
Port Flags State Priority Key Key Number State
Gi0/7 SA bndl 32768 0x3 0x3 0x108 0x3D

Partner's information:

LACP port Admin Oper Port Port
Port Flags Priority Dev ID Age key Key Number State
Gi0/7 SA 32768 0015.1792.83a9 28s 0x0 0x1EB 0xA 0x3D

Age of the port in the current state: 0d:00h:21m:21s

Port: Gi0/8
------------

Port state = Up Mstr Assoc In-Bndl
Channel group = 3 Mode = Active Gcchange = -
Port-channel = Po3 GC = - Pseudo port-channel = Po3
Port index = 0 Load = 0x00 Protocol = LACP

Flags: S - Device is sending Slow LACPDUs F - Device is sending fast LACPDUs.
A - Device is in active mode. P - Device is in passive mode.

Local information:
LACP port Admin Oper Port Port
Port Flags State Priority Key Key Number State
Gi0/8 SA bndl 32768 0x3 0x3 0x109 0x3D

Partner's information:

LACP port Admin Oper Port Port
Port Flags Priority Dev ID Age key Key Number State
Gi0/8 SA 32768 0015.1792.83a9 28s 0x0 0x1EB 0xB 0x3D

Age of the port in the current state: 0d:00h:21m:27s

Port-channels in the group:
---------------------------

Port-channel: Po3 (Primary Aggregator)

------------

Age of the Port-channel = 0d:09h:30m:46s
Logical slot/port = 2/3 Number of ports = 4
HotStandBy port = null
Port state = Port-channel Ag-Inuse
Protocol = LACP
Port security = Disabled

Ports in the Port-channel:

Index Load Port EC state No of bits
------+------+------+------------------+-----------
0 00 Gi0/5 Active 0
0 00 Gi0/6 Active 0
0 00 Gi0/7 Active 0
0 00 Gi0/8 Active 0

Time since last port bundled: 0d:00h:21m:27s Gi0/7
Time since last port Un-bundled: 0d:00h:21m:29s Gi0/8
 

RegularJoe
Hi All,

Part of the issue might be that VMware thinks the iSCSI array only supports a single connection/path. I am running this in a lab: FreeNAS-9.3-STABLE-201506292130
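A quick way to confirm what ESXi sees (hedged sketch; the device ID below is a placeholder for the actual LUN):

# list the paths ESXi has to the LUN; without iSCSI port binding this often shows just one
esxcli storage core path list --device=naa.XXXXXXXX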

Thanks,
Joe
 

RegularJoe
I have it working, but...

sh etherchannel 3 det | i Active

shows the load as 00 on all interfaces. I am 100% sure Cisco knows how to do EtherChannel...
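If this IOS image has it (hedged: the command exists on many Catalyst builds, but I have not verified it on this exact 3560 image), you can query the hash result directly for a given address pair; the IPs are placeholders:

c3560SAN# test etherchannel load-balance interface port-channel 3 ip <src-ip> <dst-ip>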

Thanks,
Joe
 

Dave Genton
Yes, you are correct, Cisco does know how to do EtherChannel; they invented it. You do, however, have older hardware with limited options in the tuples available in the chipset and IOS alike. Today almost all data center hardware uses 7-tuple algorithms in silicon that include the source and destination MAC addresses, IP addresses, and Layer 4 ports, with the 802.1Q VLAN tag as the tie-breaker. That has become common since 10Gb chips started shipping, but 3560s have been around for a while. I have a 3560G in my lab, among others; it is my dedicated server switch in my home office. My FreeNAS and VMware servers are directly connected to that switch for both block and file access. All connections are dual links in LACP port-channels.

That being said, we are limited in options with this switch, but the goal is always to get both sides as close as possible. Load balancing has become quite an issue in data centers recently in that they have become polarized: the typical access, distribution, and core layout, all using the same algorithms, tends to create hot paths while other links remain under-utilized. That is why we stagger the methods used between layers, but between two directly connected devices the methods need to match, or performance suffers badly. I've seen mismatches between two switches cause consistent ping drops: ping 5 times and 2 or 3 would drop, then two, then three, then two, and so on. We don't have that type of inconsistency here, nor an odd number of links in the port-channel, but it's simply not a 50/50 distribution either. If you want to improve it, do what VMware likes doing and set the hash to source MAC address. I do that often in my lab, especially when VMware is hosting my servers as guests. At least that way you stay at Layer 2, have more options between the end-device NICs and the switches you have, and get a little better distribution of traffic across the links.
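On the 3560 that is the global configuration command below, matching the option list posted earlier in the thread:

c3560SAN(config)# port-channel load-balance src-mac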

With iSCSI it becomes more complex. You can use port-channels to load balance iSCSI flows across a bundle, but you can also use MPIO across both fabrics, which opens up much higher performance possibilities across the board. Depending on the components and vendors used, you can even multiplex at the TCP level by using multiple IP addresses in each subnet: each IP stack has a theoretical throughput limit on a given hardware platform, and adding IP addresses lets TCP multiplexing raise bandwidth beyond 10Gb easily. I don't mess with the FreeBSD side beyond setting it to LACP, but I often alter the Cisco side to L2 with src or src-dst MAC pairs. You could also just use the "loadbalance" lagg protocol and lose LACP altogether; that is closer to what VMware does and will likely give you better results across the board. Stay away from LACP on VMware ESXi; the IP-hash algorithm is much worse than what you are seeing now.
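For the MPIO route, here is a minimal sketch of the ESXi side, assuming two vmkernel ports on separate subnets bound to the software iSCSI adapter plus round-robin pathing; vmhba33, vmk1/vmk2, and the naa device ID are placeholders:

# bind one vmkernel port per physical NIC to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# set the LUN's path selection policy to round robin
esxcli storage nmp device set --device=naa.XXXXXXXX --psp=VMW_PSP_RR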

good luck,
d-
 

RegularJoe
VMware is picky. The FreeBSD guys need to describe things the same way everyone else does: is lagghash l2,l3,l4 on FreeBSD the same as VMware option 16 below, and the same as src-mixed-ip-port (Src IP Addr and TCP/UDP Port) on a Cisco 3850? (See the sketch after the list.)

vSphere 5.5 supports these load balancing types:
  1. Destination IP address
  2. Destination IP address and TCP/UDP port
  3. Destination IP address and VLAN
  4. Destination IP address, TCP/UDP port and VLAN
  5. Destination MAC address
  6. Destination TCP/UDP port
  7. Source IP address
  8. Source IP address and TCP/UDP port
  9. Source IP address and VLAN
  10. Source IP address, TCP/UDP port and VLAN
  11. Source MAC address
  12. Source TCP/UDP port
  13. Source and destination IP address
  14. Source and destination IP address and TCP/UDP port
  15. Source and destination IP address and VLAN
  16. Source and destination IP address, TCP/UDP port and VLAN
  17. Source and destination MAC address
  18. Source and destination TCP/UDP port
  19. Source port ID
  20. VLAN
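As promised above, here is a hedged sketch of which fields each FreeBSD lagghash flag contributes, based only on the l2/l3/l4 descriptions in ifconfig(8); the kernel's actual hash function differs, and the flow values below are made up:

from zlib import crc32

def lagg_hash(flags, src_mac, dst_mac, src_ip, dst_ip, sport, dport, nports=4):
    # Per ifconfig(8): l2 = source/destination MAC, l3 = source/
    # destination IP, l4 = source/destination TCP/UDP port.
    material = b""
    if "l2" in flags:
        material += (src_mac + dst_mac).encode()
    if "l3" in flags:
        material += (src_ip + dst_ip).encode()
    if "l4" in flags:
        material += sport.to_bytes(2, "big") + dport.to_bytes(2, "big")
    return crc32(material) % nports

# With l4 included, each iSCSI TCP connection (distinct source port)
# can land on a different member link even between one pair of IPs.
for sport in (50001, 50002, 50003, 50004):
    print(sport, lagg_hash({"l2", "l3", "l4"},
                           "00:15:17:92:83:a9", "00:1b:54:c2:00:01",
                           "10.99.1.10", "10.99.1.20", sport, 3260))

On that reading, lagghash l2,l3,l4 hashes both source and destination at every layer, so the closest 3850 analogue would be src-dst-mixed-ip-port rather than the src-only variant, and the closest vSphere options would be 14 or 16, with the MAC layer folded in on top.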
 