Slightly off-topic question: would you recommend using LACP between two Netgear GS724T switches, or is it also worthless?
My understanding is that if I aggregate 4 ports, the switches will have a 4 Gb/s connection between them.
So if I'm getting it right: LACP means an automatic LAG configuration, which dynamically adapts if, for example, one link fails, by exchanging LACP packets.
A normal (static) LAG is configured once by hand and always stays like that, so in the same case, if one port goes down, you have to fix it manually.
So LACP is just easier to use, but a hand-configured LAG should give the same performance?
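On the FreeBSD/FreeNAS side, the two variants differ only in the lagg protocol; a minimal sketch (the interface names igb0/igb1 are assumptions, adjust to your hardware):

```shell
# Hypothetical FreeBSD lagg(4) setup -- run as root; igb0/igb1 assumed.
ifconfig lagg0 create

# Dynamic (LACP): negotiates the LAG with the switch via LACPDUs and
# automatically drops a member whose negotiation or link fails.
ifconfig lagg0 up laggproto lacp laggport igb0 laggport igb1

# Static alternative: "loadbalance" just hashes flows across members
# with no negotiation, so both ends must be configured to match by hand:
# ifconfig lagg0 up laggproto loadbalance laggport igb0 laggport igb1
```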
I'm asking because the TP-Link TL-SG1016DE supports LAG but not LACP, so if I configure it manually, I should still be able to tie the 4 network cards of a Supermicro A1SAi-2550F together and get full gigabit speed to 4 clients simultaneously, right?
Or should I get an LACP capable switch? Haven't bought anything yet.
802.3ad means LACP-capable, right?
What any individual vendor means by "link aggregation" is basically this: the marketing department takes what the engineering department tells them (which might be "static link aggregation"), sees the word "aggregation", and trumpets that the product is capable of link aggregation, and then someone else tacks on "LACP."
This is why I often cynically assume that especially low-end products claiming LACP might not actually do LACP. But as previously noted, even LACP doesn't mean you'll get the full aggregated bandwidth.
No version of LACP ever means that. It means there is theoretically the potential for that to happen, if you're really lucky, or if you're really OCD and arrange things just perfectly so that it happens in practice. On average, if you want four network cards in an LACP lagg to serve clients and be maxed out, you probably need more like a dozen active clients, and even then they won't be served evenly. You probably need dozens of active clients to get approximately "fair" service.
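The reason you need that many clients can be shown with a toy simulation. This is an assumption-laden sketch: real switches hash MAC/IP/port fields of each flow, and here I just hash a made-up client ID with `cksum` onto 4 member links.

```shell
#!/bin/sh
# Toy model of per-flow hashing (assumption: the switch picks one member
# link per client by hashing its address, as both static LAG and LACP do).
NUM_LINKS=4

links() {  # print "count link" pairs for $1 simulated clients
  seq 1 "$1" | while read -r c; do
    h=$(printf '%s' "client-$c" | cksum | cut -d' ' -f1)
    echo $((h % NUM_LINKS))
  done | sort | uniq -c
}

echo "4 clients:";  links 4    # often leaves a link idle
echo "48 clients:"; links 48   # spreads out, but still not perfectly even

# Chance that 4 random clients happen to land on 4 distinct links: 4!/4^4
awk 'BEGIN{print "P(4 clients use all 4 links) =", 24/256}'
```

With only four clients, the hash frequently doubles up on a link, which is why a single client can never exceed 1 Gb/s and even four clients rarely hit 4 Gb/s combined.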
With static link aggregation there's the problem of packets being sent out of order, right?
Which LACP prevents, so I should be better off with a switch that officially supports 802.3ad.
These Zyxel 1900-series switches, for example, look quite nice for the price.
Great read about LACP.
I'm finally starting to think (not concluding) that LACP might be causing trouble in my setup. I have read through this thread and the forum, though, and can't find anyone facing the same issue as me.
I'm posting a link to my problem. Thanks to anybody who has a suggestion about it.
I finally got LACP going on my c2750d4i using a RT-AC88U.
Turns out it's far easier and less messy to configure the link aggregation through startup console options 1/2 in IPMI than via the GUI.
Even tweaking it via the console menu after initially setting it up in the GUI (and losing the connection) gave me much grief.
Setting it up via the console on the first try worked perfectly and gave me far less interrupted control over the config.
UPDATE: I'm one of these REAL special kind of people.
I told the Netgear the WRONG ports to LAG. I had one of the two correct...
Thanks for the sounding board!! LOL
So I'm finally messing around with LAGG on my old server with the latest stable FreeNAS installed (U4).
I have a Netgear GS108T switch that supports LACP.
I was able to set up my LAGG in FN and turn everything on in my Netgear to get LAGG running.
Now, I KNOW my crappy FreeNAS system is NOT up to code; I don't use it for anything other than testing things out, so don't freak out on me!
I have looked everything over and played with the settings, but I keep getting this message.
So, I did "ifconfig" and found this..
I can see that, for whatever reason, only one of my ports is working. So I went and did more digging on my Netgear but couldn't find any more "things" to mess with.
I have the following info from my Netgear. You'll notice that it, too, only shows one port active. But if I change the LAG type to "Static", I lose the connection to my FreeNAS, yet it shows activity on BOTH ports...
The only thing I can think of: do I need to type "up" in the options in the FreeNAS LAGG section?
Ok, sorry for all the pictures, I just want to make sure I gave any info that I have in front of me.
Using LACP to get 4×1 Gb ports to deliver 4 Gb of iSCSI traffic to VMware storage for VMware guests only works one way. VMware has a feature in the Enterprise license and the vSphere Distributed vSwitch: the load-balancing algorithm "Route based on physical NIC load". Even then, the speed is only 4 Gb when you use 4 VLANs with 4 subnets on the FreeNAS side. You're talking about a lot of work; it's easier to just go get a 10 gig NIC from eBay and call it a day. LOL
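A rough sketch of what that 4-VLAN/4-subnet layout could look like on the FreeNAS side (the interface names and addresses here are invented for illustration):

```shell
# Hypothetical iSCSI multipath layout: one subnet per NIC on the FreeNAS
# box; ESXi's round-robin multipathing then spreads LUN traffic across
# the four paths. Interface names and addresses are assumptions.
ifconfig igb0 inet 10.0.10.5/24 up   # iSCSI path 1 (VLAN 10)
ifconfig igb1 inet 10.0.20.5/24 up   # iSCSI path 2 (VLAN 20)
ifconfig igb2 inet 10.0.30.5/24 up   # iSCSI path 3 (VLAN 30)
ifconfig igb3 inet 10.0.40.5/24 up   # iSCSI path 4 (VLAN 40)
# Then bind the iSCSI portal to all four addresses in the FreeNAS GUI.
```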
To see any other benefit, you need hundreds or thousands of end stations requesting data from your LACP interface. LACP is fine for redundancy to your Ethernet infrastructure.
Cisco switches work; I have had too many issues with whitebox and generic switches. The best option is to just buy a Cisco switch from eBay. The whitebox fun starts when you try to do an SNMP walk and it locks up the control plane and the ssh/telnet interface. Whitebox fun never ends.
For VMware and iSCSI, when using FreeNAS as the SAN, it does work, because VMware will round-robin the packets across all of the interfaces it sees the LUN bound to. This is handled at the SAN application layer.
Did anyone manage to get LACP working from FreeNAS to a Ubiquiti EdgeSwitch port channel?
I was able to ping the FreeNAS box and vice versa, but there was no DHCP allocation, and even when I set the IP manually, the web interface was not accessible.
FWIW, I have a bug filed with the development team about my LAGG not detecting that the primary LAGG interface isn't working. Interestingly, they told me that a failover built with LACP will be far more robust than a plain failover LAGG at detecting issues (assuming your switch supports LACP, of course).
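For context, this is the plain failover lagg they're comparing against, as a sketch (igb0/igb1 are assumed interface names):

```shell
# Hypothetical failover lagg: traffic uses the master port (igb0) and
# only moves to igb1 when igb0 loses link state. Failures that don't
# drop link state go undetected -- which is what LACP's periodic LACPDU
# exchange is better at catching.
ifconfig lagg0 create
ifconfig lagg0 up laggproto failover laggport igb0 laggport igb1
```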
Once I send them the FreeNAS debug info, they'll likely pinpoint the cause quickly.