Link Bonding in IPMI vs FreeNAS Link Aggregation

Status
Not open for further replies.

Stanri010

Explorer
Joined
Apr 15, 2014
Messages
81
I've got a motherboard that allows for link bonding/link aggregation in the IPMI, and I was wondering: to set this up for link aggregation, should I be setting that up in the IPMI or just within the FreeNAS UI? Or both?
 

aufalien

Patron
Joined
Jul 25, 2013
Messages
374
I would advise keeping it simple, staying with 1 link, and seeing how it goes. If you choose to set up LACP etc., you'd do it via the OS.

Look at a conversation as a client link. Most ppl mistake fatter pipes/plumbing, since that's what they can physically see, for something that needs fixing. It's what you don't readily see that is most likely the issue. Meaning you must find a way to see the bits flowing.

I have found that IO issues arise in the application layer and rarely in the link layer.
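If you do end up going the OS route, here's a minimal sketch of what an LACP lagg looks like on FreeBSD under the hood (the FreeNAS UI does the equivalent for you; the em0/em1 interface names and the address are placeholders I've assumed, and the switch ports must be configured for 802.3ad as well):

```sh
# Create a lagg interface and attach two physical ports using LACP.
# Interface names (em0/em1) and the IP address are placeholders.
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport em0 laggport em1 \
    192.168.1.10 netmask 255.255.255.0 up
```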
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
What motherboard do you have?

I'm wondering if you're confusing link sharing (doing both IPMI and system network traffic over the same ethernet port) with link aggregation (using multiple ports together as one big pipe).

If you really do have link aggregation at a BIOS level, then the OS should only see one ethernet port, so you couldn't do link aggregation from within FreeNAS.
 

xcom

Contributor
Joined
Mar 14, 2014
Messages
125
What motherboard do you have?

I'm wondering if you're confusing link sharing (doing both IPMI and system network traffic over the same ethernet port) with link aggregation (using multiple ports together as one big pipe).

If you really do have link aggregation at a BIOS level, then the OS should only see one ethernet port, so you couldn't do link aggregation from within FreeNAS.



Huh?

Is there such a thing? Is this the same thing as LACP?

I had no idea there was such a feature for making two physical NICs look like one. I only knew of LACP, aka 802.3ad, which bonds two interfaces together, but they are still two physical NICs at the OS level.
 

Stanri010

Explorer
Joined
Apr 15, 2014
Messages
81
Not gonna lie, this is confusing the crap out of me.

[attached screenshots: image.png ×3]
 

Stanri010

Explorer
Joined
Apr 15, 2014
Messages
81
Can someone enlighten me as to why people say LACP is only for multiple computers connecting to one source, and is not the same as just boosting bandwidth? I can kind of get that data has to go from one Ethernet port to another Ethernet port, but what happens if you have two computers with 2 ports each? Why can't you get a full 2 gigabits/s?

Assuming you have the following setup, how come I'm told the green desktop can't read or write 2 gigabits/s from the NAS? To me it just seems awfully simple, but does it just not work that way?

[attached diagram: 1000x1000px_LL_a3e0d76c_Untitled.png]
 

aufalien

Patron
Joined
Jul 25, 2013
Messages
374
Nice diagrams BTW, I really like 'em.

Well, in your case, if all clients are LACP as well, you could potentially get more speed, as data transmission (frame distribution) will be spread amongst the links.

However, be aware that you may not get a boost in speed. One can look at network communication as a freeway.

You can increase the lanes of a freeway, but the number of cars on it hasn't changed; say 20 cars pass an off-ramp every minute or so.

It's going to be 20 cars per minute regardless.

Now if you have a very busy server, then you could have, say, 40 cars per minute, but your original freeway may be enough to handle that.

Also, you should take into consideration whether your server can even saturate a single link, let alone 2.

Anyways, I think what you are doing is cool, so find out if you get a boost from this or not. Monitor its behavior before and after, and don't simply set and forget.

Now say those cars are sedans and only 5 ppl max fit; if you want to cram more ppl per car, get a minivan. This is frame size, or MTU (a standard frame of 1500 bytes vs. a jumbo frame of up to 9000 bytes). Now you can transport more ppl per minute.
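If you do experiment with jumbo frames, here's a hedged sketch of what that looks like on FreeBSD (em0, the target address, and the 9000-byte value are placeholders I've assumed; as noted elsewhere in this thread, every device in the path must match):

```sh
# Raise the MTU to jumbo frames on one interface (placeholder name em0).
ifconfig em0 mtu 9000

# Verify end to end: 8972 = 9000 - 20 (IP header) - 8 (ICMP header).
# -D sets "don't fragment", so this fails if anything in the path is
# still at an MTU of 1500.
ping -D -s 8972 192.168.1.20
```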
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Your "link" speed is still going to be the speed of your NIC. Since they are 1Gb each you'll never get more than 1Gb from any one server to any one client.

On the inside, LACP attempts to distribute the clients(notice I say clients and not the load). It usually does a hash function, so if you have 2 clients there's a 50% chance they will share the same link, and absolutely nothing is gained. Hopefully you now understand why its not throughput boosting in the way you are hoping. Now if you have 10+ clients there's a good chance that 2 things will be possible:

  • They'll be distributed somewhat evenly(which isn't possible with just a few clients)
  • Multiple clients may use a significant amount of throughput for a single link, making the LACP worth the effort.

So without lots of clients that you can *expect* will use more than a single Gb link combined, LACP gains you nothing. And since plenty of hardware either has no LACP support or very very crappy LACP support it's something you shouldn't use unless you are 100% certain that the gains are worth the potential of having network problems.
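To make the hashing point concrete, here's a toy sketch (illustrative only, not the actual lagg(4) driver logic; all MACs are made up) of how a hash over the MAC pair pins each client to one port:

```sh
#!/bin/sh
# Toy model of LACP frame distribution, NOT the real lagg(4) driver
# logic: each client/server MAC pair hashes to one physical port.
# With 2 ports and 2 clients there is a 50% chance both land on the
# same link, leaving the second link idle.
PORTS=2
pick_port() {
    echo "$1->$2" | cksum | awk -v p="$PORTS" '{ print $1 % p }'
}
SERVER="00:25:90:aa:bb:cc"
for CLIENT in 00:25:90:11:22:33 00:25:90:44:55:66; do
    echo "$CLIENT -> port $(pick_port "$CLIENT" "$SERVER")"
done
```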

In your case, all those 2Gb links are providing more redundancy, but not really gaining you a darn thing in the performance arena. I would *never* have recommended LACP in your setup.* If you really want more throughput, go 10Gb. The cards are around $100 on ebay if you know what to look for.

As for changing the MTU for jumbo frames, that's something that either every single network device should support and be set up to match, or you shouldn't use it at all. Hint: you aren't going to see an appreciable benefit with Gb LAN, but you can almost certainly expect random network performance problems because not all of your devices will match.

* - I did do exactly what your diagram shows back in 2008. As you will no doubt find out, it's a lot of complexity that doesn't necessarily work well, hurts reliability if not properly implemented, and doesn't give you a performance improvement. I'd disable LACP and go single link for all of those machines if I were you (that's what I did until I went 10Gb).

The recommendation here on the forum is to not use jumbo frames or LACP. Especially so for home users.
 

Stanri010

Explorer
Joined
Apr 15, 2014
Messages
81
Thanks, that concisely clears up my misunderstanding. I had a feeling from what I was reading that that was the case, but I wasn't convinced.

As for the 10-gigabit network cards, which ones were you referring to? Are you talking about an Intel card, a Mellanox card, or a FreeNAS community favorite?
Also, are there any small 10-gigabit switches for a few hundred dollars, instead of the server-type 10-gigabit switches that cost upwards of thousands?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I own http://www.ebay.com/itm/INTEL-D9908...758?pt=LH_DefaultDomain_0&hash=item1e8a8fccc6

I have a direct link between my server and my primary desktop. I didn't go with a network switch because they are all outrageously expensive for my budget. I also only really had a use for 10Gb on a single computer: my primary desktop. So the need for buying a switch just wasn't there anyway.

Keep in mind that if you do CIFS, you have no chance of getting more than about 400MB/sec, since CIFS is single-threaded. That is, unless you bought a CPU for 4 figures. ;)
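For the curious, here's a minimal sketch of how a direct link like that can be addressed (assuming ix0 is the 10Gb port; the interface name and addresses are placeholders). The point-to-point link goes on its own subnet, separate from the LAN, so no static routes are needed; each host simply reaches the other's 10Gb address directly:

```sh
# On the FreeNAS box: give the 10Gb port (placeholder name ix0) an
# address on a dedicated subnet the regular LAN does not use.
ifconfig ix0 10.0.10.1 netmask 255.255.255.0 up

# On the desktop: set its 10Gb NIC to 10.0.10.2/24 and point the
# CIFS/NFS mounts at 10.0.10.1. LAN traffic keeps using the 1Gb
# ports as before.
```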
 

Stanri010

Explorer
Joined
Apr 15, 2014
Messages
81
A Xeon E3 would only get you 400MB/s?

You're able to pull more than 300MB/s on your Z2?
 

xcom

Contributor
Joined
Mar 14, 2014
Messages
125
Thanks, that concisely clears up my misunderstanding. I had a feeling from what I was reading that that was the case, but I wasn't convinced.

As for the 10-gigabit network cards, which ones were you referring to? Are you talking about an Intel card, a Mellanox card, or a FreeNAS community favorite?
Also, are there any small 10-gigabit switches for a few hundred dollars, instead of the server-type 10-gigabit switches that cost upwards of thousands?



If you find a 10GigE switch for cheap, please point me in that direction :)

One piece of advice: there are two sides of the coin on all this... You can take it as a learning curve and set up your network and devices however you want... LACP/jumbo frames/etc... One thing for sure is that you are going to gain knowledge and understanding of the technologies you are playing with... Then you can determine whether or not it works for you.
I myself run LACP on a lot of my servers here at home... I also run jumbo frames... I do not see any performance degradation on any of my devices that do not support jumbo frames... I take advantage of jumbo frames in the NFS arena...
Then again, my network setup is not a normal home user type of setup... So I can't speak for any issues that you could encounter or that could arise...

The performance gain is not based on your link being faster but on the ability to do multiple data transfers without easily bottlenecking your link... Thus being able to do multiple conversations over the lagg...
Also, if you are trying to achieve the best and cleanest performance you can get out of one link, then the best way to do that is to link the two machines straight to each other...
 

aufalien

Patron
Joined
Jul 25, 2013
Messages
374
Well, the funny thing is that I don't think there is any longer such a thing as a normal home user. Or rather, that definition is constantly changing.

With both data and storage use constantly increasing by orders of magnitude, and with the increasing commoditization of technology, stuff thought specialized a few years ago is commonplace nowadays.
 

xcom

Contributor
Joined
Mar 14, 2014
Messages
125
Well, the funny thing is that I don't think there is any longer such a thing as a normal home user. Or rather, that definition is constantly changing.

With both data and storage use constantly increasing by orders of magnitude, and with the increasing commoditization of technology, stuff thought specialized a few years ago is commonplace nowadays.


+1

I completely agree with your statement... The recommendations given based on a "home user" sometimes make me wonder about the source... You can't judge a user or his requirements based on "home"... You could back in 1990. =P J/K...

At any rate, I tend to stay away from this type of conversation... I just give my advice and go on =]
 

aufalien

Patron
Joined
Jul 25, 2013
Messages
374
I also forgot to add that ppl are getting more sophisticated with their needs and abilities. I thought I was so bad a#$ in 2000 when I had Linux, Apple, and Windows simultaneously all talking together on the same LAN.

Now, I realize that I'm just a chump amongst chumps! :)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
A Xeon E3 would only get you 400MB/s?

You're able to pull more than 300MB/s on your Z2?

I can do almost 700MB/sec over NFS. But with CIFS, 300-350MB/sec is it. The CPU has one core at 100% when doing those speeds. That's direct connect between my server and desktop.
 

psdynpt

Dabbler
Joined
May 12, 2014
Messages
16
I can do almost 700MB/sec over NFS. But with CIFS, 300-350MB/sec is it. The CPU has one core at 100% when doing those speeds. That's direct connect between my server and desktop.
Cyberjock, can you please explain your direct connection using 10GigE? How did you make the NAS and your desktop work with the rest of your network? It sounds like your NAS and your primary desktop had 2 ports each, one of them 10GigE, and you used the 10GigE ports to connect directly to each other, with the other ports connected to the rest of your LAN? Does that even work? Did you have to create static routes between your main desktop and the NAS over 10GigE?

I am very curious about getting this implemented between my main PC and my NAS.
 