Link Aggregation -- Let's clear this up

Status
Not open for further replies.

antsrealm

Explorer
Joined
Jan 28, 2013
Messages
82
So I have had my FreeNAS setup for a while now. I have a few machines, but only two are relevant for this conversation: the FreeNAS box and my desktop (Win 7).

Now I have read a lot about lagg, spoken to my Cisco-certified mates, and dug through this forum, and I get the idea that LACP is really only useful when you have lots of clients to load-balance across the connections.

The problem I have is that I can get 120 MB/s out of my desktop HDD and I want that speed across the network. Currently I average around 80-100 MB/s. I have just set up an HP ProCurve 1810G-24 switch that supports LACP and installed an Intel PRO/1000 PT quad-port NIC in the FreeNAS box. Next on order was a dual-port card for the desktop, in the hope I could increase bandwidth and get speeds up around 120 MB/s.

Now I get that LACP is probably no good, but then I read about static vs. dynamic LACP, and then FEC, etc. If LACP is no good, is there any other mode that could be used to increase the bandwidth between these two machines?

And no, I can't afford 10Gb Ethernet hardware :p

Thanks guys, I know this is a repetitive topic.
 

Hanakuso

Cadet
Joined
Jul 7, 2013
Messages
8
The short answer is no: any kind of port bonding isn't going to give you much of a benefit. FEC and LACP only really show their strengths when load-balancing multiple streams between a server and a few clients. You'd be better off using a protocol that establishes multiple paths further up the networking stack, like iSCSI.
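The reason a single transfer can't exceed one link's speed: a LACP-style lagg picks the output port by hashing the flow's addresses, so one client-server pair always lands on the same physical link. A toy sketch (hypothetical hash function, not the actual FreeBSD lagg code):

```python
# Illustrative sketch of LACP-style flow hashing: each flow's address
# tuple is hashed to pick one physical link, so a single client<->server
# transfer always rides the same link no matter how many are bonded.
import hashlib

def pick_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Hash the flow identifiers to a link index (layer-2 style policy)."""
    key = f"{src_mac}-{dst_mac}".encode()
    digest = hashlib.sha256(key).digest()
    return digest[0] % n_links

# One desktop talking to one NAS: every frame hashes identically,
# so only one of the four aggregated links ever carries the traffic.
link = pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", 4)
print(f"all traffic for this pair uses link {link}")

# Many clients spread across the links -- the case where LACP helps.
clients = [f"aa:bb:cc:00:00:{i:02x}" for i in range(1, 9)]
print({c: pick_link(c, "aa:bb:cc:00:00:02", 4) for c in clients})
```

With many clients the hash spreads flows over all member links, which is why LACP helps a busy file server but not a single desktop-to-NAS copy.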
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You could try switching from CIFS to NFS (or vice versa, whatever your situation is). In my experience, using an Intel NIC is possibly the only way you're going to get over 100 MB/sec.
 

antsrealm

Explorer
Joined
Jan 28, 2013
Messages
82
The short answer is no: any kind of port bonding isn't going to give you much of a benefit. FEC and LACP only really show their strengths when load-balancing multiple streams between a server and a few clients. You'd be better off using a protocol that establishes multiple paths further up the networking stack, like iSCSI.


OK, I have put an Intel NIC in the FreeNAS box and have one on the way for the desktop. I'm not familiar with iSCSI at all and just had a quick read on Wikipedia. I mainly need the big speeds when I'm copying from a mate's HDD to mine to minimize waiting time, which might happen once a month. How would iSCSI help, and what is the basic concept?

I am happy to use any protocol to transfer it; at the moment I usually use SFTP in FileZilla to get the best out of it.
 

Whatts

Dabbler
Joined
Jul 26, 2013
Messages
13
I mainly need the big speeds when I'm copying from a mate's HDD to mine to minimize waiting time, which might happen once a month. How would iSCSI help, and what is the basic concept?

How will you be connecting that disk physically? SATA-to-USB solution? Is it a laptop?
Some speed figures I'm getting with my laptop (Realtek NIC) and my E3-1230v2 + Supermicro X9SCM-F setup in CIFS:
Write from laptop HDD to FreeNAS WD Red 3TB disk (no raid, single-disk ZFS stripe): 100MB/s
Write from laptop SSD to FreeNAS WD Red 3TB disk (no raid, single-disk ZFS stripe): 110MB/s
Write from laptop (SSD & HDD) to FreeNAS WD Green 2TB disk (no raid, single-disk ZFS stripe): 95MB/s

So considering your source and target disks, you might never get over 90-something MB/s, even if everything else could in theory go faster.
 

antsrealm

Explorer
Joined
Jan 28, 2013
Messages
82
How will you be connecting that disk physically? SATA-to-USB solution? Is it a laptop?
Some speed figures I'm getting with my laptop (Realtek NIC) and my E3-1230v2 + Supermicro X9SCM-F setup in CIFS:
Write from laptop HDD to FreeNAS WD Red 3TB disk (no raid, single-disk ZFS stripe): 100MB/s
Write from laptop SSD to FreeNAS WD Red 3TB disk (no raid, single-disk ZFS stripe): 110MB/s
Write from laptop (SSD & HDD) to FreeNAS WD Green 2TB disk (no raid, single-disk ZFS stripe): 95MB/s

So considering your source and target disks, you might never get over 90-something MB/s, even if everything else could in theory go faster.



Yeah, well, let's say as an example I'll use my internal WD Green 3.5" SATA disk, copying over the network to the FreeNAS box, which has 5 x 3 TB drives in RAIDZ1 (soon to be upgraded to Z2). I know I can get at least 120 MB/s read out of the WD Green copying to an SSD in the same system, so I expect I can get that on the FreeNAS box too: they are also Green drives in the array, and more spindles = more speed, to some degree (I think). So I think I could get more than 100 MB/s if the network link were better.

I'm currently improving the network components: Intel NICs, HP ProCurve switch, etc. Just wondering, assuming everything else is good, how do I get faster than gigabit speed? Or is 10Gb hardware the only way?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
10Gb is the only way. I have a server and workstation that can push/pull over 200 MB/sec, and if it weren't for the cost of 10Gb hardware I'd have implemented it a year ago.
 

antsrealm

Explorer
Joined
Jan 28, 2013
Messages
82
10Gb is the only way. I have a server and workstation that can push/pull over 200 MB/sec, and if it weren't for the cost of 10Gb hardware I'd have implemented it a year ago.


What do you think of these cards? If I get two of them (8 available), one for FreeNAS and one for Windows 7, I wouldn't use a switch; I'd just go directly wired, desktop to FreeNAS, in addition to my regular gigabit network that can run alongside it for all the other machines that don't need the higher speed.

http://www.ebay.com.au/itm/Intel-X5...s&hash=item19e048e86e&_uhb=1&autorefresh=true
 

antsrealm

Explorer
Joined
Jan 28, 2013
Messages
82
Hmm, looks like the cabling isn't designed to go very far. I suppose it's meant for going from the card to a rack-mount switch nearby, not 20 m across my house :P

Might have to explore other options.
 

TheSmoker

Patron
Joined
Sep 19, 2012
Messages
225
You will need SFP+ modules. With multimode optical (the cheapest) you can go up to 500 m. Point to point only; if you need point to multipoint (star), you will need a 10Gb switch, which is... expensive...


Sent from my iPad using Tapatalk HD
 

Setius

Dabbler
Joined
Sep 29, 2011
Messages
11
What is your hardware spec for the FreeNAS box? CIFS is single-threaded per connection, so it depends on single-core CPU performance. As long as you're not on super-old hardware, it should be OK. I would recommend an iSCSI MPIO setup for VM storage if you insist on having more than 1Gb of bandwidth and don't want to buy 10Gb hardware. iSCSI would work for normal home/SMB file sharing, but it has some trade-offs that I would only recommend to the advanced user.
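The MPIO idea, in contrast to link aggregation: the initiator keeps one iSCSI session per NIC and distributes individual block I/Os across those sessions, so even a single client can use the bandwidth of both links at once. A toy round-robin sketch (hypothetical portal addresses, not a real initiator):

```python
# Toy model of iSCSI MPIO round-robin: one session per NIC/path,
# individual block I/Os alternate between paths, so one client can
# use the aggregate bandwidth of both links -- unlike LACP, where a
# single flow is pinned to one link.
from itertools import cycle

paths = ["192.168.1.10 (nic1)", "192.168.2.10 (nic2)"]  # hypothetical portals
scheduler = cycle(paths)

assignments = [(f"read block {i}", next(scheduler)) for i in range(6)]
for io, path in assignments:
    print(f"{io} -> {path}")
```

Real initiators (such as the Windows MPIO stack) offer several such policies, round-robin being the simplest; the key point is that the distribution happens per I/O request rather than per flow.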
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Just for comparison, I had a 4-core first-gen Xeon without HT (I think it was the cheapest Xeon that worked with socket 1366) and I could saturate both of my Gb LAN ports simultaneously with CIFS.
 

antsrealm

Explorer
Joined
Jan 28, 2013
Messages
82
Agreed, cyber; you just never know what can be pulled from the hardware grave.


It's only a Sempron processor atm, but it will be upgraded to the Xeon, with ECC RAM, a server motherboard, etc., in the coming weeks. The main thing is I just didn't like the idea of the network being the bottleneck. I want to read from and write to the NAS as if it were an HDD in my desktop, as far as performance goes. I would have thought there'd have been a relatively simple link-agg solution, but obviously not.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So 100MB/sec isn't fast enough? That's what I get over CIFS....

Now, your Sempron almost certainly can't reach those speeds, but any i3/i5/i7 CPU should be able to easily, as long as it isn't starved for RAM.
 