PCI intel pro/1000 GT NIC performance compared to Realtek (iperf)


carcrusher

Dabbler
Joined
Aug 25, 2014
Messages
11
Hi!
Finally I have almost everything for my first FreeNAS box. I started some testing, and the results weren't really what I was expecting. The problem, I think, is NIC performance. My current server, running Debian sid with an integrated NIC, is much faster. So I ran some more tests (iperf), and here are the results:
http://1drv.ms/ZAhbft

In short: the iperf-measured bandwidth was over 200 Mbit/s higher in server mode and over 300 Mbit/s higher in client mode on the integrated Realtek NIC than on the PCI Intel Pro/1000. The bandwidth was tested against machines running Linux with integrated Realtek and Atheros NICs. Sadly (or happily, in terms of money) I don't have a second Intel Pro/1000 to compare Intel to Intel.

The layout (I also tried a direct link - the results were much the same):
Code:
    test machine
          |
          |
     |--dlink--|
     |         |
   pairI     pairII



test machine:
  • mobo: GIGABYTE on AMD785
  • cpu: AMD Phenom II X4 965 (4x3.4GHz)
  • ram: 2x4GB DDR3 1600 CL9 crucial
  • NIC1: integrated Realtek 8111C
  • NIC2: PCI intel pro/1000 GT
  • os1: windows 7 home premium (up to date)
  • os2: lubuntu linux 3.13
  • os3: FreeNAS 9.2.1.8

pair I (originally for FreeNAS):
  • mobo: ASUS on AMD760G
  • cpu: AMD Athlon II X2 240e (2x2.8GHz)
  • ram: 2x8GB DDR3 1600 CL11 kingston (ECC)
  • NIC: integrated Atheros 8161/8171
  • os: lubuntu linux 3.13

pair II:
  • mobo: MSI on intel B75
  • cpu: intel celeron G530 (2x2.4GHz)
  • ram: 1x4GB DDR3 1600 CL9 kingston
  • NIC: integrated Realtek 8111E
  • os: debian linux sid 3.9
dlink: cheap gigabit switch

I tried some network tuning with sysctls (roughly the kind of thing shown below), but with no results. My conclusion: if you are using Realtek or Atheros NICs on your client machines, then (based on iperf) maximum throughput will be noticeably better on a popular integrated Realtek NIC than on a PCI Intel Pro/1000 GT. It looks like it was a waste of money, but in the end I learned that a higher price and an Intel sticker don't always mean better/faster hardware.
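For the curious, the kind of tuning I mean is along these lines - a rough sketch only, the exact sysctls and values I used may have differed, and none of it made a measurable difference here:
Code:
# illustrative FreeBSD network tunables (values are examples, not recommendations)
sysctl kern.ipc.maxsockbuf=4194304       # maximum socket buffer size
sysctl net.inet.tcp.sendbuf_max=2097152  # cap for the auto-tuned TCP send buffer
sysctl net.inet.tcp.recvbuf_max=2097152  # cap for the auto-tuned TCP receive buffer
sysctl net.inet.tcp.sendspace=65536      # default TCP send buffer size
sysctl net.inet.tcp.recvspace=65536      # default TCP receive buffer size
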
In defence of Intel: I also tried some ATTO benchmarks, and the Intel was equal and sometimes marginally better at small transfer sizes. However, that only compared FreeNAS on Intel to Linux on Realtek [Windows ATTO --> FreeNAS + Intel, Windows ATTO --> Linux + Realtek], and Linux looks a bit better than FreeBSD in terms of network speed.

Next step: I think I'll buy a cheap (compared to Intel) Realtek-based NIC. If anyone is interested in tests with it, I'd be happy to run some (and maybe ATTO tests as well).

And some questions and doubts ;)
  • Do you think iperf is sufficient for NIC benchmarking?
  • What about benchmarking 'hard to send' data (packets that don't fit the MTU, or lots of small ones)?
  • What about testing both directions at the same time? (rough command examples below)
  • Is it possible that PCI is limiting the NIC? Theoretically PCI is faster than GbE, but what about reality?
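Roughly the kind of iperf 2 runs I have in mind for those last two points (the hostname is a placeholder):
Code:
# both directions at the same time (dual test)
iperf -c server.lan -d -t 30

# both directions, one after the other (trade-off test)
iperf -c server.lan -r -t 30

# many small UDP datagrams instead of bulk TCP, to stress per-packet handling
iperf -u -c server.lan -b 1000M -l 64 -t 30

# several parallel TCP streams
iperf -c server.lan -P 4 -t 30
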
thanx!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

You're using regular PCI? No wonder you get crappier speeds. God knows what else is interrupting the PCI bus, keeping it busy with other stuff and bottlenecking the NIC.
 

carcrusher

Dabbler
Joined
Aug 25, 2014
Messages
11
You're using regular PCI? No wonder you get crappier speeds. God knows what else is interrupting the PCI bus, keeping it busy with other stuff and bottlenecking the NIC.
For instance? What could possibly keep it busy? Is there any solution for that - turning the interfering device off or something? The results were in most cases really smooth - a small (or even zero) standard deviation...
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
For instance? What could possibly keep it busy? Is there any solution for that - turning the interfering device off or something? The results were in most cases really smooth - a small (or even zero) standard deviation...

There's not much you can do. Stuff like SuperIO controllers used to hang off the PCI bus all the time. Note that 33MHz PCI barely keeps up with a half-duplex GbE connection - the problem is that GbE is always full-duplex, so it can't possibly not be a bottleneck. 66MHz PCI fixes this, but only barely, and I'm not sure how widespread it actually is.
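Rough back-of-the-envelope numbers (assuming a standard 32-bit bus and ignoring protocol overhead):
Code:
32-bit / 33 MHz PCI:  32 bits x 33 MHz ~= 1,067 Mbit/s (~133 MB/s) theoretical peak,
                      shared by every device on the bus and by both traffic directions
32-bit / 66 MHz PCI:  ~2,133 Mbit/s (~266 MB/s) theoretical peak
Full-duplex GbE:      1,000 Mbit/s each way = 2,000 Mbit/s (~250 MB/s) aggregate
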
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
PCI will single-handedly bottleneck a 1Gb NIC.

Sorry, but you've got many, many flaws in your benchmarks. I give you props for the time and effort, but your final data has too many holes to really be of any kind of value. For benchmarks you need the fastest machine you can buy so you can rule out things like a slow CPU from being a potential bottleneck. That's not your Phenom from circa 2008 or 2009. Sorry, it's just not.

You also have to strategize your benchmarks and make sure you aren't creating bottlenecks yourself. If you are using PCI (which it looks like you are), then you clearly aren't a good candidate for giving opinions about benchmarks, because you need more "mad skillz".

I will say this:

On the server side, Intel 1Gb is the way to go. There is just no Realtek or Atheros version that compares. That's the rotten truth. Some Realteks and Atheros can closely approximate the performance (and reliability), but to rule those out if you have a problem, get an Intel NIC.

On the client side, use whatever you want. But, just like with the server-side comments above, if you have problems, Intel is the undisputed leader in ensuring you aren't having problems.

This argument over Realtek vs Atheros vs Intel just won't ever die. Frankly, it's laughable how many people think they can prove this wrong.

The bigger problems with Realtek and Atheros are not throughput (although some do hate the poor throughput). The problems are that they randomly just stop working. People will take 80MB/sec over 0MB/sec any day. And your tests didn't disprove that Realtek and Atheros are still prone to randomly not working. And if you claimed they did, you'd get about 100 people laughing at you for looking so foolish, when there are probably 1000 threads that would disprove you. Those values you got on the Realtek, that's pretty close to what I'd expect from an Intel. But that doesn't prove Intel is better (see above for proper benchmarking).

90% of the time we aren't telling people to go Intel for better speeds. We're telling them to go Intel just so it works. There's plenty of evidence that if you don't have some crappy, underpowered CPU, Realteks can perform. But their reliability is still complete shit and won't ever be fixed, because the drivers will never, ever, ever, ever be adequate without serious help from Realtek. They have profit margins to worry about, and that means they don't care about FreeBSD.
 

carcrusher

Dabbler
Joined
Aug 25, 2014
Messages
11
Thank you for the response!
There's not much you can do. Stuff like SuperIO controllers used to hang off the PCI bus all the time. Note that 33MHz PCI barely keeps up with a half-duplex GbE connection - the problem is that GbE is always full-duplex, so it can't possibly not be a bottleneck. 66MHz PCI fixes this, but only barely, and I'm not sure how widespread it actually is.
OK, if you say so - I have no reason to disagree. Thanks for the good advice; I'll remember that PCI NIC != good idea.
For benchmarks you need the fastest machine you can buy so you can rule out things like a slow CPU from being a potential bottleneck. That's not your Phenom from circa 2008 or 2009. Sorry, it's just not.
Well, I don't agree with that. You can benchmark on whatever hardware you want, and it will tell you how things work with hardware like that. If a simple GbE NIC needs the fastest machine on the market to work well, then... it's a really crappy NIC ;) I mean in a home/SOHO environment. I need it to work on that hardware, and results achieved on a hyper-ultra-octa-core-quad-socket-Xeon machine are totally irrelevant _for me_ (and for folks who use similar hardware - which is cheap and popular). I also ran tests with a heavily underclocked Phenom (from 4x3.4GHz down to 1x1.0GHz); here are the results:

cpu.png

...the difference was less than 1% in all cases.
On the server side, Intel 1Gb is the way to go. There is just no Realtek or Atheros version that compares. That's the rotten truth. Some Realteks and Atheros can closely approximate the performance (and reliability), but to rule those out if you have a problem, get an Intel NIC.
And some Realteks and Atheros are much faster (as you can see) than PCI intel GbE on home-use hardware.
This argument over Realtek vs Atheros vs Intel just won't ever die. Frankly its laughable how many people think they can prove this wrong.
I'm not trying to prove anything; I just made some tests and wanted to share them with you. <EDIT: meant the FreeNAS forum community> They show that on those CPUs the Intel _PCI_ Pro/1000 GT NIC is much slower in iperf than the Realtek, nothing more, nothing less.
The bigger problems with Realtek and Atheros are not throughput (although some do hate the poor throughput). The problems are that they randomly just stop working. People will take 80MB/sec over 0MB/sec any day. And your tests didn't disprove that Realtek and Atheros are still prone to randomly not working. And if you claimed they did, you'd get about 100 people laughing at you for looking so foolish, when there are probably 1000 threads that would disprove you. Those values you got on the Realtek, that's pretty close to what I'd expect from an Intel. But that doesn't prove Intel is better (see above for proper benchmarking).

90% of the time we aren't telling people to go Intel for better speeds. We're telling them to go Intel just so it works. There's plenty of evidence that if you don't have some crappy, underpowered CPU, Realteks can perform. But their reliability is still complete shit and won't ever be fixed, because the drivers will never, ever, ever, ever be adequate without serious help from Realtek. They have profit margins to worry about, and that means they don't care about FreeBSD.

Yes, my tests were not about durability... But on the other hand, are there any tests that prove Realteks are prone to breaking down? I would be happy to read them... I've used a number of NICs at home, all non-Intel, all working well - under Windows and under Linux. I've seen dead CPUs, motherboards, RAM, HDDs, PSUs, and graphics cards, but never a dead NIC... Well, maybe once, but it was an old 3Com BNC card. I'm not saying it's not true, but for non-enterprise use they just don't die that easily. I think it's more likely you'll replace your motherboard before the NIC fails. And if the integrated NIC works just fine (as you can see, it can)...

Don't get me wrong - I also think that overall Intel > Realtek, but sometimes it's not worth buying: an integrated NIC is... really cheap, I think, and a standalone Realtek is also about 2x cheaper than an Intel. On the FreeBSD website there is a list of compatible Ethernet chips, so it's not a big deal to choose the right one.

EDIT: What are these "mad skillz" anyway?
 

Fraoch

Patron
Joined
Aug 14, 2014
Messages
395
I have both the PCI and PCIe versions of the Intel PRO/1000 - the PCI version is the PRO/1000 GT and the PCIe version is the PRO/1000 CT.

I was never able to get the GT much higher than what you got, and indeed, the onboard Realtek was better in terms of transfer speed. However, the CT works at full gigabit speeds.

I only ran brief iperf tests, but the differences were apparent:

Code:
Intel PCI:

iperf -c [IP address] -d

------------------------------------------------------------
Client connecting to 192.168.1.56, TCP port 5001
TCP window size:  101 KByte (default)
------------------------------------------------------------
[  5] local 192.168.1.15 port 49876 connected with 192.168.1.56 port 5001
[  4] local 192.168.1.15 port 5001 connected with 192.168.1.56 port 50571
[ ID] Interval  Transfer  Bandwidth
[  5]  0.0-10.0 sec  389 MBytes  326 Mbits/sec
[  4]  0.0-10.0 sec  397 MBytes  332 Mbits/sec

$ iperf -c 192.168.1.56 -d

------------------------------------------------------------
Client connecting to 192.168.1.56, TCP port 5001
TCP window size:  114 KByte (default)
------------------------------------------------------------
[  5] local 192.168.1.15 port 49878 connected with 192.168.1.56 port 5001
[  4] local 192.168.1.15 port 5001 connected with 192.168.1.56 port 50574
[ ID] Interval  Transfer  Bandwidth
[  5]  0.0-10.0 sec  368 MBytes  309 Mbits/sec
[  4]  0.0-10.0 sec  421 MBytes  352 Mbits/sec

$ iperf -c 192.168.1.56 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.56, TCP port 5001
TCP window size: 22.9 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.15 port 49879 connected with 192.168.1.56 port 5001
[  5] local 192.168.1.15 port 5001 connected with 192.168.1.56 port 50575
[ ID] Interval  Transfer  Bandwidth
[  4]  0.0-10.0 sec  370 MBytes  310 Mbits/sec
[  5]  0.0-10.0 sec  419 MBytes  351 Mbits/sec


Code:
Intel PCIe:

iperf -c [IP address] -d

------------------------------------------------------------
Client connecting to 192.168.1.56, TCP port 5001
TCP window size:  155 KByte (default)
------------------------------------------------------------
[  5] local 192.168.1.67 port 38768 connected with 192.168.1.56 port 5001
[  4] local 192.168.1.67 port 5001 connected with 192.168.1.56 port 50612
[ ID] Interval  Transfer  Bandwidth
[  5]  0.0-10.0 sec  1.05 GBytes  904 Mbits/sec
[  4]  0.0-10.0 sec  1.09 GBytes  934 Mbits/sec

$ iperf -c 192.168.1.56 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.56, TCP port 5001
TCP window size:  174 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.67 port 38770 connected with 192.168.1.56 port 5001
[  5] local 192.168.1.67 port 5001 connected with 192.168.1.56 port 50613
[ ID] Interval  Transfer  Bandwidth
[  4]  0.0-10.0 sec  1.05 GBytes  905 Mbits/sec
[  5]  0.0-10.0 sec  1.09 GBytes  934 Mbits/sec

$ iperf -c 192.168.1.56 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.56, TCP port 5001
TCP window size:  123 KByte (default)
------------------------------------------------------------
[  5] local 192.168.1.67 port 38771 connected with 192.168.1.56 port 5001
[  4] local 192.168.1.67 port 5001 connected with 192.168.1.56 port 50614
[ ID] Interval  Transfer  Bandwidth
[  5]  0.0-10.0 sec  1.05 GBytes  903 Mbits/sec
[  4]  0.0-10.0 sec  1.09 GBytes  934 Mbits/sec


Code:
Onboard Realtek NIC

iperf -c [IP address] -d

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.56, TCP port 5001
TCP window size:  105 KByte (default)
------------------------------------------------------------
[  5] local 192.168.1.68 port 35936 connected with 192.168.1.56 port 5001
[  4] local 192.168.1.68 port 5001 connected with 192.168.1.56 port 50931
[ ID] Interval  Transfer  Bandwidth
[  5]  0.0-10.0 sec  887 MBytes  744 Mbits/sec
[  4]  0.0-10.0 sec  1.09 GBytes  936 Mbits/sec

$ iperf -c 192.168.1.56 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.56, TCP port 5001
TCP window size:  123 KByte (default)
------------------------------------------------------------
[  5] local 192.168.1.68 port 35937 connected with 192.168.1.56 port 5001
[  4] local 192.168.1.68 port 5001 connected with 192.168.1.56 port 50932
[ ID] Interval  Transfer  Bandwidth
[  5]  0.0-10.0 sec  900 MBytes  755 Mbits/sec
[  4]  0.0-10.0 sec  1.09 GBytes  936 Mbits/sec

$ iperf -c 192.168.1.56 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.56, TCP port 5001
TCP window size:  123 KByte (default)
------------------------------------------------------------
[  5] local 192.168.1.68 port 35939 connected with 192.168.1.56 port 5001
[  4] local 192.168.1.68 port 5001 connected with 192.168.1.56 port 50933
[ ID] Interval  Transfer  Bandwidth
[  5]  0.0-10.0 sec  880 MBytes  738 Mbits/sec
[  4]  0.0-10.0 sec  1.09 GBytes  936 Mbits/sec


So the Intel PCIe NIC is the fastest, the onboard Realtek NIC is almost as fast, and the Intel PCI NIC is constrained by the PCI bus (although I did not have any other PCI devices).

Of course the Intel NIC is much more reliable and has better hardware, but the GT version is crippled by the PCI bus.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Hey, basically everything I said above has been validated. No surprises there. :P

Seeing those PCI bus speeds is just disappointing. And to think that back in 2003 or 2004 I bought an Intel PCI 1Gb card for $150 or so. LOL!
 

Fraoch

Patron
Joined
Aug 14, 2014
Messages
395
Seeing those PCI bus speeds is just disappointing. And to think that back in 2003 or 2004 I bought an Intel PCI 1Gb card for $150 or so. LOL!

Yeah, there's not much point with gigabit over PCI. I have a very old desktop board with an onboard Realtek 10/100 NIC so my GT stays in that board.

I'm finding the new Intel "Clarkville" controller built into the Z87 and Z97 chipsets (with the accompanying I217-V and I218-V PHYs) to be as fast as the CT. Information on these is a little harder to find, but it does seem to have TCP offloading - however it's so tightly integrated into the chipset it's hard to tell if it has its own dedicated processor. It certainly takes up less physical space so it could go either way...

I'll be doing tests with the CT versus the I218-V, but I doubt iperf is enough to show any differences.

My desktop board has a Realtek RTL8111GR as well. I also doubt iperf will be able to show a difference here, but there's no reason to use it over the I218-V. ;)

Simply put, I doubt if iperf can show any differences based on raw speed alone. As you stated, the difference is in driver support and long-term reliability. There are multiple accounts of very odd things happening with Realtek NICs.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
iperf is meant for qualitative diagnostics, less for quantitative ones. That is, it's a test to find out whether there is a problem. If one card does 900Mb/sec and another does 930Mb/sec on a 1Gb link, there's no reason to even think that the other card really is 30Mb/sec faster. Generally, anything above 850Mb/sec on a 1Gb link is considered "saturation". In theory, all 1Gb cards with 1Gb links should be able to do 980Mb/sec+, but they don't, for various reasons.

So even if iperf showed a difference, you are correct that it's not really particularly valuable unless the numbers show that something is wrong (for example, the Realtek NICs some people have that peak at 700Mb/sec on a Gb LAN).
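A quick saturation check (as opposed to a benchmark) could look something like this - standard iperf 2 flags, hostname is a placeholder:
Code:
# a longer run with a few parallel streams evens out per-stream noise;
# an aggregate consistently above ~850 Mbit/s on a 1Gb link means the link is effectively saturated
iperf -c server.lan -P 4 -t 60 -i 10
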
 

pschatz100

Guru
Joined
Mar 30, 2014
Messages
1,184
Perhaps I am missing something, but it isn't clear to me which benchmarks were run on FreeNAS and which were run on Linux. Linux is not the same as FreeNAS (FreeBSD), and comparing them makes no sense for the purposes of this forum. I would think that performance on any given hardware could be quite a bit different between the operating systems.

If the OP wants support for the notion that Realteks are unreliable, a search through the forums should turn up many examples of bad experiences from FreeNAS users. I can attest to the unreliability of Realtek with current versions of FreeNAS (older versions of FreeNAS seemed to support older Realteks better - I suspect this is related to FreeBSD drivers.)

To the OP: Not all Gigabyte boards with the AMD785 chipset support ECC memory. You didn't give any details, so if you plan to configure the test machine for FreeNAS the only comment I can make is "Do your homework and choose carefully."
 