Slow network throughput on HP MicroServer

Status
Not open for further replies.

kashiwagi

Dabbler
Joined
Jul 5, 2011
Messages
35
I just tested my N36L with the onboard NIC because I am not really seeing any of the problems mentioned. Result with iperf: average 935Mbps (it fluctuates between 935-936).
 

Winol

Dabbler
Joined
Aug 13, 2011
Messages
21
Can't believe it... now with the onboard NIC the max bandwidth with iperf is 267 Mbits/sec.

Did you guys change anything in the BIOS?



[root@freenas] ~# iperf -s -u -p 5001 -P 0 -i 0 -f m
WARNING: interval too small, increasing from 0.00 to 0.5 seconds.
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 0.04 MByte (default)
------------------------------------------------------------
[ 3] local 192.168.0.50 port 5001 connected with 192.168.0.46 port 61822
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0- 0.5 sec 14.0 MBytes 236 Mbits/sec 0.033 ms 0/10014 (0%)
[ 3] 0.5- 1.0 sec 17.4 MBytes 291 Mbits/sec 0.035 ms 0/12393 (0%)
[ 3] 1.0- 1.5 sec 16.9 MBytes 284 Mbits/sec 0.036 ms 0/12089 (0%)
[ 3] 1.5- 2.0 sec 15.4 MBytes 259 Mbits/sec 0.035 ms 0/11000 (0%)
[ 3] 2.0- 2.5 sec 17.3 MBytes 291 Mbits/sec 0.034 ms 0/12372 (0%)
[ 3] 2.5- 3.0 sec 16.9 MBytes 284 Mbits/sec 0.032 ms 0/12059 (0%)
[ 3] 3.0- 3.5 sec 16.6 MBytes 279 Mbits/sec 0.035 ms 0/11867 (0%)
[ 3] 3.5- 4.0 sec 17.5 MBytes 294 Mbits/sec 0.031 ms 0/12489 (0%)
[ 3] 4.0- 4.5 sec 18.0 MBytes 302 Mbits/sec 0.033 ms 0/12844 (0%)
[ 3] 4.5- 5.0 sec 17.9 MBytes 301 Mbits/sec 0.038 ms 0/12785 (0%)
[ 3] 5.0- 5.5 sec 17.5 MBytes 293 Mbits/sec 0.362 ms 0/12453 (0%)
[ 3] 5.5- 6.0 sec 18.0 MBytes 302 Mbits/sec 0.181 ms 0/12851 (0%)
[ 3] 6.0- 6.5 sec 18.1 MBytes 303 Mbits/sec 0.060 ms 0/12883 (0%)
[ 3] 6.5- 7.0 sec 17.5 MBytes 294 Mbits/sec 0.038 ms 0/12483 (0%)
[ 3] 7.0- 7.5 sec 17.9 MBytes 301 Mbits/sec 0.040 ms 0/12784 (0%)
[ 3] 7.5- 8.0 sec 17.6 MBytes 296 Mbits/sec 0.037 ms 0/12567 (0%)
[ 3] 8.0- 8.5 sec 17.0 MBytes 285 Mbits/sec 0.033 ms 0/12119 (0%)
[ 3] 8.5- 9.0 sec 18.1 MBytes 304 Mbits/sec 0.031 ms 0/12945 (0%)
[ 3] 9.0- 9.5 sec 17.2 MBytes 289 Mbits/sec 0.032 ms 0/12303 (0%)
[ 3] 9.5-10.0 sec 18.1 MBytes 303 Mbits/sec 0.035 ms 0/12898 (0%)
[ 3] 0.0-10.0 sec 345 MBytes 289 Mbits/sec 0.849 ms 0/246360 (0%)
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order
[ 4] local 192.168.0.50 port 5001 connected with 192.168.0.46 port 61823
[ 4] 0.0- 0.5 sec 13.8 MBytes 232 Mbits/sec 0.033 ms 0/ 9869 (0%)
[ 4] 0.5- 1.0 sec 16.4 MBytes 275 Mbits/sec 0.039 ms 0/11701 (0%)
[ 4] 1.0- 1.5 sec 16.0 MBytes 268 Mbits/sec 0.034 ms 0/11407 (0%)
[ 4] 1.5- 2.0 sec 16.0 MBytes 268 Mbits/sec 0.040 ms 0/11400 (0%)
[ 4] 2.0- 2.5 sec 15.9 MBytes 266 Mbits/sec 0.034 ms 0/11315 (0%)
[ 4] 2.5- 3.0 sec 16.0 MBytes 269 Mbits/sec 0.031 ms 0/11427 (0%)
[ 4] 3.0- 3.5 sec 16.0 MBytes 268 Mbits/sec 0.031 ms 0/11407 (0%)
[ 4] 3.5- 4.0 sec 16.0 MBytes 268 Mbits/sec 0.049 ms 0/11396 (0%)
[ 4] 4.0- 4.5 sec 16.0 MBytes 268 Mbits/sec 0.032 ms 0/11406 (0%)
[ 4] 4.5- 5.0 sec 15.9 MBytes 267 Mbits/sec 0.031 ms 0/11356 (0%)
[ 4] 5.0- 5.5 sec 15.9 MBytes 267 Mbits/sec 0.321 ms 0/11370 (0%)
[ 4] 5.5- 6.0 sec 16.0 MBytes 268 Mbits/sec 0.214 ms 0/11401 (0%)
[ 4] 6.0- 6.5 sec 16.1 MBytes 269 Mbits/sec 0.051 ms 0/11456 (0%)
[ 4] 6.5- 7.0 sec 15.9 MBytes 267 Mbits/sec 0.038 ms 0/11359 (0%)
[ 4] 7.0- 7.5 sec 15.8 MBytes 265 Mbits/sec 0.035 ms 0/11288 (0%)
[ 4] 7.5- 8.0 sec 16.1 MBytes 270 Mbits/sec 0.033 ms 0/11467 (0%)
[ 4] 8.0- 8.5 sec 16.0 MBytes 268 Mbits/sec 0.033 ms 0/11412 (0%)
[ 4] 8.5- 9.0 sec 16.0 MBytes 268 Mbits/sec 0.033 ms 0/11408 (0%)
[ 4] 9.0- 9.5 sec 16.0 MBytes 269 Mbits/sec 0.034 ms 0/11419 (0%)
[ 4] 9.5-10.0 sec 15.8 MBytes 266 Mbits/sec 0.026 ms 0/11303 (0%)
[ 4] 0.0-10.0 sec 318 MBytes 267 Mbits/sec 0.973 ms 0/226805 (0%)
[ 4] 0.0-10.0 sec 1 datagrams received out-of-order





The light on the back of the server is amber/yellow, so I have a gigabit connection...

Weird ..
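For reference, some back-of-the-envelope framing math shows how far 267 Mbits/sec is below the wire limit. A rough sketch, assuming standard gigabit Ethernet framing and the 1470-byte datagrams iperf reports above:

```shell
# Theoretical maximum UDP goodput on gigabit Ethernet for 1470-byte datagrams.
# Per datagram on the wire:
#   1470 payload + 8 UDP + 20 IP + 14 Ethernet + 4 FCS = 1516 bytes,
# plus the 8-byte preamble and 12-byte inter-frame gap = 1536 byte-times.
awk 'BEGIN {
  payload = 1470
  wire    = payload + 8 + 20 + 14 + 4 + 8 + 12   # 1536 byte-times total
  printf "max UDP goodput: %.0f Mbit/s\n", 1000 * payload / wire
}'
# → max UDP goodput: 957 Mbit/s
```

So a clean gigabit link should allow close to 957 Mbit/s of UDP payload; results in the 260-300 range point at a host-side bottleneck rather than the wire.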
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
That's testing UDP, which has never resulted in good performance no matter what hardware I've used (Core i7, N36L; Broadcom, Realtek, Intel NICs etc.).

How is your TCP performance?
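For anyone wanting to compare, a plain TCP run (no -u) would look something like this; the 192.168.0.50 address matches the server output quoted earlier, so substitute your own:

```shell
# On the FreeNAS box: TCP server (note: no -u flag this time)
iperf -s -p 5001 -f m

# On a second machine: run a 30-second TCP test against it
iperf -c 192.168.0.50 -p 5001 -t 30 -f m
```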
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
I just tested my N36L with the onboard NIC because I am not really seeing any of the problems mentioned. Result with iperf: average 935Mbps (it fluctuates between 935-936).

Which is strange, as I repeatedly saw the bge timeouts within seconds of starting an iperf TCP test between two N36Ls (both onboard NICs), and also between an N36L (onboard) and a Core i7 (Realtek). Trawling through BSD forum posts and mailing lists, it seems to be a fairly common complaint with the bge driver and Broadcom hardware. Maybe other network variables such as jumbo frames also play a part.

Obviously if you don't have a problem that's great, but anyone who does is (IMHO) better off replacing the Broadcom NIC rather than wasting time trying to work around the flaky FreeBSD bge driver.
 

Winol

Dabbler
Joined
Aug 13, 2011
Messages
21
UDP is more bandwidth-oriented because it doesn't have any windowing in its headers... less overhead, too.
I don't know what TCP looks like right now because I'm not home. I have remote access to the NAS but can't really test anything.

What value did you set for jumbo frames?
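As a rough sketch of what jumbo frames actually buy in header overhead (ignoring TCP options such as timestamps, which add a little more per segment):

```shell
# TCP goodput efficiency per frame: payload = MTU minus 20-byte IP and
# 20-byte TCP headers; on the wire add the 14-byte Ethernet header, 4-byte
# FCS, 8-byte preamble and 12-byte inter-frame gap (38 bytes total).
awk 'BEGIN {
  for (mtu = 1500; mtu <= 9000; mtu += 7500) {
    payload = mtu - 40
    wire    = mtu + 38
    printf "MTU %d: %.1f%% efficient\n", mtu, 100 * payload / wire
  }
}'
# → MTU 1500: 94.9% efficient
# → MTU 9000: 99.1% efficient
```

In other words, at best about a 4% gain on the wire, which is why jumbo frames are often barely worth the trouble.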
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
My switch would only support 7K Jumbo Frames, but I've since disabled Jumbo Frame support throughout my LAN as the benefit was barely perceptible.
 

Winol

Dabbler
Joined
Aug 13, 2011
Messages
21
I've got a Netgear one that can handle jumbo frames:

http://www.netgear.com/business/products/switches/unmanaged-desktop-switches/gs105.aspx


[root@freenas] ~# ifconfig
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=219b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,WOL_MAGIC>
ether 00:1b:21:d2:95:72
inet 192.168.0.50 netmask 0xffffff00 broadcast 192.168.0.255
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
bge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=c019b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,VLAN_HWTSO,LINKSTATE>
ether 3c:4a:92:74:2f:d0
inet 192.168.0.150 netmask 0xffffff00 broadcast 192.168.0.255
media: Ethernet autoselect (none)
status: no carrier
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=3<RXCSUM,TXCSUM>
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3
inet6 ::1 prefixlen 128
inet 127.0.0.1 netmask 0xff000000
nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
 

kashiwagi

Dabbler
Joined
Jul 5, 2011
Messages
35
My iperf test is for TCP, not UDP (I tried UDP and got terrible figures).

I am not using jumbo frames. I have tried, but get lots of dropped packets etc. Not worth the trouble in my case.

My BIOS is a patched Russian version based on the April (?) HP BIOS. I don't think it does anything except enable full speed on the fifth SATA port (ODD).

Could the people experiencing trouble connect directly (no switches) using a known-good cat5e or cat6 cable, and see if the problem persists? It would be nice to rule out network environment factors before blaming the NIC itself. Also, has the N36L ever shipped with different NIC chipsets? I am not sure of the different revisions (or if there has only been one revision, with the N40L being the first update).

Nov 7 22:16:26 *** kernel: bge0: <HP NC107i PCIe Gigabit Server Adapter, ASIC rev. 0x5784100> mem 0xfe9f0000-0xfe9fffff irq 18 at device 0.0 on pci2
Nov 7 22:16:26 *** kernel: bge0: CHIP ID 0x05784100; ASIC REV 0x5784; CHIP REV 0x57841; PCI-E
Nov 7 22:16:26 *** kernel: brgphy0: <BCM5784 10/100/1000baseTX PHY> PHY 1 on miibus0
Nov 7 22:16:26 *** kernel: brgphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-master, 1000baseT-FDX, 1000baseT-FDX-master, auto, auto-flow
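For what it's worth, attach lines like the ones above can be pulled from any FreeBSD/FreeNAS box to compare NIC revisions, using stock tools:

```shell
# List PCI devices with vendor/device strings; the NIC appears as class "network"
pciconf -lv | grep -B 3 network

# Or grep the kernel boot messages for the Broadcom driver attach lines
dmesg | grep -E 'bge0|brgphy'
```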
 