StandardAtomic (Cadet)
Joined: Nov 20, 2011 · Messages: 8
System specs:
HP DX2450
CPU: AMD Athlon(tm) Dual Core Processor 4450B (2310.52-MHz K8-class CPU)
real memory = 4294967296 (4096 MB)
avail memory = 3974856704 (3790 MB)
I always start the iperf server like this:
iperf -fM -B 192.168.1.x -s
where the x in the address is replaced with the address of the interface I was binding to for receiving.
I always ran the client like this, with -B being the interface I transmitted out of and -c the destination:
iperf -fM -t 60 -B 192.168.1.x -c 192.168.1.x
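To cycle through all three interfaces without retyping, the client invocations can be scripted. This is just a sketch under my assumptions: the .10/.11/.12 addresses are the bge/nfe/msk bindings used in the results below, a server is already listening on each, I've dropped the client-side -B for brevity, and the script defaults to a dry run that prints the commands instead of executing them.

```shell
#!/bin/sh
# Dry-run sketch of the per-interface iperf client tests.
# RUN defaults to "echo" so the commands are printed, not executed;
# set RUN="" to really run them (a server must be listening on each address).
RUN=${RUN:-echo}

for addr in 192.168.1.10 192.168.1.11 192.168.1.12; do
    $RUN iperf -fM -t 60 -c "$addr"
done
```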
The iMac is connected to a Cisco E3000 router over GigE. The FreeNAS box has its three network interfaces also connected to the E3000 via GigE.
Below is the feature summary of each NIC port, followed by the test results: first receiving, then transmitting.
Code:
bge0: <Compaq NC7770 Gigabit Server Adapter>
bge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=8009b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LINKSTATE>
	media: Ethernet autoselect (1000baseT <full-duplex,flowcontrol,master,rxpause,txpause>)

RX------
[ 3] local 192.168.1.100 port 51458 connected with 192.168.1.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-60.0 sec  6253 MBytes   104 MBytes/sec

TX------
[ 3] local 192.168.1.10 port 5001 connected with 192.168.1.100 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-60.0 sec  6652 MBytes   111 MBytes/sec
Code:
nfe0: <NVIDIA nForce MCP61 Networking Adapter>
nfe0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=82008<VLAN_MTU,WOL_MAGIC,LINKSTATE>
	media: Ethernet autoselect (1000baseT <full-duplex,flowcontrol,rxpause,txpause>)

RX------
[ 3] local 192.168.1.100 port 51503 connected with 192.168.1.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-60.0 sec  4393 MBytes  73.2 MBytes/sec

TX------
[ 3] local 192.168.1.11 port 5001 connected with 192.168.1.100 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-60.0 sec  6586 MBytes   110 MBytes/sec
Code:
mskc0: <SK-9Exx Gigabit Ethernet>
msk0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=c011a<TXCSUM,VLAN_MTU,VLAN_HWTAGGING,TSO4,VLAN_HWTSO,LINKSTATE>
	media: Ethernet autoselect (1000baseT <full-duplex,flowcontrol,rxpause,txpause>)

RX------
[ 3] local 192.168.1.100 port 51551 connected with 192.168.1.12 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-60.0 sec  6684 MBytes   111 MBytes/sec

TX------
[ 3] local 192.168.1.12 port 5001 connected with 192.168.1.100 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-60.0 sec  6641 MBytes   111 MBytes/sec
So I have a bit of an anomaly on the on-board Ethernet when receiving data. I then re-ran the receive tests on the FreeNAS box and looked at the output of top:
bge interface:
Code:
CPU:  0.8% user,  0.0% nice, 19.9% system, 30.0% interrupt, 49.3% idle
Mem: 129M Active, 43M Inact, 3019M Wired, 140M Buf, 627M Free
Swap: 12G Total, 12G Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
 9706 root        4  76    0 13972K  2620K CPU1    1   0:17 39.36% iperf
nfe interface:
Code:
CPU:  2.9% user,  0.0% nice, 59.7% system,  4.1% interrupt, 33.4% idle
Mem: 129M Active, 42M Inact, 3019M Wired, 140M Buf, 627M Free
Swap: 12G Total, 12G Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
 9708 root        4 108    0 13972K  2620K RUN     0   0:26 57.86% iperf
msk interface:
Code:
CPU:  1.4% user,  0.0% nice, 22.4% system, 29.5% interrupt, 46.7% idle
Mem: 129M Active, 42M Inact, 3019M Wired, 140M Buf, 627M Free
Swap: 12G Total, 12G Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
 9709 root        4  76    0 13972K  2620K RUN     1   0:24 46.29% iperf
I did two rounds of this testing while monitoring top, and the CPU stats were about the same each time. But I noticed the performance on the nfe interface went up to 111 MB/sec. Here is the history of tests on that interface:
Code:
[root@fatman] ~# iperf -fM -B 192.168.1.11 -s
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 192.168.1.11
TCP window size: 0.06 MByte (default)
------------------------------------------------------------
[ 4] local 192.168.1.11 port 5001 connected with 192.168.1.100 port 52491
[ ID] Interval       Transfer     Bandwidth
[ 4]  0.0-60.0 sec  4403 MBytes  73.4 MBytes/sec
[ 5] local 192.168.1.11 port 5001 connected with 192.168.1.100 port 52642
[ 5]  0.0- 0.4 sec  29.0 MBytes  73.4 MBytes/sec
[ 4] local 192.168.1.11 port 5001 connected with 192.168.1.100 port 52840
[ 4]  0.0-60.0 sec  6658 MBytes   111 MBytes/sec
[ 5] local 192.168.1.11 port 5001 connected with 192.168.1.100 port 53005
[ 5]  0.0-60.0 sec  6631 MBytes   111 MBytes/sec
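For scale: iperf's -fM output is in binary megabytes, so 111 MBytes/sec works out to roughly 93% of GigE's 1 Gbit/s, which is essentially line rate once TCP/IP framing overhead is counted. A quick arithmetic check:

```shell
#!/bin/sh
# Convert iperf's 111 MBytes/sec (-fM reports binary megabytes) to bits/sec.
mb_per_sec=111
bits_per_sec=$(( mb_per_sec * 1024 * 1024 * 8 ))
echo "$bits_per_sec bits/sec"   # about 93% of GigE's 1,000,000,000 bits/sec
```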
I'm confused by the change in performance. I've tried to replicate the lower number but haven't found the smoking gun. Either way, I think I'll rule out the on-board nfe interface because of the CPU stats: the CPU is only 33% idle when the interface is loaded, and nearly all of the busy time is system time rather than interrupt time, so that driver appears to be polling the hardware for packets instead of doing interrupt-driven processing. It's too bad I don't have the CPU stats from when it was performing at the 73 MB/sec level.
The differences between the SysKonnect (Marvell) adapter and the HP (Broadcom) are minimal. The Marvell card is tiny and PCI-E based, compared to the 64-bit-wide PCI-X Broadcom, so I think I'll go with the Marvell-based card.
Now to do some extended testing with the network loaded up along with the storage, to make sure there are no load-related issues with this config. I recall at one point today, while messing around before I took a more orderly look at testing my setup, I lost the network and got watchdog timeouts on the Marvell card. I forgot to save the messages for root-causing and had rebooted the host. Darn FreeNAS logging only to the RAM disk.
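One way to keep those messages next time, sketched with a hypothetical collector address: standard syslogd can forward log lines to a remote host, so they survive FreeNAS's RAM-disk /var. The 192.168.1.50 address below is an assumption; any box running a syslog daemon will do.

```
# /etc/syslog.conf fragment: forward all messages to a remote collector
# so they survive a reboot. 192.168.1.50 is a hypothetical log host.
*.*     @192.168.1.50
```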