mxtrck
Cadet
Joined: Jan 19, 2019
Messages: 5
Hello to all :)
I am a very inexperienced FreeNAS user, so please bear with me if the terminology is all over the place.
I am setting up a simple data server using a freshly bought Asus z10PaU8-10gs as the motherboard.
The same board has served us nicely for the last three years, running FreeNAS 9.3 (I think), at our other location.
I have successfully put all the hardware together and FreeNAS 11.2-U1 is already running. My only problem is with the 10GbE interfaces.
Hardware specifications:
Data server setup:
FreeNAS 11.2-U1
Mobo: Asus z10PaU8-10gs, latest BIOS 3703
Onboard NICs: dual-port BCM57840S 10GbE LAN controller & 2 x Intel I210-AT
RAM: 1 x 16 GB ECC @ 2133 (I have a second 16 GB stick to install once I am done fiddling)
A bunch of SSDs to stripe and HDDs for backup (I guess this is irrelevant to the networking issue),
plus further parts for caching etc., but they are not set up yet.
From the BIOS CCM (Comprehensive Configuration Management v7.10.31), the info for the Broadcom SFP+ NICs is:
BCM57840 - MAC address - MBA: v7.10.33, CCM: v7.10.31
Switch: Mikrotik CRS328-24P-4S+RM switch/router in bridge mode
From FreeNAS to the switch I am using Mikrotik S+DA0003 (SFP+ DAC, 3 m) cables.
Windows client: connected to one Ethernet port of the switch via a Cat6e cable (2 meters long)
Now the problem:
Running iperf -sD on the server while connected through the Intel 1GbE ports works OK. Specifically:
[ ID] Interval Transfer Bandwidth
[312] 0.0-100.0 sec 11589300 KBytes 949170 Kbits/sec
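For reference, this is roughly how I run the test (10.0.0.10 is just a placeholder, not my real server address): iperf -sD on the FreeNAS box, and then on the Windows client:
iperf -c 10.0.0.10 -t 100 -i 1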
When I assign a proper IP to one of the two 10GbE SFP+ ports and run iperf, the performance is all over the place.
(I disable the Intel interface first; since I do not know how to handle VLANs yet, I am trying not to mix different subnets into this story.)
Please see the iperf log below:
[ ID] Interval Transfer Bandwidth
[312] 80.0-81.0 sec 29050 KBytes 237978 Kbits/sec
[312] 81.0-82.0 sec 25700 KBytes 210534 Kbits/sec
[312] 82.0-83.0 sec 15600 KBytes 127795 Kbits/sec
[312] 83.0-84.0 sec 25550 KBytes 209306 Kbits/sec
[312] 84.0-85.0 sec 29200 KBytes 239206 Kbits/sec
[312] 85.0-86.0 sec 44400 KBytes 363725 Kbits/sec
[312] 86.0-87.0 sec 39750 KBytes 325632 Kbits/sec
[312] 87.0-88.0 sec 26250 KBytes 215040 Kbits/sec
[312] 88.0-89.0 sec 43900 KBytes 359629 Kbits/sec
[312] 89.0-90.0 sec 25450 KBytes 208486 Kbits/sec
[312] 90.0-91.0 sec 29750 KBytes 243712 Kbits/sec
[312] 91.0-92.0 sec 56000 KBytes 458752 Kbits/sec
[312] 92.0-93.0 sec 25450 KBytes 208486 Kbits/sec
[312] 93.0-94.0 sec 27750 KBytes 227328 Kbits/sec
[312] 94.0-95.0 sec 27550 KBytes 225690 Kbits/sec
[312] 95.0-96.0 sec 26950 KBytes 220774 Kbits/sec
[312] 96.0-97.0 sec 32350 KBytes 265011 Kbits/sec
[312] 97.0-98.0 sec 25350 KBytes 207667 Kbits/sec
[312] 98.0-99.0 sec 53000 KBytes 434176 Kbits/sec
[312] 99.0-100.0 sec 16200 KBytes 132710 Kbits/sec
[ ID] Interval Transfer Bandwidth
[312] 0.0-100.0 sec 3218350 KBytes 263612 Kbits/sec
Done.
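If it helps narrow things down, I can re-run from the Windows client with a larger TCP window or with parallel streams, for example (again with a placeholder address):
iperf -c 10.0.0.10 -t 30 -i 1 -w 256k
iperf -c 10.0.0.10 -t 30 -i 1 -P 4
I have not done that yet, so I do not know if it changes the picture.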
I understand that this is not the expected behavior.
I have read the 10GbE Primer, and I am aware that the Broadcoms are not the most reliable pieces of hardware for this. However, based on the FreeBSD hardware support list the NIC should be supported, which is maybe why both SFP+ links are up and blinking a nice green light.
Also, upon plugging the cable into the SFP+ port, FreeNAS reports: NIC Link is Up, 10000 Mbps full duplex, Flow control: ON - receive & transmit.
Should flow control be on? I do not know where I can adjust this in FreeNAS to test whether it is causing the problem.
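The only thing I have found so far is to poke at the driver from the FreeNAS shell; I am assuming the Broadcom ports show up as bxe0/bxe1 (that interface name is a guess on my part):
ifconfig bxe0      # shows media, status and options for the port
sysctl dev.bxe.0   # lists whatever the bxe driver exposes for that port
If there is a flow-control knob somewhere in there, I suppose it could be set persistently via System -> Tunables, but I have not tried that yet.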
Lastly, the switch has 4 SFP+ ports; I have tried all of them and they all show the same performance problem.
So I do not know how to tell whether this is a driver compatibility problem (if any), a hardware failure on the motherboard side, a hardware failure on the switch side (while iperf is running the switch does not record any errors or drops), or a cable problem (a cable test via the switch shows the link is OK).
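In case it is useful, this is what I was planning to look at next from the FreeNAS shell (again assuming the 10GbE port is bxe0):
netstat -i                # per-interface packet and error totals
netstat -I bxe0 -w 1      # per-second counters for that interface while iperf is running
dmesg | grep -i bxe       # any driver messages about the NIC or the link
I have not dug through these yet, so I may be looking in the wrong place.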
For now I will set everything up to be accessed via the Intel NIC, but I would love it if somebody has an idea that could help me.
Also, please feel free to point out any stupidity mentioned or practiced in this post.
Thank you so much for your patience in reading this! :))