MasterTacoChief
Explorer
- Joined
- Feb 20, 2017
- Messages
- 67
I'm upgrading to 100GbE using ConnectX-4 cards (MCX455A-ECAT). I configured them for Ethernet mode using a separate PC running Ubuntu, and verified the cards link up and pass traffic on that same PC.
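For reference, the Ethernet-mode change on the Ubuntu box was done roughly like this (the device path is an example; check `mst status` for the actual one on your system):

```shell
# Sketch of the port-type change on the Ubuntu PC. The /dev/mst path
# below is an example -- confirm it with `mst status` first.
sudo mst start
sudo mlxconfig -d /dev/mst/mt4115_pciconf0 query | grep LINK_TYPE
# LINK_TYPE_P1=2 selects Ethernet on ConnectX-4 (1 = InfiniBand, 2 = Ethernet)
sudo mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=2
# A full power cycle (not just a warm reboot) is needed for the new
# port type to take effect.
```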
When installing one in my server (R730, both CPUs populated, currently in a x8 slot - yes I know this limits bandwidth but it's temporary) I get the following messages at boot:
mlx5: Mellanox Core driver 3.7.1 (November 2021)
ugen0.3: <no manufacturer Gadget USB HUB> at usbus0
mlx5_core0: INFO: mlx5_port_module_event:707:(pid 12): Module 0, status: plugged and enabled
mlx5_core0: WARN: mlx5_vsc_set_space:127:(pid 0): Space 0x7 is not supported.
mlx5_core0: WARN: mlx5_fwdump_prep:102:(pid 0): VSC scan space is not supported
mlx5_core0: INFO: init_one:1660:(pid 0): cannot find SR-IOV PCIe cap
mlx5_core: INFO: (mlx5_core0): E-Switch: Total vports 1, l2 table size(65536), per vport: max uc(1024) max mc(16384)
mlx5_core0: Failed to initialize SR-IOV support, error 2
mlx5_core0: ERR: mlx5_cmd_check:712:(pid 0): ACCESS_REG(0x805) op_mod(0x1) failed, status bad parameter(0x3), syndrome (0x6c4d48)
mlx5_core0: ERR: mlx5_cmd_check:712:(pid 0): ACCESS_REG(0x805) op_mod(0x1) failed, status bad parameter(0x3), syndrome (0x6c4d48)
The card doesn't show up as an available NIC in TrueNAS, though the switch reports that the link came up and that traffic is at least being transmitted to the port.
I don't know whether I need to change some BIOS settings or tunables, or move the card to the single x16 slot in this system. To free up that slot I'll have to move some NVMe drives from a x16 expansion card to the x8 cards I just ordered.
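In the meantime, the negotiated PCIe link speed and width in the x8 slot can be checked from the TrueNAS shell (the device name below matches the pciconf output further down):

```shell
# List the PCIe capabilities for the ConnectX-4, including the
# negotiated link speed and width (look for the "PCI-Express" line,
# e.g. "speed 8.0" / "width x8").
pciconf -lc mlx5_core0
```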
Thanks for the help.
More details:
root@nas:~ # pciconf -lv | grep mlx -A 4
mlx5_core0@pci0:135:0:0: class=0x020000 rev=0x00 hdr=0x00 vendor=0x15b3 device=0x1013 subvendor=0x15b3 subdevice=0x0013
vendor = 'Mellanox Technologies'
device = 'MT27700 Family [ConnectX-4]'
class = network
subclass = ethernet