Looking to get 10 Gb networking, somewhat confused about my options. Need x4 PCIe

ncc74656

Dabbler
Joined
May 29, 2023
Messages
17
I have an MSI B660 motherboard. I'm using both of my x16 slots for my SAS controllers. I have an x4 NVMe slot available that I am looking to use for network expansion.

This motherboard does not support bifurcation.

I bought an ASUS 10 gig NIC, but I cannot get it functioning on the server. I am running CORE, and from what I've read, it will only work on SCALE. I don't know if I can migrate without losing data.

My switch is a Zyxel (spelling?) that has two SFP ports on it. My thought was that if I could find an enterprise card, I could just use an SFP interconnect cable between them. Then I could figure out my Windows-based machine, maybe with this ASUS card.

I read that there is supposedly a list of supported hardware somewhere on here, but I haven't been able to find it. I did find some other information on 10 gig networking, but it was a bit confusing.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
The TL;DR is you want an Intel or Chelsio based card for CORE, or FreeBSD in general. You aren't going to find much in the way of x4 cards that are server grade. You can modify an x4 slot to accept an x8 card, though: https://www.youtube.com/watch?v=93yEag-aryk
I think you may be suffering from the "shoot first, ask questions later" approach to this, so that's really my best advice.
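
To put rough numbers on the x8-card-in-an-x4-slot idea (back-of-the-envelope, assuming a PCIe 3.0 slot; the ~0.985 GB/s per-lane figure accounts for 128b/130b encoding):

```python
# Back-of-the-envelope: can a PCIe 3.0 x4 link feed a 10GbE card?
GBPS_PER_PCIE3_LANE = 0.985      # ~GB/s of usable bandwidth per PCIe 3.0 lane
LANES = 4                        # electrical width of the slot
TEN_GBE_GBS = 10 / 8             # 10 Gb/s line rate = 1.25 GB/s

slot_bw = GBPS_PER_PCIE3_LANE * LANES
print(f"x{LANES} PCIe 3.0 slot : ~{slot_bw:.1f} GB/s")
print(f"one 10GbE port       : ~{TEN_GBE_GBS:.2f} GB/s")
print(f"two 10GbE ports      : ~{2 * TEN_GBE_GBS:.2f} GB/s")
# ~3.9 GB/s of slot bandwidth vs ~2.5 GB/s for a dual-port card: even an
# x8 card dropped to x4 lanes has headroom for 10G. Older PCIe 2.0 cards
# get roughly half that per lane (~2 GB/s at x4), still fine for one port.
```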

Some Options from reputable (IMO) eBay sellers:
Intel X520 from UNIX Surplus OLDER
Intel X710 from UNIX Surplus NEWER
Chelsio N320-E from UNIX Surplus OLDER
Chelsio T520-LL-CR from UNIX Surplus NEWER


10G Networking Primer:

General HW recommendations:

Info on FAKE cards:

Why SFP+ Fiber and DACs are generally better than 10G Base-T copper:
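
Once a card is in the box, a quick sanity check is to see whether CORE even detects it and which driver (if any) attached. A rough sketch, assuming shell access on the CORE box (`pciconf -lv` is FreeBSD's stock PCI device lister; the keyword filter is just a convenience):

```python
# Rough sketch: list PCI network devices on a TrueNAS CORE / FreeBSD box.
# Entries starting with "none0@", "none1@", ... were detected by the bus
# scan but have no driver attached; ones like "ix0@" (Intel) or "cxl0@"
# (Chelsio) were claimed by a working driver.
import subprocess

out = subprocess.run(["pciconf", "-lv"], capture_output=True, text=True).stdout

blocks, current = [], []
for line in out.splitlines():
    if line and not line[0].isspace() and current:   # a new device entry starts
        blocks.append(current)
        current = []
    current.append(line)
if current:
    blocks.append(current)

for block in blocks:
    text = "\n".join(block)
    if "network" in text.lower() or "ethernet" in text.lower():
        print(text + "\n")
```

If the card only ever shows up as a noneX@ entry, no FreeBSD driver claimed it, which is roughly what "works on SCALE but not CORE" looks like from the OS side.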
 

ncc74656

Dabbler
Joined
May 29, 2023
Messages
17
Yeah, I'm diving into this head first. I think I found some potential options for parts. I think it makes the most sense to use SFP between the server and switch, then I can run copper from my switch to the desktop.

My ASUS card is functional on my Windows 11 machine, so I might stick with that. I've heard bad things about it, but so far it is stable. We'll see how long that lasts...

I found an Intel X550-T2 for $150. It is an x4 card, so I might pick that up. Then I should be able to just pick up a standard SFP interconnect cable and be good to go?
 

ncc74656

Dabbler
Joined
May 29, 2023
Messages
17
I put the ASUS card into the Windows machine and so far it's working okay.
Do you know if the LSI 9223 controller card will work in an x4 slot?

I'm reaching the point where maybe I need to buy a new motherboard, but I'm still just trying to find an x4 SFP card. I bought an adapter to go from my NVMe slot to an x4 PCIe slot.

If I buy a new motherboard I guess I might go DDR5 because that's the only chipset that seems to have five expansion slots.

It's very frustrating trying to find 10 gig networking.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
If I buy a new motherboard I guess I might go DDR5 because that's the only chipset that seems to have five expansion slots.

Slots aren't based on chipsets.

Why do you have your heart set on so few slots?

X10SRL-F - 7 PCIe
X10SRI-F - 6 PCIe
X11SRL-F - 6 PCIe
X10DRI-F - 6 PCIe
H12SSL-F - 7 PCIe
X9DRH-7F - 7 PCIe
etc

It's very frustrating trying to find 10 gig networking.

It's really not. You just have to stop trying to shoehorn it onto a consumer or gaming mainboard. There are dozens of different 10G cards, and almost all of them are PCIe x8 (or wider). They fit just fine onto hundreds of different server boards that are readily available. The difference is that the server boards are designed to accept the server-oriented cards. Your typical consumer board uses a chintzy chipset designed for a chintzy processor with a crappy small number of PCIe lanes, because that audio card or extra USB port card or extra SATA port card doesn't require x8.
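
To make that concrete with rough numbers (lane counts are approximate and vary by exact CPU and chipset, but they're in the right ballpark for a desktop build versus a single-socket server board):

```python
# Rough lane budgeting for the build described in this thread.
# Counts are illustrative, not exact spec-sheet values.
cards = {"SAS HBA #1": 8, "SAS HBA #2": 8, "10G NIC": 8}

platforms = {
    "typical desktop CPU (x16 slots + one x4 M.2)": 16 + 4,
    "older single-socket Xeon (E5 class)": 40,
}

need = sum(cards.values())
print(f"cards want {need} lanes total")
for name, lanes in platforms.items():
    verdict = "fits" if lanes >= need else "does NOT fit"
    print(f"  {name}: {lanes} lanes -> {verdict}")
# 24 lanes of cards against ~20 usable desktop lanes is exactly the squeeze
# described above; a server board with 40+ CPU lanes doesn't blink at it.
```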
 

ncc74656

Dabbler
Joined
May 29, 2023
Messages
17
Ain't that the truth. I guess I just assumed that, with how much bandwidth we're getting out of these processors these days, they would have more connectivity. I remember back in the day motherboards had seven or eight PCI Express slots. Nowadays it's hard to find any more than two. And on the boards that do have five slots, three of them run at x1 speeds.

I think I managed to find what I need, though. I found this card: https://www.ebay.com/itm/153991705362? I bought this cable: https://a.co/d/j2zbmSO and that card is confirmed to work on CORE. That's been another hard thing: finding a card that works on CORE instead of SCALE. Maybe I should switch over at some point.

I'm using the ASUS card in the desktop, and I bought this adapter for the switch: https://a.co/d/1prM29c
 

ncc74656

Dabbler
Joined
May 29, 2023
Messages
17
From what people have told me, I should be able to back up my configuration and reinstall as SCALE without losing any data. I might do that at some point, but for now I want to get the network up and stable, and then I'll reassess.

When copying a file directly on the server, I get about 960 MB/s transfer speed. So in theory, I should nearly be able to max out my 10 gig network from my desktop to the server.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
In any meaningful way?
When I have some time to do an A-B comparison for you, I will re-do my testing to sanity check. But I had better single-stream iPerf3 performance on SCALE than CORE last I checked. Multiple streams didn't see as much of a difference. There may be other variables I need to verify also.

FWIW, this was with the card linked here, a ConnectX-4 25 Gig card (at 10 Gig), and a ConnectX-4 100 Gig card (at 40 Gig).
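
For anyone who wants to repeat that kind of A/B run, here's a rough sketch of the comparison. It assumes iperf3 is installed on both ends, a server is already running (`iperf3 -s`) on the box being tested, and the hostname below is a placeholder:

```python
# Rough sketch: compare single-stream vs multi-stream iperf3 throughput.
# Assumes "iperf3 -s" is already running on the target and that the local
# iperf3 supports JSON output (-J). TARGET is a placeholder hostname.
import json
import subprocess

TARGET = "truenas.local"   # placeholder -- replace with the server's address

def run_iperf(streams: int, seconds: int = 10) -> float:
    """Return receive-side throughput in Gbit/s for the given stream count."""
    result = subprocess.run(
        ["iperf3", "-c", TARGET, "-P", str(streams), "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e9

for streams in (1, 4, 8):
    print(f"{streams} stream(s): {run_iperf(streams):.2f} Gbit/s")
```

Run it once against CORE and once against SCALE on the same hardware and you have the A/B.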
 

ncc74656

Dabbler
Joined
May 29, 2023
Messages
17
I think I saw that CORE and SCALE traded blows depending on how they were being used. So that might not be a network driver thing as much as it is a kernel thing.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
When I have some time to do an A-B comparison for you, I will re-do my testing to sanity check. But I had better single-stream iPerf3 performance on SCALE than CORE last I checked.

Guy just said he got 960 MB/sec. Doesn't get any better than that; that's running the red line.
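
Rough math on why that's the red line (approximate; the exact overhead depends on MTU and protocol):

```python
# Rough check: ~960 MB/s of local copy speed vs what 10GbE can carry.
RAW_GBS = 10 / 8            # 10 Gb/s line rate = 1.25 GB/s
OVERHEAD = 0.07             # ~5-10% lost to Ethernet/IP/TCP framing (rough)
LOCAL_COPY_GBS = 0.96       # the ~960 MB/s figure quoted above

usable = RAW_GBS * (1 - OVERHEAD)
print(f"10GbE raw     : {RAW_GBS:.2f} GB/s")
print(f"10GbE usable  : ~{usable:.2f} GB/s after framing overhead")
print(f"local copy    : {LOCAL_COPY_GBS:.2f} GB/s")
print(f"link utilization if the pool keeps up: ~{LOCAL_COPY_GBS / usable:.0%}")
# The pool is already fast enough that the wire, not the disks, becomes
# the limit -- assuming SMB/NFS overhead doesn't eat into it first.
```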
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
When copying a file directly on the server, I get about 960 MB/s transfer speed. So in theory, I should nearly be able to max out my 10 gig network from my desktop to the server.
Between the ASUS card on your desktop and the ConnectX-3 on CORE?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You've been pointed to a resource with specific recommendations for hardware that's known to work well under both CORE and SCALE. You've even been given links to specific cards. None of that hardware is Mellanox. What possessed you to ignore those recommendations?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
None of that hardware is Mellanox. What possessed you to ignore those recommendations?

I think I actually do say somewhere that Mellanox is the Realtek of the 10G world. It's expected to be possible to make it work, but all the multiple personality stuff and some other finicky aspects make it not something that I'd recommend for newbies.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Mellanox of today isn’t the same as Mellanox of the ConnectX-2/3 era. They have really come a long way and are probably the de facto NIC for 100 Gb+. NVIDIA bought them for a good reason.

While that doesn’t necessarily mean they are high quality (the Broadcom NICs on Dell computers and servers are kinda crap), I don’t think Mellanox is in that same camp.

The dual IB/Eth mode kinda comes with the territory in the converged Ethernet adapter universe, so I don’t think it’s fair to ding them on that and compare them to Realtek. You are right that it’s confusing, though.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The dual IB/Eth mode kinda comes with the territory in the converged Ethernet adapter universe, so I don’t think it’s fair to ding them on that and compare them to Realtek. You are right that it’s confusing, though.

I can't in good conscience recommend something that's been made deliberately difficult to work with. There are enough silicon manufacturers, and better silicon manufacturers at that, that I can comfortably recommend people avoid the crummy ones. It isn't like HBAs, where your options are LSI or Avago.

The other problem is that the Mellanox stuff, especially the older CX-2 and CX-3 stuff, does not implement as much offload in silicon as other cards such as the Intels or Chelsios. Getting driver support to line up with what the silicon theoretically supports has been an ongoing problem. Personally I find it a lot easier just to use the stuff that's known to work. I don't have a huge need to beat my head against the side of the rack.
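
If anyone wants to see which offloads a given card actually ended up with under CORE, the capability flags are visible from ifconfig. A rough sketch (the interface name is a placeholder; Intel 10G typically shows up as ix*, Chelsio as cxl*):

```python
# Rough sketch: print the offload/capability flags FreeBSD enabled on a NIC.
# On FreeBSD, "ifconfig <iface>" lists them in the "options=..." line,
# e.g. RXCSUM, TXCSUM, TSO4, LRO. IFACE below is a placeholder.
import re
import subprocess

IFACE = "ix0"   # placeholder -- use your actual interface name

out = subprocess.run(["ifconfig", IFACE], capture_output=True, text=True).stdout
match = re.search(r"options=\w+<([^>]*)>", out)
if match:
    for flag in match.group(1).split(","):
        print(flag)
else:
    print(f"no options line found for {IFACE}")
```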
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I don't have a huge need to beat my head against the side of the rack.
Some people might find that enjoyable.

Anyway, if the Mellanox NIC is working for him, that's great; hopefully it will continue to do so. It's good to see people willing to be guinea pigs and help increase the knowledge of the forum.
 

ncc74656

Dabbler
Joined
May 29, 2023
Messages
17
My problem is the motherboard limitations I have. Many of the cards that were linked are too large to fit into the x4 PCIe slot I have available, and those that do fit are much more expensive. While searching around, I found a user who has a video of that Mellanox card working on CORE. That again has been an issue; I wish I'd started with SCALE, but at the time I didn't know that SCALE was more compatible with hardware. I may switch over, but one thing at a time.

Plus the card was cheap, so... good way to test things.

As for the copy speed, that was just copying a folder on the server itself, not through the network.
 