10 Gig Networking Primer

horizonbrave

Explorer
Joined
Nov 15, 2016
Messages
56
Hi,
this card is very cheap! (Chelsio 10GbE dual-SFP+ N320E T320)
Would it be asking for drama to use it inside a Dell T320 connected to a 10GbE switch?
What would the downsides be?
Thanks!

EDIT: apparently the SFN6122F suggested in the 10 Gig primer is cheap as well; I guess it makes a better choice ;)
 

nikalai2

Dabbler
Joined
Jan 6, 2016
Messages
40
I can confirm that the N320E runs very hot (probably because of its very high power consumption). From what I've read, the SFN6122F doesn't have this issue. :)


I am also thinking of buying a Solarflare SFN6122F to use with pfSense running under ESXi 6.7. Is that a good choice?

Regards!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I can confirm that the N320E runs very hot (probably because of its very high power consumption). From what I've read, the SFN6122F doesn't have this issue. :)


I am also thinking of buying a Solarflare SFN6122F to use with pfSense running under ESXi 6.7. Is that a good choice?

Regards!

I've slowly been swapping out Intel X520's for SFN6122's in our hypervisors over the last half year. I am *very* pleased with them, but I haven't been daring enough to install any in distant data centers (where smarthands cost is an issue and I prefer the proven track record of the Intels).
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I've slowly been swapping out Intel X520's for SFN6122's in our hypervisors over the last half year. I am *very* pleased with them, but I haven't been daring enough to install any in distant data centers (where smarthands cost is an issue and I prefer the proven track record of the Intels).
Have you taken any measurements on the power consumption? I wonder if it's worth swapping mine out.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Have you taken any measurements on the power consumption? I wonder if it's worth swapping mine out.
The Solarflare cards are rated at 5.9 W typical power consumption, while the Intel cards are listed at 14.4 W maximum and 4.8 W typical. Anecdotally, I will say the Solarflare cards do seem to run a bit cooler, but the Intel cards don't run particularly hot either, at least when you compare them to the early versions of the Chelsio/QLogic cards.
 

RichR

Explorer
Joined
Oct 20, 2011
Messages
77
I've slowly been swapping out Intel X520's for SFN6122's in our hypervisors over the last half year. I am *very* pleased with them, but I haven't been daring enough to install any in distant data centers (where smarthands cost is an issue and I prefer the proven track record of the Intels).
Re: smarthands, ain't that the truth? In Australia, plugging in a USB external drive costs over $200 (USD)!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Re: smarthands, ain't that the truth? In Australia, plugging in a USB external drive costs over $200 (USD)!

I remember, years ago, having a client pay $175/hr to a "major player" for some acute issue where we needed a power cycle (not a reset) of a server. This was in the mid-2000s, before IPMI was really popular and when the 24-in-4U layout was still brand new; the power and reset switches on the AIC RMC4E2-XP were on the back panel.

[Photos of the RMC4E2-XP back panel: 246.jpg, 248.jpg]

We knew that a failed disk could hard-wedge the 3Ware RAID controller. Smarthands had no problem pulling the correct disk. But sometimes it's the simple things that get you...

If you look at the bottom right of the back panel, you'll notice a red and a black button, plus symbols for power and reset. We spent over an hour trying multiple "power cycles" because that was the only way to clear the 3Ware RAID card. It turned out the dum-eff had been pressing the red button when he'd been told to "power cycle" the unit, and the other dum-eff on the other end of the phone line (i.e. me) hadn't picked up on this. We kept going through this long, roughly ten-minute boot cycle, and the 3Ware card would just hang (because the controller had crashed).

Needless to say, I was not impressed that "smarthands" couldn't identify the symbol for power.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
"950W 3 + 1 hot-swap redundant power supply" :eek:

My NAS is feeling totally inadequate now. Only one 650W power supply (and that runs at <25% of capacity at peak, 13% at idle).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Have you taken any measurements on the power consumption? I wonder if it's worth swapping mine out.

Random bench server X is 174 watts with no 10G.

Same server is 191 watts with two X520's (that's two cards of two ports each) and SX optics.

Same server is 185 watts with two SFN6122's and SX optics.

To me that looks like each SFN6122 is taking about 6 watts and each X520 is taking about 9 watts.

Note that this is relatively unscientific. This is measuring the load at the plug, which is always jittery, and I only measured with two cards (even though we have a bunch of each of these).
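For the curious, the back-of-envelope math behind those per-card figures (assuming the whole delta at the plug is attributable to the NICs) is just:

# rough per-card draw from the at-the-plug readings above
echo "scale=1; (191 - 174) / 2" | bc   # ~8.5 W per X520 ("about 9 watts")
echo "scale=1; (185 - 174) / 2" | bc   # ~5.5 W per SFN6122 ("about 6 watts")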
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
"950W 3 + 1 hot-swap redundant power supply" :eek:

My NAS is feeling totally inadequate now. Only one 650W power supply (and that runs at <25% of capacity at peak, 13% at idle).

Oh those effin' Zippy-Emacs things were the bane of my existence throughout the 20[04-12] era. That and the crap-grade backplanes AIC used.

The PSU modules were something like 325W each. The problem was that in a facility with redundant A+B power, the loss of a PSU on the A rail meant that if the B rail then failed, the other A PSU would be instantly overloaded and would smoke itself. Then, when B came back, the ~600W available was really not sufficient to restart the system, and things would brown out. And the modules weren't particularly high quality, so they blew with somewhat alarming regularity.

If anyone ever wondered why I go on such a tear when people under-size PSU's, ... ah, experience.

But that was what it was like in the early days of SATA. We were one of the earliest shops doing large scale SATA storage servers for FreeBSD.
 

Exhorder

Explorer
Joined
Jul 12, 2019
Messages
66
The cards don't seem to be in production any more though, and I can't find them new anywhere near where I live in Europe.
Same problem here. It seems that in Europe only the Intel X520 cards are available at all, starting at ~€250 for new bulk cards from trustworthy online shops. Everything else is simply unavailable.
 

Josif

Dabbler
Joined
Aug 15, 2015
Messages
12
Hello guys,
I am struggling to find the right solution for my case and I need your support.
I've 2x FreeNAS boxes + 2x ESXi 6.7 hypervisors using 1Gbps links over the switch.
I am planning to upgrade the network and move to 10Gbps, but what I am wondering is what model of card to buy for my ESXi 6.7.

I've already purchased a Chelsio T520-CR for my FreeNAS, but I have no idea what card to buy for my ESXi 6.7.
Please note that the ESXi is version 6.7!

I am also planning to buy a MikroTik CRS305-1G-4S+IN for a switch.

I've 3 specific questions:
1. What card should I buy for my ESXi 6.7?
2. With this configuration, is it possible to bypass the switch and connect the network cards directly to each other?
3. What cables should I buy?

I am planning all this because I would like to use FT + vMotion on ESXi.

Thanks in advance for your time!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
For VMware, the Intel X520's are still the rock-solid card of choice. They work out of the box. If you go with directly attached network cards, you won't be able to do FT/vMotion on those interfaces. The first post in this thread describes cables to buy; short version: buy fiber plus optics for the lowest possible drama.
 

Josif

Dabbler
Joined
Aug 15, 2015
Messages
12
For VMware, the Intel X520's are still the rock-solid card of choice. They work out of the box. If you go with directly attached network cards, you won't be able to do FT/vMotion on those interfaces. The first post in this thread describes cables to buy; short version: buy fiber plus optics for the lowest possible drama.
Thank you!

What about having the X520-DA2 and a switch? Can I use this card not for direct attachment but with an SFP+ switch, like the one I've mentioned above?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
The "DA2" simply means optics weren't included. The DA2 cards work fine with Intel branded optics. Read the first post for more information about vendor locked optics.
 

ThreeDee

Guru
Joined
Jun 13, 2013
Messages
698
So I have acquired some items and I'm wondering if 10Gb would be possible between my computer (Windows 10) and my FreeNAS box:

Dell Force10 S60 switch (44x Gigabit, 4x mini-GBIC) with 2x 10Gb SFP+ module (S60-10GE-2S) and PSU
731850-001 HPE 10Gb Ethernet 1-port 544+FLR SFP+ adapter card
HP NC523SFP 10Gb 2-port server adapter

Total network noob here, especially with all things 10Gb.

I read somewhere that the HP NC523SFP will work in FreeNAS. I'd throw the HPE 10Gb card in my Windows box and use the two 10Gb uplinks on the switch...?
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
I use a twinax DAC cable between my NAS and the MikroTik switch. That works as long as neither device it's plugged into features a "vendor lock". DACs usually cost $30 or less, and this solution is good for short distances.

If you encounter vendor-lock issues or need to bridge longer distances, your best bet is likely to invest in matching HP and Dell 10GbE optical transceivers meant for SFP+ ports. For short distances, the SR (850nm) type (see previous links), along with an OM3 fiber optic cable terminated with LC connectors, will do this very cost-effectively. All in, you're under $50 for distances up to ~300m at 10Gb/s. Another benefit is potential electrical isolation from the rest of your network.
 

ThreeDee

Guru
Joined
Jun 13, 2013
Messages
698
This is one of the articles I ran across:
https://forums.lawrencesystems.com/t/thinking-about-a-10gb-network-this-might-help/738

FreeNAS Drivers:
This particular HP card is in fact a rebrand of the QLogic cLOM8214 chipset. Luckily FreeBSD (the base of FreeNAS) supports this chipset through the qlxgb driver. The only modifications I made were to add
Variable: if_qlxgb_load
Value: YES
under System->Tunables, AND to manually set the MTU of both interfaces (since I had dual-port NICs) to 9000. This can either be done by adding the option mtu 9000 to the network interface options through the GUI, or via the command line with ifconfig <interface> mtu 9000. If you don't perform this step, you'll notice a barrage of hw_packet_error messages in the FreeNAS shell.
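If it helps, here's roughly what those two steps look like from the shell. This is only a sketch: it assumes the qlxgb ports show up as ql0 and ql1 (check ifconfig for the actual names on your box), and on FreeNAS the loader variable is normally set through the System->Tunables GUI rather than by editing loader.conf by hand.

# Load the QLogic cLOM8214 driver at boot (same effect as the GUI tunable above)
echo 'if_qlxgb_load="YES"' >> /boot/loader.conf

# Set both 10G ports to jumbo frames to avoid the hw_packet_error flood
# (add "mtu 9000" to the interface options in the GUI to make it persistent)
ifconfig ql0 mtu 9000
ifconfig ql1 mtu 9000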
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681

Yeah, that's nice and all: some random guy did some random thing on the Internet and others followed. Sorry, I'm tired of dealing with that. As the URL notes, "this might help." But that also means "this might not." And I'm leaning towards "not," as the QLogic-based card isn't a common choice.

One of the things about hardware is that it's nice to believe that any random P.O.S. card will work, will work great, and you'll love it and never have any problems with it. Many years of professional experience with PC hardware leads me to believe that such beliefs bear no resemblance to the truth, and therefore I am a huge fan of using the known-tested-proven-good hardware that is suggested here in the forums. You *can* pick some random technology to attach your hard disks, but you'll be your own guinea pig in most cases. We know that the LSI HBA's with a particular version of firmware are problem-free and have billions of aggregate run-hours to back that up. We know that certain Intel and Chelsio network cards are problem-free. Etc.

Basically, if you want to do 10G networking with FreeNAS and you want a competent starting point, forget everything else you've heard and go to the first post in this thread. Most of the best information about 10G, especially with respect to FreeNAS, lives here in this thread. You might be able to get it to work on other random 10G cards; whether you do will depend on who wrote the driver, how well it was written, whether or not the author had access to chipset docs, whether the chipset had any bugs, and how well various arcane features such as VLANs, hardware offload, jumbo MTU, etc., work. There are a number of us who do this professionally, and there is fantastic knowledge collected in this thread.

Anyways ...

The upside to a network card is that if it ends up causing problems, you can pull it and give it a tap with the magic fix-it wand. Then you can go out and buy an Intel X520 or a Chelsio. So the Qlogic isn't a catastrophe. If it turns problematic, remediate it.

That's very different than drive-to-host attachment, where a poor choice of attachment can land you in a world of data loss pain, and there isn't any turning back from that.
 