Will this gear do 10 gigabits?

Status
Not open for further replies.

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I want to dive into 10 gigabit networking... So I hope you networking gurus will look over my plan and advise whether or not it will work.

My goal is to connect my two ESXi servers to the two SFP+ ports on a Dell 5524 switch using Intel X520-DA1 NICs. These servers are both All-in-Ones, running FreeNAS 9.10.2-U1 as VMs (see 'my systems' below). One of my criteria is to select NICs that are supported by both ESXi 6.0/6.5 and FreeNAS itself, in case I ever decide to run it on bare metal. My google-fu shows that the Intel NIC meets this requirement.

This is the gear I've picked out:

2 x Intel X520-DA1 NICs @ $64 each on eBay.

1 x Dell 5524P switch @ $195 on eBay.

2 x 3m Intel-compatible Twinax cables w/transceivers @ ~$57 shipped, from www.fs.com.

Questions:
  • Will I run into any problems with the fact that this is a PoE-capable switch?
  • Am I missing anything?
Looking forward to hearing your input.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Last I checked, X520s were good for ~7 Gb/s on FreeBSD. Dunno if the driver has improved since then.
Will I run into any problems with the fact that this is a PoE-capable switch?
Not an issue. Unless it's doing some nasty, non-802.3at stuff - which I'm fairly certain it's not.
 

chris crude

Patron
Joined
Oct 13, 2016
Messages
210
Not trying to hijack this thread, but when I first read the 10 Gig primer thread I was confused by transceivers. I understand better now how they let you bridge different styles of hardware/connectors, but I decided that since I will not be using a switch (just direct-connecting client to server) I could use 2 cards like this
Intel E10G42BTDA Server Adapter X520-DA2 10Gbps PCI Express 2.0 x8 2 x SFP+
and a direct connect cable
2.5m (8.20ft) Intel XDACBL2.5M Compatible 10G SFP+ Passive Direct Attach Copper Twinax Cable
and be ready to go. Please let me know if this seems like a good idea.
thanks.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I want to dive into 10 gigabit networking... So I hope you networking gurus will look over my plan and advise whether or not it will work.

My goal is to connect my two ESXi servers to the two SFP+ ports on a Dell 5524 switch using Intel X520-DA1 NICs. These servers are both All-in-Ones, running FreeNAS 9.10.2-U1 as VMs (see 'my systems' below). One of my criteria is to select NICs that are supported by both ESXi 6.0/6.5 and FreeNAS itself, in case I ever decide to run it on bare metal. My google-fu shows that the Intel NIC meets this requirement.

This is the gear I've picked out:

2 x Intel X520-DA1 NICs @ $64 each on eBay.

1 x Dell 5524P switch @ $195 on eBay.

2 x 3m Intel-compatible Twinax cables w/transceivers @ ~$57 shipped, from www.fs.com.

Questions:
  • Will I run into any problems with the fact that this is a PoE-capable switch?
  • Am I missing anything?
Looking forward to hearing your input.
The X520 cards will need some tuning on the FreeNAS side but work well in ESXi. Chelsio and Solarflare cards are also well supported by both FreeNAS and ESXi as alternatives: the T420 or T520 from Chelsio, or the SFN5162F from Solarflare.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Not trying to hijack this thread, but when I first read the 10 Gig primer thread I was confused by transceivers. I understand better now how they let you bridge different styles of hardware/connectors, but I decided that since I will not be using a switch (just direct-connecting client to server) I could use 2 cards like this
Intel E10G42BTDA Server Adapter X520-DA2 10Gbps PCI Express 2.0 x8 2 x SFP+
and a direct connect cable
2.5m (8.20ft) Intel XDACBL2.5M Compatible 10G SFP+ Passive Direct Attach Copper Twinax Cable
and be ready to go. Please let me know if this seems like a good idea.
thanks.
Yes, it's a good idea and should 'just work', according to my understanding of these gizmos.

The X520-DA2 has dual ports, whereas the X520-DA1 models I've selected have only one. You only need one port on each NIC for the setup you're considering, but it certainly won't do any harm to have the extra port, AFAIK.
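
For reference, a direct link like that needs nothing more than a static address on each end in its own little subnet; no switch or DHCP is involved. A minimal sketch, assuming both ends are FreeBSD/FreeNAS using the Intel ix driver (the interface name and 10.10.10.x addresses are just examples, and in FreeNAS you'd set this through the network GUI so it persists across reboots):

Code:
# Server end (example interface and address)
ifconfig ix0 inet 10.10.10.1/30 up

# Client end
ifconfig ix0 inet 10.10.10.2/30 up

# Quick sanity check from the client
ping -c 4 10.10.10.1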
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
The X520 cards will need some tuning on the FreeNAS side but work well in ESXi. Chelsio and Solarflare cards are also well supported by both FreeNAS and ESXi as alternatives: the T420 or T520 from Chelsio, or the SFN5162F from Solarflare.
I'll be using these w/ ESXi 6.0U2 for the moment, but as I mentioned above, I want to keep the option open of using the cards directly w/ FreeNAS. So it's good to have confirmation that they'll work from someone who has actually done it.

I'm curious about tuning FreeNAS for these cards... Do you mean modifying the buffer sizes and such? Or something more esoteric?
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I'll be using these w/ ESXi 6.0U2 for the moment, but as I mentioned above, I want to keep the option open of using the cards directly w/ FreeNAS. So it's good to have confirmation that they'll work from someone who has actually done it.

I'm curious about tuning FreeNAS for these cards... Do you mean modifying the buffer sizes and such? Something more esoteric?
Yes, buffer sizes, and I also found it advantageous to change the default congestion control to the H-TCP congestion control algorithm. My Chelsio card was line-rate from the start, but it was significantly more expensive when I bought it.

Edit: Here is a screen grab of the network tuning for the X520-DA2 I have in service. You can ignore the hw.mps.max_chains loader tunable, as it's for the HBA cards.
[Screenshot: NetworkTune.JPG]
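
For anyone who can't view the screen grab, the usual FreeBSD/FreeNAS knobs for this kind of tuning look roughly like the sketch below. These are the standard sysctl/loader names with example values only, not necessarily what's in my screenshot; size the buffers to your own RAM and test:

Code:
# Loader tunable (System -> Tunables, type 'loader')
cc_htcp_load="YES"                  # load the H-TCP congestion control module at boot

# Sysctl tunables (type 'sysctl') -- example values only
kern.ipc.maxsockbuf=16777216        # raise the socket buffer ceiling
net.inet.tcp.sendbuf_max=16777216   # allow larger TCP send buffers
net.inet.tcp.recvbuf_max=16777216   # allow larger TCP receive buffers
net.inet.tcp.cc.algorithm=htcp      # switch congestion control to H-TCP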
 
Last edited:

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Yes, buffer sizes, and I also found it advantageous to change the default congestion control to the H-TCP congestion control algorithm. My Chelsio card was line-rate from the start, but it was significantly more expensive when I bought it.

Edit: Here is a screen grab of the network tuning for the X520-DA2 I have in service. You can ignore the hw.mps.max_chains loader tunable, as it's for the HBA cards.
[Attachment 16126]
I'll file away your screenshot for future use. Thanks!
Do I need to worry about firmware updates on these cards? Intel doesn't provide any links to firmware that I could find.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I'll file away your screenshot for future use. Thanks!
Do I need to worry about firmware updates on these cards? Intel doesn't provide any links to firmware that I could find.
Intel doesn't change/update the firmware on the X520 cards. There are some manufacturers, HP for instance, that use the Intel ASIC on cards of their own (the HP 560SFP+, as I recall) and do publish firmware updates. So if you are buying the vanilla Intel card, you'll be good to go.

Also, you'll want to tune those buffers to your system, as you wouldn't want to steal too much RAM from the ARC for networking. If you're running 32GB+, the ones I have should be okay for the most part, but it's always best to test for your specific needs. ;)
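
If you want to sanity-check where you stand, it's easy to compare the current buffer ceiling against what the ARC is actually using; these are standard FreeBSD/FreeNAS sysctls (just a sketch, values reported in bytes):

Code:
# Current socket-buffer ceiling
sysctl kern.ipc.maxsockbuf

# Current ARC size and configured maximum
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_max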
 
Joined
Feb 2, 2016
Messages
574
We run the Intel X520-DA2 with no tuning. Out of the box: 6-7 Gbps. That's a lot better than the bonded pair of gigabit NICs it replaced, so we haven't bothered to do anything else.

Cheers,
Matt
 

chris crude

Patron
Joined
Oct 13, 2016
Messages
210
The X520-DA2 has dual ports, whereas the X520-DA1 models I've selected have only one. You only need one port on each NIC for the setup you're considering, but it certainly won't do any harm to have the extra port, AFAIK.
Yes, I am torn between over-provisioning just in case I need it in the future and saving $100 per card. While I am not a rich man and still work for a living to support a family, my lifestyle does allow wasting a few $$$ every now and then on new toys.
 
Joined
Feb 2, 2016
Messages
574
Save the cash, @chris crude. By the time you need additional capacity, you'll be able to pick up 40G cards for the same price or 10G cards for even less.

Cheers,
Matt
 

chris crude

Patron
Joined
Oct 13, 2016
Messages
210
Save the cash, @chris crude. By the time you need additional capacity, you'll be able to pick up 40G cards for the same price or 10G cards for even less.

Cheers,
Matt
Thanks for the advice.
Side note: I earned a network management degree from a community college/tech school back when 1G was new. I work in a different field and don't use those skills often, so I'm having a hard time wrapping my head around 10-40G for home use!
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
TL;DR: In answer to my original question: Yes! This gear will do 10 gigabits!

I've replaced the elderly Dell 2816 switch in my lab with a Dell PowerConnect 5524P unit purchased on eBay. I installed Intel X520-DA1 NICs (also purchased on eBay) in my primary and secondary All-in-One (AIO) servers and connected them to the 5524's two 10G SFP+ ports using 'custom made' TwinAx cables from The Fiberstore (www.fs.com). By 'custom made', I mean that I added a note to my order indicating I would be connecting X520-DA1 cards to a Dell 5524, and the good folks at the Fiber Store made up cables with Intel- and Dell-compatible transceivers on either end - at no upcharge!

Ten Gigabit Goodness Ensued! :)

Well, not quite ten... I get a maximum transfer rate of ~8.2 Gbit/s on the primary server (an X10SL7-F), but only ~3.7 Gbit/s on the secondary (an X8SIE-4LNF). The primary server's results are well within what I'd been led to expect by the experiences of others, but I'm a little surprised at the secondary server's poor showing. In its defense, it is an older machine. I've tried tweaking PCI-related BIOS settings and using an alternative ESXi NIC driver, so far to no avail: 3.7 Gbit/s seems to be the limit for this box. But I welcome suggestions from the experts here on the forum for improving its performance and proving me wrong about this.

Both AIO servers run VMware ESXi 6.0U3 with FreeNAS 9.10.2-U1 installed as a VM (see 'my systems' below for details).

I tested transfer rates by running iperf in server mode on the appropriate FreeNAS instance, and connecting to it from four client iperf instances running on a pair of desktops, a laptop, and the FreeNAS VM on the other AIO server. See the screenshots below for results.
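
For anyone who wants to reproduce the test, the commands were along these lines (iperf 2.x; the address, run time, and parallel-stream count are only examples):

Code:
# On the FreeNAS VM (server side)
iperf -s

# On each client, pointed at the FreeNAS box's 10G address (example IP)
iperf -c 192.168.1.100 -t 30 -P 4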

Overall I'm quite happy with this setup. I'm getting the desired performance where it counts, on my primary AIO server, and the secondary server, while not quite as fast as hoped, nevertheless has gained a four- or five-fold increase in transfer rates vs plain old gigabit Ethernet. And I'm especially pleased with the relatively low cost!

Other notes:
The Dell 5524P is loud! I expected it to be, because it's a PoE switch. It has a beefier 600W power supply containing two integral 40x40x28mm fans - Delta FFB0412SHN units rated at 54.5 dBA, the kind of 'screamers' we've all seen in 1U gear - plus two additional chassis fans which are temperature-controlled and never seem to come on. I've ordered a pair of Sunon GM1204PQV1-8A fans rated at 36.5 dBA to replace the screamers. This gear is all located in my shop, so I don't mind things being a little noisy, but not screaming Banshee levels of noisy!

I purchased my 5524P for $195. The same seller has since raised the price to $215... better get 'em while they're hot! But if you're willing to forego PoE support, you can still get the common, garden variety 5500-series switches for under $200.

Transfer rates, primary FreeNAS server:
[Screenshot: 10gig-boomer-network-speed.jpg]

Transfer rates, secondary FreeNAS server:
[Screenshot: 10gig-bacon-network-speed.jpg]
 

chris crude

Patron
Joined
Oct 13, 2016
Messages
210
I just ordered 2 X520-DA1 cards and a Cisco direct attach cable yesterday! The cable is verified to work with the Intel NICs; looking forward to testing this weekend.
 
Joined
Feb 2, 2016
Messages
574
I get a maximum transfer rate of ~8.2 Gbit/s on the primary server (an X10SL7-F), but only ~3.7 Gbit/s on the secondary (an X8SIE-4LNF).

Are you maxing out the bus? The Intel X3460 has a 2.5 GT/s DMI maximum bus rate while the E3-1241 is 5 GT/s. You also have 700 MHz more oomph with the 1241.

The X520-DA1 shows PCIe v2.0, 5.0 GT/s.

Cheers,
Matt
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Are you maxing out the bus? The Intel X3460 has a 2.5 GT/s DMI maximum bus rate while the E3-1241 is 5 GT/s. You also have 700 MHz more oomph with the 1241.

The X520-DA1 shows PCIe v2.0, 5.0 GT/s.

Cheers,
Matt
Yes, sir, I believe the bottleneck here is the CPU; all of the 3400-series Xeons are limited to 2.5 GT/s DMI. I installed the NIC in the X8SIE board's only PCIe 2.0 x16 slot, which shows 5.0 GT/s in the user guide's system block diagram, but I think Supermicro overstated the truth just a little... :smile:
[Screenshot: x8sie-pcie-block-diagram.jpg]
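
Some rough back-of-the-envelope math, assuming the slot's traffic ends up crossing a first-generation DMI link (x4 lanes at 2.5 GT/s with 8b/10b encoding):

Per lane: 2.5 GT/s x 8/10 (8b/10b encoding) = 2 Gbit/s usable
DMI x4: 2 Gbit/s x 4 lanes = ~8 Gbit/s (~1 GB/s) each direction, shared with everything else behind the chipset

So a 10G stream squeezed through that path can't reach line rate, and real-world numbers land well below the raw ceiling.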
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I just ordered 2 X520-DA1 cards and a Cisco direct attach cable yesterday! Cable is verified to work with the Intel NICs, looking forward to testing this weekend.
FWIW, I ordered a 'generic' cable, too, and used it to run a basic test between a desktop system and my primary FreeNAS server before the Dell switch arrived. It worked just fine... so you should be 'Good to Go!'
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Replaced the two 5524 power supply fans with Sunon GM1204PQV1-8A units... much better! The Screaming Banshees are gone! :smile:

Old fans removed... waiting for UPS delivery:
[Photo: dell-5524-power-supply.jpg]
 
Joined
Jan 18, 2017
Messages
525
but only ~3.7 Gbit/s on the secondary (an X8SIE-4LNF).

Yikes! I just installed a 10GbE CX-4 link to my switch and an X2 10GbE transceiver to connect my desktop, and I get half of that on iperf. I had been thinking about replacing the processors on my X7 board for a long time now, but maybe I should just be looking for a reasonable X10 board instead. System specs in sig.
 