Intel x540-T2 versus Intel x550-T2

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
Is anyone aware whether TrueNAS Core supports the Intel X550-T2? Both are 10GbE, but one is PCIe 3.0 x4 versus PCIe 2.1 x8. Is there any advantage of the 550 over the 540?
 

SeedyRom

Dabbler
Joined
Dec 22, 2020
Messages
16
The only difference I know of is the x540 is rated for PCIe v2.1 (5.0 GT/s) and the x550 is rated for PCIe v3.0 (8.0 GT/s). If anything, I would recommend the x540 as it has been used more and likely will be better supported.

Is there a reason you need to go RJ45 though? Why not use SFP+ and get something like the X710-DA2? When I priced things out, it was way cheaper going with SFP+ ports using twinax cables. Plus, the SFP+ ports are modular so you can just insert optics in the future if needed.

I also would like to recommend looking at Supermicro NICs instead of buying Intel branded. I run AOC-STGN-i2s cards which are X710-DA2 equivalent. They were plug and play with FreeNAS as they are with most operating systems. For RJ45, you would want the AOC-CTG-i2t which is the X540 equivalent.
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
OK, I will double-check the motherboard's PCIe configuration. But I tend to agree with you about the 540 over the 550. One quick note: the reason I am thinking of 10GbE versus just dual 1GbE is that the TrueNAS box will be hosting iSCSI volumes for my ESXi hosts (3 of them), which are currently only running 1GbE. I do not want to saturate the network bandwidth on the NAS.
 

SeedyRom

Dabbler
Joined
Dec 22, 2020
Messages
16
I hear you on that... Regardless of whether you go with the X540, X550, or X710, they are all 10GbE cards. The X540/X550 use RJ45, so those would use Cat6a cables. The X710 uses SFP+, so its ports either take DAC (Twinax) cables or a modular transceiver that lets you make the port any type of connection you need. The reason I recommend SFP+ is that a 10Gb SFP+ switch is cheaper than a 10Gb RJ45 switch in most cases.

Even if you are just direct-attaching the two devices without a switch, DAC cables cost about $11 each, SFP+ 10-gig cards are usually cheaper than 10-gig RJ45 cards, and you aren't tied to one specific connector type. If you do plan on upgrading to a 10Gb switch in the future, an SFP+-capable switch will also cost less, and with SFP+ transceivers you can use any type of connector you want. Not so much with RJ45.
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
I will have to check eBay regarding the price of the X710. I believe I can use the 10Gb ports on my Dell 6248 without getting a new 10Gb switch yet ;) However, I want to double-check one thing here. Given that all my servers (4 currently active) and Win10 clients (5) are still 1Gb, I should still be able to take advantage of the 10Gb throughput from the NIC on TrueNAS, right?
 

SeedyRom

Dabbler
Joined
Dec 22, 2020
Messages
16
I wrestled with the idea of setting up 10 gig without a switch and ran into nothing but problems. Since you do have a switch, you should be OK using that.

To answer your question: technically, only the TrueNAS server will be able to use the full 10Gb. However, if multiple machines are accessing it at the same time, you will notice a difference using 10Gb over 1Gb, since your drives can most likely go faster than 1Gb speed.
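To put rough numbers on that, here is a back-of-the-envelope sketch. The link rates come from this thread (a 10Gb NAS uplink, 1Gb per ESXi host); everything else is simplified and ignores protocol overhead:

```python
# Why a 10 Gb/s NAS uplink helps even when every client is only 1 Gb/s.
# Link rates are from this thread; overhead is ignored for simplicity.

NAS_LINK_GBPS = 10      # TrueNAS uplink
CLIENT_LINK_GBPS = 1    # each ESXi host / client

def aggregate_demand(n_clients):
    """Total bandwidth n simultaneous clients can request, in Gb/s."""
    return n_clients * CLIENT_LINK_GBPS

# With a 1 Gb/s NAS uplink, four hosts would share 1 Gb/s (~0.25 Gb/s each).
# With a 10 Gb/s uplink, all four can run at full line rate at once.
print(aggregate_demand(4))                    # 4 Gb/s total demand
print(aggregate_demand(4) <= NAS_LINK_GBPS)   # True: uplink not saturated
```

So with four 1Gb hosts hitting the NAS at once, the 10Gb uplink still has headroom, whereas a 1Gb uplink would be the bottleneck.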

If you'd like some recommendations on hardware:

I'm using a 24-port 1Gb / 4-port 10Gb switch from Fiberstore and it has worked great so far. (https://www.fs.com/products/72944.html)

I also purchased their generic DAC cables to connect everything (https://www.fs.com/products/74621.html). For a Dell switch, you will want the Dell compatible versions.

For network cards, I'm using Supermicro AOC-STGN-I2S which were plug and play with TrueNAS and ESXi. (Similar to what I bought: https://www.ebay.com/itm/Supermicro...2599-2-Port-10GbE-SFP-Controller/303587875787)

For the NIC, just make sure the PCI bracket is compatible with the slot you are installing it in. The one I linked is low profile but you might need the regular sized one depending on your server and PCI slot placement.

Edited my post after re-reading the last reply
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
Let me clarify: my Dell PowerConnect 6248 (I have 2 of them stacked together) does have 4 built-in 10Gb ports, but they are SFP+, not RJ45. So I was thinking I could just use those ports on the 6248 for now, since all my endpoints are still just 1Gb.
 

SeedyRom

Dabbler
Joined
Dec 22, 2020
Messages
16
You can use the SFP+ ports to do 10gig over RJ45 but you'll need to buy the transceivers to insert into the SFP+ ports. The more affordable option would be to buy SFP+ NICs for the servers and use DAC cables like the one I linked previously. The 10gig RJ45 modules were around $25 when I priced them out compared to $12 for each DAC cable.
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
Understood. So is the Intel X710 the best-supported option? I see that it is already PCIe v3. So in theory, even though my servers are only 1Gb while my TrueNAS is 10Gb, given the direction I am going, I should not have a network bottleneck when all 4 of my ESXi hosts are running at 1Gb pulling data from the 10Gb-enabled TrueNAS.
 

SeedyRom

Dabbler
Joined
Dec 22, 2020
Messages
16
I'd say that sounds right. 10Gb equates to roughly 1250 MBytes per second. As long as your drives can keep up you should get that. You could also do LACP on your servers with two 1Gb and get 2Gb per server.
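The conversion above is just units arithmetic; a quick sketch (line rate only, ignoring Ethernet/TCP/iSCSI overhead, which will shave some off in practice):

```python
# Convert a link rate in Gb/s to MB/s.
# 1 Gb/s = 1000 Mb/s, and there are 8 bits per byte.

def gbits_to_mbytes_per_s(gbits):
    """Raw line rate in MB/s; real throughput is somewhat lower."""
    return gbits * 1000 / 8

print(gbits_to_mbytes_per_s(10))  # 1250.0 MB/s for a 10 Gb/s link
print(gbits_to_mbytes_per_s(2))   # 250.0 MB/s for a 2x1Gb LACP bundle
```

One caveat on the LACP figure: LACP balances per flow, so a single TCP connection is still capped at 1Gb; the 2Gb is aggregate across multiple flows.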

I've only used Intel cards as they are usually well supported in Linux or FreeBSD.
 

smcclos

Dabbler
Joined
Jan 22, 2021
Messages
43
I also would like to recommend looking at Supermicro NICs instead of buying Intel branded. I run AOC-STGN-i2s cards which are X710-DA2 equivalent. They were plug and play with FreeNAS as they are with most operating systems. For RJ45, you would want the AOC-CTG-i2t which is the X540 equivalent.

Thanks for suggesting another 10GbE card; I too am looking to make that jump.
 