10GBase-T Network Card


Grams

Cadet
Joined
Mar 20, 2013
Messages
1
Hello everyone,

We are looking to build our first FreeNAS-based NAS to meet company requirements.
To reach the networking performance we would like, we are looking for a 10GBase-T NIC.

Has anyone used a 10GBase-T NIC with FreeNAS, and if so, which one and what were your results/experiences?

Thanks in advance for any helpful information! :)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm fairly certain that Intel 10Gb NICs have been used with FreeNAS. I only say "fairly certain" because I can't find an actual thread and I haven't done it myself. But Intel is the leader for NIC driver support in FreeBSD, so if any 10Gb NIC is well supported, it will be an Intel.

As for performance, for 99% of the people on this forum the bottleneck won't be the NIC; it will be how fast the pool can read/write and how fast the CPU can compute and verify checksums. I have yet to see anyone post benchmarks sustaining more than 1Gb/sec of reads or writes, let alone the roughly 10Gb/sec you'd need to actually saturate the link.
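As a rough sketch of the arithmetic (my numbers are assumptions, not benchmarks), here's roughly what those link speeds translate to in usable MB/s:

```python
# Back-of-the-envelope sketch (assumed numbers, not measurements):
# how much sustained pool throughput is needed to fill a 1Gb or 10Gb link.

def link_mb_per_s(gbit_per_s, protocol_overhead=0.06):
    """Convert a nominal link rate in Gb/s to usable MB/s.

    protocol_overhead is an assumed ~6% loss to Ethernet/IP/TCP framing;
    real overhead varies with MTU and protocol (CIFS, NFS, iSCSI).
    """
    return gbit_per_s * 1000 / 8 * (1 - protocol_overhead)

for gbit in (1, 10):
    print(f"{gbit:>2} Gb/s link ~= {link_mb_per_s(gbit):.0f} MB/s usable")

# Approximate output:
#  1 Gb/s link ~=  118 MB/s usable
# 10 Gb/s link ~= 1175 MB/s usable
```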

If you are looking for absolute maximum performance, I'd go with the most powerful CPU you can get, which will almost certainly be a very expensive Intel, perhaps even a dual-processor board, and LOTS of RAM. Perhaps an SSD ZIL as well. I doubt an L2ARC would help much: the L2ARC SSD would be slow compared to a 10Gb LAN, so you'd get faster transfer rates from the zpool itself except when the server is constantly under full load. Definitely a zpool that follows the power of 2s for RAIDZ(X), with at least 2 vdevs.
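For reference, here's a quick sketch of what that "power of 2s" rule of thumb works out to in practice (a guideline, not a hard requirement):

```python
# Rule of thumb: keep the number of *data* disks per RAIDZ vdev at a power
# of two, so total disks per vdev = 2**n + parity.

def raidz_widths(parity, max_disks=12):
    """Recommended total vdev widths for RAIDZ<parity>, up to max_disks."""
    widths = []
    n = 1
    while 2**n + parity <= max_disks:
        widths.append(2**n + parity)
        n += 1
    return widths

for parity in (1, 2, 3):
    print(f"RAIDZ{parity}: {raidz_widths(parity)}")

# RAIDZ1: [3, 5, 9]
# RAIDZ2: [4, 6, 10]
# RAIDZ3: [5, 7, 11]
```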

Edit: I don't recommend people buy a ZIL or L2ARC until they have a system installed and in production. It's easy to add and remove in a production environment, and more often than not the benefits are very small.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
We have a Supermicro X9DR7-TF+ with an onboard Intel X540 10GbE on the bench right now. It attaches with the ix ("ixgbe") driver, version 2.4.5, whereas the latest Intel release is 2.5.1 and FreeBSD head is at 2.5.7. Given that the ixgbe driver is more than five years old, I wouldn't expect substantial issues.
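If you want to check which driver version your own install picked up, something along these lines should do it, assuming the interface attached as ix0 (sysctl names can vary between releases, so treat this as a sketch):

```python
# Hedged sketch: read the driver banner for an Intel 10GbE NIC on FreeBSD,
# assuming it attached as ix0. The dev.<driver>.<unit>.%desc sysctl usually
# carries a string like:
#   "Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.4.5"
import subprocess

def driver_desc(unit=0):
    out = subprocess.run(
        ["sysctl", "-n", f"dev.ix.{unit}.%desc"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    print(driver_desc())
```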

We've only tested it at 1GbE, but it seems to work fine.
 

mstang1988

Contributor
Joined
Aug 20, 2012
Messages
102

It's pretty easy to saturate 1Gb line speed; I even do it with a virtual NIC (ext3 on ESXi 5.1) when transferring large files. Of course, random 4K reads/writes are much slower, especially with something like CIFS. CIFS is my current bottleneck.

SSDs are more than capable of saturating 1Gb and of using a substantial amount of 10Gb bandwidth. SSDs on SATA3 read/write at around 550MB/s per disk; that's 4.4Gb/s each. I realize there is ZFS overhead, but there could be a case for 10Gb if he has the CPUs to keep up and is running multiple SSDs. Random 4K read/write performance is also substantial.
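Rough arithmetic behind that (the 550MB/s per-disk figure is an assumption; real sustained numbers vary by model and workload):

```python
# Hedged sketch: raw aggregate sequential bandwidth of N SATA3 SSDs,
# ignoring ZFS overhead entirely.

SATA3_SSD_MB_S = 550  # assumed sequential rate per SSD

def aggregate_gbit(n_ssds, per_disk_mb_s=SATA3_SSD_MB_S):
    """Aggregate sequential bandwidth of n SSDs, in Gb/s."""
    return n_ssds * per_disk_mb_s * 8 / 1000

for n in (1, 2, 3):
    print(f"{n} SSD(s): ~{aggregate_gbit(n):.1f} Gb/s raw")

# 1 SSD(s): ~4.4 Gb/s raw
# 2 SSD(s): ~8.8 Gb/s raw
# 3 SSD(s): ~13.2 Gb/s raw
```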
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You are also making the assumption that your CPU can actually do the checksums/redundancy at that speed. There are lots of potential bottlenecks once you go over 1Gb/sec. I have 18 drives in a RAIDZ3, and even configured as a RAID0 stripe I barely hit 1GB/sec, which won't even saturate a single 10Gb link. I'm fairly sure my cap is my CPU.
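Rough sketch of why I think that (the per-disk streaming figure is an assumption, not a measurement):

```python
# Hedged sketch: compare what 15 data disks "should" stream against the
# ~1 GB/s actually observed, assuming ~130 MB/s sequential per spinning disk.

DISKS = 18
PARITY = 3            # RAIDZ3
PER_DISK_MB_S = 130   # assumed; real drives vary

data_disks = DISKS - PARITY
theoretical_mb_s = data_disks * PER_DISK_MB_S
observed_mb_s = 1000  # "barely hit 1GB/sec"

print(f"theoretical streaming rate: ~{theoretical_mb_s} MB/s")
print(f"observed:                   ~{observed_mb_s} MB/s "
      f"({observed_mb_s / theoretical_mb_s:.0%} of theoretical)")
# theoretical streaming rate: ~1950 MB/s
# observed:                   ~1000 MB/s (51% of theoretical)
```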
 

mstang1988

Contributor
Joined
Aug 20, 2012
Messages
102

You are correct, your cap is likely your CPU; however, 1GB/sec is still around 8Gb/sec, which would do a dang good job of keeping that NIC busy (with sequential I/O). The link below has a user getting 3.8Gb/s on his 10Gb link. I really wish we could do RoCE/IB with FreeNAS, to get rid of the CPU overhead of network I/O and let the CPU focus strictly on ZFS. What kind of CPU are you running?

http://forums.freenas.org/showthrea...abit-Network-Card-AOC-NXB-10G-from-Supermicro
 

pdanders

Dabbler
Joined
Apr 9, 2013
Messages
17
The Intel 10Gbps NICs definitely work. We use them for our FreeNAS and BSD-based SANs at work.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I realize he hasn't posted yet, but his performance (just like that of most FreeNAS servers) will vary greatly depending on a whole bunch of factors, including but not limited to: CPU, zpool design, how full the zpool is, zpool write history, number of users and type of load, sharing protocol, amount of RAM, network card brand and model, and whether a scrub is in progress.
 

pdanders

Dabbler
Joined
Apr 9, 2013
Messages
17
I don't have exact performance numbers, but our goal is to support large numbers of IOPS and lots of random reads/writes (as opposed to straight-up throughput). The SAN I have is Supermicro based and is built to be the datastore for a Dell blade server with 8 blades, each running ESXi and a few hundred VMs. It's doing very well so far in its intended role. We have dual 10Gbps links connected to the blade server's built-in switch (each blade has two internal 10Gb links into that switch). Each ESXi blade uses round-robin load balancing to talk to an iSCSI file extent.

The SAN has 128 GB of RAM and dual high-end Xeons (I forget the exact model, but I think they are 6-core chips). We have 16x 4TB drives configured as a stripe across five 3-way mirrors, with the 16th disk set as a spare. We also have three 256 GB SSDs: two configured as a mirrored ZIL and the third as L2ARC. It's a monster system, and it's not *that* much slower (throughput-wise) than another system I have that is all SSD!
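For anyone curious, here's the rough capacity math on that layout (raw numbers only, ignoring ZFS metadata, reservations, and TB-vs-TiB differences):

```python
# Hedged sketch of usable capacity for 5 three-way mirror vdevs of 4TB
# drives plus one hot spare.

DRIVE_TB = 4
VDEVS = 5          # three-way mirrors striped together
MIRROR_WAYS = 3
SPARES = 1

total_drives = VDEVS * MIRROR_WAYS + SPARES
raw_tb = total_drives * DRIVE_TB
usable_tb = VDEVS * DRIVE_TB  # each mirror contributes one drive's capacity

print(f"{total_drives} drives, {raw_tb} TB raw, ~{usable_tb} TB usable")
# 16 drives, 64 TB raw, ~20 TB usable
```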
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526

I'm jealous!
 