10 Gig Networking Primer


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
This document is now available in the Resources section! Use the tabs above to navigate to the document itself.

The current version also follows inside the spoiler tag:

This is a discussion of 10 Gig Networking for newcomers, with specific emphasis on practical small- to medium-scale deployments for home labs or small office users. It originated with a forum thread located here that has many pages of additional information and discussion.

History

In the early 1990s, we had 10Mbps Ethernet. In 1996, 100Mbps Ethernet. In 1999, gigabit Ethernet. In 2002, 10 gigE. About every three or four years, an order-of-magnitude increase in network speed was introduced, and in every case but the last, reasonably priced commodity gear became available within about five years of introduction. We stalled out with 10G because the technology became more difficult: copper-based 10G wasn't practical at first. Further, and perhaps unexpectedly, it turned out that gigabit was finally sufficient for many or even most needs.

LACP for gigabit

A lot of people have worked around the lack of 10gigE with link aggregation, that is, using multiple gigabit connections from a server to a managed switch. Unfortunately, the standard load-balancing methods hash traffic by flow or by source/destination pair, so they do not spread traffic well when there are only a few destinations, which is exactly what happens with fileservers: a single client talking to the NAS still rides a single gigabit link. LACP kind of sucks for NAS.
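For reference, here is roughly what an LACP aggregation looks like on plain FreeBSD, following the handbook's lagg pattern (interface names and the address are just examples, and FreeNAS itself would want the equivalent configured through its GUI rather than a hand-edited rc.conf). Note that even with this in place, one client-to-NAS transfer still lands on one gigabit link.

Code:
# /etc/rc.conf -- aggregate two gigabit ports with LACP (example interface names)
ifconfig_igb0="up"
ifconfig_igb1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 192.168.1.2/24"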

10 gigabit Ethernet technologies

There's a bunch, but let's stick to the practical stuff you might actually want to use, and leave out stuff like XFP or CX4. The modern stuff you want to use boils down to just two: SFP+ and 10GBASE-T.

SFP+ is a modular technology that lets you put one of several different transceiver modules into a cage on a switch or network interface card. An evolution of the older SFP technology, it is usually backwards-compatible with SFP. An SFP+ module is essentially a transceiver that talks to the switch and requires a "device driver" of sorts, so you need an SFP+ module that is compatible with the switch or Ethernet adapter (some vendors also engage in vendor lock-in, requiring you to use their own branded modules). This kind of sucks, but once you get over that hurdle, you have lots of options and it is incredibly flexible. SFP+ is available in various flavors. The ones you're likely to use:

SR optics ("optics" being the usual word for optical SFP+ modules) are short range; when used with OM3 (aqua-colored) or OM4 fiber, they can run up to 300 meters (a bit farther over OM4), which covers pretty much any in-building run.

LR optics are longer range, and when used with the proper singlemode fiber, can run up to 10 kilometers.

These are laser based products and you should not look into the optics or ends of the fiber. ;-)

Also available are direct-attach SFP+'s, where two SFP+ modules have been permanently connected together via twinax cable. These are essentially patch cables for SFP+. The downside is that sometimes you run into compatibility issues, especially when you have two SFP+ endpoints from different manufacturers who both engage in vendor lock-in. The upside is that sometimes they're cheaper and (maybe?) more durable than optics and fiber, where you need to not be totally stupid and careless with kinking the fiber.

Originally, no SFP+ modules were available for 10GBASE-T; the wattage requirements for 10GBASE-T exceed what the SFP+ port specification provides. (There are 1000BASE-T SFP modules for plain gigabit, however.) More recently, some SFP+ manufacturers have created low-power 10GBASE-T SFP+ modules with limited link distance. Because the power required increases with distance, shortening the reach makes this feasible. Unfortunately, these still seem very expensive.
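Given the compatibility and lock-in games mentioned above, it helps to know that reasonably recent FreeBSD can usually report what module the card thinks is plugged in (driver permitting). A quick diagnostic sketch, with an example interface name:

Code:
# Verbose interface info, including the transceiver EEPROM if the driver supports it
ifconfig -v ix0
# Look for the "plugged:" / vendor / part number lines to see what the card recognizes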

From a cost and performance aspect, SFP+ has somewhat lower latency and reduced power consumption compared to 10GBASE-T.

One of the biggest caveats here, though, is that once you go down the SFP+ path, you probably want to stick with it. There's no easy switching away from it except to do a forklift upgrade. (But don't feel bad, SFP+ marks you as a diehard networker.)

The Intel X520 card is an example of an SFP+ card, which is available in one and two port configurations, and -DA (direct attach) and -SR (short range) optic variants. The difference between -DA and -SR is simply that the -SR will include Intel SR optics.

10GBASE-T is the copper 10G Ethernet standard. Much more familiar to most end users, this uses RJ45 modular connectors on Category 6 or better cable. Category 6 will typically reach only 37 to 55 meters at 10G; Category 6A is needed for the full 100 meters. This was basically a worthless standard up until recently, when several manufacturers started producing less-expensive switches that support 10GBASE-T. Probably the most notable of these is the Netgear ProSafe XS708E, available for $800-$900.

I believe that 10GBASE-T will ultimately be the prevailing technology, but it is very much the VHS (ref VHS-vs-Betamax) of the networking world. It is an inferior technology that burns more power and makes more compromises. Most of the deployed 10G networking out there today is still NOT copper, so for the next several years, at least, the best deals on eBay are likely to be for SFP+ or other non-copper technologies.

What Do You Need?

While it is tempting to think of your entire network as 10 gigabit, in most cases this is at least a several thousand dollar exercise to make happen, factoring in the cost of a switch, ethernet cards, and wiring.

There are some alternatives. One easy target, if gigabit is acceptable for your endpoints (PCs and other clients), is a gigabit switch with several 10G uplinks; these are not hard to find. The cheapest decent one I've seen is probably the Dell Networking 5500 series, such as the 5524, often available for around $120 on eBay (2018 pricing). That model comes with two 10G SFP+ slots, which could be used for a FreeNAS box and a workstation at 10G, while also allowing all remaining stations to share in the 10G goodness. As of 2016 we're also seeing the Dell Networking N2024, which is an entry-level Force10-based switch. If you don't mind eBay for all purchases, you can get a basic 10G setup for your NAS and one workstation for less than $500. These are both fully-managed, full-feature "enterprise" switches.

We recently debated another alternative, which is to abuse the FreeNAS box itself as a bridge using FreeBSD's excellent bridging facility. This is very cost-effective but has some caveats, primarily that you need to be more aware that you've got a slightly hacked-up configuration. Since modern Ethernet technologies are fully capable of point-to-point operation without a switch, clients can be hooked up directly to the server (over an ordinary cable for 10GBASE-T, which handles crossover automatically, or over an SFP+ direct-attach cable or fiber). The simple case of a single workstation hooked up to the server via a direct cable is fairly easy. Multiple workstations might involve bridging. If you wish your clients to receive Internet connectivity, that's more complicated as well.
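If you do go the bridging route, the plain-FreeBSD version looks roughly like the sketch below, following the handbook's if_bridge pattern (interface names and the address are examples; FreeNAS manages rc.conf itself, so the equivalent has to be set through its own configuration rather than by hand-editing). The IP goes on the bridge, not on the member interfaces.

Code:
# /etc/rc.conf -- bridge two 10G ports so directly-attached clients and the NAS
# can all reach each other (example names/address)
cloned_interfaces="bridge0"
ifconfig_ix0="up"
ifconfig_ix1="up"
ifconfig_bridge0="inet 192.168.1.2/24 addm ix0 addm ix1 up"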

In 2013, Netgear introduced a few new 10GBASE-T switch options, including the ProSafe XS708E, which offers 8 ports at a cost of around $100 per port.

The Dell PowerConnect 8024F is often available on eBay for around $400, offering a mix of SFP+ ports along with four 10GBASE-T. This is probably the cheapest option to get 10gigE for a NAS or two, some ESXi boxes, and then a few runs of 10GBASE-T for workstations.

A variety of new entrants exist from Ubiquiti and others, and now that it's 2018/2019 there are some more affordable options. Notably, MikroTik now has the CRS305-1G-4S+IN, a five-port switch (four SFP+ plus one gigabit copper port) that's very inexpensive, less than $150 new, and looks like a real contender for smaller home networks. MikroTik also offers the CRS309-1G-8S+IN, a nine-port switch (eight SFP+ plus one gigabit copper port), for $270. Both of these products are reported to perform poorly IF you use their advanced routing features, but are reported to do wirespeed layer 2 switching just fine. I have not used either of these personally, but they're pretty compelling.

What Card Do I Pick?

This forum has been very pro-Intel for gigabit cards, because historically they've "just worked." However, for 10gigE, there have been some driver issues in versions of FreeNAS prior to 9.3 that led to intermittent operation. Additionally, the Intel adapters tend to be rather more expensive than some of the other options. 10gigE is not in high demand, so some of the niche contenders have products that may, counterintuitively, be very inexpensive on the used market. These cards may be just as good a choice - if not better - than the Intel offerings. We run Intel X520 here, but the following notes are gathered from forum users.

@depasseg and I note: Intel X520 and X540 are supported via the ixgbe driver. Intel periodically suffers from knockoff cards in both the new and used markets. There should be a Yottamark sticker that helps authenticate the card as genuine; check the country, date code, and MAC address that Yottamark reports rather than blindly trusting the sticker itself. Not a good choice if you wish to run versions prior to 9.3. https://bugs.freenas.org/issues/4560#change-23492 Also note that there have been a variety of problem reports with the X540 and TSO.
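As a quick sanity check that a card has actually attached to the ixgbe driver, and to see how it identifies itself, something like the following works from a FreeBSD/FreeNAS shell. The device name and unit number are just examples.

Code:
# PCI listing for the ix ports, with vendor/device strings
pciconf -lv | grep -A3 '^ix[0-9]'
# The driver's description string for unit 0
sysctl dev.ix.0.%desc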

@Mlovelace, @depasseg, and @c32767a note: Chelsio is iXsystems' card of choice. @Norleif notes that the S320E-SR-XFP can sometimes be found for less than $100 on eBay. The Chelsio T3, T4 and T5 ASICs are fully supported by the current version of FreeNAS and are the cards shipped for 10gigE if you buy a TrueNAS system. iXsystems: "FreeNAS 9.2.1.5 supports the Chelsio T3 with the cxgb driver. It supports the T4/T5 with the cxgbe driver. We keep these drivers current with the latest code from the vendor. By far and away the Chelsio cards are the best FreeBSD option for 10GbE." Also note that the S310E only supports PCIe 1.x, so speeds may be limited, especially in an x4 slot. @Mlovelace also has found a great vendor for generic Chelsio SFP+ optics.

@depasseg and @c32767a note: SolarFlare: Some users recommend the SFN5162F. @jgreco notes he just got four SolarFlare SFN6122F on eBay for $28 each, with SR optics (3/2019). This is awesome for ESXi, as the SolarFlare cards burn half the watts of the Intel X520s.

@Norleif reports: The IBM NetXtreme II 10Gbit (Broadcom BCM57710) works in FreeNAS 9.3 and can sometimes be found for less than $100 on eBay.

@Borja Marcos notes: Beware the Emulex "oce" cards - serious issues with them, panics when moving some traffic. There is a patch (see relevant discussions on the freebsd-net mailing list) but the stock driver crashes badly.

Notes on Fiber

A home user won't need anything other than short-range ("multimode") fiber, which covers anything within a few hundred feet if done right. The ins and outs of other types of fiber are too complex for this forum.

Fiber is graded similarly to category cable, where you have Cat3 (10Mbps), Cat5 (100Mbps), Cat5e (1Gbps), etc. In fiberspeak these grades are "OM". OM1 and OM2 are older standards, typically clad with an orange jacket. These are actually probably fine for short runs of 10Gbps, up to a few dozen feet, depending on whose specs you believe. However, 10Gbps uses laser light while the older standards used LEDs, and to maintain signal integrity, aqua-colored OM3 was introduced, and then also OM4, to get to 100Gbps speeds.

Fiber is sensitive to bending, as the transmission of light is dependent upon optical tricks to keep the signal integrity. Do not make sharp bends in fiber.

Some newer fiber is referred to as "bend insensitive fiber" but the word "insensitive" really means "less-sensitive". At the same time, some BIF is being put into a single jacket, which renders it incredibly flexible and easy to work with. Check out this image for a comparison of 1Gb copper, 10Gb OM3, and 100Gb BIF OM4.

The BIF OM4 in that image was sourced from fs.com, OM4-ULC-DX, and can be ordered in custom lengths at a very good price. This is not an endorsement, I get no kickbacks, etc. I just found it to be an amazing product.

But I Want 10G Copper

Understandably so. Or, rather, you THINK you want it. (If you don't, skip this section!) The 1G copper you're used to has a lot of upsides, including the ability to field-terminate. However, copper is very tricky to work with at 10G speeds. We went from 10Mbps to 100Mbps by moving from Cat3 to Cat5 wire, which increased the bandwidth that could be handled by improving the RF characteristics of the wire. The change from 100Mbps to 1Gbps was accomplished by some more modest cabling improvements, using all four pairs bidirectionally, with echo cancellation and 4D-PAM5 modulation, which is really pushing the boundaries of what's reasonable on copper.

To get to 10G on copper is a lot harder. There has to be a tenfold increase in capacity. We already burned through the additional two pairs, AND went to bidirectional, at the 100-1G transition. In order to get another 10X multiplier, there are basically only three things we can do: one, slightly better cabling. This is an incremental improvement, unlike the jump from Cat3->Cat5. Two, better modulation and encoding. Three, use more power, a side effect of better modulation and encoding. There's a nice slideshow that goes into the details if you're interested. This means that cabling becomes ever more fiddly and hard to work with.

But here's the way this works out for you, the FreeNAS user who doesn't have a 10G network, and wants one.

There's a depressingly small amount of 10GBASE-T stuff on the market. If you buy it, it'll probably have to be new. It'll be expensive. It'll be power-hungry. This stuff only became vaguely affordable around 2013-2014, and hasn't sold well. It doesn't work over existing copper, unless your existing copper plant was already wired to an excessive standard like CAT6A. It has done so badly in the marketplace that manufacturers came out with fractional upgrades, 2.5GBASE-T and 5GBASE-T, that are eating away at some of the markets that might have driven 10GBASE-T. If you try to run 10GBASE-T, you'll probably need new cabling. There are a small number of 10G copper network cards out there, most of which you'll need to buy new, because no one used these in the data center.

By way of comparison, you can go out and get yourself an inexpensive 10G SFP+ card with SR optics for about $100, and a Dell 5524 switch for about $100 as well. This works swimmingly well, without drama. This stuff has been in the data center for more than 15 years, and people nearly give away their "old" stuff.

There's also been some excitement about SFP+ 10GBASE-T modules. Don't get excited. The realities of SFP+ mean that these modules can never work well. Most devices will not have the correct "drivers" to drive a copper SFP+ module. Those that do are likely to find themselves limited by cable length, as the SFP+ form factor only provides 2.5W *MAX* per SFP+, which is far below the 4-8 watts it may take to drive 10GBASE-T at full spec. Even if you only need to go 1 meter, the copper SFP+'s are expensive with relatively low compatibility. So in general these are nowhere near as useful or usable as we'd like.

As usual, this post isn't necessarily "complete" and I reserve the right to amend it and/or delete, integrate, and mutilate the reply thread as I see fit in order to make this as useful as possible.

Also, if you'd like to repost this elsewhere, I'd appreciate it if you would have the courtesy to ask. And to credit the source.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I'm out of time this morning. I am happy to take and integrate further information you contribute especially if it contains links to relevant discussions on the forum, so you are encouraged to contribute such goodies. I will probably wipe out such messages but I'll be sure to give you credit. Some of the previous discussions got out of control because I wasn't willing to play thread police.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I have been using the Chelsio T520-CR on 9.2.x.x and it has been rock solid. I had started with the Intel X520-DA2 and had to switch when I ran into the driver issues. Here is an example of the throughput I get on my Veeam backup jobs that write to the FreeNAS box.

Code:
                    /0   /1   /2   /3   /4   /5   /6   /7   /8   /9   /10
     Load Average   ||

      Interface           Traffic               Peak                Total
           cxl0  in    874.172 MB/s        877.139 MB/s          168.011 GB
                 out     6.998 MB/s          7.005 MB/s          276.701 GB

            lo0  in      0.000 KB/s          0.000 KB/s            7.423 MB
                 out     0.000 KB/s          0.000 KB/s            7.423 MB
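(For anyone who wants the same view, the output above appears to be FreeBSD's systat interface display; something like the following, with a refresh interval in seconds, should reproduce it.)

Code:
# Live per-interface traffic/peak/total counters; press q to quit
systat -ifstat 1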
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
For the folks who had issues with the Intel cards/drivers - when was that? And have you retested on 9.3, or after the OS/driver issue was fixed?
 

Dennis.kulmosen

Explorer
Joined
Aug 13, 2013
Messages
96
For the folks who had issues with the Intel cards/drivers - when was that? And have you retested on 9.3, or after the OS/driver issue was fixed?
The issue mostly affected NFS as far as I know, and the symptom was dropped connections unless you manually set some kind of tunable.
I have been using the Intel X540-T2 in various setups, mostly with AFP, without any trouble. I get about 700-850MB/s depending on the client's Mac OS X version, with various Thunderbolt 10GbE adapters.
Only about 400MB/s on the latest old Mac Pro, which is puzzling me; maybe something in the PCI bus is the limit. I have tried Myricom-based cards (SonnetTech) as well as Solarflare and ATTO, but it won't go past about 400MB/s.
On a side note, on the Mac OS X client side you need to set delayed_ack manually in sysctl to either 2 or 0, depending on the rest of your network. Luckily it's easy to test which setting is best.
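For reference, the knob being described is the client-side macOS sysctl net.inet.tcp.delayed_ack (set on the Mac, not on the FreeNAS box). A quick way to try the values mentioned, with persistence via /etc/sysctl.conf as OS X of that era supported:

Code:
# On the macOS client -- try 2 first, then 0 if throughput is still poor
sudo sysctl -w net.inet.tcp.delayed_ack=2
# To persist across reboots, add this line to /etc/sysctl.conf:
# net.inet.tcp.delayed_ack=2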
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Do you mean something like running out of 9k jumbo mbufs? I saw that with Intel GbE (not 10GbE) cards.
Here's the deep dive from Josh. It's a pretty interesting read.
https://bugs.freenas.org/issues/4560#note-31
Updated by Josh Paetzel 4 months ago


It's not quite that simple. If the problem were that easy to reproduce we would've caught it some time ago.

I am not saying that copying a 100 GB file can't trip the problem, because it can.

The problem I was able to reproduce that was causing NFS failures is due to a mismatch between what FreeBSD does and the hardware capabilities of the Intel 520/540.

TSO allows the card to create TCP segments up to 64K in length, as opposed to segments limited to the MTU (typically 1500 or 9000 bytes). The hardware in the 520/540 has a 64K TSO buffer. FreeBSD detects the card supports TSO and sends it up to 64K in data for a TCP segment. The card then attaches an ethernet header to the segment, and if that ethernet header + data is over 64K the segment will not fit in the buffer and gets dropped. If you drop enough segments fast enough you'll get an NFS outage. (You can get a link down event from the server too, if you are pathological and only send packets that will get dropped)

When doing NFS, the client and server negotiate a read size and write size. If both sides are set to auto-negotiate, the client will choose the highest value offered by the server that it supports. FreeNAS supports a max of 128K. Linux supports a max of 1GB. So typically, unless it's manually specified, a VMware client will negotiate a 128K read size. In order to reproduce this problem the read size must be either 64K or 128K.

ZFS uses a variable block size with a maximum block of 128K by default. When NFS does a read it is operating at the file level. However, if that file is comprised of blocks, those will get chunked across NFS read requests. If the blocks are smaller than 64K you won't trip the problem. Because FreeNAS defaults to using lz4 compression, unless you go out of your way to ensure your data is not compressible it's very likely the block size will not be 128K. (ZFS compresses at the block level, not at the file level.)

If you meet all these criteria then a read can be used to trip the problem. (It has to be a read if you have a 520/540 in the server and are trying to tip it over. Writes use LRO on the server side, which isn't affected by this.)

If you are able to cause an outage, then the next step is to disable TSO by doing ifconfig -tso -vlantso. Some adapters go link down then link up after doing this. If yours doesn't, then you must explicitly ifconfig down and then ifconfig up them.

If you are unable to reproduce the problem after disabling TSO then the fix in FreeNAS 9.3-ALPHA will allow you to run with TSO enabled.

In the lab I had to turn on jumbo frames to reproduce the problem. Getting that working properly is problematic for enough people that it may mask the problem. Also there are reports in the field that the problem can be tripped with a 1500 MTU. I was unable to reproduce this.

My test rig survived 64 hours of sustained traffic that exercised the problem. (started it Friday afternoon, checked it Monday morning)
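As a concrete version of the workaround Josh describes above, the commands look roughly like this (the interface name is just an example; note the VLAN TSO flag is spelled -vlanhwtso in ifconfig(8)). On FreeNAS you would normally persist this through the interface options in the GUI rather than typing it at every boot.

Code:
# Disable TSO (and VLAN hardware TSO) on the 10G interface as a diagnostic step
ifconfig ix0 -tso -vlanhwtso
# Some adapters bounce the link by themselves; if not, force it:
ifconfig ix0 down && ifconfig ix0 up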
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Do you mean something like running out of 9k jumbo mbufs? I saw that with Intel GbE (not 10GbE) cards.

Actually getting jumbo to work properly seems like an exercise in frustration. The amount of engineering effort required for the modest gains isn't worth it. The drivers suck (including the segregation of jumbo mbufs), the operating system support sucks (not just FreeBSD), the hardware sucks, the routers suck (crappy MTU/frag support), and you can get screwed in any of a half dozen ways. I've spent too much time on jumbo in the last decade or two. I've finally gotten annoyed enough that I'm slowly killing off jumbo support in our networks. Modern cards do sufficient magic with offload to make 1500 fast that the annoyance of jumbo is no longer worth it. With modern switches that can support switching a gigapacket per second ... screw jumbo.
 

Dennis.kulmosen

Explorer
Joined
Aug 13, 2013
Messages
96
jgreco said: "Actually getting jumbo to work properly seems like an exercise in frustration. [...] With modern switches that can support switching a gigapacket per second ... screw jumbo."

I have come to the exact same conclusion. It's not worth it. :smile:
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
jgreco said: "Actually getting jumbo to work properly seems like an exercise in frustration. [...] With modern switches that can support switching a gigapacket per second ... screw jumbo."
Same here; the late 90s - early 2000s called, they want their jumbo frames back. Modern gear is fast enough that the benefit of jumbo frames is barely noticeable.
 
Joined
Oct 2, 2014
Messages
925
So just to add here, I looked for a long time to find a dedicated 10Gb switch that wasn't outrageously priced. I work with a LOT of Cisco stuff (Cisco 5000 and 7000 series), but new - and used, *if* you can even find it - is still $1,000 to $5,000, and for a homelab that isn't really acceptable (for most). So instead I went and scoured eBay :). Here are my findings. My setup WAS a bunch of daisy-chained links, but I realized that at the rate I'm moving that's unacceptable, so I found a switch. Now, she isn't new, and she isn't the prettiest of the bunch, but she was affordable and does 10Gb :D

Here's the 10Gb switch I purchased: ~$400 shipped; I figured I could live with that.
The 10Gb XFPs I was able to get from that seller for $20 each (might have been able to go lower, but I wasn't too worried).
I already had 4 of the Chelsio T420-CR cards, but I wanted to include them anyway: managed to get them for ~$165 each from that seller.
And then I got the SFP+'s from this seller for $25 each.
10Gb fiber cables: I picked up 8x 2M multimode, aqua color.

Now, all in all, the switch is old, yes (2006-2007), but it gets the job done and gives me a 10Gb backbone for all my stuff: my 2 ESXi servers plus my FreeNAS server and my SAN for ESXi storage.

Now I know this kind of setup isn't for everyone, but it might help someone some day, lol. I will tell you I contacted Fujitsu and verified that the XFPs I was purchasing were compatible, as nowhere in the hardware manual or on their website did they list compatible XFPs.

So yeah, that's my story. Don't troll me too much for wanting 10Gb that bad :P
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
10G stuff is still depressingly expensive, though you can get some deals on XFP and CX4 stuff. The problem with that is that most of that era stuff burns serious watts, and is harder to find useful information on.
 
Joined
Oct 2, 2014
Messages
925
This switch is said to use between 125 and 140 watts, so I didn't think that was too terrible. And yeah, it took me a week or so to gather all the info, but in the end I feel it was worth it; I've got a 10Gb backbone rocking and rolling, lol.
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
How about adding some performance tweaks to the OP? Larger buffers and such... Or are those obsolete in 9.3+?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
How about adding some performance tweaks to the OP? Larger buffers and such... Or are those obsolete in 9.3+?

Basically unnecessary with 9.3, since the defaults have been increased over time. So unless you are overriding the defaults, you're fine. This is why I tell people not to deviate from the defaults unless they want to own their own server, for better or for worse. (Hint: it seems to always be for worse.)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Are we discussing autotune defaults here, or ...? Because generally speaking the FreeBSD defaults have been a bit conservative, and the autotune defaults tend to get config-baked-in, which is kinda bad. If they're actually raising in-kernel defaults, that's another thing entirely.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It's important to recognize there are three defaults at work:

FreeBSD has some defaults.

FreeNAS has some defaults (that are different from FreeBSD).

FreeNAS' autotune values have some defaults (that are different from FreeNAS and FreeBSD).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
From my perspective as a sysadmin, that middle point is more or less vaporware. If it is configured in the kernel, it is part of the base OS - FreeBSD. It might differ from the GENERIC configuration's default value, but it is still a system default, and that has very specific effects on the system. The userland (including what you might call "middleware") can override that, which usually happens in FreeNAS through explicit configuration (consider CIFS options in the GUI). But it is also possible to affect the kernel in other ways, including loader tweaks and outright twiddling at the source code level. autotune is effectively setting loader tweaks, and is kind of a bodgy way to handle tuning things that might more appropriately be fixed "in code" (i.e. in the kernel). The upside to fixing the kernel would be that you wouldn't see people with rusty autotune settings stuck in the config from back when they only had 8GB of RAM, wondering why 32GB doesn't seem much faster. The downside, of course, is the difficulty of maintaining a diff against FreeBSD.

So it'd be interesting to know, specifically, what you were referring to when you say "the defaults."
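For what it's worth, one way to spot the "rusty autotune settings" situation is to compare what the running kernel is using against whatever got baked in at boot time. A rough sketch, using a few tunables autotune commonly touches (illustrative, not exhaustive; the exact file the config system writes to may differ):

Code:
# What the running kernel is actually using
sysctl kern.ipc.maxsockbuf net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max
# Anything baked in at boot time by autotune or by hand
grep -iE 'maxsockbuf|sendbuf|recvbuf' /boot/loader.conf /boot/loader.conf.local 2>/dev/null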
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Here's some of the parts you might be using with a fiber deployment.

On the top, a nicely built OM3 aqua fiber patch. This is a nice quality cable we sourced from "cablesandkits.com", and it has several little awesome touches. First, look at how neatly they assemble it for shipping, with the fancy cross-laced twist-ties on both connectors. You can see at a glance that some TLC goes into it. There's a section of heat-shrink tubing a few inches from the connectors that prevents the duplex fiber from splitting further, and both connectors sit an even distance from the breakout - something I've often found to be off on cheap fiber, with one connector off by a centimeter or more. And there are polarity markings, so that you can identify which strand is which at each end.

Grades of fiber below OM3 exist. Most 62.5/125 fiber (OM1) is capable of running 10G for at least two dozen meters, and basic 50/125 fiber (OM2) is sufficient for most SOHO purposes (several dozen meters). OM3 will get you 300 meters. But just like HDMI, it is a digital signal - there's no value to be had in buying a $1000 cable if a $10 cable works without error.

[Image: fiber2.jpg - the OM3 fiber patch (top) and Intel X520-SR2 (bottom) described here.]

On the bottom, an Intel X520-SR2. This card includes SR optics, which are removable from the SFP+ cages. If using optics, Intel requires the use of their own, and the cards usually come with the Intel variant of the Finisar FTLX8571D3BCV-IT. Do note that non-Intel optics are not likely to work in the card!

The other end would be an optic plugged into the switch of your choice. Again, these may be vendor-locked. We have the occasional argument about this over on NANOG, but basically most people understand that vendor locking is a transparent ploy to make you buy marked-up optics. So do yourself a favor and buy them on eBay, used, from a reputable reseller, probably for about what a generic optic would be new.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So it'd be interesting to know, specifically, what you were referring to when you say "the defaults."

What I mean when I say "defaults" is a parameter that you aren't forcing to a particular value yourself in the WebGUI. Also, setting autotune would constitute forcing values. AFAIK anytime you add more RAM, you have to disable autotune, delete the autotune values, then enable autotune again, followed by a reboot. On the TrueNAS side of things, very little "upgrading" of the RAM is done, so there are never any real problems with autotune values not being kept up-to-date.

The devs have changed some of the values from FreeBSD to make things better for the OS as FreeNAS is specifically designed to be a good fileserver and little else.

In essence, if you force some network card buffer to xyzMB back in 2013, and next year the decision is made that a higher value is better, you'll still be on the old values because *you* forced them to those values. So while the rest of us might enjoy better throughput, you'll be here wondering why you can't get the same kind of performance.
 