PowerEdge FC630 as FreeNAS controller

SubnetMask

Contributor
Joined
Jul 27, 2017
Messages
129
I have acquired a Dell FX2s chassis with FN410S IO modules, with the main purpose of populating it with two servers for my VMware hosts and one for my backup server. That leaves one free slot.

Currently, my FreeNAS controller is an R620, connected via two LSI HBAs to two Supermicro JBOD enclosures that house my disk pools. In the interest of consolidation, I am considering an FC630 to replace the R620.

The FX2s chassis has eight PCIe 3.0 slots that map to the compute modules, which could house the HBAs, and networking is onboard each server in the form of either a dual-port or quad-port NDC.

In theory, I would expect no performance hit from the bridge between the blade and the PCIe slots, so performance should be equal to that of, say, an R630 or R730 with the same basic specs (CPU, memory).

That being said, does anyone have any experience running FreeNAS on a Dell FC server with one or more SAS HBAs in an FX2s chassis connecting to external disk arrays?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
That being said, does anyone have any experience running FreeNAS on a Dell FC server with one or more SAS HBAs in an FX2s chassis connecting to external disk arrays?
I have not had hands-on experience with this type of equipment, but it sounds like an interesting option that should work. Are the four blades able to communicate with each other over some internal, high-speed network?
If you do try this, please let us know how it comes out.
 

SubnetMask

Contributor
Joined
Jul 27, 2017
Messages
129
So the FX2s chassis essentially has two options for connectivity: passthrough modules and IO aggregators (there are a few variations of each, but all variations of a given type function the same).

The passthrough modules do exactly what their name suggests and present the blade NIC ports to the 'outside' in a 1:1 ratio, so each passthrough module presents up to two network links per server, assuming the server has a quad-port NIC (with a dual-port NIC, you get one link per module). With passthrough modules, your blade-to-blade connectivity is at the mercy of whatever switch you are connecting to.

The IO aggregators (which I have) are switches, and each one provides up to two 10Gb links per server, again assuming a quad-port NIC. So the IO aggregators provide full 10Gb connectivity within the chassis, and all east-west traffic stays within the aggregators at 10Gbps. Each aggregator also has four external 10Gb ports that link to the TOR switch; for the external ports on the FN410S, you can use a 1Gb Base-T SFP module (I believe a 1Gb fiber SFP module is also supported), 10Gb fiber SFP+ modules, or Twinax DAC cables.
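For what it's worth, it's easy to confirm from the blade itself that the NDC ports actually negotiated 10Gb through the FN410S. A quick check from the FreeNAS shell, assuming an Intel X710-based NDC that attaches as ixl(4) (a Broadcom-based NDC would show up as bxe(4) instead, so adjust the names):

Code:
# list the interfaces FreeNAS detected
ifconfig -l

# check the negotiated media on each NDC port - you want to see a
# 10Gb media type here rather than a 1Gb fallback
ifconfig ixl0 | grep media
ifconfig ixl1 | grep media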

Obviously, the FC630 itself would have no issue running FreeNAS, and the network connectivity isn't an issue; the only real 'unknown' is the SAS HBAs and the PCIe passthrough functionality.
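When I do get an HBA into one of the mapped slots, the first sanity check will just be confirming that FreeNAS sees the card and the disks behind it, the same as I'd do in the R620. Assuming the HBAs attach as mps(4) (SAS2) or mpr(4) (SAS3):

Code:
# the HBA should attach at boot as mps0 (SAS2) or mpr0 (SAS3)
dmesg | grep -E 'mps|mpr'

# every disk in the external enclosures should be listed here
camcontrol devlist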
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Obviously, the FC630 itself would have no issue running FreeNAS, and the network connectivity isn't an issue; the only real 'unknown' is the SAS HBAs and the PCIe passthrough functionality.
I would be optimistic. Are you going to try?
 

SubnetMask

Contributor
Joined
Jul 27, 2017
Messages
129
It's been a while since I've touched on this. I still haven't migrated over to the FC630; however, I've done testing with one set up as an additional FreeNAS controller connected to my VMware environment, and I've also used it to run badblocks on a batch of six disks at once. It hasn't skipped a beat under any load or condition.
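For reference, the badblocks runs were just the usual destructive burn-in, one tmux window per disk so all six run in parallel and survive an SSH disconnect. Roughly like this - da0 through da5 are placeholders, so check camcontrol devlist for the real device names and make very sure no pool disks are among them:

Code:
# destructive write test on six disks in parallel, one tmux window each
# -w = write-mode test (DESTROYS data), -s = show progress,
# -b 4096 = block size (required for disks larger than 2TB)
tmux new-session -d -s burnin
for d in da0 da1 da2 da3 da4 da5; do
  tmux new-window -t burnin -n "$d" "badblocks -ws -b 4096 /dev/$d"
done
tmux attach -t burnin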

My main concern was with how the PCIe bus gets routed through the midplane of the FX2s chassis. Under certain circumstances, the PCIe slots can be rerouted to other compute module slots, which means there's some form of programmable logic involved, and my concern was that this logic might introduce latency or other odd issues. If it were a straight, hard data path from compute module slot 'X' to PCIe slots 'Y' and 'Z', with nothing but etched traces between the two, I would have been far less concerned.

That said, the PCIe connectivity in the FX2s chassis seems to be solid. Dell sells its own storage sleds for the FX2s that, as far as I know, essentially present at least one PERC controller to a compute module over PCIe, so I have a hard time seeing the fabric as anything but solid; it wouldn't do Dell any good to sell a solution that blows up all the time. I'll be messing around with it some more, but odds are I'll be migrating my FreeNAS setup over to the FC630.
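One thing that did put my mind at ease is that you can see exactly what link the midplane negotiated. pciconf on FreeBSD reports the trained PCIe link width and speed per device, so it's easy to verify the HBA came up at full width through the chassis (mps0 assumes a SAS2 HBA; use whatever name pciconf -l shows for yours):

Code:
# dump the HBA's capability registers; the PCI-Express line shows the
# negotiated vs. maximum link, e.g. "link x8(x8) speed 8.0(8.0)"
pciconf -lc | grep -A8 '^mps0'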
 

SubnetMask

Contributor
Joined
Jul 27, 2017
Messages
129
So, after quite a bit of time running FreeNAS on the FC630 for testing with its own enclosure connected, I finally took the plunge and moved my 'real' enclosures and the boot SSD over to the FC630. Everything has been working fine since then, so I'd say an FC630 in an FX2s chassis, with one or more HBAs in the assigned PCIe slots, works well.
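Since the boot SSD came along for the move, the pools came right up on their own. If you were instead importing on a fresh install, it's the usual ZFS routine (on FreeNAS proper you'd normally do this through the GUI importer; 'tank' is a placeholder pool name):

Code:
# list the pools visible on the relocated enclosures, then import and verify
zpool import
zpool import tank
zpool status -v tank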
 