[Sanity check] Building FreeNAS box on used server + DAS

Status
Not open for further replies.

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
I have been wanting to build a FreeNAS box for a while now, but hardware prices have been stopping me (DDR4 RAM, the forum's favorite Supermicro motherboards and 846 chassis, etc.). I have an R620 that is being replaced by an R720, and I'm quite impressed by its noise and power consumption (98W idle, 400W full load), so I'm looking at whether it could make a good FreeNAS box.

Here are the current specs:
dual E5-2690 v2
64GB DDR3 registered ECC RAM
Intel 10G NIC
H710P Mini RAID card with 1GB cache
8x 2.5" SAS bays

What I am planning to do is drop in an external HBA (a 9211-8e? not sure about the exact model), hook it up to a DAS, a Lenovo SA120 for example, and put the HDDs there. Maybe sell my 2690 v2s and get a less powerful E5 if I can get this working. Here are my questions:
1. It is my understanding that a DAS (or at least some DAS units) just presents a mini-SAS port to the host, so FreeNAS will happily take the HDDs in the DAS as if they were in the same chassis fed by an internal HBA, say a 9211-8i. Am I correct? Any caveats to this? Driver incompatibility? Startup sequence? FreeNAS/DAS communication problems?
2. The RAID card. I understand that it's usually frowned upon when someone tries to use a HW RAID card with FreeNAS. What if it's only used for the boot drive, L2ARC and SLOG? I think I even read an old post mentioning using a RAID card + HDD as SLOG because the on-board cache has very low latency, etc. And the H710P does have 1GB of NV cache.
3. Expansion. Some DAS units allow a daisy-chain setup; my understanding is that this is just a cascade of SAS expanders. Again, I think FreeNAS should support this since it should be transparent to the OS?


I have lots of questions, but I think these are the most important ones to find answers to before I investigate further along this path. I am basically a newbie in the FreeNAS and FreeBSD world, so please don't be overly harsh if I've made some terrible mistakes. Thank you.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
It is my understanding that a DAS (or at least some DAS units) just presents a mini-SAS port to the host, so FreeNAS will happily take the HDDs in the DAS as if they were in the same chassis fed by an internal HBA, say a 9211-8i. Am I correct? Any caveats to this? Driver incompatibility? Startup sequence? FreeNAS/DAS communication problems?
As long as the enclosure is powered on before the kernel boots and the HBA is supported (basically all LSI), there are no real issues. This is how it's done.
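
A quick way to sanity-check it from the FreeNAS shell once everything is cabled up (a rough sketch; the driver and tool names below assume a 9211/9207-class LSI card, adjust for whatever yours actually loads):

    # confirm the LSI HBA driver attached (mps for SAS2008/2308 cards, mpr for SAS3xxx)
    dmesg | grep -iE 'mps|mpr'
    # list every disk and SES enclosure the HBA can see, DAS bays included
    camcontrol devlist
    # map enclosure slots to disks (available on FreeNAS 11.x / FreeBSD 11+)
    sesutil show

If the shelf's drives show up in camcontrol devlist, ZFS treats them exactly like internal bays.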
…L2ARC and SLOG? I think I even read an old post mentioning using a RAID card + HDD as SLOG because the on-board cache has very low latency, etc. And the H710P does have 1GB of NV cache.
Skip all that nonsense.
3. Expansion. Some DAS units allow a daisy-chain setup; my understanding is that this is just a cascade of SAS expanders. Again, I think FreeNAS should support this since it should be transparent to the OS?
It's not transparent, but it's fully supported to an extent (ha, get it?). That extent is up to the HBA and the SAS expanders, but it will generally allow more than you'll ever have a reason to use unless you're a big IT shop, and at that point you have no business asking this question if you're the storage guy.
Side note: lots of enclosures have 2 I/O modules for multipath redundancy. You don't want one bad connector taking hundreds of TB offline. This again is generally fully supported and handled transparently by multipathd (is that the name in BSD? Someone? Cyberjock?)
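
For what it's worth, I believe the FreeBSD/FreeNAS side of this is GEOM multipath (gmultipath) rather than multipathd, and FreeNAS is supposed to label dual-ported SAS disks as /dev/multipath/diskN on its own. A minimal check from the shell (the "disk1" name is just an example):

    # list multipath devices and the state of each path
    gmultipath status
    # more detail on one multipath device
    gmultipath list disk1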
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
If I remember correctly, the R620 was the basis for the EqualLogic FS7600 / FS7610 NAS head unit (clustered, of course). Very capable unit; it could really get up and move bits...
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
If I remember correctly, the R620 was the basis for the EqualLogic FS7600 / FS7610 NAS head unit (clustered, of course). Very capable unit; it could really get up and move bits...
Yeah, powerful as hell. Most impressive is that it cost me less than my 8700K rig (a lot less if I finally get myself a GPU) but has twice the CPU power.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
You've got the potential for one hell of a beefy virtual-FreeNAS setup here.

If that was my hardware, I'd run ESXi on the bare metal, and install a couple drives in the R620 itself to act as a small local datastore, connected to the mini-H710p - that's where the FreeNAS .vmx would live, along with maybe a couple other simple VMs that I'd want to boot up immediately with the host.

One of the PCIe slots would hold an LSI 9207-8e or 9300-8e, the other would have an Optane drive, and I'd use PCI passthrough to connect them straight to the FreeNAS VM. Even if I gave, say, 4 vCPU/24GB of RAM to the FreeNAS VM, that would leave me with 16 cores and 40GB left on the host to run other VMs; and I'd actually have a use for the 8x 2.5" bays in the R620 itself.

That said, unless you're comfortable with ESXi, I wouldn't necessarily start off by virtualizing FreeNAS.

The downside to running bare-metal FreeNAS is that your H710p is basically useless in that setup. You might be able to wire up four of your R620's front bays to the onboard SATA ports by using a SAS reverse-breakout cable and use them there, but I don't know if the backplane will function that way. If it doesn't, you'd need to buy an HBA for your internal bays (consuming your PCIe LP slot) and then your second slot would be consumed by the external HBA. At that point, you're out of slots and you're stuck with SATA SSDs if you want an SLOG in the head (which you probably do) and SATA SLOGs are way slower than NVMe ones. Check this thread for some numbers on that - the P3700 or Optane drives just absolutely slaughter even the fastest SATA/SAS SSDs.

https://forums.freenas.org/index.php?threads/slog-benchmarking-and-finding-the-best-slog.63521/

Basically if that system had one more PCIe slot it would be incredible. But it's a 1U, and there's only so much space in those things.
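
For reference, the CLI version of attaching an SLOG or L2ARC is just a couple of zpool commands; the pool name "tank" and the nvd0/ada0 device names below are placeholders, and on FreeNAS you'd normally do this through the GUI rather than the shell:

    # attach an NVMe device as the SLOG (log vdev) for pool "tank"
    zpool add tank log nvd0
    # optionally attach a spare SSD as L2ARC (cache vdev)
    zpool add tank cache ada0
    # confirm the new vdevs are in place
    zpool status tank

If you have two SLOG devices, "zpool add tank log mirror nvd0 nvd1" mirrors them.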
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
As long as the enclosure is powered on before the kernel boots and the HBA is supported (basically all LSI), there are no real issues. This is how it's done.
That is good to know. I had never used a DAS before; do you just have to manually power on the DAS before the head unit? What happens if not? I assume it would appear to FreeNAS that a bunch of disks were hot-plugged? Not sure how that would be handled.
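
(I'm guessing that if the shelf only comes up after boot, a manual bus rescan from the shell would pick the disks up rather than needing a reboot, something like the below, but I've never tried it:)

    # ask the CAM layer to rescan all SCSI/SAS buses for newly powered-on devices
    camcontrol rescan all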


Skip all that nonsense.

Could you elaborate a bit more? I don't feel it's particularly unsafe for the boot drive and L2ARC, because even if they are gone the data will still be there. SLOG is a bit different, though.

It's not transparent, but it's fully supported to an extent (ha, get it?). That extent is up to the HBA and the SAS expanders, but it will generally allow more than you'll ever have a reason to use unless you're a big IT shop, and at that point you have no business asking this question if you're the storage guy.
Side note: lots of enclosures have 2 I/O modules for multipath redundancy. You don't want one bad connector taking hundreds of TB offline. This again is generally fully supported and handled transparently by multipathd (is that the name in BSD? Someone? Cyberjock?)

I am aware of the SAS path redundancy, but I'm not planning to use it. My plan was to use SATA drives anyway.
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
You've got the potential for one hell of a beefy virtual-FreeNAS setup here.

If that was my hardware, I'd run ESXi on the bare metal, and install a couple drives in the R620 itself to act as a small local datastore, connected to the mini-H710p - that's where the FreeNAS .vmx would live, along with maybe a couple other simple VMs that I'd want to boot up immediately with the host.

One of the PCIe slots would hold an LSI 9207-8e or 9300-8e, the other would have an Optane drive, and I'd use PCI passthrough to connect them straight to the FreeNAS VM. Even if I gave, say, 4 vCPU/24GB of RAM to the FreeNAS VM, that would leave me with 16 cores and 40GB left on the host to run other VMs; and I'd actually have a use for the 8x 2.5" bays in the R620 itself.

That said, unless you're comfortable with ESXi, I wouldn't necessarily start off by virtualizing FreeNAS.

The downside to running bare-metal FreeNAS is that your H710p is basically useless in that setup. You might be able to wire up four of your R620's front bays to the onboard SATA ports by using a SAS reverse-breakout cable and use them there, but I don't know if the backplane will function that way. If it doesn't, you'd need to buy an HBA for your internal bays (consuming your PCIe LP slot) and then your second slot would be consumed by the external HBA. At that point, you're out of slots and you're stuck with SATA SSDs if you want an SLOG in the head (which you probably do) and SATA SLOGs are way slower than NVMe ones. Check this thread for some numbers on that - the P3700 or Optane drives just absolutely slaughter even the fastest SATA/SAS SSDs.

https://forums.freenas.org/index.php?threads/slog-benchmarking-and-finding-the-best-slog.63521/

Basically if that system had one more PCIe slot it would be incredible. But it's a 1U, and there's only so much space in those things.

Yeah, I know it's massive overkill for FreeNAS. The R620 is currently my ESXi host and works great, except it only has 3 PCIe slots (low profile, half length). The R720 I am getting is just as powerful (almost the same spec, other than dual E5-2667 v2 instead of 2690 v2), and I am not really into setting up a cluster at home, though it would be fun. I read somewhere on this forum that it's best to run FreeNAS bare metal, so that is what I am currently leaning toward.

Dell does offer an R620 configuration where you can add 2 U.2 NVMe drives at the front, but it is rare on the market, at least for now. So I am exploring using that for the boot drives, L2ARC (maybe not; with 24 DIMM slots I can never really fill them all) and SLOG. It is my understanding that @jgreco has tried this kind of config in the past. If it doesn't work, well, I have 2 other PCIe slots (one taken by the external HBA) for P3700s.
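
If the U.2 backplane option works out, I'd expect the drives to just show up as nvme/nvd devices under FreeBSD; something like this should confirm it (sketch only, I haven't tried it myself):

    # list the NVMe controllers and namespaces the kernel found
    nvmecontrol devlist
    # detailed identify data (model, capacity, etc.) for the first controller
    nvmecontrol identify nvme0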
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The R620 is currently my ESXi host and works great, except it only has 3 PCIe slots (low profile, half length)

If yours has 3x actual low-profile PCIe slots, and none of them is the type that's locked to Dell-branded "storage controllers" then you could run bare-metal FreeNAS with two HBAs (internal + external) plus an NVMe SLOG device and be very happy with that setup. Maybe even switch the CPUs around so that one of the E5-2667s finds a home in the FreeNAS machine and you use the 2690s with the higher core count as your ESXi host.

I thought the R620 only had two PCIe slots, one full-height and one half-height, plus the spot for the "integrated/mini" RAID controllers.
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
If yours has 3x actual low-profile PCIe slots, and none of them is the type that's locked to Dell-branded "storage controllers" then you could run bare-metal FreeNAS with two HBAs (internal + external) plus an NVMe SLOG device and be very happy with that setup. Maybe even switch the CPUs around so that one of the E5-2667s finds a home in the FreeNAS machine and you use the 2690s with the higher core count as your ESXi host.

I thought the R620 only had two PCIe slots, one full-height and one half-height, plus the spot for the "integrated/mini" RAID controllers.
The R620 comes in 2 variations, with either 2 PCIe slots or 3. That is what I am planning to do, except I don't really want to consume a PCIe slot just for the 8x 2.5" bays. I (and I think many others) will only find use for 2.5" drives for:
1. boot drive
2. L2ARC
3. SLOG

Now, skipping the last one: is there any tangible benefit to letting FreeNAS have raw access to the first 2?
 