Help with building a new TrueNAS Server for VMs

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Four of my resources and a quote from me in the .sig. Trying to curry favor? Haha.
I'll stop linking your content when you quit writing such good stuff. And the 40GbE quote seemed extra-relevant for this thread. ;)

(Need to get a few VMware and iSCSI things written up myself though.)
 

uberwebguru

Explorer
Joined
Jul 23, 2013
Messages
97
@HoneyBadger @jgreco

Reading this makes me not want to set up a NAS.

Like, why does one have to know and do all this just to set up shared storage?
One should NOT need to know all this or get this in-depth.

1) Set up patrol reads to verify the RAID1 disks

2) Set up a script to notify you if sas2ircu picks up on any problems

3) Add a fourth SSD to the mix as a spare in case any of the first three fail
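Step 2 of the list above can be sketched as a small script run from cron; a minimal, hypothetical sketch, where the controller index (0), the alert address, and the exact state keywords are all assumptions to adjust for your system:

```shell
#!/bin/sh
# Hypothetical sketch: mail an alert when sas2ircu reports a non-optimal
# state. Controller index 0 and the alert address are assumptions.
ALERT_ADDR="root@localhost"

# Return success if the status text contains a state worth alerting on.
# The keyword list is an assumption; compare against your sas2ircu output.
has_problem() {
    printf '%s\n' "$1" | grep -Eq 'Degraded|Failed|Inactive|Missing'
}

# Query the controller and mail the full status output on a problem.
check_controller() {
    status="$(sas2ircu 0 STATUS 2>&1)"
    if has_problem "$status"; then
        printf '%s\n' "$status" | mail -s "sas2ircu alert on $(hostname)" "$ALERT_ADDR"
    fi
}
```

Drop a call to `check_controller` into a cron entry (say, hourly) and you only get mail when something actually changes state.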

From what I know, one can just ignore the RAID controller and it will leave things alone,
but now I'm hearing I should replace it with an HBA.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Like, why does one have to know and do all this just to set up shared storage?
One should NOT need to know all this or get this in-depth.

Do you have Mario Andretti's driving skills?

If not, why do you drive a car (assuming you do)?

There are all sorts of things that are discussed, some of which are truly important, some of which are less important for the average use case.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Like, why does one have to know and do all this just to set up shared storage?
One should NOT need to know all this or get this in-depth.

I don't mean to be rude, but this statement seems a bit in contrast to your original comment of

Mentioned production because this will not be some home lab stuff, will be running mission critical stuff

You certainly can "set up shared storage" just by throwing together some components, and certainly they may work. For how long and how well - that's the question.

Are you willing to be the person that gets called if/when things break?

From what I know, one can just ignore the RAID controller and it will leave things alone,
but now I'm hearing I should replace it with an HBA.

There is a physical limit to the number of slots in a server. Typically, the RAID controller will occupy a proprietary form factor slot. If you aren't going to use it, remove it so that the proprietary slot can be used for a compatible controller, leaving a standard form factor PCIe slot free. If you are going to use it (such as for part of the highly-available boot pool strategy) then leave it there, but you'll lose a standard PCIe slot for the HBA.

For example, your R730XD has an option for a rear 2x2.5" internal swap bay. If you leave that in place, you could cable those two bays into the H730P and set it up as a hardware-mirrored boot pool. Put an HBA330 into one of the PCIe x8 slots and cable the front 24 bays into that. You'll need to use another x8 slot for your 40GbE card, and I'd even suggest buying into the 4x PCIe U.2 hotswap option which means consuming an x16 slot for the adaptor card. Then you can stuff a bunch of Optane devices up front to accelerate your NFS sync writes.

(The R730XD is a really nice platform to build on, if anyone else was wondering.)
 

uberwebguru

Explorer
Joined
Jul 23, 2013
Messages
97
For example, your R730XD has an option for a rear 2x2.5" internal swap bay. If you leave that in place, you could cable those two bays into the H730P and set it up as a hardware-mirrored boot pool. Put an HBA330 into one of the PCIe x8 slots and cable the front 24 bays into that. You'll need to use another x8 slot for your 40GbE card
What if I do a single boot drive? I mean, if that fails one will just install another OS, right?
RAID on boot drives of the same age, what advantage really? What is the probability of a boot drive failing, especially using an 870 EVO SSD?

Also, maybe I don't need the 40G; I can use the 2 x 10G and see if I use that up first.
I'm thinking I won't need the 40G then, might be overkill, for a start at least.
My setup is surely not powering the next facebook.com, just a couple of hypervisor hosts and hundreds of VMs.
Most of the VMs are not going to be PROD workloads.




I'd even suggest buying into the 4x PCIe U.2 hotswap option which means consuming an x16 slot for the adaptor card. Then you can stuff a bunch of Optane devices up front to accelerate your NFS sync writes
Can you explain this part?

(The R730XD is a really nice platform to build on, if anyone else was wondering.)
What about the R740xd then? Would that help avoid too much customization?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
What if I do a single boot drive? I mean, if that fails one will just install another OS, right?
Sure, it's no big deal. It's also not expensive to add a second one, so you typically see that in enterprise deployments.
RAID on boot drives of the same age, what advantage really?
You're not wrong, but there's more to reliability than age. It's a sort of low-impact, low-cost thing that can nudge uptime up a bit. Of course, you can also go with crazier setups, but few people do.
What is the probability of a boot drive failing, especially using an 870 EVO SSD?
With good SSDs, not very likely.
What about the R740xd then? Would that help avoid too much customization?
Better, but not that much better. The only major improvements are better CPUs (probably not a major limitation) and support for more than 4x bays with PCIe SSDs in some models. Cost is much higher. The HBA/RAID controller situation is the exact same, Gen13 and Gen14 use the same proprietary form factor for the embedded controller.
 

uberwebguru

Explorer
Joined
Jul 23, 2013
Messages
97
Do you have Mario Andretti's driving skills?

If not, why do you drive a car (assuming you do)?

There are all sorts of things that are discussed, some of which are truly important, some of which are less important for the average use case.
I can drive pretty fast/well compared to most drivers,
but I get your point, I also just want things simple.
Simple is easier to manage than complex.
 

uberwebguru

Explorer
Joined
Jul 23, 2013
Messages
97
Better, but not that much better. The only major improvements are better CPUs (probably not a major limitation) and support for more than 4x bays with PCIe SSDs in some models. Cost is much higher. The HBA/RAID controller situation is the exact same, Gen13 and Gen14 use the same proprietary form factor for the embedded controller.
What about this?
I'd even suggest buying into the 4x PCIe U.2 hotswap option which means consuming an x16 slot for the adaptor card. Then you can stuff a bunch of Optane devices up front to accelerate your NFS sync writes

Optane drives to speed up NFS writes? How do I put these up front?
I'm not sure I understand this setup.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
What about this?


Optane drives to speed up NFS writes? How do I put these up front?
I'm not sure I understand this setup.
If using sync writes, you'd typically want an SLOG to get some semblance of performance. For a while now, NVMe has been the way to go, and the R730XD, in some configurations, has four bays that can take PCIe SSDs.
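For the mechanics, attaching an SLOG is a one-liner once the devices are installed. A hedged command-line sketch, assuming a pool named tank and placeholder NVMe device names nvd0/nvd1 (on TrueNAS you would normally do this from the UI, and with your actual device IDs):

```shell
# Attach a mirrored SLOG to an existing pool named "tank".
# nvd0/nvd1 are placeholder device names; check yours first.
zpool add tank log mirror nvd0 nvd1

# The devices should now appear under a "logs" section of the pool layout.
zpool status tank
```

Mirroring the SLOG is optional but cheap insurance: if a lone SLOG device dies at the same moment as a crash, you can lose the last few seconds of sync writes.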
 

uberwebguru

Explorer
Joined
Jul 23, 2013
Messages
97
If using sync writes, you'd typically want an SLOG to get some semblance of performance. For a while now, NVMe has been the way to go, and the R730XD, in some configurations, has four bays that can take PCIe SSDs.
You are right, it says it supports up to 4 Express Flash NVMe PCIe SSDs.

So where can I read up more on how to speed up writes with this SLOG?
How do I set up this write performance configuration, etc.?

What size should the NVMe SSDs usually be?
For my setup I don't think I will ever need more than 16TB of shared storage, really.
Thin provisioning usually helps keep things in check, which is a top reason for considering a NAS.
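On the thin-provisioning point: on ZFS that is just a sparse zvol. A hedged sketch, assuming a pool named tank (the pool and dataset names are placeholders):

```shell
# Create a sparse (thin-provisioned) 16T zvol; -s skips the space
# reservation, so blocks are only allocated as guests actually write.
zfs create -s -V 16T tank/vmstore

# volsize shows the advertised 16T; "used" only grows with real writes.
zfs get volsize,used tank/vmstore
```

The usual caveat applies: because nothing is reserved, you have to watch pool capacity yourself, since an overcommitted sparse zvol can run the pool out of space.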
 

timblaktu

Cadet
Joined
Mar 16, 2023
Messages
1
@uberwebguru, did you end up using a Dell 6WMMV daughtercard in your R730xd to get 25Gbps connectivity? I want to direct-connect two R730xds (like this), but I'm trying to research whether this daughtercard will work in a 13th-gen PowerEdge, as the original spec sheet for the R730xd only shows support for up to 10G SFP+, and I have found conflicting info elsewhere; e.g. [this listing](https://www.serversupply.com/NETWORKING/DAUGHTER%20CARD/2%20PORT/DELL/6WMMV_327896.htm) says it's only supported in the R740* gen, but [this one](https://www.stikc.com/dell-poweredge-r730xd-custom-build.html) shows the option to install a Dell Mellanox ConnectX-4 2-port 25Gb SFP+ daughtercard.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
While I do not have an answer either way, I do know that Dell artificially limits rNDCs. For instance, Gen 12s will refuse to work with the X710+I350 rNDCs.

I've seen some vendors sell Gen 13s with Mellanox ConnectX 25GbE rNDCs, though. Worth digging through Dell's docs to see if you find anything.
 