FreeNAS running on ESXi host

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
Hi,

I know this has been asked a few times but I really just wanted to clarify my situation before I go ahead with it.

My goal - move my BSD jails into Docker containers with direct GPU access (yes, Plex and Tdarr - the latter of which doesn't seem to support BSD).

The pitfall - bhyve doesn't support GPU passthrough, so Docker containers running inside an Ubuntu 20.04 VM can't access the GPU for hardware transcoding.

The solution (?) - change the bare-metal OS from FreeNAS to ESXi. Run FreeNAS in a VM with the HBA passed through for direct access to my drives. Run Ubuntu in a separate VM for the Docker containers, with direct GPU access - leaving FreeNAS to do what it does best: being a NAS.
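
The idea being that, once the Ubuntu VM owns the GPU (the i3-10100's integrated QuickSync, exposed as /dev/dri), something like this should work - the volume paths here are just placeholders:

[CODE]
# Hand the VM's render device to the official Plex container
# so it can use QuickSync for hardware transcoding
docker run -d --name plex \
  --device /dev/dri:/dev/dri \
  -v /path/to/config:/config \
  -v /path/to/media:/data \
  plexinc/pms-docker
[/CODE]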

I don't have a heap of experience with ESXi, but I do have some via work (Cisco UC) - so I'm familiar, but no expert. Also, my storage consists of 5 x 2TB HDDs in RAIDZ1 that are 92% full. Boot drive is an M.2 SSD. My server is just for home use.

Here are my questions and concerns;
- Confirm that I can pass the HDDs through to the FreeNAS VM and re-import my existing ZFS pools? (Ideally, if I could just restore a backup, even better.)
- Can I use the M.2 SSD as both ESXi boot device and datastore for VMs? Or, am I better off booting ESXi off a USB stick and dedicating the SSD to datastore? (I don't want to do anything crazy or time-consuming to use the SSD as both ESXi boot and datastore - this needs to be supportable by future me.)
- Anyone know of gotchas that might catch me out with the i3-10100 processor for GPU passthrough in ESXi? The processor supports VT-x and VT-d (IOMMU). It should be fine from what I've read but if anyone else has wisdom here I would be very appreciative.

Thanks in advance!
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
The pitfall - bhyve doesn't support GPU passthrough, so Docker containers running inside an Ubuntu 20.04 VM can't access the GPU for hardware transcoding.
Wait a few months for TrueNAS 12 Core... you will get that.

change the bare-metal OS from FreeNAS to ESXi. Run FreeNAS in a VM with the HBA passed through for direct access to my drives. Run Ubuntu in a separate VM for the Docker containers, with direct GPU access - leaving FreeNAS to do what it does best: being a NAS.
Perfectly OK to do that too; just know that you're adding complexity that you'll need to manage.

It should be fine from what I've read but if anyone else has wisdom here I would be very appreciative.
The way ESXi does passthrough is a bit weird... you need to enable the device for passthrough, then reboot ESXi before you can actually use it in a VM. But in theory it looks OK for what you want.
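
On recent builds (7.0 U2 and later) you can also do it from the ESXi shell - the PCI address below is just an example; on older builds you toggle it in the host UI instead (Host > Manage > Hardware > PCI Devices):

[CODE]
# List PCI devices and their current passthrough state
esxcli hardware pci pcipassthru list

# Enable passthrough for the GPU by its PCI address
esxcli hardware pci pcipassthru set -d 0000:00:02.0 -e true

# The host must reboot before the device can be attached to a VM
reboot
[/CODE]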

Can I use the M.2 SSD as both ESXi boot device and datastore for VMs?
Yes...

Or, am I better off booting ESXi off a USB stick and dedicating the SSD to datastore?
But this is probably better for the long life of your system... make sure you back up the ESXi config regularly though.
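
From an SSH session on the host, that's just the following - it prints a URL where you download the configBundle.tgz, which you should keep somewhere off the host:

[CODE]
# Flush pending config changes to the boot device, then request a backup
vim-cmd hostsvc/firmware/sync_config
vim-cmd hostsvc/firmware/backup_config
[/CODE]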

Confirm that I can pass the HDDs through to the FreeNAS VM and re-import my existing ZFS pools? (Ideally, if I could just restore a backup, even better.)
If you pass through the HBA with all of the right disks, building a fresh FreeNAS VM and restoring your config from the original FreeNAS should go well.
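
And if the pool doesn't come back automatically with the config, importing it by hand from a shell is simple ('tank' is an example pool name):

[CODE]
zpool import        # scan the attached disks and list importable pools
zpool import tank   # import the pool by name
[/CODE]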
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Also, my storage consists of 5 x 2TB HDDs in RAIDZ1 that are 92% full. Boot drive is an M.2 SSD. My server is just for home use.
Your pool is dangerously full. FreeNAS performance drops off drastically once a pool is 80% full, so you should consider replacing the drives with larger-capacity disks.
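
You can see exactly where you stand from a shell - the CAP column is the percentage of pool capacity in use:

[CODE]
zpool list -o name,size,alloc,free,cap
[/CODE]
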
- Can I use the M.2 SSD as both ESXi boot device and datastore for VMs? Or, am I better off booting ESXi off a USB stick and dedicating the SSD to datastore? (I don't want to do anything crazy or time-consuming to use the SSD as both ESXi boot and datastore - this needs to be supportable by future me.)
You can install ESXi and the FreeNAS VM on your M.2 SSD and I recommend this over booting from USB. With a large enough SSD, you can also provision an L2ARC partition for the FreeNAS VM to use. I take this approach in my newer All-in-One system (server 'BRUTUS', see 'my systems' below).
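
Under the hood, that's just attaching a cache vdev to the pool - a sketch with example pool/device names (in practice, prefer the FreeNAS GUI, which does the equivalent and tracks devices by gptid):

[CODE]
# Attach a partition as an L2ARC (cache) device to the pool
zpool add tank cache da1p2

# Cache devices can be removed later without harming the pool
zpool remove tank da1p2
[/CODE]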
 

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
Hey,

Thanks for all the great information.

Your pool is dangerously full. FreeNAS performance drops off drastically once a pool is 80% full, so you should consider replacing the drives with larger-capacity disks.

Yes, it is indeed full, but I didn't know that this would cause a performance impact - thanks for the heads up. The HDDs are next in line to be upgraded: I'll replace the 5 x 2TB with 5 x 4TB and then expand the volume. It will still be RAIDZ1, which I've read isn't ideal, so down the line I may increase this to 6 or 7 disks and move to RAIDZ2.
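
From what I've read, the in-place upgrade goes one disk at a time, waiting for each resilver, and with autoexpand set the pool grows after the last swap (names below are examples):

[CODE]
zpool set autoexpand=on tank
zpool replace tank da0 da5   # old disk, new disk; wait for resilver to finish
zpool status tank            # confirm resilver is done before the next swap
[/CODE]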

With a large enough SSD, you can also provision an L2ARC partition for the FreeNAS VM to use.

I've only got a 120GB SSD - once I move to ESXi and put two VMs on there, I might not quite be able to squeeze it in. If I've read correctly, the L2ARC should be sized at around 7/8 of the system RAM. If I give FreeNAS 10GB (yeah, I know it's low, but I've only got 16GB) and then give it an ~8GB L2ARC partition... it might all fit, we'll see. How much of an improvement does it give for file access?

just know that you're adding complexity

Yes, it adds some complexity, but I don't mind that so much. It opens up some other uses for the server, and increases my exposure to ESXi, which is beneficial to my professional life.

But this is probably better for the long life of your system

Seems there might be two camps about this. I did read that ESXi will wear down USB thumb drives or SD cards quite quickly. How does this differ from an SSD? Is it because of the volume of reads/writes? My preference is boot and datastore on the single SSD - but if it's going to tear up my SSD just as quickly, then I'd rather frequently replace USB sticks than SSDs.

Always find such helpful information and users on this forum. Thanks for taking the time to reply to my post with details @sretalla and @Spearfoot.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Seems there might be two camps about this. I did read that ESXi will wear down USB thumb drives or SD cards quite quickly. How does this differ from an SSD? Is it because of the volume of reads/writes? My preference is boot and datastore on the single SSD - but if it's going to tear up my SSD just as quickly, then I'd rather frequently replace USB sticks than SSDs.
ESXi won't wear out your SSD. It's not that ESXi wears out disks -- it's just that USB sticks aren't very durable.
 

zirophyz

Dabbler
Joined
Apr 11, 2016
Messages
26
@Spearfoot It's interesting to see that VMware doesn't recommend putting both the boot volume and a VM VMFS datastore on the same M.2 disk, since they consider M.2 to be low-endurance storage, the same as a USB drive.



Unlike USB flash devices, the ESXi installer creates system storage volumes and a VMFS datastore on M.2 and other non-USB low-end flash media. If you deploy a virtual machine or migrate a virtual machine to this boot device datastore, the boot device can be worn out quickly depending on the endurance of the flash device and the characteristics of the workload. As even read-only workloads can cause problems on low-end flash devices, you should install ESXi only on high-endurance flash media.


Also, on another page from VMware:
if you install ESXi 7 on a M.2 or other non-USB low-end flash media, beware that the storage device can be worn out quickly if you, for example, host a VM on the VMFS datastore on the same device. Be sure to delete the automatically configured VMFS datastore on the boot device when using low-end flash media. It is highly recommended to install ESXi on high-endurance flash media.
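
If it helps anyone, the auto-created datastore can be identified from the ESXi shell before deleting it in the host UI (the installer usually names it datastore1):

[CODE]
# List VMFS volumes on the host
esxcli storage filesystem list
[/CODE]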


My M.2 drive is just a WD Green. Searching quickly, I couldn't find the DWPD for this drive to work out its endurance, but I'm going to assume it'll be low. What I might do is combine boot and datastore for now, then in the near future move the VM datastore to a higher-endurance SSD and leave the lower-endurance M.2 SSD as boot.
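
For anyone else doing this math - the figures below are purely illustrative, not WD Green specs:

[CODE]
# TBW = capacity (TB) x DWPD x 365 x warranty years
# e.g. a 0.12 TB drive rated at 0.3 DWPD over a 3-year warranty:
#   0.12 x 0.3 x 365 x 3 ≈ 39 TB written over its rated life
[/CODE]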
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
@Spearfoot I was wondering, why are you running that many TrueNAS installs? Is there some specific purpose behind it?

Also, for example, on BILBO you have the following setup:
ESXi boot and datastores: 100GB Intel DC S3500 SSD + 512GB Samsung SM961 M.2 NVMe SSD
Pool: Mirror (2 x 12TB HGST Ultrastar DC HC520 (0F30141))

Compared to BOOMER, where there is only one drive acting as both boot and datastore:
ESXi boot and datastore: 100GB Intel DC S3500 SSD

What's the purpose of having separate ESXi boot and datastore drives? I mean, for what reason do you use a datastore drive, when you also have a pool?

Also, isn't only 32GB required for the ESXi boot device?

Do you have some specific requirement that makes you virtualize TrueNAS via ESXi? For what purpose do you run ESXi, when on the other hand you could use bhyve inside TrueNAS?

Appreciate it!
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
What's the purpose of having separate ESXi boot and datastore drives? I mean, for what reason do you use a datastore drive, when you also have a pool?
You must boot ESXi from at minimum one drive.

For your TrueNAS VM, you need at least a single VHD, which needs a datastore to exist on (and that datastore can't be provided by the TrueNAS VM itself... chicken-and-egg problem).
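
In practice that's just a small virtual disk carved out of a local datastore, e.g. (path and size here are examples):

[CODE]
# Create a directory and a thin-provisioned 16GB virtual disk
# for the TrueNAS boot device
mkdir -p /vmfs/volumes/datastore1/truenas
vmkfstools -c 16G -d thin /vmfs/volumes/datastore1/truenas/truenas-boot.vmdk
[/CODE]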

Also, isn't only 32GB required for the ESXi boot device?
I have seen recent posts in this forum mentioning that 100GB is the space modern VMware wants for its own purposes, so larger than that is ideal.

Do you have some specific requirement that makes you virtualize TrueNAS via ESXi? For what purpose do you run ESXi, when on the other hand you could use bhyve inside TrueNAS?
VMware ESXi is a Type 1 (and very mature) hypervisor.

bhyve is Type 2 and not very mature (although it does continue to improve).

Things like USB passthrough and handling of CPU and memory are all done better by VMware.
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
You must boot ESXi from at minimum one drive.

For your TrueNAS VM, you need at least a single VHD, which needs a datastore to exist on (and that datastore can't be provided by the TrueNAS VM itself... chicken-and-egg problem).
hello,
the worst thing from my point of view is having to size such a drive for VHD images upfront. ;/
 