hypervisor + virtualized Truenas

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
You must pass through an entire HBA and connect the drives you intend to use for TrueNAS to that. This topic has been discussed on the first page of this very thread, including the same link that I just posted. If you must run virtualised in ESXi, do it this way. Any other way, you are on your own and nobody will even be capable of helping you when you lose data. This is the only configuration known to work.
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
You must pass through an entire HBA and connect the drives you intend to use for TrueNAS to that. This topic has been discussed on the first page of this very thread, including the same link that I just posted. If you must run virtualised in ESXi, do it this way. Any other way, you are on your own and nobody will even be capable of helping you when you lose data. This is the only configuration known to work.
Yes, but I don't have an HBA; I want to use the native SATA ports on the mainboard.
That's why I was wondering...
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Then you cannot run TN virtualised in ESXi. The software and applications you intend to run dictate what hardware you need to buy. It has always been this way. You can still use the virtualisation features of TrueNAS.
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
@Patrick M. Hausen then I must be missing something. So basically, you can't virtualise TN using the SATA ports on the mainboard? Virtualisation can only be done using SATA ports on an HBA PCIe controller card?

thx
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
@Patrick M. Hausen then I must be missing something. So basically, you can't virtualise TN using the SATA ports on the mainboard? Virtualisation can only be done using SATA ports on an HBA PCIe controller card?

thx

The problem is the native SATA ports are controlled by the CPU's built-in AHCI controller, which can't be detached from the CPU for passthrough to a VM. It's also not recommended to create virtual drives for the TrueNAS VM to use. This will appear to work at first, but these drives WILL absolutely corrupt over time.
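Whether the onboard controller is even a candidate for passthrough can be checked from the ESXi shell before buying anything. A minimal sketch, assuming SSH access to the host (output and device names vary by board; the AHCI match string is only illustrative):

```shell
# List PCI devices on the ESXi host and look for the SATA/AHCI controller.
# On most boards it shows up as a chipset AHCI function tied to the CPU/PCH.
lspci | grep -i -e sata -e ahci

# esxcli gives more detail, including the PCI address you would need
# for passthrough (if ESXi allows passthrough for this device at all):
esxcli hardware pci list
```

If the controller only appears as the chipset AHCI function that ESXi itself is using, the Host Client will show its passthrough toggle greyed out, which matches the explanation above.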
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Well, POSSIBLY you can pass through your entire SATA controller to TrueNAS - IF it's a PCIe device on your mainboard, but most frequently they are not, as Samuel stated. But then what are you going to use to boot ESXi from?

You cannot pass through a single SATA port. No.

We have been discussing this back and forth in this thread. You need to make up your mind what you want to run. What do you need ESXi for? Why can't you run VMs in TrueNAS if you need VMs? If you absolutely need ESXi, why do you want to run TrueNAS on the same machine? Running TrueNAS alongside ESXi has some very specific hardware requirements. And then with PCIe passthrough, ESXi mandates that you reserve all the memory you assign to the TrueNAS VM. So you cannot share memory with other VMs.

This is high end enterprise data center technology. You can run it perfectly well in a home lab, but there are certain constraints. I run an ESXi and TrueNAS SCALE combo. But all disks used by TrueNAS are NVMe - which are PCIe devices - which are passed through. So this works.

No, you cannot use the SATA ports on your mainboard with TrueNAS AND ESXi at the same time. You can install TrueNAS on your system perfectly well.
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
@Samuel Tai @Patrick M. Hausen
OK, so I need to buy HBA cards for the PCIe slots on https://www.supermicro.com/en/products/motherboard/x11ssl-f
1 PCI-E 3.0 x8 (in x16),
1 PCI-E 3.0 x8,
1 PCI-E 3.0 x4 (in x8)

As I am using the x4 slot for an NVMe drive to boot the system, I now have only the one x8 and the other x8 (in x16) left. That means a max of 2 HBA cards.

Is there no performance issue comparing native SATA ports vs. HBA SATA ports? I saw some recommendations in the TN documentation... but ~4 HBAs were mentioned there...

Is it possible to say which HBAs are most stable and can be used with my PCIe slots?

thanks
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
And then with PCIe passthrough, ESXi mandates that you reserve all the memory you assign to the TrueNAS VM. So you cannot share memory with other VMs.
What do you mean by sharing with other VMs? I assumed VMs can't share allocated memory?
I run an ESXi and TrueNAS SCALE combo. But all disks used by TrueNAS are NVMe - which are PCIe devices - which are passed through. So this works.
Yes, but I have standard SATA disks.

At the moment I am using a PCIe-to-NVMe adapter, and this is what ESXi boots from.
So going with the HBA scenario, I think I should use local SATA to boot ESXi and leave all the PCIe slots for passthrough.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
You can overprovision memory in ESXi, i.e. allocate more memory to VMs than is physically present in the system. You cannot do this with a VM that has passed-through PCIe devices, though. Hence my warning.

So going with the HBA scenario, I think I should use local SATA to boot ESXi and leave all the PCIe slots for passthrough.
Sounds good. If you want to run a virtualised TrueNAS that's the recommended way to go.
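For reference, the reservation described here is what ESXi's "Reserve all guest memory (All locked)" checkbox sets. A minimal sketch of the corresponding `.vmx` fragment, assuming a hypothetical 16 G TrueNAS VM (key names can vary between ESXi versions, so treat this as illustrative and prefer the Host Client UI):

```ini
; Hypothetical excerpt of a TrueNAS VM's .vmx with PCIe passthrough.
memSize = "16384"              ; 16 G allocated to the guest
sched.mem.min = "16384"        ; full reservation: min equals memSize
sched.mem.pin = "TRUE"         ; pages locked in host RAM, cannot be shared
pciPassthru0.present = "TRUE"  ; the passed-through HBA/controller
```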
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
OK, but what's the connection between memory and the VM image? I mean, does it really matter whether you load the VM from a local SATA drive or via a PCIe passthrough drive?

Is it possible to say whether there is some speed drawback comparing HBA SATA with the local SATA ports?
Also, can you guys recommend the best HBA card regarding stability, compatibility, etc.?

thanks
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
*sigh*

You do not load the VM from the passed-through HBA. You load the VM from a VMFS datastore in ESXi. Which most probably will be a local SATA drive in your case.

You pass through the HBA to TrueNAS to connect all the disks you want to manage with TrueNAS. TrueNAS is a storage appliance, remember? So you are going to connect a whole bunch of disks to build fancy pools with ZFS and stuff. Right? These disks MUST be connected to a passed through HBA that is under full control of TrueNAS.

You can BOOT TrueNAS from a virtual disk just fine. You cannot use virtual disks for storage managed by TrueNAS.

And then there's simply a constraint by ESXi. If you use the PCIe PASS THROUGH FEATURE in ESXi for any VM, ALL memory for that VM must be reserved and cannot be shared. That's simply how ESXi works.

If you are not intending to connect a whole bunch of disks to TrueNAS but to ESXi instead, then what do you need TrueNAS for?

We are now on the fourth page of this discussion and you have already asked (and received answers) about where to store VM images etc.

If you want a hybrid installation you need

1. At least a single disk drive to install ESXi on and provide some space for a VMFS datastore to keep a 16 G virtual disk for TrueNAS boot. 250 G minimum recommended, SSD recommended but not necessary. NVMe possible but SATA will do as well.
2. A dedicated HBA to connect all the disks you want to use in TrueNAS and NOT in ESXi to build a ZFS pool with redundancy etc. The first disk for ESXi cannot be connected to this HBA but needs some different storage controller - motherboard SATA is good.

You can then create an NFS share in TrueNAS and share this back to ESXi to store more VM images than your first disk can hold and also have some redundancy, snapshots etc. for them, which the single boot disk does not provide.
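The share-back step can be done from the ESXi shell as well as from the vSphere UI. A minimal sketch, assuming a TrueNAS NFS export at `192.168.1.50:/mnt/tank/vmstore` (address, pool path, and datastore name are made up for illustration):

```shell
# Mount the TrueNAS NFS export as an additional ESXi datastore:
esxcli storage nfs add --host=192.168.1.50 --share=/mnt/tank/vmstore --volume-name=truenas-vms

# Verify the datastore is mounted:
esxcli storage nfs list
```

Note the chicken-and-egg constraint implied above: this datastore only exists once the TrueNAS VM is up, so the TrueNAS VM itself must stay on the local datastore.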
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
And then there's simply a constraint by ESXi. If you use the PCIe PASS THROUGH FEATURE in ESXi for any VM, ALL memory for that VM must be reserved and cannot be shared. That's simply how ESXi works.
Sorry, I don't understand what you are saying. We are mixing apples and oranges.
We are talking here about memory for the TrueNAS VM only, correct? No other VM comes into it. Only the TrueNAS VM will use PCIe passthrough to be able to access the drives and create a pool.
Also, I don't know what "reserved and can't be shared" means. Normally if you allocate 8 GB to a VM, it's allocated for that VM; how would you share that RAM?

You can then create an NFS share in TrueNAS and share this back to ESXi to store more VM images than your first disk can hold and also have some redundancy, snapshots etc. for them, which the single boot disk does not provide.
Here I think we already said that VMs can't be stored on TrueNAS; there would be performance issues etc.... that's why we said a separate drive is required for the VMs.

Regarding the HBA cards, I am still not clear whether there is some performance drawback regarding speed compared to the mainboard SATA ports.


thanks
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Sorry, I don't understand what you are saying. We are mixing apples and oranges.
We are talking here about memory for the TrueNAS VM only, correct? No other VM comes into it. Only the TrueNAS VM will use PCIe passthrough to be able to access the drives and create a pool.
Also, I don't know what "reserved and can't be shared" means. Normally if you allocate 8 GB to a VM, it's allocated for that VM; how would you share that RAM?
Yes, of course we are talking only TrueNAS VM. The memory you assign in ESXi for that VM needs to be reserved.

Don't you know that in ESXi you can have e.g. 32 G of physical memory and still create e.g. 8 VMs of 8 G each as long as they do not USE that memory all at the same time? Similarly you can have 8 CPU cores and still create 8 VMs with 2 virtual cores each.

Well, you can. But not for the TrueNAS VM which needs this memory exclusively because of the PCIe pass through. This is my entire point. I just want to make you aware of any consequences of your virtualisation approach. You will need more memory, for one. If you virtualise on TrueNAS you will need less.

Regarding the HBA cards, I am still not clear whether there is some performance drawback regarding speed compared to the mainboard SATA ports.
Normally you cannot pass the mainboard SATA ports to TrueNAS in ESXi, so you MUST use an HBA to virtualise. It is simply not possible otherwise. Unless the mainboard SATA ports are connected to an onboard SATA/SAS controller and not to the CPU/chipset-integrated AHCI. Some Supermicro boards have that. It will be mentioned in your mainboard manual if you have a dedicated LSI controller on that board.
In this case and only in this case you can pass through the whole controller - but then you cannot boot ESXi from it!

You need two completely separate storage paths - one for ESXi and one for TrueNAS. This is not a matter of performance drawbacks but a matter of "works at all or doesn't". You cannot connect the ESXi disk to the controller used in TrueNAS and you cannot use any disk connected to the controller that is used by ESXi for TrueNAS storage. ESXi and TrueNAS must be completely separate.
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
Don't you know that in ESXi you can have e.g. 32 G of physical memory and still create e.g. 8 VMs of 8 G each as long as they do not USE that memory all at the same time? Similarly you can have 8 CPU cores and still create 8 VMs with 2 virtual cores each.
I didn't know that and still don't quite get it... So what you are saying is: with 32 G of physical RAM, I can create VM 1 with 32 G of RAM and VM 2 with 32 G of RAM... and in case VM 1 uses only 1 G, VM 2 can use (share) the 31 G which are free?



The board was mentioned - https://www.supermicro.com/en/products/motherboard/x11ssl-f - I can't see that it can handle it as you described.

thx
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I didn't know that and still don't quite get it... So what you are saying is: with 32 G of physical RAM, I can create VM 1 with 32 G of RAM and VM 2 with 32 G of RAM... and in case VM 1 uses only 1 G, VM 2 can use (share) the 31 G which are free?

If no VMs are using PCI device passthrough then yes - you can have 32 G of physical RAM and a set of machines whose allocations total more than 32 G. VM1 will only be actually consuming 1 G of physical RAM, and the other 31 G (less the hypervisor overhead) will be available for VM2 to use.

However, if you reach a point where you exhaust the physical memory in the machine (VM1 uses 16 G, VM2 uses 16 G), then your performance will be massively impacted, likely to the point of a non-responsive VM.

But as a technical requirement of PCI device passthrough (for your HBA or SATA controller), you must reserve 100% of the memory allocated to the VM. This memory cannot be shared. So if you have a TrueNAS VM with 16 G of RAM, you will only have 16 G of "sharable" memory for the rest of your machines.
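The arithmetic here can be made concrete. A minimal sketch, using the hypothetical figures from this thread (a 32 G host with 16 G fully reserved for the TrueNAS VM):

```shell
#!/bin/sh
# Hypothetical figures: 32 G physical RAM in the host,
# TrueNAS VM with 16 G that is 100% reserved due to PCIe passthrough.
physical_gb=32
truenas_reserved_gb=16

# The reserved pages are pinned and can never be reclaimed by ESXi,
# so only the remainder is available to overcommit across other VMs.
sharable_gb=$((physical_gb - truenas_reserved_gb))
echo "Sharable for the remaining VMs: ${sharable_gb} G"
# prints: Sharable for the remaining VMs: 16 G
```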

The board was mentioned - https://www.supermicro.com/en/products/motherboard/x11ssl-f - I can't see that it can handle it as you described.

I do not see an NVMe M.2 slot on here, so you don't have a separate storage controller that can be left "not passed through" for the host hypervisor to access. It is likely capable of booting from NVMe, though, so you could add one on a simple M.2-to-PCIe adaptor card and use that to boot ESXi and as a VMFS datastore; you could then also install a SAS HBA and pass that device through to your TrueNAS VM.
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
But as a technical requirement of PCI device passthrough (for your HBA or SATA controller), you must reserve 100% of the memory allocated to the VM. This memory cannot be shared. So if you have a TrueNAS VM with 16 G of RAM, you will only have 16 G of "sharable" memory for the rest of your machines.
You meant: if you have 32 G of RAM and 16 G goes to the TN VM, then 16 G will remain sharable.

Maybe one more thing: this is an ESXi-only feature, I assume? I mean, using bhyve for example, if I have 32 G of RAM for VMs and I create 2 VMs of 16 G each and the first uses only 1 G, the second one will still only be able to use its 16 G - i.e. it won't be able to share the remaining 15 G of RAM from the first VM.

I do not see an NVMe M.2 slot on here, so you don't have a separate storage controller that can be left "not passed through" for the host hypervisor to have access to. It likely is capable of booting from NVMe, so you could add one on a simple M.2 to PCIe adaptor card, and use that to boot ESXi and as a VMFS datastore, but you could also install a SAS HBA and pass that device through to your TrueNAS VM.
That's how it is set up right now: I have an NVMe-to-PCIe adaptor and ESXi boots from it.
But I need to buy a SAS HBA so I can connect 3.5" drives and pass them through to TN, which will run as a VM.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Maybe one more thing: this is an ESXi-only feature, I assume? I mean, using bhyve for example, if I have 32 G of RAM for VMs and I create 2 VMs of 16 G each and the first uses only 1 G, the second one will still only be able to use its 16 G - i.e. it won't be able to share the remaining 15 G of RAM from the first VM.
I honestly haven't used bhyve for anything beyond very simple "does this boot?" testing. I also don't oversubscribe RAM in my VMware configurations as it's historically been a great way to shoot yourself in the foot under sudden memory pressure or host hardware failure. A guest OS doesn't react well to its "RAM" suddenly being at the speed of storage.

That's how it is set up right now: I have an NVMe-to-PCIe adaptor and ESXi boots from it.
But I need to buy a SAS HBA so I can connect 3.5" drives and pass them through to TN, which will run as a VM.
If you are presently using an NVMe->PCIe adaptor and booting ESXi from that, then you can pass the entire Intel C232 SATA controller (not an individual port or disk) through to TrueNAS. Create a VMFS datastore on the remaining free space on your NVMe boot device, and store the .vmx and initial small "boot VMDK" for TrueNAS there. Once you've installed TrueNAS, shut the VM down, complete the PCIe passthrough, boot it up, and you should be able to make a pool out of the disks attached to the SATA ports.
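The "complete the PCIe passthrough" step can be done in the Host Client UI (Manage → Hardware → PCI Devices → Toggle passthrough) or from the ESXi shell. A minimal sketch, assuming ESXi 7.0+ and a purely illustrative PCI address of `0000:00:17.0` for the SATA controller (look up your own address with the list command first; the command namespace and flags differ on older releases):

```shell
# Show current passthrough state of all PCI devices:
esxcli hardware pci pcipassthru list

# Enable passthrough for the SATA controller (address is illustrative):
esxcli hardware pci pcipassthru set -d 0000:00:17.0 -e true -a
```

Then add the device to the powered-off TrueNAS VM (Edit Settings → Add other device → PCI device) and power it back on.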
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
complete the PCIe passthrough
What do you mean by that... how do I complete it?


Well, I am confused now... I was told throughout this whole discussion that it can't be done the way you are proposing, i.e. passing through the entire Intel C232 SATA controller.
I was told an HBA card is needed.

@Samuel Tai said
"The problem is the native SATA ports are controlled by the CPU's built-in AHCI controller, which can't be detached from the CPU for passthrough to a VM. It's also not recommended to create virtual drives for the TrueNAS VM to use. This will appear to work at first, but these drives WILL absolutely corrupt over time."


confused...
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
What do you mean by that... how do I complete it?

VMware documentation link for 7.0 below:


If you don't have vCenter you may need to use a different process from the vSphere Host Client:


Well, I am confused now... I was told throughout this whole discussion that it can't be done the way you are proposing, i.e. passing through the entire Intel C232 SATA controller.
I was told an HBA card is needed.

@Samuel Tai said
"The problem is the native SATA ports are controlled by the CPU's built-in AHCI controller, which can't be detached from the CPU for passthrough to a VM. It's also not recommended to create virtual drives for the TrueNAS VM to use. This will appear to work at first, but these drives WILL absolutely corrupt over time."


confused...

Sam might have been working under the assumption that you are currently booting ESXi from a device attached to one of the SATA ports on this controller, which of course would leave the SATA controller "in use" by the hypervisor and therefore unavailable for passthrough. In this case, you are booting from a separate NVMe device and the SATA controller will not be in use.
 