Hypervisor + virtualized TrueNAS

Patrick M. Hausen

Hall of Famer
Also one more point; in the case described above ... it still seems it's not a good idea to store these VMs (Ubuntu, Windows) on the TrueNAS drive ... is that correct? So in that case I have to mount a separate drive in TrueNAS where I store only the VM data?
You store the VM data in zvols in your main storage pool. What do you mean by TrueNAS drive? You will have multiple drives and create a zpool with redundancy, right? You cannot put anything on the drive TrueNAS boots from. The boot drive is for booting only and that is that. TrueNAS needs at least 2 drives (one small, cheap, for booting; one larger for storage) and preferably 2 or more storage drives for redundancy to do anything useful at all.
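For illustration, this is roughly what that looks like from the shell - pool and dataset names ("tank", "vms") are placeholders, and the TrueNAS UI creates the zvol for you when you add a virtual disk to a VM:

Code:
# sketch with placeholder names -- one zvol per VM inside the main storage pool
zfs create tank/vms                    # parent dataset for all VM disks
zfs create -s -V 32G tank/vms/ubuntu   # sparse 32 G zvol = the VM's virtual disk
zfs list -t volume                     # verify the zvols exist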

You cannot virtualise inside Proxmox inside TrueNAS - no nested virtualisation. If Proxmox can run containers, fine. I run docker-compose in Ubuntu VMs in bhyve. It's just nested virtualisation (KVM) that is not possible inside bhyve.

Passing through USB ports is only possible if you can pass an entire USB controller. Whether your hardware has one that is not also responsible for e.g. the keyboard (even when accessed via IPMI) depends on the system.
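If you want to check what your board offers, something like this on the TrueNAS shell shows the USB controllers and which devices hang off them (just a way to inspect, no guarantee you can pass anything through):

Code:
pciconf -lv | grep -B 4 -i usb   # list PCI USB controllers on the FreeBSD host
usbconfig                        # list USB devices per controller (keyboard, IPMI hub, ...)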
 

phier

Patron
Not sure if I articulated that properly.

It was said that ESXi should use one drive to boot from (let's say ~128 GB), then it was said VMs should be stored on another drive - and also that it's not recommended to store VMs on an iSCSI mount from TrueNAS.

Was that somehow tested - that it doesn't work?
You cannot virtualise inside Proxmox inside TrueNAS - no nested virtualisation.
In that case I hope you can run containers in Proxmox inside TrueNAS (where Proxmox is installed inside an Ubuntu VM virtualized via bhyve).

Thanks
 

Patrick M. Hausen

Hall of Famer
If you run VMs on ESXi, most commonly you store the VMs on datastores formatted with VMFS, VMware's cluster-capable filesystem. In enterprise settings that more or less mandates hardware RAID because ESXi does not support any form of software-based redundancy.

Going one step further you can store your VMs on a separate storage appliance. There are multiple protocols and connection types available like Fibre Channel, iSCSI, or NFS. Of course you can use TrueNAS to serve an entire cluster of ESXi machines via iSCSI. Nothing about this is "not recommended". But you need to take certain limitations into account if you expect any decent performance. Even if you adhere to the rule of thumb to fill your TrueNAS storage to 50% at most, preferably less, you will save big compared to NetApp or EMC. This is more or less the central selling point of iXsystems.
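The relevant limitation is pool occupancy. A quick way to keep an eye on it (pool name is a placeholder):

Code:
zpool list -o name,size,allocated,free,capacity,fragmentation tank   # keep capacity well below 50% for iSCSI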

Now in a home/hobbyist setup the capacity penalty is just too high in my opinion to really recommend iSCSI. Of course you can do it, but what's the point? Local SSD storage in ESXi will be faster, but not redundant. So buy something sufficiently wear-resistant and perform frequent backups to e.g. TrueNAS via NFS.

Now, when you use TrueNAS as a hypervisor the VM images still need to go somewhere. And of course that's one or more ZFS storage pools because ZFS is the only storage TrueNAS provides. Many use a dedicated pool e.g. built from mirrored SSDs for VMs for performance and a way larger pool built from spinning disks for general file sharing. This has the additional advantage that you can perform regular backups from your SSD pool to your HDD pool, that probably has a higher level of redundancy like RAIDZ2.
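A minimal sketch of such a backup with ZFS on-board tools, assuming pools named "ssd" and "tank" - TrueNAS also has replication tasks in the UI that do exactly this:

Code:
zfs snapshot -r ssd/vms@nightly                                # recursive snapshot of the VM dataset
zfs send -R ssd/vms@nightly | zfs receive -F tank/backup/vms   # full replication to the HDD pool
# subsequent runs only send the delta:
# zfs send -R -i @previous ssd/vms@nightly | zfs receive tank/backup/vms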

As for the last question: there is no need to try anything. Bhyve does not support nested virtualisation. You cannot run ESXi or KVM as a bhyve guest. Not even VirtualBox on an Ubuntu box. Nada. Not implemented, and that fact well documented.

You can run Docker on a Linux guest inside bhyve just fine. But then why mess with Proxmox? Install Ubuntu, create docker-compose.yml, docker-compose up, done.
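A minimal example, with nginx standing in for whatever app you actually want to run:

Code:
# inside the Ubuntu guest
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF
docker-compose up -d   # start the stack in the background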

HTH,
Patrick
 

phier

Patron
I much appreciate your detailed responses, and I agree with what was said.

Now in a home/hobbyist setup the capacity penalty is just too high in my opinion to really recommend iSCSI. Of course you can do it, but what's the point? Local SSD storage in ESXi will be faster, but not redundant. So buy something sufficiently wear-resistant and perform frequent backups to e.g. TrueNAS via NFS.

The thing here is that if you use a local SSD, the hard part is how to size it. The idea behind iSCSI on TrueNAS was nice because you can extend the size whenever you want, but there is the issue that you can then use only ~50% of the pool, which really is a big penalty.
So maybe one starts with a 512 GB local SSD and, once the space runs out, buys a new one, etc. Nothing more elegant comes to mind.
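If I understand it correctly, growing the iSCSI-backed zvol later would have been just a one-liner (made-up names), with the datastore then extended on the ESXi side:

Code:
zfs set volsize=1T tank/iscsi/esxi-ds   # enlarge the zvol behind the iSCSI extent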

Now, when you use TrueNAS as a hypervisor the VM images still need to go somewhere. And of course that's one or more ZFS storage pools because ZFS is the only storage TrueNAS provides. Many use a dedicated pool e.g. built from mirrored SSDs for VMs for performance and a way larger pool built from spinning disks for general file sharing. This has the additional advantage that you can perform regular backups from your SSD pool to your HDD pool, that probably has a higher level of redundancy like RAIDZ2.
That's a great idea, but again the problem I can see is how to size that SSD pool.

As for the last question: there is no need to try anything. Bhyve does not support nested virtualisation. You cannot run ESXi or KVM as a bhyve guest. Not even VirtualBox on an Ubuntu box. Nada. Not implemented, and that fact well documented.

You can run Docker on a Linux guest inside bhyve just fine. But then why mess with Proxmox? Install Ubuntu, create docker-compose.yml, docker-compose up, done.

Regarding Docker and Proxmox: sometimes apps are documented/described for running on Proxmox, even with some prebuilt "containers", so if a Docker container is not available you have to build it on your own ... that's why I was thinking about Proxmox.

thanks!
 

phier

Patron
One more detail: is there any point in using a drive with a DRAM cache for the TrueNAS boot or ESXi boot device? Thanks.

Having no built-in DRAM within today’s NVMe SSDs also enables a lower power draw, efficient PCB routing, and better thermal management characteristics.
 

Patrick M. Hausen

Hall of Famer
Depends on the storage needs of the applications you are going to run inside the VMs. I have two VMs running Ubuntu and Confluence. They have 60 G virtual disks which is more than enough. But how should I know about your capacity requirements? A VM can run in as little as 10 G comfortably. I ran FreeBSD on 500 M once ... some years ago.

So my VM SSDs are 1 T each, also storing all of my jails. And about one third full including all the snapshots I keep.
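If you want to see the same numbers on your own system, ZFS reports usage including the snapshot share directly ("ssd/vms" is my dataset name, adjust accordingly):

Code:
zfs list -o name,used,usedbysnapshots,avail ssd/vms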
 

phier

Patron
Depends on the storage needs of the applications you are going to run inside the VMs. I have two VMs running Ubuntu and Confluence. They have 60 G virtual disks which is more than enough. But how should I know about your capacity requirements? A VM can run in as little as 10 G comfortably. I ran FreeBSD on 500 M once ... some years ago.

So my VM SSDs are 1 T each, also storing all of my jails. And about one third full including all the snapshots I keep.
@Patrick M. Hausen no one can predict - I mean that's the issue with the whole design ... you have to plan ahead and can't increase/decrease as needed ;/
What brand of 1 TB SSD do you use? Thanks
 

Patrick M. Hausen

Hall of Famer
Of course you can expand as needed. I could add another mirrored pair to the same pool or I could replace the 1 T SSDs with 2 T ones, all the while retaining the data in place without the need for a full backup and restore.
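Roughly, with placeholder device names - either grow the pool wide or grow it tall:

Code:
# option 1: add another mirrored pair of SSDs to the pool
zpool add ssd mirror ada4 ada5
# option 2: replace both disks of a mirror with bigger ones, one at a time
zpool set autoexpand=on ssd
zpool replace ssd ada2 ada6   # wait for the resilver, then repeat for the second disk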

All my hardware is listed in my signature. See "Main NAS".
 

phier

Patron
So my VM SSDs are 1 T each, also storing all of my jails. And about one third full including all the snapshots I keep.
@Patrick M. Hausen thank you for the information. So you basically mount the VM SSDs inside TrueNAS, create a ZFS filesystem on them, and store the VM images and jails there, correct? For the jails, can you easily set somewhere which pool the data should be stored on?

May I ask if you use native ZFS encryption? Just trying to find out which one to use and how ... going through forum posts ...

Regarding the encryption - is it best practice to use a password instead of keys? Because if keys are used and stored on the boot device, everyone with physical access is able to decrypt the data. The question would be how one could protect the keys; that part of the architecture is not clear to me right now.

thanks
 

Patrick M. Hausen

Hall of Famer
You can configure which pool the jails reside on. The VM images can be placed anywhere per VM, so I decided to create a dataset named ssd/vms and place all images there. ssd is my pool on the mirrored SSDs.
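On the shell that layout boils down to something like this (on TrueNAS CORE the UI does the same; iocage is the jail manager underneath):

Code:
zfs create ssd/vms    # dataset holding all VM images/zvols
iocage activate ssd   # tell iocage which pool the jails live on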

I don't use encryption. I need the system to come up on its own unattended.
 

Patrick M. Hausen

Hall of Famer
I don't know because I am not using encryption. When I tried it years ago, FreeNAS (as it was named at the time) wanted a password on the console or the UI at every startup. That's why I removed encryption and never looked at it again. Doesn't the manual answer that question? There's a ton of documentation, you know?
 

phier

Patron
I don't know because I am not using encryption. When I tried it years ago, FreeNAS (as it was named at the time) wanted a password on the console or the UI at every startup. That's why I removed encryption and never looked at it again. Doesn't the manual answer that question? There's a ton of documentation, you know?
Yes, I got it ... maybe that's why it has to be designed properly with keys etc. ... no clue how to design that ... need to google.
 

Patrick M. Hausen

Hall of Famer
Why google?

 

Patrick M. Hausen

Hall of Famer
You won't find much real TrueNAS experience outside of this forum, and many of the videos on YT are not very helpful - not to say utter BS. And then, after 40 years in this industry, I somehow still expect documentation to precisely describe what to expect from a product in real life. I avoid products without exhaustive documentation.

One might claim that would rule out TrueNAS, but TN is really "just" a middleware and UI on top of FreeBSD so there's all the FreeBSD reference material in addition to the TrueNAS docs. And I am not intending to belittle the effort of iX and co. Integrating a zoo of different open source components into a reliable and functional product is a herculean task.

Anyway, the difference between keys and passwords and the concept of locked and unlocked datasets is explained in the document I linked. What keeps you from firing up a VM and just trying things instead of theorising on the forum for weeks? The best way to get to know a software product is to run it, toy with it, eventually break it. Then throw everything away and start over for production. :wink:
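To give you a head start for that experiment, the two setups look roughly like this with plain OpenZFS commands (pool name "tank" is a placeholder; the TrueNAS UI wraps the same mechanism):

Code:
# passphrase: someone has to type it at unlock time, nothing stored on disk
zfs create -o encryption=on -o keyformat=passphrase tank/private
# key file: unattended unlock possible, but now you must protect the key file
dd if=/dev/random of=/root/private.key bs=32 count=1
zfs create -o encryption=on -o keyformat=raw -o keylocation=file:///root/private.key tank/private2
# after a reboot the dataset stays locked until you load the key:
zfs load-key tank/private && zfs mount tank/private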
 

phier

Patron
Hello,
yes, you are correct, I will read and try.

Also, I was looking here for a howto regarding the setup of TrueNAS as a VM using ESXi; nothing specific there either, just out-of-date YouTube videos :(
I mean, I have no idea where to set up SATA passthrough etc. ...

thanks
 

Patrick M. Hausen

Hall of Famer
Oh, on that topic there is extensive documentation. You must use PCIe passthrough. Disk passthrough will not work.
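Once the controller is passed through, you can verify from inside the TrueNAS guest that it - and the disks behind it - really show up natively:

Code:
pciconf -lv | grep -B 4 -Ei 'sata|sas'   # the passed-through controller should be listed
camcontrol devlist                       # the physical disks attached to it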

 

phier

Patron
But why PCIe? The drives are connected to SATA - aren't PCIe slots something different?
 
Top