Can't Install ESXi


yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
I'm using FreeNAS 11.2 beta 2 (new UI).

I created a 16-core VM with 16 GB RAM, 1 GB zvol, UEFI boot, added the ESXi installation ISO, and enabled VNC.

ESXi installer boots and loads modules, then at the last minute shuts off with an error message that mentions firmware and UEFI (not sure exactly what the whole message says since it's fleeting).

I tried using UEFI-CSM as an alternative, but that won't even boot the ISO ("Boot Failed. CDROM 0").

Any ideas?

Edit: caught the full message. "Shutting down firmware services..." "Using 'simple offset' UEFI RTS mapping policy" "Relocating modules and starting up the kernel..."

So it's not really an error message, but then it goes to a black screen and a bhyve process sticks at 103-108% CPU.
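For anyone else poking at this, here is roughly what I'm checking from the FreeNAS shell (the VM name below is a made-up placeholder for whatever yours is called):

  ps auxww | grep bhyve               # find the spinning bhyve process
  bhyvectl --vm=esxi-test --destroy   # tear down the wedged instance if stopping it in the UI fails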
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
Just to be clear: are you trying to install ESXi inside FreeNAS?
 

yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
I am, yes. Mainly due to the long-standing problem with the Windows VirtIO driver. I would like to run Windows as a guest in ESXi, with ESXi attaching the Windows guest block device over iSCSI.

I would rather not virtualize FreeNAS since I run almost everything else in jails.
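For reference, the FreeNAS end of that plan is just a zvol exported as an iSCSI extent. The zvol half from the CLI looks roughly like this (pool and dataset names are made up; the target and extent themselves I'd configure in the UI under Sharing > Block (iSCSI)):

  zfs create -s -V 100G tank/vms/win10   # sparse zvol to back the Windows guest's disk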
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
I am, yes. Mainly due to the long-standing problem with the Windows VirtIO driver. I would like to run Windows as a guest in ESXi, with ESXi attaching the Windows guest block device over iSCSI.

I would rather not virtualize FreeNAS since I run almost everything else in jails.

If you do a quick Google search for ESXi inside bhyve, you will note that it cannot be done.

Even if it were possible, I would strongly recommend not doing it!

You are much better off virtualizing FreeNAS, jails and all. I ran Windows inside bhyve for less than a day before moving it and all my jails, Plex, and Nextcloud to their own VMs inside ESXi. Works like a charm and is much easier to maintain.
 

yottabit

Contributor
Joined
Apr 15, 2012
Messages
192
I did some searching earlier this week and saw that bhyve did not support nested VMs (nested EPT support was incomplete or missing, I think), but that it was added in a more recent version. I thought it had made it into FreeBSD 11, but I guess I'm wrong. :smile:
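For what it's worth, this is roughly how to check what the host side reports (Intel box assumed; the dmesg line is the CPU feature summary FreeBSD prints at boot, and note that none of the flags advertise nested VT-x):

  kldstat | grep vmm                # confirm the bhyve kernel module is loaded
  grep 'VT-x' /var/run/dmesg.boot   # e.g. "VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID"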

Alas, I will continue using Google Compute Engine for now, but it's $1.40 per hour (or $0.70 for a preemptible instance) because of the Windows license for 16 cores, plus $3/mo. for the persistent disk image. Oh, and the network egress on top of that if you want to send any data out of the cloud.

I already have Windows running successfully in bhyve, but using AHCI emulation for the boot disk. That unfortunately makes it excruciatingly slow whenever the OS hits the boot disk (e.g., during updates). I'll just keep waiting for the VirtIO driver to be fixed (I can always dream).
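To put that in raw bhyve terms (FreeNAS generates the command for you, so the slot number and zvol path below are made-up placeholders), the difference is just the device emulation used for the boot disk:

  -s 4,ahci-hd,/dev/zvol/tank/vms/win10      # what works today: slow emulated AHCI
  -s 4,virtio-blk,/dev/zvol/tank/vms/win10   # what I want: fast virtio, but the Windows driver misbehaves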
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
For a long time the norm was that running FreeNAS on ESXi was bad. You lose control over your disks, etc., so I ran my virtual Windows and Linux machines on ESXi with FreeNAS on its own server. I was very excited when bhyve came along, as I could move these VMs into FreeNAS and save some power. It took me one day to realize that though I could do it, bhyve is years behind ESXi in pure manageability. ESXi is great at sharing resources among VMs, both processor and RAM. So I moved everything back, and shortly after that virtualized FreeNAS. I moved everything out of jails and ran separate VMs.

Now if you think about it, do you really want a platform like ESXi, which is great middleware between your hardware and your VMs, to sit on top of so-so middleware and lose basically all of its ability to manage resources?

For me it is a no-brainer: use each piece of the architecture for what it was designed for. FreeNAS is a data server; ESXi is a hypervisor.
Virtualize your FreeNAS and you will never look back.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
For a long time the norm was that running FreeNAS on ESXi was bad.

It's still bad. It used to be worse because there were so many problems with the pre-Sandy Bridge hardware platforms people were trying to abuse, and because certain frequently promoted (at the time) strategies such as RDM-hackery for SATA would cause catastrophes. Recoverability is one of the key things I've pushed for by emphasizing PCI passthru, which ESXi and most modern server boards support pretty well, but I'm going to say that the other big risk is lack of sufficient knowledge. ESXi itself is an esoteric bit of technology, and you can paint yourself into a bad corner if you're not careful.

You lose control over your disks, etc., so I ran my virtual Windows and Linux machines on ESXi with FreeNAS on its own server. I was very excited when bhyve came along, as I could move these VMs into FreeNAS and save some power. It took me one day to realize that though I could do it, bhyve is years behind ESXi in pure manageability.

Of course. VMware's hypervisor technology has been around for at least a decade longer than bhyve. It was mature many years ago. ESXi is the Cadillac of hypervisors. Enterprises pay a princely sum for the product, so they are very well funded, and it's their big product. Companies like Microsoft (Hyper-V) look at virtualization differently. Xen and KVM have the fragmented open source development model. bhyve is nice but is developed by a small under-resourced group, as much as I hate to say that.

ESXi is great at sharing resources among VMs, both processor and RAM. So I moved everything back, and shortly after that virtualized FreeNAS. I moved everything out of jails and ran separate VMs.

Now if you think about it, do you really want a platform like ESXi, which is great middleware between your hardware and your VMs, to sit on top of so-so middleware and lose basically all of its ability to manage resources?

For me it is a no-brainer: use each piece of the architecture for what it was designed for. FreeNAS is a data server; ESXi is a hypervisor.
Virtualize your FreeNAS and you will never look back.

Especially if you screw it up.
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
jgreco, having read hundreds of your posts over the past five years, I would love to hear your thoughts about placing ESXi inside bhyve! :)


It's still bad. It used to be worse because there were so many problems with the pre-Sandy Bridge hardware platforms people were trying to abuse, and because certain frequently promoted (at the time) strategies such as RDM-hackery for SATA would cause catastrophes. Recoverability is one of the key things I've pushed for by emphasizing PCI passthru, which ESXi and most modern server boards support pretty well, but I'm going to say that the other big risk is lack of sufficient knowledge. ESXi itself is an esoteric bit of technology, and you can paint yourself into a bad corner if you're not careful.

Of course. VMware's hypervisor technology has been around for at least a decade longer than bhyve. It was mature many years ago. ESXi is the Cadillac of hypervisors. Enterprises pay a princely sum for the product, so they are very well funded, and it's their big product. Companies like Microsoft (Hyper-V) look at virtualization differently. Xen and KVM have the fragmented open source development model. bhyve is nice but is developed by a small under-resourced group, as much as I hate to say that.

Especially if you screw it up.

I absolutely agree with you about having the knowledge before virtualizing. I ran ESXi for a few years before placing FreeNAS in there, and even though it has been rock solid for more than a year, I keep a separate backup server on simple non-virtualized hardware, just in case. Actually, for everything I virtualize, I also have backup hardware servers that I can fall back on if ever required.
 

NasKar

Guru
Joined
Jan 8, 2016
Messages
739
I moved everything out of jails and ran separate VMs.
Trying to learn about ESXi. Not sure what you mean by that. Is Plex running under a version of Linux as a VM in ESXi, and the same for Nextcloud?

big risk is lack of sufficient knowledge.
I love to learn new things, but you're making me think it's not a great idea. Maybe I should leave my original FreeNAS running and play with ESXi on the new server till I decide.
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
Trying to learn about ESXi. Not sure what you mean by that. Is Plex running under a version of Linux as a VM in ESXi, and the same for Nextcloud?


I love to learn new things, but you're making me think it's not a great idea. Maybe I should leave my original FreeNAS running and play with ESXi on the new server till I decide.

You can install both Plex and Nextcloud in jails inside FreeNAS. Plex is not that bad, but Nextcloud is far more complex and difficult to upgrade. I found it much easier to install Ubuntu Server with LAMP in a VM and then install Nextcloud on top. Did the same with my Plex: placed it on its own Ubuntu server inside ESXi.

I think virtualizing FreeNAS has been well documented and, with the right hardware, seems solid. With that said, ESXi is enterprise software and it takes some time to learn. I would recommend keeping your old server as a backup if you can.

Would love the chassis in your future build. If I didn't already have two 836's, I would buy that one, just for the two drives on the back! Not sure how hot that E5-2650 runs, but I find that with the 836 you can get away with passive cooling.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Trying to learn about ESXi. Not sure what you mean by that. Is Plex running under a version of Linux as a VM in ESXi, and the same for Nextcloud?

Plex makes a FreeBSD version that works fine, so you can always run it as a FreeBSD VM, and mount your storage via NFS.
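A minimal sketch of the mount from inside such a VM, hostname and paths made up:

  mount -t nfs freenas.local:/mnt/tank/media /mnt/media

or the equivalent /etc/fstab entry so it comes back after a reboot:

  freenas.local:/mnt/tank/media  /mnt/media  nfs  rw  0  0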

I love to learn new things, but you're making me think it's not a great idea. Maybe I should leave my original FreeNAS running and play with ESXi on the new server till I decide.

The thing that's really key to remember is that you're building a much more complex house of cards.

In the old days, we stored files in not-hard-to-debug filesystems stored on a single hard disk. There were actually tools available to go and suck down retrievable files if the disk got corrupted or partially failed.

ZFS takes storage to a more insane level, where multiple disks and lots of parts conspire to store your data. The risk is increased, but at the same time steps such as checksums and SMART monitoring offset the likely threats. You can suddenly get into new kinds of trouble such as "half my disks dropped out" because an HBA fails or a cable is loose. But data recovery becomes more challenging if there's a catastrophic failure.

Adding a virtualization layer creates even more challenges. If you're using virtual VM disks, raw device mappings, or any other "clever" VM technologies to provide the ZFS backing, you are really running a shake table on top of a house of cards. It only takes a failure and the whole thing can come apart.

With that said, I will also say that - having probably written the books both on not virtualizing AND also on virtualizing - I'm convinced that you can absolutely virtualize FreeNAS, as I've been doing for many years, but you need to do it carefully and thoughtfully. Familiarity with the tools you're using is a big thing. If you've never used a table saw, you can probably figure out that it'll cut wood for you, but the appropriate and safe use are not entirely obvious, and some education and practice are in order. Anyone who reads all the stuff here, follows hardware recommendations, etc., can learn everything they need to know.

Take some time to get to know and love the Elastic Sky X. Once you feel comfortable with it, having created and destroyed and tinkered with a bunch of VM's and the ESXi management interface, you're going to be able to make the call yourself if you feel comfortable. This is enterprise software and it doesn't hold your hand too much. Only exposure and experience will advise whether you are "compatible" with it. :smile:
 

NasKar

Guru
Joined
Jan 8, 2016
Messages
739
Nextcloud is far more complex and difficult to upgrade
There is an occ command that automates the upgrade of Nextcloud to the latest version (pretty simple). I can get it for you when I get home if you need it. I've never gotten the GUI to work for an upgrade.
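From memory it's something along these lines, run as the web server user; the jail path and user name vary by install, so treat them as placeholders:

  su -m www -c "php /usr/local/www/nextcloud/occ upgrade"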

Would love the chassis in your future build.
Waiting for my motherboard to come, and then I can build it. I think you can buy a kit to add the two 2.5" drives to the back of the case if you don't have them. I already have my 2.5" SSD installed, waiting to install the OS to it.
 

NasKar

Guru
Joined
Jan 8, 2016
Messages
739
Plex makes a FreeBSD version that works fine, so you can always run it as a FreeBSD VM, and mount your storage via NFS.
Any reason not to run Plex in a jail on FreeNAS virtualized on ESXi?

My plan is to run FreeNAS with all my jails, and pfSense, as VMs in ESXi.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Any reason not to run Plex in a jail on FreeNAS virtualized on ESXi?

Possibly, depends.

One of the things about a hypervisor is that it shares resources between VMs, but there's a negative aspect to this as well. If you have an occasionally busy 2-CPU NAS and an occasionally busy 2-CPU Plex VM, and all of your 8 virtual/4 physical core CPU resources are busy with other VMs, each VM will get a reduced share. This slows everything down somewhat.

However, if you merge those two VMs into a single VM and give it 4 CPUs, or "half" of what you have, that is NOT an equivalent solution. The hypervisor ALWAYS has to line up all four cores at once to run tasks on that VM, and if your environment is already very busy, the scheduling timeslice it gets will be much smaller than with the separate-VM solution.

http://www.gabesvirtualworld.com/how-too-many-vcpus-can-negatively-affect-your-performance/

If you aren't hitting peak CPU, then this isn't such an issue. But you asked.
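If you want to see whether it's actually biting you, esxtop on the host is the tool: press 'c' for the CPU screen and watch %RDY, the percentage of time a vCPU is ready to run but waiting for physical cores. Sustained high %RDY on the big VM is the co-scheduling penalty described above showing up in practice.

  esxtop   # then 'c' for the CPU view; %RDY is the column to watch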
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
There is an occ command that automates the upgrade of Nextcloud to the latest version (pretty simple). I can get it for you when I get home if you need it. I've never gotten the GUI to work for an upgrade.

Waiting for my motherboard to come, and then I can build it. I think you can buy a kit to add the two 2.5" drives to the back of the case if you don't have them. I already have my 2.5" SSD installed, waiting to install the OS to it.
Thanks. Now that Nextcloud is running on its own VM, everything is just fine. When I started using Nextcloud inside a jail, I had problems with the jail version not always being compatible with the Nextcloud version, etc. My use case is relatively simple, automatic backup of all my iPhone pictures, but I need 100% uptime (wife requirement).

Unfortunately, my 836's are really old versions that cannot take the 2.5" kit. I did look into it and ended up doing the next best thing. 2.5" converter cases in the front. Means I cannot use the SAS expander, but I don't see myself going with more than 6 drives in the near future.
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
Possibly, depends.

However, if you merge those two VMs into a single VM and give it 4 CPUs, or "half" of what you have, that is NOT an equivalent solution. The hypervisor ALWAYS has to line up all four cores at once to run tasks on that VM, and if your environment is already very busy, the scheduling timeslice it gets will be much smaller than with the separate-VM solution.

That is the part that worries me most about running things in jails: I don't know how well FreeNAS does resource management there. Here I believe ESXi is much better.
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
My plan is to run FreeNAS with all my jails, and pfSense, as VMs in ESXi.

I must be paranoid, but I prefer running my pfSense on its own hardware.
 

NasKar

Guru
Joined
Jan 8, 2016
Messages
739
2.5" converter cases in the front. Means I cannot use the SAS expander
Why would that prevent the SAS expander from working? I will have mine connected to an HBA card with one cable, and my understanding is that all 16 drives will work. I don't think it will know whether I have a 2.5" drive installed with an adapter or a 3.5".
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
Why would that prevent the SAS expander from working? I will have mine connected to an HBA card with one cable, and my understanding is that all 16 drives will work. I don't think it will know whether I have a 2.5" drive installed with an adapter or a 3.5".
Great question. I'm using some 2.5" drives for ESXi datastores. Once you pass the HBA card through to FreeNAS, all of its drives belong to FreeNAS, if that makes sense. :)
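A quick way to see that from inside the FreeNAS guest once the HBA is passed through (device names will differ on your system):

  camcontrol devlist   # every disk hanging off the passed-through HBA shows up here, owned by FreeNAS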
 
Joined
Dec 29, 2014
Messages
1,135
I must be paranoid, but I prefer running my pfSense on its own hardware.

You certainly have a point there, but it depends on your security policy/posture requirements. I am unaware of any currently active VLAN-hopping or hypervisor context-hopping exploits, but that doesn't mean they don't exist or that they aren't possible. Spectre/Meltdown would make it possible for a compromised VM to see information about other VMs or the hypervisor, which could make it easier to compromise them. I wouldn't say never do it, but it matters who you are and what you're trying to protect.
 