Can't Install ESXi


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That is the part that worries me most about running things in jails. I don't know how well it does resource management. Here I believe ESXi is much better.

Jails are about as good as you can get for resource sharing, since you're really just scheduling processes on the host UNIX system, and there are decades of work behind that. ESXi suffers a lot because people who do not understand how to properly size a VM can really jank it up pretty badly and actually create performance problems. Pretty much every virtualization admin I've talked to spends a nontrivial amount of time trying to talk people out of asking for thousand-core-million-MB VM's. However, it gets complicated when you are trying to do *both* simultaneously. I would tend to believe that you are likely to get better performance out of the two-smaller-VM scenario, or at least be less likely to create a problem, than out of a single larger VM that is also running jails inside.
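
To put the right-sizing point in concrete terms, here is a purely illustrative Python sketch of the kind of sanity check an admin might run against a VM request. The function and thresholds are made-up rules of thumb for this thread, not anything from VMware:
Code:
# Hypothetical VM-request sanity check (illustrative only; the thresholds are
# arbitrary rules of thumb, not VMware recommendations).

def check_vm_request(vcpus: int, ram_gb: int, host_cores: int, host_ram_gb: int) -> list[str]:
    """Return warnings for a requested VM size on a given host."""
    warnings = []
    if vcpus > host_cores:
        warnings.append("More vCPUs than physical cores: the VM can never have all "
                        "of its vCPUs running at once.")
    elif vcpus > host_cores // 2:
        warnings.append("VM wants more than half of the host's cores; it will be hard "
                        "to run alongside other guests.")
    if ram_gb > host_ram_gb // 2:
        warnings.append("VM claims more than half of host RAM, leaving little "
                        "headroom for everything else.")
    return warnings

# Example: the classic oversized request on a modest host.
print(check_vm_request(vcpus=16, ram_gb=192, host_cores=12, host_ram_gb=128))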
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
You certainly have a point there, but it depends on your security policy/posture requirements. I am unaware of any currently active VLAN hopping or hypervisor context-hopping exploits, but that doesn't mean they don't exist or that they aren't possible. Spectre/Meltdown would make it possible for a compromised VM to expose information about other VMs or the hypervisor, which could make it easier to compromise them. I wouldn't say never do it, but it matters who you are and what you are trying to protect.
My paranoia is about 2% security and 98% availability. I virtualized pfSense in the past, until something went wrong with an update. I am much happier running two pfSense servers in parallel: if something goes wrong, I boot up the second one and just plug everything into it. With 100% uptime expected by the wife and kids, it is far less stress! :(
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
My paranoia is about 2% security and 98% availability. I virtualized pfSense in the past, until something went wrong with an update. I am much happier running two pfSense servers in parallel: if something goes wrong, I boot up the second one and just plug everything into it. With 100% uptime expected by the wife and kids, it is far less stress! :(

That's what you're supposed to do with *VM's*, not real hardware. :) VM's are easily created and manipulated. When you discard one, all the bits are recycled. :)
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Take some time to get to know and love the Elastic Sky X.

While studying for my VCP6.5-DCV I put together an acronym study guide; it's a bit longer. ;)
Code:
AAM Automated Availability Manager
ADM Application Discovery Manager
APM Application Performance Manager
CB Chargeback
CBM Chargeback Manager
CBRC Content Based Read Cache
CF Cloud Foundry
CIM Common Information Model
CIQ Capacity IQ
CIS Cloud Infrastructure Suite
CSI Clustering Services Infrastructure
DaaS Desktop as a Service
DAS Distributed Availability Service
DBaaS Database as a Service
DD Data Director
DPM Distributed Power Management
DRS Distributed Resources Scheduler
DVS Distributed Virtual Switch
ERS Enterprise Ready Server
ESX Elastic Sky X
ESXi Elastic Sky X Integrated
EUC End User Computing
EVC Enhanced vMotion Compatibility
EVDC Elastic Virtual Data Center
FDM Fault Domain Manager
FT Fault Tolerance
GFE GemFire Enterprise
GSX Ground Storm X
HA High Availability
HCL Hardware Compatibility List
HoL Hands-On Labs
IaaS Infrastructure as a Service
IODM I/O Device Management
MVP VMware Mobile Virtualization Platform
NEE Next-Generation Education Environment
NETIOC Network I/O Control
NIOC Network I/O Control
OVDC Organization Virtual Data Center
P2V Physical to Virtual
PaaS Platform as a Service
PAE Propero Application Environment
PDL Permanent Device Loss
PSO Professional Services Organisation
PVDC Provider Virtual Data Center
S2 SpringSource
SaaS Software as a Service
SDDC Software Defined Data Center
SDRS Storage Distributed Resource Scheduling
SI Spring Integration
SIOC Storage I/O Control
SM Service Manager
SMP Symmetrical Multi Processing
SQLF SQLFire
SR-IOV Single Root I/O Virtualization
SRM Site Recovery Manager
STS SpringSource Tool Suite
TAM Technical Account Manager
V2V Virtual to Virtual
VAAI vStorage API for Array Integration
VADM vCenter Application Discovery Manager
VC Virtual Center
VCA4-DT VMware Certified Associate 4 - Desktop
VCAC vCloud Automation Center
VCAP VMware Certified Advanced Professional
VCAP4-DCA VMware Certified Advanced Professional 4 - Datacenter Administration
VCAP4-DCD VMware Certified Advanced Professional 4 - Datacenter Design
VCAP5-DCA VMware Certified Advanced Professional 5 - Datacenter Administration
VCAP5-DCD VMware Certified Advanced Professional 5 - Datacenter Design
VCAP-CID VMware Certified Advanced Professional – Cloud Infrastructure Design
VCAP-DTD VMware Certified Advanced Professional - Desktop Design
VCAT vCloud Architecture Toolkit
VCD vCloud Director
VCDX VMware Certified Design Expert
VCDX4-DV VMware Certified Design Expert 4 - Datacenter Virtualization
VCDX5-DV VMware Certified Design Expert 5 - Datacenter Virtualization
VCDX-DT VMware Certified Design Expert – Desktop
VCIM vCloud Integration Manager
VCLI vSphere Command Line Interface
VCM vCenter Configuration Manager
VCO vCenter Orchestrator
VCOPS vCenter Operations
VCP4-DT VMware Certified Professional 4 - Desktop
VCP4-DV VMware Certified Professional 4 - Datacenter Virtualization
VCP5-DT VMware Certified Professional 5 - Desktop
VCP5-DV VMware Certified Professional 5 - Datacenter Virtualization
vCSA vCenter Server Appliance
VCSN vCloud Security and Networking
VDC Virtual Data Center
VDP vSphere Data Protection
VDR VMware Data Recovery
VDS vNetwork Distributed Switch
VIM Virtual Infrastructure Management
VIN vCenter Infrastructure Navigator
VIX Virtual Infrastructure eXtension
VM Virtual Machine
VMA vSphere Management Assistant
VMFS Virtual Machine File System
VMNIC Physical Network Interface
VMKNIC Virtual Network Interface (VMKernel)
VMRC VMware Remote Console
VMS vFabric Management Service
VMSA VMware Security Advisory
VMTN VMware Technology Network
VMW VMware
VMX Virtual Machine eXecutable
VPX Virtual Provisioning X
VPXA Virtual Provisioning X Agent
VPXD Virtual Provisioning X Daemon
VR vSphere Replication
VRM vCloud Request Manager
VSA vSphere Storage Appliance
VSM VMware Service Manager
VSP VMware Sales Professional
vswif Virtual Switch Interface
VTSP VMware Technical Sales Professional
VUM vCenter Update Manager
VXLAN Virtual Extensible Local Area Network
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Anyone who reads all the stuff here, follows hardware recommendations, etc., can learn everything they need to know.
I wouldn't go that far...
The hypervisor will ALWAYS give those four cores to the bigger VM to run the tasks on the VM
CPU time is not favored by relative core count; that's an oversimplification of how the CPU scheduler works when VMs have different core counts. The fact is that a VM with 4 vCPUs will always be scheduled onto 4 physical cores, even if 3 of them are idle. This means that a 6-core CPU (just forget hyperthreading for the example) can only run one 4-vCPU VM at a time. If all of your VMs had 4 vCPUs, in effect you would always be wasting 2 cores. Granted, even in that case the VMM could use the spare cores for housekeeping, etc.
At the OS or guest level, you CAN schedule Plex or whatever jail without plugging up other cores. Virtualization works best at scale, as it can better take advantage of the law of averages.
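
To make that arithmetic concrete, here is a minimal Python sketch of the strict all-or-nothing co-scheduling model described above. This is the worst case used for the example, not ESXi's actual scheduler (which is relaxed about idle vCPUs), and the greedy packing is just for illustration:
Code:
# Minimal sketch of the strict co-scheduling model described above: a VM only
# runs when ALL of its vCPUs can land on free physical cores at the same time.
# Worst case for illustration only; ESXi's real scheduler is more relaxed.

def schedulable_per_pass(host_cores: int, vm_vcpu_counts: list[int]) -> tuple[int, int]:
    """Greedily pack VMs into one scheduling pass; return (VMs running, cores left idle)."""
    free = host_cores
    running = 0
    for vcpus in sorted(vm_vcpu_counts):
        if vcpus <= free:
            free -= vcpus
            running += 1
    return running, free

# Three 4-vCPU VMs on a 6-core host: only one runs at a time, 2 cores sit idle.
print(schedulable_per_pass(6, [4, 4, 4]))            # -> (1, 2)

# The same six cores run three 2-vCPU VMs at once with nothing wasted.
print(schedulable_per_pass(6, [2, 2, 2, 2, 2, 2]))   # -> (3, 0)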
I must be paranoid, but I prefer running my pfSense on its own hardware.
I do too, but I have 4 or 5 VLANs that my environment depends on. I have even considered moving to an HA setup with a pfSense VM working alongside the physical one.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
CPU time is not favored by relative core count; that's an oversimplification of how the CPU scheduler works when VMs have different core counts. The fact is that a VM with 4 vCPUs will always be scheduled onto 4 physical cores, even if 3 of them are idle. This means that a 6-core CPU (just forget hyperthreading for the example) can only run one 4-vCPU VM at a time. If all of your VMs had 4 vCPUs, in effect you would always be wasting 2 cores. Granted, even in that case the VMM could use the spare cores for housekeeping, etc.
At the OS or guest level, you CAN schedule Plex or whatever jail without plugging up other cores. Virtualization works best at scale, as it can better take advantage of the law of averages.

Well we're trying to keep it comprehensible to a "newbie." Oversimplifications are beneficial sometimes.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Geez, no NSX.
Well... I looked into integrating NSX into our environment, but they quoted us around $6,000/socket × 36 sockets, ~$240,000 with support. So it was a non-starter. Though NSX-T, which will be replacing NSX-V over the next couple of years, looks very interesting.
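
For what it's worth, the arithmetic in that quote checks out: 36 sockets × $6,000/socket is $216,000 list, so landing around $240,000 once support is layered on is in the right ballpark.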
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
That's what you're supposed to do with *VM's*, not real hardware. :) VM's are easily created and manipulated. When you discard one, all the bits are recycled. :)
It's a fine balance between saving some power and overcomplicating things. Once you throw VLANs, DHCP, and static IPs into the mix, recovery is impossible if I should be away from home. Also, I have two Atoms dedicated to pfSense, so it is easy enough to cycle between them when needed.
 

JohnK

Patron
Joined
Nov 7, 2013
Messages
256
I have even considered moving to an HA setup with a pfSense VM working alongside the physical one.
Good luck with that! I ran pfSense in HA for a while and found that unless you have two external IPs it just becomes a mess. Better to just unplug one and start the other. :)
 