Allocating all threads to bhyve

hexadecagram

Dabbler
Joined
Jul 15, 2016
Messages
32
I am not a virtualization expert but it seems reasonable to allocate ALL of my CPU's threads to my VMs and let SMP work it out. For example, suppose I have a 4-core HT CPU with 8 threads in all, so I set up my VM to use 1 vCPU, 4 cores, and 2 threads.

Unfortunately, when I try to do so with an Arch Linux LTS guest, the VM often locks up solid with no console message, forcing me to power it off and restart it (and sometimes reboot).

Can someone give me a technical reason why allocating all threads to bhyve is a bad idea? It also seems reasonable enough that one might want to leave a thread or two available to the host OS so that it doesn't starve, but full allocation seems to be a more efficient use of CPU resources.
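For reference, here is roughly how I understand that topology ends up on the bhyve command line (assuming the UI's vCPUs/cores/threads fields map onto bhyve's sockets/cores/threads the way I think they do). The VM name, memory size, and device slots below are placeholders, not my actual setup:

# Rough sketch only; the CPU topology option is the point here.
# 1 socket x 4 cores x 2 threads = 8 vCPUs, i.e. every hardware thread
# of a 4-core/8-thread host. Disks, NICs, and the loader step are omitted.
bhyve -c sockets=1,cores=4,threads=2 -m 4G -H \
      -s 0,hostbridge -s 31,lpc -l com1,stdio \
      archlinux-lts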
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
If you tell a virtualization system to allocate ALL of the resources to a VM, ... it very well might. That leaves you with nothing for the host system.

Virtualization is better off with oversubscription. What you actually want is to allocate just enough CPU resources to a VM to accomplish the job. Vladan talks about that in this post, near the bottom.

 

hexadecagram

Dabbler
Joined
Jul 15, 2016
Messages
32
Hi jgreco, thanks for responding.

That article focuses on the number of vCPUs, and I never let that exceed the number of physical CPUs in the host (to me, having 1 pCPU means I allocate at most 1 vCPU). But it does get me thinking of trying 1 vCPU, 1 core, and 1 thread and increasing as needed. Is that what you are getting at? Also, is there a tool similar to esxtop for TrueNAS?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
In the ESXi model, a vCPU is just a schedulable entity that gets mapped to a physical CPU.

The problem is, if you take a host with six pCPU's and create a VM that demands six vCPU's, one of two things happens --

1) That VM runs during a time slice only because there are NO OTHER THINGS to run, because any other thing running has to be placed on a pCPU, reducing the available pCPU's to 5 (or less).

2) That VM doesn't run because other jobs are consuming CPU's.

So the really weird thing virtualization newbies often try is to allocate 6 cores to their VM, thinking that it means "up to" 6 cores. And this works really well as long as it is the only VM on the machine. But then they add another VM with just one core, do something CPU-intensive like a compile on it, and suddenly the 6-core VM comes to a screeching halt, because it can rarely get a timeslice. You end up in the counterintuitive situation where the VM goes super-slow despite having 6 cores.

The usual thing is to look at the VM when it is running normally, see how much CPU it is using, and then adjust accordingly. More experienced admins guess, come close, and then tweak a bit if they're off. Usually it is better to slightly undersize compute resources, especially if you're running a lot of VM's. I've got lots of hypervisors running dozens or hundreds of VM's.

As for esxtop, no, I am not aware of one. I believe top can give you some limited idea of performance, and I seem to recall bhyve has some other interface to retrieve details. However, none of it is polished the way that 15 years of development over at VMware has made their tooling.
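Something along these lines on the host will get you part of the way there (rough sketch; the VM name is made up):

# Per-CPU, per-thread view on the FreeBSD/TrueNAS host; each guest vCPU
# shows up as its own thread inside the corresponding bhyve process.
top -SHP

# Per-VM counters straight from the vmm(4) kernel driver.
bhyvectl --vm=archlinux-lts --get-stats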
 

hexadecagram

Dabbler
Joined
Jul 15, 2016
Messages
32
So the really weird thing virtualization newbies often try is to allocate 6 cores to their VM, thinking that it means "up to" 6 cores
This raises the question: if allocating n cores to a VM doesn't mean "up to" n cores, does the hypervisor then devote those cores to the VM? So in my example where I have 4 HT cores and therefore 8 total threads, I would likely not want to run more than 3 VMs, each with 1 vCPU containing 1 HT core (2 threads), leaving the remaining HT core for the host?

The usual thing is to look at the VM when it is running normally, see how much CPU it is using, and then adjust accordingly.
This statement has me thinking that the answer to the first question I asked in my previous post is "yes".
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
does the hypervisor then devote those cores to the VM?

No. However, if all six cores are not available and the VM configuration is for six cores, then the VM cannot be assigned the timeslice. You get.. nothing! (Gene Wilder voice)

So if trying to imagine timeslicing hundreds of times per second hurts your brain, you can just think of it as the VM getting put on hold until nothing else is calling for resources. Perhaps the scheduler makes a bigger effort to accommodate the six-core request as time drags on with the VM not getting any timeslices.
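If you want to actually watch that happen: each guest vCPU is just a named thread inside the bhyve process on the host, so something like the following (VM name made up again) shows whether those threads are running or sitting in the run queue waiting for a pCPU:

# "vcpu 0", "vcpu 1", ... appear as threads of the bhyve process; the
# state column tells you whether each one is running, sleeping, or waiting.
procstat -t $(pgrep -f 'bhyve: archlinux-lts')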

You can also tell a hypervisor to devote cores to a given VM. This is clearly related in some ways, but is a different thing.

I happened to run across the following while looking for something else I didn't find. This may be helpful. I eyeballed it and it looks generally correct:

https://www.joshodgers.com/2012/07/22/common-mistake-using-cpu-reservations-to-solve-cpu-ready/
 

hexadecagram

Dabbler
Joined
Jul 15, 2016
Messages
32
you can just think of it as the VM getting put on hold until nothing else is calling for resources.
And I imagine the guest operating system has to do all the compensating for that, which would explain all those "CPU soft lockup" messages that they say can safely be ignored and are "fixed in the most recent kernel" (yes, we promise this time).

You get.. nothing! (Gene Wilder voice)
Good day, sir!
 