FreeNAS on ESXi - are 2 vCPUs really required?

norbs

Explorer
Joined
Mar 26, 2013
Messages
91
I just read through this article: http://www.freenas.org/blog/yes-you-can-virtualize-freenas/ and noticed that it mentions a 2 vCPU minimum.

What is the logic behind this minimum? I've tried 1 vCPU and it works just fine in my case, but I'm scared there might be some important reason why I should have two, and I'd rather not find out the hard way.

Current config is:
ESXi 6.0 host:
e3-1230v5
64GB ECC

It runs:
1 vCPU / 32 GB FreeNAS VM (VT-d passthrough to an LSI 9207 SAS HBA + 6x 4 TB 5400 RPM drives)
4 other VMs on the host, each with 1 vCPU; on occasion I power on another 4-5 VMs.

Thanks in advance.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I'm not aware of any requirement to run two vCPUs for a FreeNAS VM on ESXi. With that said, you will get better performance when dealing with jails, plugins, and such if you can spare two or three cores for it.

Maybe you could provide some system specs?

The way ESXi works is that you can provision (give) many cores to a VM and it will use them as needed. They do not each consume 100% of a physical core, and they are not taking anything away from the other VMs in your system. So let's say you have a 4-core CPU, no hyperthreading. You can give all 4 cores to each VM you have if you like; ESXi will timeshare the physical cores as it sees fit. If you have four VMs and give each one core, that does not mean you are dedicating a single physical core to each VM; ESXi will still do what it wants. Remember, it's a virtual CPU, not a physical one.
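To make the provisioning math concrete, here is a trivial Python sketch using the numbers from this thread (the VM names are made up; the E3-1230 v5 is a 4-core part). The only point is that the provisioned vCPU total may exceed the physical core count, because vCPUs are timeshared, not reserved:

```python
# Toy oversubscription arithmetic for the host described in this thread:
# an E3-1230 v5 has 4 physical cores, and the VMs mentioned
# (FreeNAS + 4 others) are provisioned 1 vCPU each. VM names are made up.
physical_cores = 4
vm_vcpus = {"freenas": 1, "vm1": 1, "vm2": 1, "vm3": 1, "vm4": 1}

total_vcpus = sum(vm_vcpus.values())
print(f"{total_vcpus} vCPUs provisioned on {physical_cores} physical cores "
      f"-> {total_vcpus / physical_cores:.2f}:1, which ESXi happily timeshares")
```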
 

norbs

Explorer
Joined
Mar 26, 2013
Messages
91
I'm not aware of any requirement to run two vCPUs for a FreeNAS VM on ESXi. With that said, you will get better performance when dealing with jails, plugins, and such if you can spare two or three cores for it.
I was just going by the article I provided you from this website.

You can give all 4 cores to each VM you have if you like; ESXi will timeshare the physical cores as it sees fit. If you have four VMs and give each one core, that does not mean you are dedicating a single physical core to each VM; ESXi will still do what it wants. Remember, it's a virtual CPU, not a physical one.
That's actually only partially true... vCPUs are not like RAM; they can't be shared as effectively.

There are two big things that make right-sizing VM vCPUs very important:
  1. A 4 vCPU VM needs 4 physical cores free on the same physical CPU cycle to run one VM cycle.
  2. When a 4 vCPU VM has only one or two busy cores, it still needs to wait for 4 available physical CPU cores before it can run one VM cycle.
Essentially, by giving every VM 4 cores, even a VM that needs only 1 core at a particular moment will always have to wait for 4 physical cores to be available. The physical CPU will just be busy running mostly idle cycles, and your VMs will always be waiting on available physical cores.
By dropping this down to 2 vCPUs you can now run up to 2 VM cycles on one physical CPU cycle, and by dropping it down to 1 vCPU you can run up to 4 VM cycles per host cycle (on a 4-core host).

So while yes, you can oversubscribe vCPUs to physical CPUs, right-sizing is very important, because more cores on a VM means it has to wait longer for physical cores to become available, even when it could have gotten by with one core. The toy simulation below illustrates the effect.
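Here is a toy discrete-time simulation in Python. It is my own sketch under a simplifying strict co-scheduling assumption, not how ESXi's scheduler is actually implemented (modern ESXi uses relaxed co-scheduling, which softens this effect but doesn't remove it). Each cycle, a VM gets to run only if its full vCPU width fits in the cores still free:

```python
import random

def simulate(vm_widths, physical_cores=4, cycles=10_000, seed=1):
    """Strict co-scheduling toy model: each cycle a VM runs only if
    its full vCPU width fits in the remaining free cores; otherwise
    it accumulates one cycle of wait (ready) time."""
    rng = random.Random(seed)
    waits = [0] * len(vm_widths)
    for _ in range(cycles):
        free = physical_cores
        # Visit VMs in random order so no VM is permanently favored.
        for i in rng.sample(range(len(vm_widths)), len(vm_widths)):
            if vm_widths[i] <= free:
                free -= vm_widths[i]   # enough simultaneous cores: VM runs
            else:
                waits[i] += 1          # not enough: VM waits this cycle
    return [round(w / cycles, 2) for w in waits]

# Five 1-vCPU VMs on 4 cores: each waits only ~20% of cycles.
print(simulate([1, 1, 1, 1, 1]))
# One 4-vCPU VM among four 1-vCPU VMs: the wide VM runs only when it
# wins the race for all 4 cores at once (~20% of cycles here).
print(simulate([4, 1, 1, 1, 1]))
```

The five narrow VMs each wait about a fifth of the time, while the single wide VM waits about four-fifths of the time, even though it is no busier than the others.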
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
So while yes, you can oversubscribe vCPUs to physical CPUs, right-sizing is very important, because more cores on a VM means it has to wait longer for physical cores to become available, even when it could have gotten by with one core.
I completely agree with you; however, if someone is using ESXi, then timesharing the CPU should be expected, otherwise you would be using dedicated hardware. At my work they overprovision like crazy, which makes for slow VMs for the end users, but they can cram more users into a server, and while things are not snappy, the VMs are still fast enough to get real work done. I myself want snappy performance at home.

So are you going to give it 2 vCPUs or stay with 1?
 

norbs

Explorer
Joined
Mar 26, 2013
Messages
91
I completely agree with you; however, if someone is using ESXi, then timesharing the CPU should be expected, otherwise you would be using dedicated hardware. At my work they overprovision like crazy, which makes for slow VMs for the end users, but they can cram more users into a server, and while things are not snappy, the VMs are still fast enough to get real work done. I myself want snappy performance at home.

So are you going to give it 2 vCPUs or stay with 1?

I'm going to try both 1 and 2 and see if there is a noticeable benefit to 2.

I've got a pretty funky setup right now: on that same ESXi host I'm also doing PCIe passthrough to a video card and basically using that VM as an HTPC for my media. Oddly enough, the HTPC VM barely uses any CPU when it's playing videos, but if I give it 1 vCPU the video becomes incredibly choppy. If I give it 2 vCPUs it still hardly uses any CPU, yet the video becomes silky smooth.

Which is kind of what led me to ask this question. However, it's a bit harder to notice storage not running 100% smoothly without benchmarking or knowing how the software is written.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
There was some testing done a few years back on how many CPUs (physical, not virtual) would be optimal for FreeNAS running a few specific programs in jails, and memory tells me the optimum was 3 cores. While you can't typically buy a 3-core CPU, you can buy a 4-core one.

You would have to search for that testing, but again, it was pretty specific and it was done on an older version of FreeNAS.
 

norbs

Explorer
Joined
Mar 26, 2013
Messages
91
Yeah, I don't do any jails since I'm already using ESXi, so I think 1 or 2 vCPUs is probably still good enough. I'm just going to have to run a benchmark to see whether going up to 2 vCPUs actually shows a worthwhile improvement.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
The only jail I have is Plex.

When running the benchmarks, I suspect you will be testing SMB and maybe other protocols, but don't forget to time a scrub and anything else you can think of. I'm not sure how FreeNAS 10 would test; it does have the new Java GUI.
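For the SMB side, here is a minimal Python sketch of the kind of timing run I mean (the mount point and file name are hypothetical placeholders; point them at a real large file on the share, and use a freshly written file so client-side caching doesn't flatter the numbers):

```python
import os
import time

# Minimal sequential-read timer for a file on a mounted SMB share.
# MOUNT and the file name are hypothetical; adjust before running.
MOUNT = "/mnt/freenas_share"
FILENAME = os.path.join(MOUNT, "testfile.bin")
CHUNK = 1 << 20  # read in 1 MiB chunks

start = time.monotonic()
total = 0
with open(FILENAME, "rb") as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)
elapsed = time.monotonic() - start
print(f"{total / (1 << 20):.0f} MiB in {elapsed:.1f} s "
      f"= {total / (1 << 20) / elapsed:.1f} MiB/s")
```

For the scrub, kicking one off with zpool scrub and reading the elapsed time out of zpool status afterwards is enough.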
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
There is always going to be something interrupting something on a FreeNAS/unix system.

2 vCPUs means it can actually do two things at once, avoiding the majority of those interruptions, whereas one vCPU means it can't.

Also, ZFS goes through a helluva lot of effort to enable multi-threaded access to storage.

Be a shame to disable that ;)

I'd go at least two vCPUs ;)
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Same here. +1
Nowadays any server or VM should have at least 2 "CPUs". :cool:
Your FreeNAS VM will start coughing if, for example, Samba goes crazy and maxes out your single core.
The pain of using a single-CPU machine in a busy environment is just so immense, it will make you want to start a killing spree ;)

It's really difficult to notice the particular impact of the %VMWAIT phenomenon in a "normal" home/lab environment.
You would actually need many VMs doing some heavy lifting with many busy vCPUs to actually see issues.
But you can keep an eye on that using the Windows and VMware performance counters; CPU queue length is the magic word (see the sketch below).
If the VM is waiting for CPU resources to the point where the OS needs to queue up operations, then we have an issue. :D
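On the VMware side, the per-VM "CPU Ready" summation counter is reported as milliseconds accumulated per sample interval, so converting it to a percentage is a one-liner. A sketch in Python, assuming the stock conversion formula and the 20-second default interval of the real-time charts:

```python
def cpu_ready_percent(ready_ms: float, interval_s: float = 20.0) -> float:
    """Convert a vSphere CPU Ready summation value (milliseconds
    accumulated over one sample interval) into a percentage of the
    interval. 20 s is the real-time chart's default sample interval."""
    return ready_ms / (interval_s * 1000.0) * 100.0

# e.g. 1000 ms of ready time in a 20 s sample = 5% CPU ready,
# a common rule-of-thumb threshold for "this VM is fighting for cores".
print(f"{cpu_ready_percent(1000):.1f}%")
```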

That being said: the potential performance issues we are talking about here are probably the primary problem related to overprovisioning (the hell out of the systems) encountered in many enterprise environments, where everything has been virtualized (even stuff that should never have been put into a VM). :rolleyes:
 