In a non-production environment I was required to create a large number of small workload virtual machines on an ESXi host. Following provisioning of the virtual machines, during the power-on stage I received the following error message, after which I was no longer able to power on additional virtual machines on that particular host system.
An error was received from the ESX host while powering on VM vm186.
Failed to start the virtual machine.
Failed to power on VM.
Could not power on VM : msg.vmk.status.VMK_VCPU_LIMIT_EXCEEDED. Max VMs limit reached: 184 (32768 worlds)
On investigation, I found that by default the maximum number of vCPUs that may run on a single core on an ESXi host is determined at runtime. It is possible (although neither recommended nor supported) to configure this behaviour by modifying the kernel settings for the host system using the vSphere Command-Line Interface.
A workaround for the above error message is to modify the value of ‘maxVCPUsPerCore’ in the kernel settings. Firstly, we will list the default values.
~ # esxcli system settings kernel list -o maxVCPUsPerCore
Name             Type    Configured  Runtime  Default
maxVCPUsPerCore  uint32  0           0        0
Now we will modify the value; in this example we configure the maximum number of vCPUs per core to be ‘1000’.
~ # esxcli system settings kernel set -s maxVCPUsPerCore -v 1000
In my example the virtual machines are configured with a single vCPU each, so I will also modify the maximum number of virtual machines on the host to be ‘1000’.
~ # esxcli system settings kernel set -s maxVMs -v 1000
In order to apply the configuration change, the host system will require a restart.
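Putting the steps together, the whole change can be sketched as the short session below. This assumes SSH access to the host; `reboot` is the standard command on an ESXi shell, and the values shown are from my example, not supported defaults.

```shell
# Inspect the current values first (a Configured value of 0 means
# the limit is determined at runtime).
esxcli system settings kernel list -o maxVCPUsPerCore
esxcli system settings kernel list -o maxVMs

# Raise both limits to 1000. This is neither recommended nor
# supported by VMware; test outside production first.
esxcli system settings kernel set -s maxVCPUsPerCore -v 1000
esxcli system settings kernel set -s maxVMs -v 1000

# Restart the host to apply the change, then re-run the list
# commands above and confirm the Runtime column shows 1000.
reboot
```

Re-checking the Runtime column after the reboot confirms the kernel actually picked up the configured values rather than falling back to its runtime defaults.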
Hello Dean,
I’ve implemented these settings on my ESXi 6 server, but after the reboot it became unresponsive and didn’t boot anymore (the screen started to flash several times over the HDMI connection). Is reloading the installation the only way to recover this ESXi host?
Thanks,
Ricardo