vNUMA | Right-Sizing VMs

By default, ESXi NUMA scheduling and related optimizations are enabled only on systems with a total of at least four CPU cores and with at least two CPU cores per NUMA node. On such systems, virtual machines can be separated into the following two categories:

+ Virtual machines with a number of vCPUs equal to or less than the number of cores in each physical NUMA node. These virtual machines will be assigned to cores all within a single NUMA node and will be preferentially allocated memory local to that NUMA node. This means that, subject to memory availability, all their memory accesses will be local to that NUMA node, resulting in the lowest memory access latencies.

+ Virtual machines with more vCPUs than the number of cores in each physical NUMA node (called “wide virtual machines”). These virtual machines will be assigned to two (or more) NUMA nodes and will be preferentially allocated memory local to those NUMA nodes. Because vCPUs in these wide virtual machines might sometimes need to access memory outside their own NUMA node, they might experience higher average memory access latencies than virtual machines that fit entirely within a single NUMA node (see the sketch after this list).
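To make the two categories concrete, here is a small Python sketch of the sizing rule described above. The core counts are illustrative assumptions rather than values read from a real host.

```python
# Minimal sketch of the ESXi NUMA sizing rule described above.
# The counts are illustrative; real values come from the host hardware.

def classify_vm(vcpus: int, cores_per_numa_node: int) -> str:
    """Classify a VM the way the NUMA scheduler's sizing rule would."""
    if vcpus <= cores_per_numa_node:
        return "fits in one NUMA node (local memory, lowest latency)"
    return "wide VM (spans NUMA nodes, some remote accesses likely)"

# Example: a host with 10 physical cores per NUMA node.
print(classify_vm(8, 10))   # fits in one NUMA node
print(classify_vm(16, 10))  # wide VM
```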

Because of this difference, there can be a slight performance advantage in some environments to configuring virtual machines with no more vCPUs than the number of cores in each physical NUMA node.

Conversely, some memory-bandwidth-bottlenecked workloads can benefit from the increased aggregate memory bandwidth available when a virtual machine that would fit within one NUMA node is nevertheless split across multiple NUMA nodes. This split can be accomplished by limiting the number of vCPUs that can be placed per NUMA node with the numa.vcpu.maxPerMachineNode option (do also consider the impact on vNUMA, however; see the “Virtual NUMA (vNUMA)” section of the guide referenced below).

On hyper-threaded systems, virtual machines with a number of vCPUs greater than the number of cores in a NUMA node but lower than the number of logical processors in each physical NUMA node might benefit from using logical processors with local memory instead of full cores with remote memory. This behavior can be configured for a specific virtual machine with the numa.vcpu.preferHT flag.
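As an illustration of how these options might be set, here is a minimal pyVmomi sketch that adds them to a virtual machine's advanced configuration (extraConfig). The vCenter hostname, credentials, VM name, and the per-node cap value are assumptions for the example; the same keys can also be added by hand in the vSphere Client or the VM's .vmx file.

```python
# A sketch (not an official VMware sample) of applying the two advanced
# options above with pyVmomi. Hostname, credentials, VM name, and the
# per-node vCPU cap are placeholders; adjust them for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only: skip certificate verification for the example connection.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the target VM by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "my-wide-vm")
view.Destroy()

# Advanced options are passed as extraConfig key/value pairs.
spec = vim.vm.ConfigSpec(extraConfig=[
    # Cap vCPUs per physical NUMA node, e.g. "4" splits an 8-vCPU VM
    # across two nodes for extra aggregate memory bandwidth.
    vim.option.OptionValue(key="numa.vcpu.maxPerMachineNode", value="4"),
    # Prefer hyper-threads with local memory over cores with remote memory.
    vim.option.OptionValue(key="numa.vcpu.preferHT", value="TRUE"),
])
vm.ReconfigVM_Task(spec=spec)  # takes effect at the VM's next power-on

Disconnect(si)
```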

For the pages referenced and the article used for this post, see below:

vNUMA-Reference
