
The ESX CPU scheduler can interpret processor topology, including the relationship between sockets, cores, and logical processors. The scheduler uses topology information to optimize the placement of virtual CPUs onto different sockets to maximize overall cache utilization, and to improve cache affinity by minimizing virtual CPU migrations.
In undercommitted systems, the ESX CPU scheduler spreads load across all sockets by default. This improves performance by maximizing the aggregate amount of cache available to the running virtual CPUs. As a result, the virtual CPUs of a single SMP virtual machine are spread across multiple sockets (unless each socket is also a NUMA node, in which case the NUMA scheduler restricts all the virtual CPUs of the virtual machine to reside on the same socket).
In some cases, such as when an SMP virtual machine exhibits significant data sharing between its virtual CPUs, this default behavior might be sub-optimal. For such workloads, it can be beneficial to schedule all of the virtual CPUs on the same socket, with a shared last-level cache, even when the ESX/ESXi host is undercommitted. In such scenarios, you can override the default behavior of spreading virtual CPUs across packages by including the following configuration option in the virtual machine's .vmx configuration file:

sched.cpu.vsmpConsolidate="TRUE"
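As an illustration (not taken from this guide), the option is simply one more key/value line in the virtual machine's .vmx file, typically edited while the virtual machine is powered off; the surrounding entries below are hypothetical context only. The same key can usually also be added through the advanced Configuration Parameters dialog in the vSphere Client.

displayName = "example-vm"
numvcpus = "4"
sched.cpu.vsmpConsolidate = "TRUE"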

The effect of changing this parameter is difficult to predict, so verify any change with proper load testing. If you do not see a performance improvement after changing the parameter, revert it to its default value.

Hyperthreading

Hyperthreading technology allows a single physical processor core to behave like two logical processors. The processor can run two independent applications at the same time. To avoid confusion between logical and physical processors, Intel refers to a physical processor as a socket, and the discussion in this chapter uses that terminology as well.
Intel Corporation developed hyperthreading technology to enhance the performance of its Pentium 4 and Xeon processor lines. Hyperthreading technology allows a single processor core to execute two independent threads simultaneously.
While hyperthreading does not double the performance of a system, it can increase performance by better utilizing idle resources, leading to greater throughput for certain important workload types. An application running on one logical processor of a busy core can expect slightly more than half of the throughput that it obtains while running alone on a non-hyperthreaded processor. Hyperthreading performance improvements are highly application-dependent, and some applications might see performance degradation with hyperthreading because many processor resources (such as the cache) are shared between logical processors.

NOTE   On processors with Intel Hyper-Threading technology, each core can have two logical processors which share most of the core's resources, such as memory caches and functional units. Such logical processors are usually called threads.

Many processors do not support hyperthreading and as a result have only one thread per core. For such processors, the number of cores also matches the number of logical processors. The following processors support hyperthreading and have two threads per core.

-  Processors based on the Intel Xeon 5500 processor microarchitecture.
-  Intel Pentium 4 (HT-enabled)
-  Intel Pentium EE 840 (HT-enabled)
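As a side illustration of the distinction between cores and logical processors, the following minimal sketch (not part of this guide) compares the two counts on a Linux system by reading /proc/cpuinfo; the file path and field names are assumptions based on the common Linux format, and the script is not specific to ESX.

# Minimal sketch: count logical processors and physical cores from /proc/cpuinfo.
# Assumes the usual Linux field names ("processor", "physical id", "core id").
def threads_per_core(cpuinfo_path="/proc/cpuinfo"):
    logical = 0      # one "processor" entry per logical processor
    cores = set()    # unique (physical id, core id) pairs = physical cores
    phys_id = None
    with open(cpuinfo_path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, value = [part.strip() for part in line.split(":", 1)]
            if key == "processor":
                logical += 1
            elif key == "physical id":
                phys_id = value
            elif key == "core id":
                cores.add((phys_id, value))
    return logical, len(cores)

if __name__ == "__main__":
    logical, physical = threads_per_core()
    print("logical processors:", logical)
    print("physical cores:", physical)
    if physical:
        print("threads per core:", logical // physical)

On a hyperthreaded host with two threads per core, the first count is twice the second; on processors without hyperthreading, the two counts match.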
