
The amount of memory saved by memory sharing depends on workload characteristics. A workload of many nearly identical virtual machines might free up more than thirty percent of memory, while a more diverse workload might result in savings of less than five percent of memory.

Software-Based Memory Virtualization

ESX/ESXi virtualizes guest physical memory by adding an extra level of address translation.

- The VMM for each virtual machine maintains a mapping from the guest operating system's physical memory pages to the physical memory pages on the underlying machine. (VMware refers to the underlying host physical pages as "machine" pages and the guest operating system's physical pages as "physical" pages.) Each virtual machine sees a contiguous, zero-based, addressable physical memory space. The underlying machine memory on the server used by each virtual machine is not necessarily contiguous.

- The VMM intercepts virtual machine instructions that manipulate guest operating system memory management structures so that the actual memory management unit (MMU) on the processor is not updated directly by the virtual machine.

- The ESX/ESXi host maintains the virtual-to-machine page mappings in a shadow page table that is kept up to date with the physical-to-machine mappings (maintained by the VMM).

- The shadow page tables are used directly by the processor's paging hardware.

This approach to address translation allows normal memory accesses in the virtual machine to execute without adding address translation overhead, after the shadow page tables are set up. Because the translation look-aside buffer (TLB) on the processor caches direct virtual-to-machine mappings read from the shadow page tables, no additional overhead is added by the VMM to access the memory.
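The following sketch is a simplified model of this composition, not ESX/ESXi code; the names (ShadowPageTable, GUEST_PAGE_TABLE, PMAP, handle_miss) are invented for illustration. It shows how entries that translate guest virtual pages directly to machine pages can be built by combining the guest's virtual-to-physical mappings with the VMM's physical-to-machine mappings.

```python
# Hypothetical model of shadow page table composition (illustrative names only).

# Guest OS page table: guest virtual page number -> guest "physical" page number
GUEST_PAGE_TABLE = {0x10: 0x2, 0x11: 0x3}

# VMM mapping: guest "physical" page number -> host "machine" page number
PMAP = {0x2: 0x7A, 0x3: 0x7B}

class ShadowPageTable:
    """Holds virtual-to-machine entries that the MMU hardware could use directly."""

    def __init__(self, guest_pt, pmap):
        self.guest_pt = guest_pt
        self.pmap = pmap
        self.entries = {}          # guest virtual page -> machine page

    def handle_miss(self, vpn):
        """Fill one shadow entry by composing the two mappings."""
        ppn = self.guest_pt[vpn]   # guest virtual -> guest physical
        mpn = self.pmap[ppn]       # guest physical -> machine
        self.entries[vpn] = mpn    # later accesses use only this table
        return mpn

    def invalidate(self, vpn):
        """Drop an entry when the guest changes its own page table."""
        self.entries.pop(vpn, None)

shadow = ShadowPageTable(GUEST_PAGE_TABLE, PMAP)
print(hex(shadow.handle_miss(0x10)))   # 0x7a: virtual page 0x10 maps to machine page 0x7A
```

Once an entry is filled, subsequent accesses to that page are translated through the shadow table and the TLB without VMM involvement, which is why regular guest memory accesses incur no extra translation overhead.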

Performance Considerations

The use of two sets of page tables has these performance implications.

- No overhead is incurred for regular guest memory accesses.
- Additional time is required to map memory within a virtual machine (see the sketch after this list), which might mean:
  - The virtual machine operating system is setting up or updating virtual address to physical address mappings.
  - The virtual machine operating system is switching from one address space to another (context switch).
- Like CPU virtualization, memory virtualization overhead depends on workload.
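The sketch below is a hypothetical model of the two operations above that require VMM involvement; the class and method names are invented for illustration and do not correspond to any VMware interface.

```python
# Hypothetical illustration of why guest page-table updates and address-space
# switches take extra time under software-based memory virtualization.

class SoftwareMMU:
    """Keeps one shadow table (virtual page -> machine page) per guest address space."""

    def __init__(self):
        self.shadows = {}      # guest address-space id -> {vpn: mpn}
        self.active = None

    def on_guest_page_table_write(self, space_id, vpn):
        # The VMM intercepts the write and drops the stale shadow entry,
        # because the guest never programs the real MMU directly.
        self.shadows.get(space_id, {}).pop(vpn, None)

    def on_guest_context_switch(self, space_id):
        # The VMM must locate (or build) the shadow table for the new
        # address space before the hardware can resume using it.
        shadow = self.shadows.setdefault(space_id, {})   # costly when empty
        self.active = shadow
        return shadow

mmu = SoftwareMMU()
mmu.on_guest_context_switch(space_id=1)                # first switch builds an empty shadow table
mmu.on_guest_page_table_write(space_id=1, vpn=0x10)    # triggers shadow maintenance
```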

Hardware-Assisted Memory Virtualization

Some CPUs, such as AMD SVM-V and the Intel Xeon 5500 series, provide hardware support for memory virtualization by using two layers of page tables.

The first layer of page tables stores guest virtual-to-physical translations, while the second layer of page tables stores guest physical-to-machine translations. The TLB (translation look-aside buffer) is a cache of translations maintained by the processor's memory management unit (MMU) hardware. A TLB miss is a miss in this cache, and the hardware must go to memory (possibly many times) to find the required translation. On a TLB miss for a given guest virtual address, the hardware walks both layers of page tables to translate the guest virtual address to a host physical address.
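As a rough illustration of why such a nested walk can touch memory many times, the sketch below counts the memory references of a hypothetical two-dimensional walk in which both the guest and the nested page tables have four levels. The numbers are a simplified model, not a measurement of any specific processor.

```python
# Hypothetical cost model of a two-dimensional (nested) page walk.
# Assumes 4-level guest page tables and 4-level nested page tables;
# real hardware differs and caches intermediate translations.

GUEST_LEVELS = 4    # levels walked to translate guest virtual -> guest physical
NESTED_LEVELS = 4   # levels walked to translate guest physical -> machine

def nested_walk_references(guest_levels=GUEST_LEVELS, nested_levels=NESTED_LEVELS):
    """Memory references for one TLB miss in this simplified model.

    Each guest page-table pointer is a guest physical address, so reading
    every guest level first requires a full nested walk, and the final
    guest physical address needs one more nested walk.
    """
    return (guest_levels + 1) * (nested_levels + 1) - 1

print(nested_walk_references())   # 24 references in this model, versus 4 for a native walk
```

In this model a miss costs 24 memory references instead of the 4 of a native walk, which is why the TLB hit rate matters more under hardware-assisted memory virtualization.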
The diagram in Figure 3-1 illustrates the ESX/ESXi implementation of memory virtualization.
