
It’s no longer physical – vSphere virtual machine resource allocation

Since the release of ESX Server over 12 years ago, the IT landscape has changed dramatically, with a massive shift away from the old physical datacenter to today’s virtual datacenter. As the technology matured and the complexity of systems increased, so did the need for administrators to rethink the way they manage their systems. Unfortunately, when it comes to VM resource allocation, many administrators today are still stuck in the old mentality of administering their virtual servers as if they were physical.

A VM’s resource allocation should be based on real data, not guesstimates; the vCenter performance charts provide a wealth of data for making informed decisions on a VM’s vCPU and memory allocations. It is best to follow VMware best practice by starting out with the least amount of resources and increasing where required; remember, in virtualization less is usually more. I constantly see administrators blindly adding vCPUs in a bid to improve a VM’s performance, only to degrade it by increasing physical CPU contention, which can be seen in rising CPU ready time. Out of desperation they then double the memory allocation in the hope that things will magically improve. Not only does this degrade the performance of that particular VM, it can also affect the performance of the rest of the virtual environment.
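As a rough guide, the CPU Ready summation value shown in the vCenter realtime performance chart (which samples every 20 seconds) can be converted into a percentage per vCPU; the sketch below uses a made-up reading of 400 ms:

```shell
#!/bin/sh
# Convert a vCenter "CPU Ready" summation value (in ms) to a percentage.
# The realtime chart updates every 20 seconds, so:
#   ready % = ready_ms / (interval_s * 1000) * 100
READY_MS=400    # hypothetical CPU Ready reading from the realtime chart
INTERVAL_S=20   # realtime chart sample interval in seconds
READY_PCT=$(( READY_MS * 100 / (INTERVAL_S * 1000) ))
echo "CPU ready: ${READY_PCT}%"
```

A sustained ready time of a few percent per vCPU is usually the point at which adding more vCPUs starts hurting rather than helping.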



VMware vSphere Storage APIs for Array Integration (VAAI)

VAAI is a set of APIs that define a set of features, or primitives. These primitives enable communication between the hypervisor and the array; more importantly, VAAI improves performance, mainly by offloading certain storage workloads to the array’s hardware.

VAAI is nothing new; it was originally released back in vSphere 4.1 with three primitives, and later releases have added more primitives and made improvements. In this blog I will cover the original primitives in some detail and touch on the others.

These days most modern storage arrays support VAAI; for a full list see the VMware Hardware Compatibility List.
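Beyond the compatibility list, you can check what the devices on a given host actually report. A sketch of the relevant ESXi shell commands (output fields vary by array):

```shell
# Show per-device VAAI primitive support: ATS, Clone (XCOPY),
# Zero (Write Same) and Delete (UNMAP) status
esxcli storage core device vaai status get

# Per-device details, including overall hardware acceleration support
esxcli storage core device list
```

These commands must be run on the ESXi host itself (or via a remote esxcli session).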

VAAI Primitives

Atomic Test & Set

Atomic Test & Set, or ATS, is a primitive that performs granular locking on VMFS volumes. VMFS is a clustered file system that allows multiple hosts to access the data located on it simultaneously, so when a host needs to update the VMFS metadata during operations such as virtual machine power-on or snapshot creation, a locking mechanism is required to maintain the integrity of the VMFS metadata. Without ATS, a SCSI-2 reservation is used, locking the whole LUN, which can cause contention issues, especially on large LUNs containing many virtual machines. ATS dramatically decreases contention by locking only the VMFS sectors that require updating; tests have shown that the impact on IOPS and throughput during locking workloads is negligible, which is a real plus when it comes to performance.

On newly created VMFS5 volumes, ESXi writes a flag to the volume known as the “ATS only” flag. The flag is written only once the hypervisor has confirmed that the array supports VAAI, and it signifies that only ATS will be used for metadata locking operations. Prior to vSphere 6.0, the ATS only flag could not be enabled on VMFS volumes upgraded from VMFS3 to VMFS5. This is because VMFS3 does not have all the ATS features of VMFS5, so during the upgrade some hosts may still be using SCSI-2 reservations for certain metadata operations.

In a scenario where a VMFS3 volume has been upgraded to VMFS5, mixed-mode locking is used: ATS locking is attempted first, and the host reverts to SCSI-2 reservations only if ATS is unsuccessful.

The table below indicates which locking mechanism is used based on your VMFS volume:

VMFS volume                   Locking mechanism
New VMFS5                     ATS only
VMFS5 upgraded from VMFS3     ATS, falling back to SCSI-2 reservations
On new VMFS5 volumes, the ATS only flag can be manually enabled or disabled using:

vmkfstools --configATSOnly 1|0 device-path

In vSphere 6.0, both new and upgraded VMFS5 volumes can have the ATS only flag set using a new command:

esxcli storage vmfs lockmode set --ats|--scsi --volume-label=label

Another enhancement to ATS in vSphere 6.0 is support for LVM operations; prior to vSphere 6.0, all LVM operations used SCSI-2 reservations.
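Putting the above together, a sketch of checking and changing the locking mode from the ESXi shell (the datastore label “Datastore01” is a made-up example):

```shell
# List the current locking mode (ATS-only, ATS+SCSI, etc.) of mounted VMFS volumes
esxcli storage vmfs lockmode list

# Switch an upgraded VMFS5 volume to ATS-only locking
# ("Datastore01" is a hypothetical datastore label)
esxcli storage vmfs lockmode set --ats --volume-label=Datastore01
```

Before switching an upgraded volume to ATS-only, make sure every host accessing the datastore supports ATS, since older hosts would lose access.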

Write Same (Zero)

The Write Same primitive offloads the writing of zeros to the storage array, reducing CPU cycles on the host and freeing up the storage fabric.

The write same primitive is used in the following operations:

  • Provisioning eagerzeroedthick vmdks
  • Allocating new blocks for thin provisioned vmdks
  • Initializing unwritten file blocks for zeroedthick disks
  • Creating a clone of an eagerzeroedthick vmdk

Performance benefits of the Write Same primitive can differ between array makes; this is because some arrays only record the zeros in metadata, whereas others write the actual zeros to disk.
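As an example of the first operation in the list, creating an eager-zeroed thick disk with vmkfstools will trigger the Write Same offload on a VAAI-capable array (the datastore path and file name below are hypothetical):

```shell
# Create a 10 GB eager-zeroed thick vmdk; on a VAAI-capable array the
# zeroing is offloaded to the array via Write Same rather than being
# streamed over the storage fabric ("Datastore01"/"vm01" are made up)
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/Datastore01/vm01/vm01_1.vmdk
```

On arrays without Write Same support, the same command works but the host writes every zeroed block itself, which is why the operation takes noticeably longer.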

XCOPY (SCSI Extended Copy)

XCOPY offloads clone and migration operations, such as Storage vMotion, to the array, again reducing CPU cycles and freeing up the storage fabric. Without the XCOPY primitive, to perform a clone or copy operation the ESXi host uses the VMkernel software Data Mover driver: the host issues SCSI read commands to the array, the array returns the data to the host, and the host then issues SCSI write commands with the retrieved data to create the vmdk clone or copy. This process generates a lot of I/O on the storage fabric and overhead on the ESXi hosts.

The XCOPY primitive solves this by having the ESXi host send only the required block addresses to the array; the array performs the copy itself and informs the host once it has completed.
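Each of these offload primitives is controlled by an advanced host setting, which can be inspected from the ESXi shell; a value of 1 means the primitive is enabled. A sketch of the standard options:

```shell
# XCOPY (hardware-accelerated clone/migration)
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove

# Write Same (hardware-accelerated zeroing)
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit

# ATS (hardware-accelerated locking)
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
```

These are enabled by default; disabling them is generally only done when troubleshooting array-side issues.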