VMware vSphere 6 Storage Technology. Part 1 - Old School

I present to you the first part of a series of publications on VMware vSphere storage technologies. This article covers the long-established, proven features that were already available in versions 4 and 5 of the product.

VASA - vSphere APIs for Storage Awareness


VASA is a set of APIs provided by VMware for developing storage providers for a vSphere infrastructure. Storage providers are software components, either built into vSphere or supplied by third parties, that integrate storage (software- and hardware-based) and I/O filters (VAIO) with the vSphere infrastructure and report on them.

A storage provider (VASA provider) is needed so that the virtual infrastructure:

- can see the characteristics and capabilities of the storage systems;
- can track storage status (alarms and events);
- can work with Virtual Volumes (vVols);
- can use I/O filters (VAIO).
The availability of the corresponding VASA provider enables the capabilities above and allows them to be used in storage policies (SPBM). If the required storage provider is not available, vSphere cannot see the characteristics of the storage systems or the VAIO filters, cannot work with vVols, and the corresponding policy rules cannot be created.
Third-party VASA providers act as reporting services for the corresponding vSphere storage. Such providers must be registered separately, and the matching plug-ins must be installed.

Built-in storage providers are vSphere components and do not require registration. For example, the provider for Virtual SAN is automatically registered when it is deployed.

Through storage providers, vSphere collects information about storage (characteristics, status, capabilities) and data services (VAIO filters) across the entire infrastructure. This information is available for monitoring and decision making in the vSphere Web Client.

Information collected by VASA providers can be divided into three categories:

- storage data services and capabilities, used to build rules in storage policies (SPBM);
- storage status, including alarms and events;
- Storage DRS information, used for distributed resource scheduling.
Developing a VASA provider that integrates a third-party product, in particular a storage system, with vSphere is the vendor's responsibility. Such storage providers can be installed on virtually any infrastructure node except the vCenter server. As a rule, third-party VASA providers run on the storage controller or on a dedicated server.

Multiple vCenter servers can access the same storage provider simultaneously, and one vCenter can interact with multiple storage providers at once (multiple arrays and I/O filters).

VAAI - vSphere APIs for Array Integration


APIs of this type fall into two categories:

- Hardware Acceleration APIs, which offload storage operations from the host to the array;
- Array Thin Provisioning APIs, which help monitor space usage on thin-provisioned arrays and prevent out-of-space conditions.
Storage Hardware Acceleration (VAAI for Hardware Acceleration)


This functionality integrates ESXi hosts with compatible storage systems and allows individual VM and storage maintenance operations to be offloaded from the hypervisor (ESXi host) to the array, thereby speeding up those operations and reducing the load on the host's CPU and memory, as well as on the storage network.

Storage Hardware Acceleration is supported for block (FC, iSCSI) and file (NAS) storage. For the technology to work, a block device must either support the T10 SCSI standard or have a VAAI plug-in. If the block array supports the T10 SCSI standard, no VAAI plug-in is needed and Hardware Acceleration works directly. File storage always requires a separate VAAI plug-in. Developing VAAI plug-ins is the storage vendor's responsibility.
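The T10-versus-plug-in logic above can be sketched as a small decision function. This is a hypothetical illustration, not a VMware API; the function name and parameters are invented for clarity:

```python
# Hypothetical sketch (not a VMware API): the decision logic described above
# for whether VAAI Hardware Acceleration can be used with a given device.

def hardware_acceleration_supported(storage_type: str,
                                    supports_t10_scsi: bool = False,
                                    has_vaai_plugin: bool = False) -> bool:
    """Return True if Hardware Acceleration can work for this storage.

    storage_type: "block" (FC, iSCSI) or "file" (NAS).
    """
    if storage_type == "block":
        # A T10 SCSI compliant array works directly, no plug-in needed;
        # otherwise a vendor VAAI plug-in is required.
        return supports_t10_scsi or has_vaai_plugin
    if storage_type == "file":
        # File (NAS) storage always needs a separate VAAI plug-in.
        return has_vaai_plugin
    raise ValueError(f"unknown storage type: {storage_type}")

print(hardware_acceleration_supported("block", supports_t10_scsi=True))  # True
print(hardware_acceleration_supported("file"))                           # False
```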

In general, VAAI for Hardware Acceleration allows the following processes to be optimized and offloaded to the array:

- migrating VMs with Storage vMotion;
- deploying VMs from templates;
- cloning VMs or templates;
- VMFS clustered locking and metadata operations for VM files;
- writing to thin-provisioned and thick virtual disks;
- creating fault-tolerant (FT) VMs.
For block devices, Hardware Acceleration optimizes the following operations:

- Full Copy (XCOPY): the array copies data within or between LUNs itself, without the host reading and writing every block;
- Block Zeroing (Write Same): the array zeroes out large regions, for example when provisioning eager-zeroed disks;
- Hardware Assisted Locking (ATS, atomic test-and-set): fine-grained locking of VMFS metadata without a SCSI reservation of the entire LUN.
Explanation

VMFS is a clustered file system: it supports simultaneous access by several ESXi hosts (hypervisors) to a single LUN formatted with it. A VMFS LUN stores the files of many VMs, as well as metadata. In normal operation, while no changes are made to the metadata, everything works in parallel: many hosts access VMFS, no one interferes with anyone, and there are no locks.

If Hardware Acceleration (VAAI) is not supported by a block device, then to change VMFS metadata a host has to issue a SCSI reservation: the LUN passes into the exclusive use of that host, and for all other hosts it becomes unavailable while the metadata change is in progress, which can cause a noticeable loss of performance.

Metadata contains information about the VMFS partition itself and about VM files. Metadata changes occur in the following cases: powering VMs on or off; creating VM files (creating, cloning, or migrating a VM, adding a disk, taking snapshots); deleting files (deleting VMs or VM disks); changing the owner of a VM file; growing the VMFS partition; resizing VM files (with thin disks or snapshots this happens constantly).
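The difference between the two locking approaches can be illustrated with a toy model. This is a simplified sketch, not real SCSI semantics; the class and method names are invented:

```python
# Simplified illustration (not real SCSI): contrast a whole-LUN SCSI
# reservation with the fine-grained ATS (atomic test-and-set) primitive
# that VAAI-capable arrays provide for VMFS metadata locking.

class Lun:
    def __init__(self, sectors: int):
        self.data = [0] * sectors
        self.reserved_by = None          # host holding a SCSI reservation

    # --- Old path: a SCSI reservation locks out every other host ---
    def scsi_reserve(self, host: str) -> bool:
        if self.reserved_by is None:
            self.reserved_by = host
            return True
        return False                     # other hosts must wait and retry

    def scsi_release(self, host: str):
        if self.reserved_by == host:
            self.reserved_by = None

    # --- VAAI path: ATS locks only one metadata sector, atomically ---
    def atomic_test_and_set(self, sector: int, expected: int, new: int) -> bool:
        if self.data[sector] == expected:
            self.data[sector] = new      # the array performs this atomically
            return True
        return False

lun = Lun(sectors=8)

# Without VAAI: host A reserves the LUN, host B is blocked entirely.
assert lun.scsi_reserve("host-a")
assert not lun.scsi_reserve("host-b")
lun.scsi_release("host-a")

# With ATS: hosts lock different metadata sectors concurrently.
assert lun.atomic_test_and_set(sector=0, expected=0, new=1)      # host A
assert lun.atomic_test_and_set(sector=1, expected=0, new=1)      # host B
assert not lun.atomic_test_and_set(sector=0, expected=0, new=1)  # lock busy
```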

Hardware Acceleration for VMFS will not work, and the load will fall back on the host, if:

- the source and destination VMFS datastores have different block sizes;
- the source file is an RDM and the destination is a regular (non-RDM) file;
- the source disk is eager-zeroed thick and the destination disk is thin;
- the source or destination disk is in sparse or hosted format;
- the copy involves datastores on different storage arrays (hardware copying between arrays does not work).

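A hypothetical helper that encodes a few such fallback conditions (different VMFS block sizes, RDM-to-file copies, eager-zeroed-to-thin copies); the function name and parameters are illustrative, not a VMware API:

```python
# Hypothetical sketch: decide whether a VMFS copy operation can be
# offloaded to the array or must fall back to host-based data movement.

def can_offload_copy(src_block_size: int, dst_block_size: int,
                     src_is_rdm: bool = False, dst_is_rdm: bool = False,
                     src_disk_type: str = "thick",
                     dst_disk_type: str = "thick") -> bool:
    if src_block_size != dst_block_size:
        return False      # different VMFS block sizes
    if src_is_rdm and not dst_is_rdm:
        return False      # RDM source, regular-file destination
    if src_disk_type == "eagerzeroedthick" and dst_disk_type == "thin":
        return False      # eager-zeroed source, thin destination
    return True           # otherwise the array can do the copy

print(can_offload_copy(1, 1))  # True
```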
For file (NAS) storage, Hardware Acceleration optimizes the following operations:

- Full File Clone: cloning of virtual disk files is offloaded to the NAS device;
- Fast File Clone / Native Snapshot Support: VM snapshots and linked clones are created natively by the array;
- Reserve Space: allocating space for a virtual disk file in thick format;
- Extended Statistics: accurate reporting of space usage on NAS datastores.


Multipathing Storage APIs - Pluggable Storage Architecture (PSA)


The ESXi hypervisor uses a separate set of Storage APIs, called Pluggable Storage Architecture (PSA), to manage multipathing. PSA is an open, modular framework that coordinates the simultaneous operation of multiple multipathing plug-ins (MPPs). PSA allows vendors to develop and integrate their own multipathing technologies (load balancing and failover) for connecting their storage systems to vSphere.

PSA performs the following tasks:

- loads and unloads multipathing plug-ins;
- hides virtual machine specifics from the plug-ins;
- routes I/O requests for a specific logical device to the MPP that manages it;
- handles I/O queueing to logical devices and to the physical storage HBAs;
- implements logical device bandwidth sharing between VMs;
- handles discovery and removal of physical paths;
- provides I/O statistics for logical devices and physical paths.

By default, ESXi uses VMware's built-in Native Multipathing Plug-In (NMP). In general, NMP supports all storage types and models compatible with vSphere and selects a default multipathing algorithm depending on the specific array model.
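The claiming-and-routing idea behind PSA can be sketched like this (hypothetical class names, not ESXi internals):

```python
# Toy sketch of the PSA concept: an open framework that routes I/O for
# each logical device to whichever multipathing plug-in (MPP) has claimed
# that device, with NMP acting as the default.

class MultipathingPlugin:
    def __init__(self, name: str):
        self.name = name

    def submit_io(self, device: str, io: str) -> str:
        return f"{self.name} handled {io} for {device}"

class PluggableStorageArchitecture:
    def __init__(self):
        self.default_mpp = MultipathingPlugin("NMP")
        self.claims = {}                 # device -> plug-in that claimed it

    def claim(self, device: str, plugin: MultipathingPlugin):
        self.claims[device] = plugin     # e.g. a vendor-supplied MPP

    def route_io(self, device: str, io: str) -> str:
        plugin = self.claims.get(device, self.default_mpp)
        return plugin.submit_io(device, io)

psa = PluggableStorageArchitecture()
vendor_mpp = MultipathingPlugin("VendorMPP")   # hypothetical third-party MPP
psa.claim("naa.6001", vendor_mpp)

print(psa.route_io("naa.6001", "READ"))   # VendorMPP handled READ for naa.6001
print(psa.route_io("naa.6002", "READ"))   # NMP handled READ for naa.6002
```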

NMP is itself an extensible module that manages two sets of plug-ins: Storage Array Type Plug-Ins (SATPs) and Path Selection Plug-Ins (PSPs). SATPs and PSPs can be built-in VMware plug-ins or third-party developments. If necessary, a storage vendor can create its own MPP to use alongside or instead of NMP.

The SATP is responsible for path failover: it monitors the state of the physical paths, reports changes in their state, and switches from a failed path to a working one. NMP provides SATPs for all array models supported by vSphere and selects the appropriate SATP automatically.

The PSP is responsible for choosing the physical path for data transfer. NMP offers three built-in PSPs: Most Recently Used, Fixed, and Round Robin. Based on the SATP selected for the array, NMP chooses a default PSP; the vSphere Web Client also lets you select a PSP manually.

How the PSP options work:

- Most Recently Used (MRU): the host uses the most recently used working path; after a failover it stays on the new path even when the original path comes back;
- Fixed: the host uses the designated preferred path; after a failover it reverts to the preferred path as soon as it is restored;
- Round Robin (RR): the host rotates I/O across all active paths, providing load balancing.

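The three path selection policies can be sketched as follows (a simplified model with invented names, not ESXi internals):

```python
# Simplified sketch (hypothetical, not ESXi code) of the three built-in
# path selection policies: Fixed, Most Recently Used and Round Robin.

class PathSelectionPolicy:
    def __init__(self, policy: str, paths, preferred=None):
        self.policy = policy             # "FIXED", "MRU" or "RR"
        self.paths = list(paths)         # currently working physical paths
        self.preferred = preferred or self.paths[0]
        self.current = self.paths[0]
        self._rr_index = 0

    def select_path(self) -> str:
        if self.policy == "FIXED":
            if self.preferred in self.paths:
                self.current = self.preferred   # fail back to preferred
            elif self.current not in self.paths:
                self.current = self.paths[0]    # failover
            return self.current
        if self.policy == "MRU":
            # Stay on the last working path, even after the old one returns.
            if self.current not in self.paths:
                self.current = self.paths[0]
            return self.current
        if self.policy == "RR":
            # Rotate across all active paths to balance the load.
            self._rr_index = (self._rr_index + 1) % len(self.paths)
            self.current = self.paths[self._rr_index]
            return self.current
        raise ValueError(self.policy)

    def path_failed(self, path: str):
        self.paths.remove(path)          # the SATP reports the failure

    def path_restored(self, path: str):
        self.paths.append(path)

rr = PathSelectionPolicy("RR", ["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"])
print(rr.select_path())  # vmhba2:C0:T0:L0
print(rr.select_path())  # vmhba1:C0:T0:L0
```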
Thank you for your attention, to be continued.

Source: https://habr.com/ru/post/314400/
