This is the first part of a series of articles on VMware vSphere storage technologies. It covers the well-established features that have been available since versions 4 and 5 of the product.
VASA - vSphere APIs for Storage Awareness
VASA is a set of APIs published by VMware for developing storage providers for a vSphere infrastructure. Storage providers are software components, either supplied with vSphere or developed by third parties, that integrate storage (software- and hardware-based) and I/O filters (VAIO) with the vSphere infrastructure.
A storage provider (VASA provider) is needed so that the virtual infrastructure:
- receives information about the status, characteristics and capabilities of the storage;
- can work with entities such as Virtual SAN and Virtual Volumes (vVols);
- can interact with I/O filters (VAIO).
The presence of the corresponding VASA provider enables these capabilities and allows them to be used in storage policies (SPBM). If the required storage provider is not available, vSphere cannot see the characteristics of the storage systems or the VAIO filters, cannot work with vVols, and cannot create the corresponding rules in policies.
Third-party VASA providers act as awareness and monitoring services for their storage in vSphere. Such providers require separate registration and installation of the corresponding plug-ins.
Built-in storage providers are vSphere components and do not require registration. For example, the provider for Virtual SAN is automatically registered when it is deployed.
Through storage providers, vSphere collects information about storage (characteristics, status, capabilities) and data services (VAIO filters) across the entire infrastructure; this information is available for monitoring and decision-making in the vSphere Web Client.
The information collected by VASA providers falls into three categories:
- Storage capabilities and data services. This is exactly what SPBM common rules and rules based on storage-specific data services rely on: the capabilities and services provided by Virtual SAN, vVols and I/O filters.
- Storage status. Information about the state of the storage and events on the storage side, including alarms and configuration changes.
- Storage DRS information. This allows the Storage DRS mechanism to take the array's internal storage-management processes into account.
Developing a VASA provider to integrate a third-party product, in particular a storage system, with vSphere is the vendor's responsibility. Such storage providers can be installed on virtually any infrastructure node except the vCenter Server; as a rule, third-party VASA providers run on the storage controller or on a dedicated server.
Multiple vCenter Servers can access the same storage provider simultaneously, and one vCenter can interact with several storage providers at the same time (multiple arrays and I/O filters).
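To make the role of the capabilities reported by storage providers a bit more tangible, here is a purely illustrative Python sketch of how SPBM-style policy matching works conceptually. None of these names are real VMware APIs: datastores simply advertise the capabilities their (hypothetical) providers reported, and a policy's rules are checked against them.

```python
# Toy illustration of SPBM-style capability matching.
# All names are hypothetical; the point is only to show how policy rules
# can be checked against capabilities reported by VASA providers.

datastores = {
    # capabilities as they might be surfaced by a VASA provider
    "vsanDatastore": {"stripeWidth": 2, "failuresToTolerate": 1, "dedup": True},
    "gold-vvol":     {"serviceLevel": "gold", "replication": True},
    "bronze-nfs":    {},  # no storage provider -> no capabilities visible
}

# An SPBM-like policy: every rule must be satisfied by the datastore.
policy = {"failuresToTolerate": 1, "dedup": True}

def compatible(capabilities: dict, rules: dict) -> bool:
    """A datastore is compatible if it advertises every required capability."""
    return all(capabilities.get(key) == value for key, value in rules.items())

for name, caps in datastores.items():
    status = "compatible" if compatible(caps, policy) else "incompatible"
    print(f"{name}: {status}")
```

Without a provider, "bronze-nfs" exposes no capabilities at all, so it can never satisfy a rule, which mirrors the behaviour described above; the real matching is of course done by SPBM itself based on what the VASA providers report.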
VAAI - vSphere APIs for Array Integration
These APIs fall into two categories:
- Hardware Acceleration APIs. They transparently offload certain storage-related operations from the hypervisor to the storage system.
- Array Thin Provisioning APIs. They monitor space usage on thin-provisioned arrays to prevent out-of-space conditions and to reclaim unused space.
Storage Hardware Acceleration (VAAI for Hardware Acceleration)
This functionality integrates ESXi hosts with compatible storage systems and allows individual VM and storage maintenance operations to be offloaded from the hypervisor (ESXi host) to the array, which speeds up these operations and reduces the load on the host's CPU and memory as well as on the storage network.
Storage Hardware Acceleration is supported for block (FC, iSCSI) and file (NAS) storage. For the technology to work, a block device must either support the T10 SCSI standard or have a VAAI plug-in; if the array supports T10 SCSI, no VAAI plug-in is needed and hardware acceleration works directly. File storage always requires a separate VAAI plug-in. Developing VAAI plug-ins is the storage vendor's responsibility.
In general, VAAI for Hardware Acceleration can optimize and offload the following operations to the array:
- VM migration through Storage vMotion.
- Deploy VM from template.
- Clone VM or VM templates.
- VMFS locks and metadata operations for VMs.
- Operations on "thick" disks (block and file access, eager-zeroed disks).
For block devices, Hardware Acceleration optimizes the following operations:
- Full copy (clone blocks or copy offload). Lets the array make a complete copy of the data without host read/write operations. This reduces the time and network load when cloning a VM, deploying from a template, or migrating (moving) a VM's disks.
- Block zeroing (write same). Lets the array zero out large numbers of blocks itself, which significantly speeds up the creation of eager-zeroed thick disks for VMs.
- Hardware assisted locking (atomic test and set, ATS). Avoids locking the entire VMFS LUN (no SCSI reservation command is needed) by supporting selective locking of individual blocks. This eliminates (or greatly reduces) the loss of storage performance when a hypervisor changes metadata on a VMFS LUN.
Explanation
VMFS is a clustered file system that supports parallel access by several ESXi hosts (hypervisors) to a single LUN formatted with it. A VMFS LUN stores the files of many VMs as well as metadata. In normal operation, as long as no metadata changes are made, everything runs in parallel: many hosts access the VMFS volume without interfering with one another, and there are no locks.
If a block device does not support Hardware Acceleration (VAAI), a host has to use the SCSI reservation command to change VMFS metadata: the LUN is placed into the exclusive use of that host, and while the metadata is being modified the LUN is unavailable to the other hosts, which can cause a noticeable loss of performance.
Metadata contains information about the VMFS volume itself and about VM files. Metadata changes occur when a VM is powered on or off, when VM files are created (creating a VM, cloning, migrating, adding a disk, taking snapshots), when files are deleted (deleting a VM or a VM disk), when the owner of a VM file changes, when the VMFS volume is grown, and when VM files are resized (with thin disks or snapshots this happens all the time).
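To illustrate the difference between a full SCSI reservation and ATS, here is a toy Python model. Real ATS is a SCSI command executed atomically by the array, not Python locks; the sketch only shows that a reservation blocks every other host for the duration of a metadata update, while an ATS-style compare-and-swap touches a single small lock record.

```python
import threading

# Toy model only: all names are hypothetical and do not reflect the VMFS
# on-disk format or the actual SCSI commands involved.

class Lun:
    def __init__(self, n_lock_records: int):
        self.reservation = threading.Lock()          # whole-LUN SCSI reservation
        self.lock_records = [None] * n_lock_records  # per-object on-disk lock records
        self._atomic = threading.Lock()              # emulates array-side atomicity

    def update_metadata_with_reservation(self, host: str, record: int):
        # Without VAAI: reserve the entire LUN; every other host is blocked.
        with self.reservation:
            self.lock_records[record] = host
            # ... modify metadata ...
            self.lock_records[record] = None

    def atomic_test_and_set(self, record: int, expected, new) -> bool:
        # With ATS: atomically claim one lock record; other hosts keep
        # working with the rest of the LUN in parallel.
        with self._atomic:
            if self.lock_records[record] == expected:
                self.lock_records[record] = new
                return True
            return False

lun = Lun(n_lock_records=8)
if lun.atomic_test_and_set(record=3, expected=None, new="esxi-01"):
    # ... modify only the metadata protected by lock record 3 ...
    lun.atomic_test_and_set(record=3, expected="esxi-01", new=None)
```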
Hardware Acceleration for VMFS will not work and the load will fall on the host if:
- The source and destination VMFS partitions have different block sizes
- The source file is an RDM and the destination file is not an RDM
- The source file is eager-zeroed thick and the destination file is thin
- The VM has snapshots
- The VMFS datastore is stretched across multiple arrays
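The hardware acceleration status that the vSphere client shows per device can also be read programmatically. Below is a minimal pyVmomi sketch: it assumes the ScsiLun field vStorageSupport and the DataMover.* / VMFS3.HardwareAcceleratedLocking advanced options exist on your ESXi version, and the host name and credentials are placeholders.

```python
# Minimal pyVmomi sketch: report VAAI (hardware acceleration) support per device
# and the related ESXi advanced settings. Field and option names are taken from
# the vSphere API / ESXi advanced settings and should be verified for your version.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(container=content.rootFolder,
                                               type=[vim.HostSystem], recursive=True)

for host in view.view:
    print(f"== {host.name}")
    # Per-device hardware acceleration status, as shown in the client UI
    for lun in host.configManager.storageSystem.storageDeviceInfo.scsiLun:
        print(f"  {lun.canonicalName}: {getattr(lun, 'vStorageSupport', 'unknown')}")
    # The three VAAI block primitives can be toggled via advanced options
    opts = host.configManager.advancedOption
    for key in ("DataMover.HardwareAcceleratedMove",   # full copy (XCOPY)
                "DataMover.HardwareAcceleratedInit",   # block zeroing (WRITE SAME)
                "VMFS3.HardwareAcceleratedLocking"):   # ATS
        for opt in opts.QueryOptions(name=key):
            print(f"  {opt.key} = {opt.value}")

Disconnect(si)
```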
For file (NAS) storage, Hardware Acceleration optimizes the following operations:
- Full File Clone. Allows VM files to be cloned at the NAS device level.
- Reserve Space. Allows space to be reserved for VMs with "thick" disks (by default, NFS does not reserve space and does not allow thick disks).
- Native Snapshot Support. Support for creating VM snapshots at the array level.
- Extended Statistics. Provides visibility into space usage on the array.
Multipathing Storage APIs - Pluggable Storage Architecture (PSA)
To manage multipathing, the ESXi hypervisor uses a separate set of Storage APIs called the Pluggable Storage Architecture (PSA). PSA is an open, modular framework that coordinates the simultaneous operation of multiple multipathing plug-ins (MPPs). It allows vendors to develop (and integrate) their own multipathing technologies (load balancing and failover) for connecting their storage systems to vSphere.
PSA performs the following tasks:
- Loads and unloads multipathing plug-ins
- Hides the specifics of the multipathing plug-ins from VMs
- Forwards I/O requests to the appropriate MPP
- Handles I/O queues
- Shares bandwidth between VMs
- Handles physical path discovery and removal
- Collects I/O statistics
By default, ESXi uses VMware's built-in Native Multipathing Plug-In (NMP). In general, NMP supports all storage types and models compatible with vSphere and selects a default multipathing algorithm depending on the specific array model.
NMP is itself an extensible module that manages two sets of plug-ins: Storage Array Type Plug-ins (SATPs) and Path Selection Plug-ins (PSPs). SATPs and PSPs can be built-in VMware plug-ins or third-party developments. If necessary, a storage vendor can also create its own MPP to be used in addition to or instead of NMP.
The SATP is responsible for path failover: it monitors the health of the physical paths, reports changes in their state, and switches from a failed path to a working one. NMP provides SATPs for all array models supported by vSphere and selects the appropriate SATP automatically.
The PSP is responsible for choosing the physical path used for data transfer. NMP offers three built-in PSP options: Most Recently Used, Fixed and Round Robin. Based on the SATP chosen for the array, NMP selects a default PSP; the vSphere Web Client also lets you select the PSP manually.
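For reference, the SATP and PSP assigned to each device can also be read from the host's multipath configuration. A minimal pyVmomi sketch follows; the property names follow the vSphere API (HostMultipathInfo) and should be verified against your version, and the connection details are placeholders.

```python
# Minimal pyVmomi sketch: show the SATP and PSP selected for each device on a host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi-01.example.local",
                                                      vmSearch=False)
info = host.configManager.storageSystem.storageDeviceInfo

# Map the internal ScsiLun key to its canonical name (e.g. naa.xxxx)
names = {lun.key: lun.canonicalName for lun in info.scsiLun}

for mp_lun in info.multipathInfo.lun:
    satp = mp_lun.storageArrayTypePolicy.policy if mp_lun.storageArrayTypePolicy else "n/a"
    psp = mp_lun.policy.policy  # e.g. VMW_PSP_MRU, VMW_PSP_FIXED, VMW_PSP_RR
    print(f"{names.get(mp_lun.lun, mp_lun.id)}: SATP={satp}  PSP={psp}  paths={len(mp_lun.path)}")

Disconnect(si)
```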
How the PSP options work:
- Most Recently Used (MRU) - the host uses the path it used most recently. If this path becomes unavailable, the host switches to an alternative path and does not return to the original path when it recovers. There is no way to set a preferred path. MRU is the default option for most active-passive arrays.
- Fixed - the host uses the preferred path, which can be set manually, or otherwise selects the first working path discovered at boot time. A manually set preferred path keeps its status even while it is unavailable, so the host switches back to it once it recovers. If the preferred path was chosen automatically rather than set manually, then when it fails another path becomes the preferred one, and the host does not return to the original path after it is restored. Fixed is the default option for most active-active arrays.
- Round Robin (RR) - the host uses an automatic path selection algorithm that rotates through the active paths for active-passive arrays, or through all paths for active-active arrays. RR can therefore be used with both types of arrays and provides load balancing across the paths to different LUNs.
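If the default PSP is not the one you want, the policy can also be changed programmatically. The sketch below is hedged: it assumes HostStorageSystem.SetMultipathLunPolicy and the HostMultipathInfoLogicalUnitPolicy type behave as described in the vSphere API reference, the device name and connection details are placeholders, and the PSP you actually set should follow the array vendor's recommendation.

```python
# Minimal pyVmomi sketch: switch one device to Round Robin (VMW_PSP_RR).
# Verify the method and type names against your vSphere version before use.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi-01.example.local",
                                                      vmSearch=False)
storage = host.configManager.storageSystem
info = storage.storageDeviceInfo

device = "naa.60000000000000000000000000000001"  # placeholder canonical name
names = {lun.key: lun.canonicalName for lun in info.scsiLun}

for mp_lun in info.multipathInfo.lun:
    if names.get(mp_lun.lun) == device:
        storage.SetMultipathLunPolicy(
            lunId=mp_lun.id,
            policy=vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR"))
        print(f"{device}: PSP set to VMW_PSP_RR")

Disconnect(si)
```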
Thank you for your attention, to be continued.