I present the final article in this series on VMware vSphere 6 storage technologies. This one covers the VVOL (Virtual Volumes) technology.
VVOL Technology Description
In all versions of vSphere prior to 6, VMs (virtual machines) were stored as files in a file system. For arrays with block access, VMware's own file system, VMFS, was used; for file storage, NFS. The array's capacity was divided into LUNs or NFS shares and presented (mounted) to ESXi hosts as datastores. Datastores, as a rule, are large (1 TB and up) and host many VMs (5-10 or more), primarily because allocating a separate datastore for each VM would be too inconvenient and labor-intensive to administer.
With this approach, array-side support for VM storage operations is granular at the datastore level (an entire LUN or NFS share), not at the level of individual VMs. This concerns operations such as snapshots, replication, deduplication, encryption, and QoS. They are performed at the storage level rather than by the hypervisor, which lets them run faster while offloading the hosts' compute resources and the data network. This applies above all to block access with VMFS; some file array models (for example, NetApp) do offer per-VM granularity: the storage system can take a snapshot of an individual VM rather than the entire datastore.
The traditional VM storage approach described above is still supported in vSphere 6, but alongside it VMware introduced the concept of Virtual Volumes (VVOL), which provides object storage of VMs on arrays that support the technology, regardless of their type (block or file).
A VVOL is an object containing VM files, virtual disks, and their derivatives. Storage with VVOL support can work with these objects directly, using its own hardware resources. Each VVOL has a unique GUID for identification. There is no need to pre-allocate space for VVOLs: they are created automatically when VM operations are performed, such as creation, cloning, and snapshotting.
vSphere (ESXi hosts and vCenter) associates one or more VVOLs with each VM (see the sketch after this list):
• a data VVOL corresponding to a virtual disk file (.vmdk);
• a configuration VVOL, a small home directory containing VM metadata (including the .vmx file, virtual disk descriptor files, log files, etc.). This type of VVOL is formatted as VMFS or NFS, depending on the storage type (block or file);
• additional VVOLs can be created for other VM components (for example, a VVOL for the swap file or for the VM's RAM contents in snapshots) or for derived virtual disks (clones, replicas, snapshots).
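How this mapping surfaces in the API: below is a minimal pyVmomi (VMware's Python SDK) sketch, with placeholder connection details and VM name, that prints each virtual disk of a VM along with its backingObjectId; for a disk stored on a VVOL datastore, this field carries the VVOL UUID.

```python
# Minimal pyVmomi sketch: list a VM's virtual disks and their backing
# object IDs. For a disk on a VVOL datastore, backingObjectId holds the
# VVOL UUID. Host, credentials, and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "demo-vm")

for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        print(dev.deviceInfo.label,
              dev.backing.fileName,  # e.g. "[vvol-ds] rfc4122.xxx/demo.vmdk"
              getattr(dev.backing, "backingObjectId", None))

view.DestroyView()
Disconnect(si)
```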
Thanks to VVOL, we get storage management granularity at the level of individual VMs and can use the array's resources for it directly. VVOL enables flexible storage management based on SPBM (Storage Policy Based Management) policies. Splitting VM data into several types of VVOLs lets you place them on storage with different service levels: the data VVOL on a production tier with a high service level, and the configuration VVOL or a secondary VM disk on a simpler tier.
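As a hedged illustration of per-disk placement, the following sketch attaches an SPBM policy to a single virtual disk. It assumes the `vm` object from the previous sketch and a `profile_id` string of an existing storage policy (one way to obtain it is shown in the PBM sketch further below).

```python
# Hedged sketch: bind an SPBM storage policy to one virtual disk of a VM.
# `vm` is a vim.VirtualMachine; `profile_id` is the ID of an existing policy.
from pyVmomi import vim

def apply_policy_to_disk(vm, disk_label, profile_id):
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk)
                and d.deviceInfo.label == disk_label)
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=disk,
        profile=[vim.vm.DefinedProfileSpec(profileId=profile_id)])
    # The array is then expected to honor the policy for this disk's VVOL.
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

# Example: task = apply_policy_to_disk(vm, "Hard disk 2", gold_profile_id)
```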
VVOL does not need classic LUNs or NFS shares for storage. VVOLs are kept in conditionally "raw" storage space, in objects called storage containers. Storage containers are logical units of array capacity; their number and size are determined by the specific storage system and infrastructure. At least one storage container must be created on the storage system, and a single storage container cannot span several physical arrays.
Storage containers logically group VVOLs for administrative or management purposes: you can create separate storage containers for different customers, departments, or VM groups. One storage container can cover multiple service profiles, so VMs with different service requirements and storage policies can reside in the same storage container.
To integrate a storage system with vSphere, so that vCenter and the hypervisors can work with VVOLs and connect to storage containers, a special storage provider for VVOL based on VASA (VMware APIs for Storage Awareness) must be deployed and registered with vCenter; it is developed and supplied by the storage vendor.
To access VVOLs, ESXi hosts use logical I/O proxies called protocol endpoints. ESXi uses protocol endpoints to establish an on-demand data path between a VM and its VVOLs. Each VVOL is bound to a specific protocol endpoint. A storage system typically needs only a small number of protocol endpoints, since one protocol endpoint can serve many VMs (hundreds or even thousands).
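On the host side, this plumbing can be inspected through the esxcli storage vvol namespace available in ESXi 6.0. A small sketch over SSH, assuming paramiko is installed and SSH access to the host is enabled (host name and credentials are placeholders):

```python
# Inspect a host's VVOL plumbing via the ESXi 6.0 "esxcli storage vvol"
# namespace over SSH: registered VASA providers, visible storage
# containers, and protocol endpoints. Credentials are placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab only
client.connect("esxi01.lab.local", username="root", password="secret")

for cmd in ("esxcli storage vvol vasaprovider list",
            "esxcli storage vvol storagecontainer list",
            "esxcli storage vvol protocolendpoint list"):
    _, stdout, _ = client.exec_command(cmd)
    print("###", cmd)
    print(stdout.read().decode())

client.close()
```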
VVOL supports the main storage protocols: Fibre Channel, FCoE, iSCSI, and NFS, just like traditional vSphere storage.
To support VVOL, you must use HBAs and driver versions adapted to (supporting) VVOL.
On the storage side, protocol endpoints are configured, one or more per storage container. Protocol endpoints are part of the storage system and are presented to hosts, together with the associated storage containers, via the storage provider. The vSphere Web Client displays protocol endpoints by a T10-based LUN WWN for block arrays and by an IP address or DNS name for file arrays. Protocol endpoints support multipathing only for SCSI transports (it is not supported for NFS).
Storage containers are presented to the infrastructure as virtual datastores, their direct mapping in the vSphere Web Client. Virtual datastores are similar to ordinary vSphere datastores: they can display VVOLs by VM name and can be mounted and unmounted. However, their configuration, including resizing, is done at the storage level, outside of vSphere. Virtual datastores can be used alongside regular VMFS and NFS datastores, as well as Virtual SAN.
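In the API, a virtual datastore is just another datastore with a distinctive type. A short sketch, reusing the `content` object from the first pyVmomi example, that separates VVOL datastores from the rest:

```python
# List datastores known to vCenter and flag VVOL-backed ones. Assumes
# `content` from an authenticated pyVmomi session (see the first sketch).
from pyVmomi import vim

ds_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in ds_view.view:
    s = ds.summary  # s.type is "VMFS", "NFS", "VSAN", "VVOL", ...
    kind = "virtual datastore (VVOL)" if s.type == "VVOL" else s.type
    print("%s: %s, capacity %.0f GiB" % (s.name, kind, s.capacity / 2**30))
ds_view.DestroyView()
```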
Placing a VM on a virtual datastore (that is, using VVOL) requires storage policies (SPBM). A VM storage policy contains a set of rules for the VM's placement and quality of service and guarantees their enforcement. In the absence of a specific policy, the system uses the default policy with no restrictions; in that case the array independently chooses the optimal VM placement at its discretion.
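Storage policies live behind a separate PBM endpoint on vCenter. Below is a hedged sketch, following the pyvmomi-community-samples connection recipe, that lists the available VM storage (requirement) policies; the vCenter host name is a placeholder and `si` is the authenticated session from the first sketch.

```python
# Connect to vCenter's PBM (policy) endpoint and list VM storage policies.
# Reuses the authenticated vim session cookie from `si`; the recipe
# follows pyvmomi-community-samples. Host name is a placeholder.
import ssl
from pyVmomi import pbm, VmomiSupport, SoapStubAdapter

ctx = ssl._create_unverified_context()  # lab only
VmomiSupport.GetRequestContext()["vcSessionCookie"] = \
    si._stub.cookie.split('"')[1]
pbm_stub = SoapStubAdapter(host="vcenter.lab.local", path="/pbm/sdk",
                           version="pbm.version.version1", poolSize=0,
                           sslContext=ctx)
pbm_si = pbm.ServiceInstance("ServiceInstance", pbm_stub)
pm = pbm_si.RetrieveContent().profileManager

profile_ids = pm.PbmQueryProfile(
    resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
    profileCategory="REQUIREMENT")
for profile in pm.PbmRetrieveContent(profileIds=profile_ids):
    print(profile.name, profile.profileId.uniqueId)
```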

Advantages and disadvantages of VVOL
Important advantages of the technology are support for creating snapshots and clones of individual VMs at the storage level and at the expense of storage resources, as well as services such as replication, encryption, deduplication, and compression of individual virtual disks. Worth noting separately is SPBM support, which opens up great potential for policy-based VM storage management.
VMware products supporting VVOL:
• VMware vSphere 6.0.x
• VMware vRealize Automation 6.2.x
• VMware Horizon 6.1.x
• VMware vSphere Replication 6.0.x
VMware products NOT supporting VVOL:
• VMware vRealize Operations Manager 6.0.x to 6.1.0
• VMware Site Recovery Manager 5.x to 6.1.0
• VMware vSphere Data Protection 5.x to 6.1.0
• VMware vCloud Director 5
VMware technologies supporting VVOL:
• High Availability (HA)
• Linked Clones
• Native Snapshots
• NFS version 3.x
• Storage Policy Based Management (SPBM)
• Storage vMotion
• Thin Provisioning
• View Storage Accelerator / Content Based Read Cache (CBRC)
• Virtual SAN (VSAN)
• vSphere Auto Deploy
• vSphere Flash Read Cache
• vSphere Software Development Kit (SDK)
• vSphere API for I/O Filtering (VAIO)
• vMotion
• xvMotion
• VADP
VMware technologies NOT supporting VVOL:
• Fault Tolerance (FT)
• IPv6
• Microsoft Failover Clustering
• NFS version 4.1
• Raw Device Mapping (RDM)
• SMP-FT
• Storage Distributed Resource Scheduler (SDRS)
• Storage I/O Control
It appears that VMware is betting on VVOL and its further development, and that in the near future the technology will become compatible with solutions from all vendors. In time, VVOL may become VMware's primary storage technology, leading to a gradual move away from the traditional storage model and the eventual end of its support.
At the time of publication, of the roughly 200 storage system manufacturers whose products are compatible with vSphere (according to the VMware Compatibility Guide), only 18 support VVOL. I could not find any real feedback on practical use of VVOL, neither on the Internet nor on the VMware VMUG forums (even the English-language ones). The lack of VVOL compatibility with the technologies listed above will, at this stage, push many customers away from VVOL, since a VVOL-incompatible technology, or a set of them, is often more important to an infrastructure.
From this we can conclude that, in theory, VVOL is a very interesting and useful technology. At this stage, however, its practicality and the case for deploying it raise doubts. Positive production experience and compatibility with the other important vSphere features are needed, and for now there is neither.
Thank you for your attention; this series of articles on vSphere 6 storage technologies is complete. The next article will look at VMware's solutions for replication and disaster recovery (DR): vSphere Replication and Site Recovery Manager (SRM).