
Hyper-V Architecture: Deep Dive

Everyone take your places! Batten down the hatches! Get ready to dive!
In this article I will try to talk about the Hyper-V architecture in more detail than I did before.


What is Hyper-V?


Hyper-V is one of the server virtualization technologies that allow you to run multiple virtual operating systems on a single physical server. These operating systems are referred to as "guests", while the operating system installed on the physical server is the "host". Each guest operating system runs in its own isolated environment and "thinks" that it is running on a separate computer. It knows nothing about the existence of the other guest OSes or the host OS.
These isolated environments are referred to as "virtual machines" (abbreviated VMs). Virtual machines are implemented in software and give the guest OS and its applications access to the server's hardware resources through the hypervisor and virtual devices. As already mentioned, a guest OS behaves as if it fully controlled the physical server and has no idea that other virtual machines exist. These virtual environments can also be referred to as "partitions" (not to be confused with partitions on hard disks).
First introduced as part of Windows Server 2008, Hyper-V now also exists as a standalone product, Hyper-V Server (essentially a heavily stripped-down Windows Server 2008), and in the new R2 version it has become an enterprise-class virtualization system. The R2 version supports some new features, and this article covers that version.

Hypervisor


The term "hypervisor" dates back to 1972, when IBM implemented virtualization in its System/370 mainframes. This was a breakthrough in IT, as it made it possible to work around the architectural limitations and the high cost of using mainframes.
A hypervisor is a virtualization platform that allows multiple operating systems to run on a single physical computer. It is the hypervisor that provides the isolated environment for each virtual machine, and it is the hypervisor that gives the guest OS access to the computer's hardware.
Hypervisors can be divided into two types by launch method (on "bare metal" or inside an OS) and into two types by architecture (monolithic and microkernel).
Type 1 hypervisor

A Type 1 hypervisor runs directly on the physical hardware and manages it independently. Guest operating systems running inside virtual machines sit one level higher, as shown in Figure 1.

Fig. 1. A Type 1 hypervisor runs on "bare metal".

Because Type 1 hypervisors work directly with the hardware, they achieve greater performance, reliability, and security.
Type 1 hypervisors are used in many enterprise-class solutions, such as VMware ESX and Microsoft Hyper-V.

Type 2 hypervisor


Unlike Type 1, a Type 2 hypervisor runs inside the host OS (see Figure 2).

Fig. 2. A Type 2 hypervisor runs inside the host OS.

Virtual machines run in the user space of the host OS, which does not have the best effect on performance.
Examples of Type 2 hypervisors are MS Virtual Server and VMware Server, as well as the desktop virtualization products MS VirtualPC and VMware Workstation.

Monolithic hypervisor

Hypervisors with a monolithic architecture include hardware device drivers in their own code (see Figure 3).

Fig. 3. Monolithic architecture

A monolithic architecture has its advantages and disadvantages. Its main advantage is that, with the drivers inside the hypervisor itself, it does not depend on a host OS and can work with devices directly.

The disadvantages of the monolithic architecture are that only hardware for which the hypervisor includes drivers is supported, and that a flaw in any of those drivers can compromise the entire system.

The most common example of a monolithic architecture is VMware ESX.

Microkernel architecture

With a microkernel architecture, device drivers work within the host OS.
The host OS in this case runs in the same kind of virtual environment as all the VMs and is referred to as the "parent partition". All other environments are, accordingly, "child" partitions. The only difference between the parent and child partitions is that only the parent partition has direct access to the server hardware. The hypervisor itself handles memory allocation and CPU time scheduling.

Fig. 4. Microkernel architecture

The advantages of this architecture are that the hypervisor does not need its own device drivers (any driver that works with the host OS will do), it stays small, and its attack surface is minimal.

The most striking example of a microkernel architecture is, in fact, Hyper-V itself.

Hyper-V Architecture


Figure 5 shows the main elements of the Hyper-V architecture.

Fig. 5. Hyper-V architecture

As can be seen from the figure, the hypervisor runs directly above the hardware, which is typical of Type 1 hypervisors. Parent and child partitions operate one level above the hypervisor. Partitions here are areas of isolation within which operating systems run; do not confuse them with, for example, partitions on a hard disk. The parent partition runs the host OS (Windows Server 2008 R2) and the virtualization stack, and it is also from the parent partition that external devices and child partitions are managed. Child partitions, as you might guess, are created from the parent partition and are intended for running guest OSes. All partitions are connected to the hypervisor through the hypercall interface, which provides the operating systems with a special API. If any developers are interested in the details of the hypercall API, the information is available in MSDN.
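To make the idea of the hypercall interface more concrete, here is a toy Python model of partitions talking to the hypervisor through a single entry point. All class and method names here are invented for illustration; the real hypercall API is the one documented in MSDN.

```python
# Toy model of partitions communicating with a hypervisor through a
# hypercall interface. All names are hypothetical illustrations; the
# real hypercall ABI is documented by Microsoft.

class Hypervisor:
    def __init__(self):
        self.partitions = {}
        self.next_id = 0

    def create_partition(self, name, is_parent=False):
        self.next_id += 1
        self.partitions[self.next_id] = {"name": name, "parent": is_parent}
        return self.next_id

    def hypercall(self, caller_id, call, **args):
        """Single entry point: every partition talks to the hypervisor here."""
        caller = self.partitions[caller_id]
        if call == "create_partition":
            # Only the parent partition may create child partitions.
            if not caller["parent"]:
                raise PermissionError("only the parent partition may do this")
            return self.create_partition(args["name"])
        if call == "get_partition_info":
            return dict(self.partitions[args["pid"]])
        raise ValueError(f"unknown hypercall: {call}")

hv = Hypervisor()
root = hv.create_partition("parent (Windows Server 2008 R2)", is_parent=True)
child = hv.hypercall(root, "create_partition", name="guest VM")
print(hv.hypercall(root, "get_partition_info", pid=child)["name"])  # guest VM
```

Note how the privilege check mirrors the text: only the parent partition is allowed to create and manage child partitions, while every partition reaches the hypervisor through the same interface.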

Parent partition

The parent partition is created as soon as you install the Hyper-V server role. The components of the parent partition are shown in Fig. 6, and its purposes are described in the sections below.


Fig. 6. Components of the Hyper-V parent partition

Virtualization stack

The following components, which work in the parent partition, are collectively called the virtualization stack: the Virtual Machine Management Service (VMMS), virtual machine worker processes (VMWP), virtual devices (VDevs), the virtual infrastructure driver (VID), and the hypervisor interface library (WinHv).

In addition, two more components work in the parent partition: virtualization service providers (VSPs) and the virtual machine bus (VMBus).
Virtual Machine Management Service (VMMS)

The Virtual Machine Management Service (VMMS) handles a range of virtual machine management tasks.


When a virtual machine is started, VMMS creates a new virtual machine worker process for it. The worker process is described in more detail below.
VMMS also determines which operations are allowed on a virtual machine at any given moment: for example, while a snapshot is being deleted, it cannot be applied. You can read more about working with virtual machine snapshots in my corresponding article.
More specifically, VMMS manages a number of virtual machine states.

Other management tasks (Pause, Save, and Power Off) are performed not by the VMMS service but directly by the worker process of the corresponding virtual machine.
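The division of labor described above can be sketched in a toy Python model: the management service starts VMs and spawns one worker per VM, and state-changing operations such as Pause and Power Off are forwarded to the VM's own worker. All class and method names are illustrative, not the real Hyper-V interfaces.

```python
# Toy sketch of how management operations are split between the VMMS
# service and per-VM worker processes. Names are illustrative only.

class WorkerProcess:
    """Stands in for VMWP: handles Pause, Save, and Power Off itself."""
    def __init__(self, vm_name):
        self.vm_name = vm_name
        self.state = "Running"

    def pause(self):
        self.state = "Paused"

    def power_off(self):
        self.state = "Off"

class VMMS:
    """Stands in for the Virtual Machine Management Service."""
    def __init__(self):
        self.workers = {}

    def start(self, vm_name):
        # Starting a VM spawns a dedicated worker for it, so each
        # virtual machine is isolated in its own process.
        self.workers[vm_name] = WorkerProcess(vm_name)
        return self.workers[vm_name]

    def dispatch(self, vm_name, operation):
        # Pause/Save/Power Off are forwarded to the VM's own worker.
        worker = self.workers[vm_name]
        getattr(worker, operation)()
        return worker.state

vmms = VMMS()
vmms.start("web-server")
print(vmms.dispatch("web-server", "pause"))      # Paused
print(vmms.dispatch("web-server", "power_off"))  # Off
```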
The VMMS service runs at the user level as a system service (VMMS.exe) and depends on the Remote Procedure Call (RPC) and Windows Management Instrumentation (WMI) services. VMMS includes many components, among them a WMI provider that exposes an interface for managing virtual machines. This allows virtual machines to be managed from the command line and from VBScript and PowerShell scripts. System Center Virtual Machine Manager also uses this interface to manage virtual machines.

Virtual Machine Worker Process (VMWP)

To manage a virtual machine from the parent partition, a special process is launched: the virtual machine worker process (VMWP). This process works at the user level. For each running virtual machine, VMMS launches a separate worker process, which isolates the virtual machines from one another. To increase security, the worker processes run under the built-in Network Service account.
The VMWP process is used to manage the corresponding virtual machine. Its tasks include:
Creating, configuring and running a virtual machine
Pause and continue work (Pause / Resume)
Save and Restore State (Save / Restore State)
Create snapshots
In addition, it is the worker process that emulates the virtual motherboard (VMB), which is used to provide the guest OS with memory and to control interrupts and virtual devices.

Virtual devices

Virtual devices (VDevs) are software modules that implement configuration and device management for virtual machines. The VMB includes a basic set of virtual devices, including a PCI bus and system devices identical to those of the Intel 440BX chipset. There are two types of virtual devices: emulated devices, which mimic real hardware, and synthetic devices, which are available to the guest OS only through VMBus.
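The difference between the two kinds of devices can be illustrated with a small Python sketch: a guest with integration components gets the fast synthetic device over VMBus, while any other guest falls back to the emulated one. The class and function names are invented for illustration.

```python
# Toy sketch of the two kinds of virtual devices. Names are
# illustrative; only the emulated/synthetic distinction is real.

class EmulatedDevice:
    # Mimics real hardware; every access is intercepted by the
    # worker process, so this path is slow.
    path = "trap-and-emulate"

class SyntheticDevice:
    # Purely virtual; reachable only over VMBus when the guest has
    # integration components installed.
    path = "VMBus"

def attach_storage(guest_has_integration_components):
    """Pick the device type a hypothetical guest would end up with."""
    if guest_has_integration_components:
        return SyntheticDevice()
    return EmulatedDevice()

print(attach_storage(True).path)   # VMBus
print(attach_storage(False).path)  # trap-and-emulate
```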


Virtual Infrastructure Driver (VID)

The virtual infrastructure driver (vid.sys) runs at the kernel level and manages partitions, virtual processors, and memory. This driver is also an intermediary between the hypervisor and the user-mode components of the virtualization stack.

Hypervisor Interface Library

The hypervisor interface library (WinHv.sys) is a kernel-mode library that loads in both the host OS and, provided that the integration components are installed, the guest OS. This library provides the hypercall interface used for interaction between the OS and the hypervisor.

Virtualization Service Providers (VSPs)

Virtualization service providers (VSPs) run in the parent partition and provide the guest OS with access to hardware devices through virtualization service clients (VSCs). The connection between a VSP and a VSC goes through the VMBus virtual bus.

Virtual Machine Bus (VMBus)

The purpose of VMBus is to provide high-speed communication between the parent and child partitions; other access methods are much slower because of the high overhead of device emulation.
If the guest OS does not support the integration components, device emulation has to be used. This means the hypervisor must intercept the guest OS's calls and redirect them to the emulated devices, which, I remind you, are emulated by the virtual machine's worker process. Since the worker process runs in user space, using emulated devices leads to a significant performance drop compared to VMBus. That is why it is recommended to install the integration components immediately after installing the guest OS.
As already mentioned, when VMBus is used, the interaction between the host and the guest OS follows a client-server model. The parent partition runs the virtualization service providers (VSPs), which are the server side, and the child partitions run the client side, the VSCs. A VSC redirects guest OS requests via VMBus to the VSP in the parent partition, and the VSP forwards the request to the device driver. This interaction is completely transparent to the guest OS.
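The VSP/VSC exchange described above can be sketched as a toy client-server model in Python, with a pair of queues standing in for the VMBus channel. All names are illustrative; only the request flow (VSC, then VMBus, then VSP, then driver) follows the text.

```python
# Toy model of the VSP/VSC client-server exchange over VMBus,
# using a pair of queues as the "bus". Names are illustrative.
import queue

class VMBusChannel:
    """A bidirectional channel between a child and the parent partition."""
    def __init__(self):
        self.to_parent = queue.Queue()
        self.to_child = queue.Queue()

class VSP:
    """Server side, running in the parent partition."""
    def __init__(self, channel):
        self.channel = channel

    def serve_one(self):
        request = self.channel.to_parent.get()
        # Forward the request to the real device driver (simulated here).
        result = f"driver handled: {request}"
        self.channel.to_child.put(result)

class VSC:
    """Client side, running in a child partition."""
    def __init__(self, channel):
        self.channel = channel

    def request(self, operation):
        self.channel.to_parent.put(operation)

channel = VMBusChannel()
vsp, vsc = VSP(channel), VSC(channel)
vsc.request("read block 42")   # guest OS I/O request
vsp.serve_one()                # parent partition services it
print(channel.to_child.get())  # driver handled: read block 42
```

The guest side only ever sees the channel, which is the sense in which the exchange is transparent to the guest OS: the same request code works whether the other end is a driver or, here, a simulation.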

Child partitions

Let's return to our drawing of the Hyper-V architecture; this time it is trimmed down a bit, because we are interested only in the child partitions.
Fig. 7. Child partitions

So, a child partition can run one of three things:
A Windows OS with integration components installed
A non-Windows OS that supports integration components
An OS that does not support integration components

In all three cases, the set of components in the child partition will differ slightly.

Windows OS with integration components installed

Microsoft Windows operating systems starting with Windows 2000 support the installation of integration components. After Hyper-V Integration Services are installed, several components are launched in the guest OS.

The integration components also provide additional functionality, such as time synchronization, heartbeat, data exchange, and guest OS shutdown.

Non-Windows OSes that support integration components

There are also operating systems that do not belong to the Windows family but support integration components. At the moment, these are only SUSE Linux Enterprise Server and Red Hat Enterprise Linux. When the integration components are installed, such OSes use third-party VSCs to interact with the VSPs over VMBus and access the hardware. The integration components for Linux are developed by Microsoft together with Citrix and are available for download from the Microsoft Download Center. Since the integration components for Linux were released under the GPL v2 license, they are being integrated into the Linux kernel through the Linux Driver Project, which will significantly expand the list of supported guest OSes.

Instead of a conclusion


With this, I will probably finish my second article on the Hyper-V architecture. The previous article raised some questions from readers, and I hope that I have now answered them.
I hope the reading was not too boring. I used "academic language" quite often, but this was necessary, because the subject of the article implies a very large amount of theory and practically zero point zero tenths of practice.

Many thanks to Mitch Tulloch and the Microsoft Virtualization Team. This article was prepared based on their book Understanding Microsoft Virtualization Solutions.

Source: https://habr.com/ru/post/98580/

