Everyone take their places! Batten down the hatches! Get ready to dive!
In this article I will try to cover the Hyper-V architecture in more detail than I did before.
What is Hyper-V?
Hyper-V is one of the server virtualization technologies that allows you to run multiple virtual operating systems on a single physical server. These operating systems are referred to as “guests”, while the operating system installed on the physical server is the “host”. Each guest operating system runs in its own isolated environment and “thinks” that it is running on a separate computer; it knows nothing about the existence of the other guest OSes or the host OS.
These isolated environments are referred to as “virtual machines” (abbreviated VM). Virtual machines are implemented in software and give the guest OS and its applications access to the server's hardware resources through the hypervisor and virtual devices. As already mentioned, the guest OS behaves as if it fully controlled the physical server and has no idea that other virtual machines exist. These virtual environments can also be referred to as “partitions” (not to be confused with partitions on hard disks).
First introduced as part of Windows Server 2008, Hyper-V now also exists as a standalone product, Hyper-V Server (which is, de facto, a heavily stripped-down Windows Server 2008), and in the new R2 version it has become an enterprise-class virtualization system. The R2 version adds several new features, and it is this version that the article covers.
Hypervisor
The term “hypervisor” dates back to 1972, when IBM implemented virtualization in its System/370 mainframes. This was a breakthrough in IT, as it made it possible to work around the architectural limitations and the high cost of using mainframes.
The hypervisor is a virtualization platform that allows you to run multiple operating systems on a single physical computer. It is the hypervisor that provides the isolated environment for each virtual machine and gives the guest OS access to the computer's hardware.
Hypervisors can be divided into two types by how they are launched (on “bare metal” or inside a host OS) and into two types by architecture (monolithic and microkernel).
Type 1 hypervisor
A type 1 hypervisor runs directly on the physical hardware and manages it independently. Guest operating systems running inside virtual machines sit one level higher, as shown in Figure 1.
Fig. 1. A type 1 hypervisor runs on “bare metal”.
Because type 1 hypervisors work with the hardware directly, they can achieve greater performance, reliability and security.
Type 1 hypervisors are used in many enterprise-class solutions:
- Microsoft Hyper-V
- VMware ESX Server
- Citrix XenServer
Type 2 hypervisor
Unlike a type 1 hypervisor, a type 2 hypervisor runs inside the host OS (see Figure 2).
Fig. 2. A type 2 hypervisor runs inside the host OS.
Virtual machines run in the user space of the host OS, which does not benefit performance.
Examples of type 2 hypervisors are MS Virtual Server and VMware Server, as well as the desktop virtualization products MS Virtual PC and VMware Workstation.
Monolithic hypervisor
Monolithic architecture hypervisors include hardware device drivers in their code (see Figure 3).
Fig. 3. Monolithic architecture.
The monolithic architecture has its advantages and disadvantages. The advantages include:
- Theoretically higher performance, since the drivers reside in the hypervisor space
- Higher reliability, since a failure of the management operating system (in VMware terms, the “Service Console”) will not bring down all running virtual machines.
The disadvantages of the monolithic architecture are the following:
- Only hardware for which the hypervisor has drivers is supported. Because of this, the hypervisor vendor must work closely with hardware vendors so that drivers for all new hardware are written in time and added to the hypervisor code. For the same reason, switching to a new hardware platform may require switching to a different version of the hypervisor, and vice versa: switching to a new version of the hypervisor may require changing the hardware platform, because the old hardware is no longer supported.
- Potentially lower security, due to third-party code in the form of device drivers being included in the hypervisor. Since driver code is executed in the hypervisor space, there is a theoretical possibility of exploiting a vulnerability in that code and gaining control over both the host OS and all guest OSes.
The most common example of a monolithic architecture is VMware ESX.
Microkernel architecture
With a microkernel architecture, device drivers work within the host OS.
In this case, the host OS runs in the same kind of virtual environment as all the VMs and is referred to as the “parent partition”. All other environments are, accordingly, “child partitions”. The only difference between the parent and child partitions is that only the parent partition has direct access to the server hardware. Memory allocation and CPU time scheduling are handled by the hypervisor itself.
Fig. 4. Microkernel architecture.
The advantages of this architecture are as follows:
- No drivers written specifically for the hypervisor are required. A microkernel hypervisor is compatible with any hardware that has drivers for the parent partition's OS.
- Since the drivers run inside the parent partition, the hypervisor has more time for its more important tasks: memory management and scheduling.
- Higher security. The hypervisor contains no third-party code, so there are fewer ways to attack it.
The most striking example of the microkernel architecture is, in fact, Hyper-V itself.
Hyper-V Architecture
Figure 5 shows the main elements of the Hyper-V architecture.
Fig. 5. Hyper-V architecture.
As can be seen from the figure, the hypervisor works at the level directly above the hardware, which is typical of type 1 hypervisors. The parent and child partitions operate at a level above the hypervisor. Partitions here are areas of isolation within which operating systems run; do not confuse them, for example, with partitions on a hard disk. The parent partition runs the host OS (Windows Server 2008 R2) and the virtualization stack. It is also from the parent partition that external devices and the child partitions are managed. Child partitions, as you might guess, are created from the parent partition and are intended to run the guest OSes. All partitions are connected to the hypervisor through the hypercall interface, which provides the operating systems with a special API. If any developers are interested in the details of the hypercall API, the information is available on MSDN.
Parent partition
The parent partition is created immediately when you install the Hyper-V role. The components of the parent partition are shown in Fig. 6.
The purpose of the parent partition is as follows:
- Creating, deleting and managing child partitions, including remote ones, through a WMI provider (see the sketch after this list).
- Controlling access to hardware devices, except for the allocation of processor time and memory, which is handled by the hypervisor.
- Power management and hardware error handling, if any.
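The WMI provider mentioned above can be used both locally and against a remote host. Below is a minimal sketch of enumerating child partitions on a remote Hyper-V host through the root\virtualization WMI namespace that the Hyper-V role in Windows Server 2008 R2 registers; the host name HV-HOST01 is a placeholder, so verify the namespace and class names on your own build.

```powershell
# Hedged sketch: enumerate child partitions (VMs) on a remote Hyper-V host.
# "HV-HOST01" is a placeholder; root\virtualization is the WMI (v1) namespace
# used by the Hyper-V role in Windows Server 2008 R2.
$vms = Get-WmiObject -ComputerName "HV-HOST01" -Namespace "root\virtualization" -Class Msvm_ComputerSystem

# The host itself is also returned as an Msvm_ComputerSystem, so filter it out.
$vms | Where-Object { $_.Caption -eq "Virtual Machine" } |
       Select-Object ElementName, EnabledState   # EnabledState: 2 = running, 3 = off
```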
Fig. 6. Components of the Hyper-V parent partition
Virtualization stack
The following components that work in the parent partition are collectively called the virtualization stack:
- Virtual Machine Management Service (VMMS)
- Virtual machine worker processes (VMWP)
- Virtual devices
- Virtual Infrastructure Driver (VID)
- Hypervisor Interface Library
In addition, two more components work in the parent partition: the virtualization service providers (VSPs) and the virtual machine bus (VMBus).
Virtual Machine Management Service
The tasks of the Virtual Machine Management Service (VMMS) include:
- Managing the state of virtual machines (on/off)
- Adding and removing virtual devices
- Managing snapshots
When a virtual machine starts, VMMS creates a new virtual machine worker process. More about the worker process below.
VMMS also determines which operations are allowed on a virtual machine at any given moment: for example, while a snapshot is being deleted, you cannot apply a snapshot. You can read more about working with virtual machine snapshots in my corresponding article.
More specifically, VMMS manages the following states of virtual machines:
- Starting
- Active
- Inactive
- Taking snapshot
- Applying snapshot
- Deleting snapshot
- Merging disk
Other management tasks (Pause, Save and Power Off) are not performed by the VMMS service but directly by the worker process of the corresponding virtual machine.
The VMMS service runs in user mode as a system service (VMMS.exe) and depends on the Remote Procedure Call (RPC) and Windows Management Instrumentation (WMI) services. The VMMS service includes many components, among which is a WMI provider that exposes an interface for managing virtual machines. This makes it possible to manage virtual machines from the command line and from VBScript and PowerShell scripts. System Center Virtual Machine Manager also uses this interface to manage virtual machines.
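To illustrate how this WMI interface can be driven from PowerShell, here is a minimal sketch that asks VMMS to start a virtual machine by calling the RequestStateChange method of its Msvm_ComputerSystem object. The VM name “TestVM” is a placeholder, and the class and state codes are those of the root\virtualization (v1) namespace used by Windows Server 2008 R2, so verify them against your own environment.

```powershell
# Hedged sketch: start a VM through the Hyper-V WMI provider (root\virtualization, v1).
# "TestVM" is a placeholder VM name.
$vm = Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ComputerSystem -Filter "ElementName = 'TestVM'"

# RequestStateChange(2) asks VMMS to bring the VM into the "Enabled" (running) state;
# 3 would turn it off. A return value of 0 means success, 4096 means a job was started asynchronously.
$result = $vm.RequestStateChange(2)
$result.ReturnValue
```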
Virtual machine worker process (VMWP)
To manage a virtual machine from the parent partition, a special process is launched: the virtual machine worker process (VMWP). This process runs in user mode. For each running virtual machine, VMMS starts a separate worker process, which isolates the virtual machines from one another. To increase security, the worker processes run under the built-in Network Service account.
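Since there is one vmwp.exe per running virtual machine, you can see this isolation directly on the host. The following sketch lists the worker processes together with the account they run under; Win32_Process and GetOwner are standard WMI, but treat the exact command-line contents (the GUID of the corresponding VM is normally passed there) as an assumption for your build.

```powershell
# Hedged sketch: list the vmwp.exe worker processes on the host, one per running VM.
# The owner is expected to be NT AUTHORITY\NETWORK SERVICE; the command line
# typically contains the GUID of the corresponding VM.
Get-WmiObject Win32_Process -Filter "Name = 'vmwp.exe'" |
    Select-Object ProcessId, CommandLine,
                  @{ Name = "Owner"; Expression = { $o = $_.GetOwner(); "$($o.Domain)\$($o.User)" } }
```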
The VMWP process is used to manage the corresponding virtual machine. Its tasks include:
- Creating, configuring and running the virtual machine
- Pausing and resuming it (Pause/Resume)
- Saving and restoring its state (Save/Restore State)
- Creating snapshots
In addition, it is the worker process that emulates the virtual motherboard (VMB), which is used to provide the guest OS with memory and to control interrupts and virtual devices.
Virtual devices
Virtual devices (VDevs) are software modules that implement configuration and device management for virtual machines. The VMB includes a basic set of virtual devices, including a PCI bus and system devices identical to those of the Intel 440BX chipset. There are two types of virtual devices:
- Emulated devices emulate certain hardware devices, such as the VESA video adapter. There are quite a few emulated devices, for example: BIOS, DMA, APIC, the ISA and PCI buses, interrupt controllers, timers, power management, serial port controllers, the system speaker, the PS/2 keyboard and mouse controller, the legacy Ethernet adapter (DEC/Intel 21140), FDD, the IDE controller and the VESA/VGA video adapter. That is why only the virtual IDE controller, and not the SCSI controller, which is a synthetic device, can be used to boot the guest OS.
- Synthetic devices do not emulate any hardware that actually exists. Examples include the synthetic video adapter, the human interface device (HID), the network adapter, the SCSI controller, the synthetic interrupt controller and the memory controller. Synthetic devices can be used only if the integration components are installed in the guest OS. Synthetic devices access the server's hardware through the virtualization service providers running in the parent partition. The call goes over the VMBus virtual bus, which is much faster than emulating physical devices (a quick check for this is sketched below).
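If you want to check from inside a Windows guest whether synthetic devices are actually in use, one indirect sign is the presence and state of the VMBus driver. This is only a rough sketch: the driver name “vmbus” is the usual one for the integration components, but treat it as an assumption and verify it on your guest OS version.

```powershell
# Hedged sketch (run inside a Windows guest): check whether the VMBus driver is loaded,
# which indicates that the integration components and synthetic devices are in use.
# The driver/service name "vmbus" is an assumption; verify it on your guest OS version.
Get-WmiObject Win32_SystemDriver -Filter "Name = 'vmbus'" |
    Select-Object Name, State, StartMode
```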
Virtual Infrastructure Driver (VID)
The virtual infrastructure driver (vid.sys) runs in kernel mode and manages partitions, virtual processors and memory. This driver is also an intermediary between the hypervisor and the user-mode components of the virtualization stack.
Hypervisor Interface Library
The hypervisor interface library (WinHv.sys) is a kernel-mode DLL that is loaded both in the host OS and in guest OSes, provided that the integration components are installed. This library provides the hypercall interface used for communication between the OS and the hypervisor.
Virtualization Service Providers (VSPs)
Virtualization service providers run in the parent partition and provide the guest OS with access to hardware devices through virtualization service clients (VSCs). The connection between a VSP and a VSC goes over the VMBus virtual bus.
Virtual Machine Bus (VMBus)
The purpose of VMBus is to provide high-speed communication between the parent and child partitions; other access methods are much slower because of the high overhead of device emulation.
If the guest OS does not support integration components, device emulation has to be used. This means that the hypervisor has to intercept the guest OS's calls and redirect them to the emulated devices, which, I remind you, are emulated by the virtual machine's worker process. Since the worker process runs in user space, using emulated devices leads to a significant performance loss compared to using VMBus. That is why it is recommended to install the integration components immediately after installing the guest OS.
As already mentioned, when VMBus is used, the interaction between the host and guest OS follows a client-server model. The virtualization service providers (VSPs) running in the parent partition are the server side, and the VSCs in the child partitions are the client side. A VSC redirects guest OS requests over VMBus to the VSP in the parent partition, and the VSP forwards the request to the device driver. This interaction is completely transparent to the guest OS.
Child partitions
Let's return to our diagram of the Hyper-V architecture, only slightly trimmed, because now we are interested only in the child partitions.
Fig. 7. Child partitions.
So, a child partition can run:
- A Windows OS with the integration components installed (in our case, Windows 7)
- A non-Windows OS that supports integration components (Red Hat Enterprise Linux in our case)
- An OS that does not support integration components (for example, FreeBSD).
In all three cases, the set of components in the child partitions will differ slightly.
Windows OS with integration components installed
Microsoft Windows operating systems starting with Windows 2000 support the installation of integration components. After Hyper-V Integration Services are installed in the guest OS, the following components are launched:
- Virtualization service clients. VSCs are synthetic devices that allow access to physical devices via VMBus through the VSPs. VSCs appear in the system only after the integration components are installed, and they make it possible to use synthetic devices. Without the integration components, the guest OS can use only emulated devices. Windows 7 and Windows Server 2008 R2 include the integration components, so they do not need to be installed separately.
- Enlightenments. These are modifications to the OS code that make it aware of the hypervisor and thereby increase its efficiency in a virtual environment. The modifications concern the disk, network, graphics and I/O subsystems. Windows Server 2008 R2 and Windows 7 already contain the necessary modifications; on other supported OSes, you need to install the integration components for this.
Also, the integration components provide the following functionality:
- Heartbeat - helps determine whether a child partition is responding to requests from the parent partition (see the sketch after this list).
- Registry key exchange - allows registry keys to be exchanged between the child and parent partitions.
- Time synchronization between host and guest OS
- Shut down the guest OS
- Volume Shadow Copy Service (VSS), which allows you to get consistent backups.
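For example, the heartbeat status of the child partitions can be read from the parent partition through the same WMI provider. The sketch below assumes the Msvm_HeartbeatComponent class of the root\virtualization (v1) namespace used by Windows Server 2008 R2; check the class name on your build before relying on it.

```powershell
# Hedged sketch: read the heartbeat status reported by each guest's integration components.
# Msvm_HeartbeatComponent is assumed to live in the root\virtualization (v1) namespace of
# Windows Server 2008 R2; SystemName holds the GUID of the corresponding VM.
Get-WmiObject -Namespace "root\virtualization" -Class Msvm_HeartbeatComponent |
    Select-Object SystemName, OperationalStatus
```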
Non-Windows OS that supports integration components
There are also operating systems that do not belong to the Windows family but support integration components. At the moment these are only SUSE Linux Enterprise Server and Red Hat Enterprise Linux. When the integration components are installed, such OSes use third-party VSCs to interact with the VSPs over VMBus and access the hardware. The integration components for Linux are developed by Microsoft together with Citrix and are available for download from the Microsoft Download Center. Since the integration components for Linux were released under the GPL v2 license, work is under way to integrate them into the Linux kernel through the Linux Driver Project, which will significantly expand the list of supported guest OSes.
Instead of a conclusion
With this, I will probably finish my second article on the Hyper-V architecture. The previous article raised some questions from readers, and I hope that I have now answered them.
I hope the reading was not too boring. I used “academic language” quite often, but this was necessary because the subject of the article implies a very large amount of theory and practically zero point zero tenths of practice.
Many thanks to Mitch Tulloch and the Microsoft Virtualization Team; the article was prepared based on their book Understanding Microsoft Virtualization Solutions.