
Analysis of modern virtualization technologies


Virtualization technologies are becoming increasingly popular today, and this is no accident: the computing power of servers keeps growing. Six-, eight- and sixteen-core processors are now common (and this is far from the limit), the bandwidth of computer interfaces keeps increasing, and storage systems are becoming larger and faster. As a result, a single physical server often has enough capacity to host all of the servers an organization or enterprise operates, once they are moved into a virtual environment. Modern virtualization technology makes this possible.

Virtualization is now one of the key components of the IT infrastructure of large enterprises and organizations, and it is hard to imagine building a new server node for a company without it. Despite some shortcomings, the decisive factors behind this popularity are savings of money and time, a high level of security, and the continuity of business processes.

This article analyzes modern virtualization technology, its advantages and disadvantages, and reviews current virtualization systems and approaches to building virtual environments.

Modern virtualization can be understood in different ways. In general, to virtualize something means to take it in one form and make it appear in another: computer virtualization makes one computer appear as several computers at the same time, or as a completely different computer.
The term is also used for the opposite situation, when several computers are presented as a single one; this is usually called clustering or grid computing.

Virtualization is not a new topic; it has been around for more than four decades. IBM recognized its importance back in the 1960s, alongside the development of mainframe-class computers. The System/360 Model 67, for example, virtualized all hardware interfaces through a Virtual Machine Monitor (VMM). At the dawn of the computing era the operating system was called the supervisor; when it became possible to run one operating system on top of another, the term hypervisor appeared (introduced in the 1970s).

The VMM runs directly on the host hardware and allows you to create multiple virtual machines (VMs), each of which can run its own operating system.

Another use of virtualization is processor simulation, the so-called P-code (pseudo-code) machine. P-code is a machine language that runs on a virtual machine rather than on real hardware. It became widely known in the early 1970s: Pascal programs were compiled into P-code and then executed on a P-code virtual machine.
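
To make the idea concrete, here is a minimal sketch of a P-code-style stack machine in Python. The instruction set (PUSH, ADD, MUL, PRINT) is a toy one invented for illustration; it is not the historical P-code of the Pascal system, but it shows how a program can run on a virtual machine instead of real hardware.

# A tiny stack-based virtual machine: the "program" is a list of virtual
# instructions interpreted in software rather than executed by the CPU directly.
def run_pcode(program):
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":
            print(stack.pop())

# (2 + 3) * 4, roughly as a compiler front end might emit it for the VM
run_pcode([("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None), ("PRINT", None)])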

A newer aspect of virtualization is instruction set virtualization, or binary translation. Here, virtual instructions are translated into the physical instructions of the underlying hardware, usually dynamically: as the code executes, it is translated one segment at a time, and when a branch into not-yet-translated code occurs, the new code segment is fetched and translated.
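
The toy sketch below illustrates the mechanism under stated assumptions: a hypothetical mini-instruction set whose basic blocks are translated into host code (here, generated Python) the first time they are reached and then cached, so later executions run the translated form directly. It demonstrates only the translate-on-branch-and-cache idea; a real binary translator emits native machine code.

# Toy "guest" program: basic blocks of (opcode, operand) pairs, indexed by address.
GUEST_CODE = {
    0: [("LOAD", 5), ("JMP", 1)],      # acc = 5
    1: [("DEC", None), ("JNZ", 1)],    # loop: acc -= 1 until acc == 0
    2: [("HALT", None)],
}

translation_cache = {}   # guest block address -> compiled host function

def translate_block(addr):
    # Emit host (Python) source for one guest basic block, then compile it.
    lines = ["def block(state):"]
    for op, arg in GUEST_CODE[addr]:
        if op == "LOAD":
            lines.append(f"    state['acc'] = {arg}")
        elif op == "DEC":
            lines.append("    state['acc'] -= 1")
        elif op == "JMP":
            lines.append(f"    return {arg}")
        elif op == "JNZ":
            lines.append(f"    return {arg} if state['acc'] != 0 else {addr + 1}")
        elif op == "HALT":
            lines.append("    return None")
    namespace = {}
    exec("\n".join(lines), namespace)
    return namespace["block"]

def run(start):
    state, pc = {"acc": 0}, start
    while pc is not None:
        if pc not in translation_cache:           # branch into untranslated code
            translation_cache[pc] = translate_block(pc)
        pc = translation_cache[pc](state)         # run the cached translation
    return state

print(run(0))   # prints {'acc': 0}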

There are several ways to implement virtualization, all of which achieve similar results through different levels of abstraction. Each method has its advantages and disadvantages, and each finds its place depending on the application.

Probably the most complex form of virtualization is hardware emulation. In this method, a VM of the hardware is created on the host system to emulate the hardware of interest.

One particularly interesting use of hardware emulation is the co-development of embedded software and hardware.


Hardware emulation uses a VM to simulate the necessary hardware.

Instead of waiting for real hardware to be available, embedded software developers can use virtual hardware to develop and test software.

The main problem with hardware emulation is that it significantly slows down the programs running in such an environment: because every instruction must be simulated on the underlying hardware, a 100-fold slowdown is not uncommon. Nevertheless, hardware emulation has significant advantages. For example, it lets you run an unmodified operating system built for a PowerPC® processor on a system with an ARM processor, and you can run multiple virtual machines, each simulating a different processor.
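
As an illustration, here is a minimal sketch of driving a hardware emulator from Python, using QEMU in full-emulation mode to run an ARM board on, say, an x86 host. The machine model and the kernel image path are assumptions chosen for illustration and should be adjusted for the target actually being emulated.

import subprocess

cmd = [
    "qemu-system-arm",            # emulate an ARM machine regardless of the host CPU
    "-M", "versatilepb",          # assumed machine (board) model
    "-m", "128",                  # guest RAM in MiB
    "-kernel", "zImage",          # hypothetical guest kernel image
    "-append", "console=ttyAMA0",
    "-nographic",                 # use the terminal as the guest serial console
]
subprocess.run(cmd, check=True)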

Full (hardware) virtualization, or “native” virtualization, is another approach. This model uses a virtual machine manager (hypervisor) that mediates between the guest operating system and the system hardware.


Full virtualization uses the hypervisor to share the underlying hardware.

All interaction between the guest operating system (OS) and the hardware goes through the hypervisor. Protection must be provided inside the hypervisor, because the underlying hardware is not owned by any single OS but is shared among the guests through the hypervisor. Hardware virtualization is what is typically used when building large corporate systems, and major vendors such as VMware, IBM and Microsoft build their virtualization platforms on the Intel VT (VT-x) and AMD-V hardware virtualization extensions.
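
A quick way to see whether a Linux host exposes these extensions is to look at the CPU flags it reports; the short sketch below is a rough illustration of that check, not a complete capability test.

# Look for the "vmx" (Intel VT-x) or "svm" (AMD-V) flag in /proc/cpuinfo (Linux only).
def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        cpuinfo = f.read()
    if " vmx" in cpuinfo:
        return "Intel VT-x"
    if " svm" in cpuinfo:
        return "AMD-V"
    return None

print(hw_virt_support() or "No hardware virtualization extensions detected")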

Paravirtualization is another popular technique with some similarities to full virtualization. It also uses a hypervisor to share access to the underlying hardware, but it integrates virtualization-aware code into the guest operating system itself. This removes the need for recompilation or trapping, because the operating systems cooperate in the virtualization process.


Paravirtualization shares the process with the guest operating system.

Paravirtualization requires the guest OS to be modified for the hypervisor, which is a disadvantage of the method. In return it offers performance close to that of a non-virtualized system, and, as with full virtualization, several different operating systems can be supported simultaneously. Another drawback is the limited number of supported operating systems: changes must be made to the OS kernel, which is not always possible given the closed nature of some operating systems.

Among well-known hypervisors, Xen and its derivatives (Citrix XenServer, XCP) use paravirtualization alongside hardware virtualization.
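
For illustration, here is a minimal sketch, assuming a Xen dom0 host with the xl toolstack installed: it checks whether the machine is running under Xen (via the /sys/hypervisor/type node that Xen populates) and, if so, lists the guest domains.

import subprocess

def running_under_xen():
    try:
        with open("/sys/hypervisor/type") as f:
            return f.read().strip() == "xen"
    except OSError:          # the node is absent when not running under Xen
        return False

if running_under_xen():
    # 'xl list' prints the domains known to the Xen hypervisor, dom0 included
    subprocess.run(["xl", "list"], check=True)
else:
    print("Not running on a Xen hypervisor")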

Operating-system-level virtualization. This technique virtualizes servers on top of the operating system itself. It supports a single operating system and, in the most general case, simply isolates independent virtual servers (containers) from one another. Sharing the resources of one server between containers requires changes to the operating system kernel (as in the case of OpenVZ, for example), but the benefit is native performance, without the overhead of device virtualization.


Operating system-level virtualization isolates virtual servers.

This approach is used in Solaris Containers, FreeBSD jails and Virtuozzo/OpenVZ on Linux and *BSD, as well as in Linux Containers (LXC), about which much has already been written on Habr.
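
As a rough sketch of what working with containers looks like, the snippet below drives the LXC command-line tools from Python; the container name, distribution and release are illustrative assumptions, and the lxc-* utilities must already be installed on the host.

import subprocess

NAME = "demo-ct"   # hypothetical container name

# Create a container from the "download" template (distro/release are assumptions)
subprocess.run(["lxc-create", "-n", NAME, "-t", "download", "--",
                "-d", "ubuntu", "-r", "jammy", "-a", "amd64"], check=True)

# Start it and run a command inside: the container shares the host kernel,
# so this is isolation rather than hardware emulation.
subprocess.run(["lxc-start", "-n", NAME], check=True)
subprocess.run(["lxc-attach", "-n", NAME, "--", "uname", "-r"], check=True)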

Now let us try to answer the question: why do we need virtualization? There are many reasons today. Perhaps the most important one is so-called server consolidation: simply put, the ability to run several virtualized systems on a single server. This lets an enterprise or organization save on power, space, cooling and administration, because fewer servers are needed. An important factor here is abstraction from the hardware: servers sometimes fail, and virtualization makes it possible to redistribute the load across the remaining equipment. Not being tied to any particular piece of hardware greatly simplifies the life of the IT department and reduces the risk of downtime.

Another reason to use virtualization is that it is often hard to predict the load on a server in advance. Virtualization supports so-called live migration: moving a running operating system and its applications to another server in order to balance the load across the available hardware.
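
In practice this is often a single command. Below is a hedged sketch using libvirt's virsh CLI: the VM name and the destination host are placeholders, and the hosts are assumed to share the VM's storage and to allow SSH between them.

import subprocess

domain = "webserver-vm"                        # hypothetical VM name
dest = "qemu+ssh://host2.example.com/system"   # hypothetical destination host

# --live keeps the guest running while its memory is copied to the new host
subprocess.run(["virsh", "migrate", "--live", domain, dest], check=True)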

Using the capabilities of a modern PC, you can easily deploy a virtual server even on a home computer and later move it to other equipment. Virtualization also matters for developers: it lets you run multiple operating systems side by side, and if one of them crashes because of a bug, the hypervisor and the other operating systems keep working. This makes kernel debugging much like debugging an ordinary user application.
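
One concrete way to get this "kernel debugging like application debugging" workflow is to run the guest under QEMU with its built-in gdb stub. The sketch below starts such a guest (the disk image path is an assumption); a debugger on the host can then attach with "gdb vmlinux" followed by "target remote :1234".

import subprocess

subprocess.Popen([
    "qemu-system-x86_64",
    "-m", "1024",
    "-hda", "guest.img",   # hypothetical guest disk image
    "-s",                  # expose a gdb stub on TCP port 1234
    "-S",                  # pause the virtual CPU until the debugger says "continue"
])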

In general, the following advantages of using virtualization can be highlighted:


1. Lower cost of purchasing and maintaining equipment. In almost every company there are one or two servers that carry several roles, for example a mail server, a file server, a database server, and so on. Of course, several software systems (servers) performing different tasks can be run on a single physical machine, but it is quite common for newly installed software to require a dedicated server. In such cases a virtual machine with the required OS comes to the rescue. The same applies when the network needs several independent virtual servers, each with its own set of services and its own characteristics, existing as independent network nodes; a typical example is VPS hosting.

2. A smaller server fleet. Virtualization significantly reduces the number of physical computers, so less time and money is spent finding, purchasing and replacing equipment, and the floor space needed to house the servers shrinks as well.

3. A smaller IT staff. Maintaining fewer physical computers requires fewer people, and from the point of view of company management, staff reduction means cutting a serious expense item.

4. Easier maintenance. Adding a hard drive or expanding an existing one, or increasing the amount of RAM, all take time on a physical server: powering it off, pulling it from the rack, installing the new hardware, powering it back on. With virtualization these steps disappear, and the operation comes down to a few mouse clicks or administrative commands.

5. Cloning and backup. Another advantage of virtualization is how easy it is to clone a virtual machine. Suppose a company opens a new office while the server infrastructure of the head office is standardized and consists of several servers with identical settings: deploying the same infrastructure comes down to copying the images to the new office's server, configuring the network equipment and adjusting the application software settings (see the sketch below).
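
As an illustration of point 5, the hedged sketch below clones a reference VM with virt-clone (part of the virt-manager tool set). The VM names are placeholders, and the source VM is assumed to be shut down before cloning.

import subprocess

subprocess.run([
    "virt-clone",
    "--original", "office-template",   # hypothetical reference VM
    "--name", "branch-office-srv1",    # hypothetical name for the clone
    "--auto-clone",                    # let virt-clone pick paths for the new disk images
], check=True)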



Not using virtualization yet?


Today it is hard to imagine the IT industry without virtualization: the development of organizations' information systems is closely tied to these technologies. They can significantly reduce the costs of acquiring and maintaining server systems and shorten the time needed to restore data or to deploy similar systems on new equipment. If you are still not taking advantage of virtualization, now is the time to think about it.
Our company makes extensive use of container virtualization in almost all projects, and where the situation requires it, we use full virtualization as well. We recommend that every enterprise or organization running more than a couple of servers start adopting the technologies described in this article.


Source: https://habr.com/ru/post/212985/

