
Why do we need virtualization?

The word "virtualization" has recently become some kind of "fashion" in the IT environment. All hardware and software vendors, all IT companies in one voice shout that virtualization is cool, modern, and necessary for everyone. But let's, instead of being led by marketing slogans (and sometimes Goebbels himself would die of envy), try to look at this buzzword from the point of view of simple "techies" and decide whether we need it or not. .



Types of virtualization


Let's start with the fact that virtualization is divided into three types:

Many of you are familiar with presentation virtualization: the most vivid example is the terminal services of Windows Server. The terminal server provides its computing resources to clients: the client application runs on the server, while the client receives only a "picture", that is, the presentation. Such an access model, first, lowers the hardware and software requirements on the client side; second, reduces network bandwidth requirements; and third, improves security.

As for hardware, even smartphones or old computers down to a Pentium 166 can serve as terminal clients, not to mention specialized thin clients; there are even thin clients in the form factor of a Legrand wall socket, mounted in an outlet box. At the client workstation it is enough to install just a monitor, keyboard and mouse, and you can work. A high-speed connection to the terminal server is not required either: even a slow link with a bandwidth of 15-20 kbit/s is quite sufficient, so terminal solutions suit companies with a highly distributed structure very well (for example, chains of small shops).

When thin clients are used, security also improves noticeably: users can be allowed to run only a limited set of applications and prohibited from installing their own. In principle the same can be done with full-fledged workstations, but with terminal services it is much easier, especially if, instead of exposing the entire desktop, you publish only individual applications (possible in Citrix MetaFrame/Presentation Server, and also in Windows Server 2008 and higher). Moreover, no information can be copied to or from external media unless this is explicitly permitted in the terminal services settings, so the problem of "viruses on flash drives" disappears by itself. Another indisputable advantage is reduced administration effort: to update an application it is enough to update it once on the server, and support becomes simpler too, since you can connect to any user's terminal session remotely without installing additional software.
There are two disadvantages to such systems: first, the need to buy more powerful servers (although this may still be cheaper than many client workstations with specifications sufficient to run the applications locally); second, the appearance of a single point of failure in the form of the terminal server. This problem is solved by using clusters or server farms, but that makes the system even more expensive.
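By the way, whether the terminal server answers at all (and how quickly) is easy to check even without an RDP client. Below is a minimal Python sketch; the host name ts01.example.local is made up for illustration, and 3389 is the standard RDP port:

```python
import socket
import time

TS_HOST = "ts01.example.local"  # hypothetical terminal server; replace with your own
RDP_PORT = 3389                 # default port for Remote Desktop / terminal services

def probe_rdp(host: str, port: int, timeout: float = 5.0) -> float:
    """Open a TCP connection to the RDP port and return the handshake time in ms."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only care that the port answers
    return (time.monotonic() - start) * 1000

if __name__ == "__main__":
    try:
        rtt = probe_rdp(TS_HOST, RDP_PORT)
        print(f"{TS_HOST}:{RDP_PORT} is reachable, TCP handshake took {rtt:.1f} ms")
    except OSError as exc:
        print(f"{TS_HOST}:{RDP_PORT} is not reachable: {exc}")
```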

Application virtualization is quite an interesting and relatively new direction. I will not discuss it in detail here, since that is a topic for a whole separate article. In short, application virtualization allows you to run an individual application in its own isolated environment (sometimes called a sandbox). This solves many problems. First, security again: an application running in an isolated environment cannot harm the OS or other applications. Second, all virtualized applications can be updated centrally from a single source. Third, application virtualization makes it possible to run on the same physical PC several applications that conflict with each other, or even several versions of the same application. For more information about application virtualization, see, for example, this webcast: www.techdays.ru/videos/1325.html. Maybe one day I will even write an article on this topic.
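To make the sandbox idea more tangible, here is a toy Python sketch that runs a program with its own throwaway working directory and a stripped-down set of environment variables. Real application virtualization products go much further (they intercept file system and registry calls), so this only illustrates the isolation idea, not how those products are implemented:

```python
import os
import subprocess
import tempfile

def run_sandboxed(cmd: list[str]) -> int:
    """Run a command in a throwaway working directory with a minimal environment."""
    with tempfile.TemporaryDirectory() as scratch:
        minimal_env = {
            "PATH": os.environ.get("PATH", ""),  # keep PATH so the binary can be found
            "HOME": scratch,                     # point "home" at the scratch dir
            "TMPDIR": scratch,
        }
        # Note: on Windows a few more variables (e.g. SystemRoot) may be needed.
        result = subprocess.run(cmd, cwd=scratch, env=minimal_env)
        return result.returncode

if __name__ == "__main__":
    # The sandboxed process sees an almost empty environment and an empty "home".
    run_sandboxed(["python3", "-c", "import os; print(sorted(os.environ))"])
```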

And finally, we turn to server virtualization and dwell on it in detail.
Server virtualization is the software emulation of a computer's hardware: processor, memory, hard disk, and so on. An operating system can be installed on such a virtual computer, and it will work on it exactly as on an ordinary "iron" one. The most interesting advantage of this technology is the ability to run several virtual computers inside one "iron" computer, with all the virtual machines working independently of one another. What can this be used for?
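For a taste of what "several virtual computers inside one iron one" looks like through an API, here is a sketch using the libvirt Python bindings, common on KVM/Xen hosts. I use libvirt purely as a vendor-neutral illustration (Hyper-V has its own WMI-based management interfaces), and the qemu:///system URI assumes a local KVM host:

```python
import libvirt  # pip install libvirt-python; requires a libvirt daemon on the host

# Connect to the local hypervisor (KVM/QEMU in this example).
conn = libvirt.open("qemu:///system")
try:
    # List every virtual machine defined on this physical host.
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        print(f"{dom.name():30s} {state}")
finally:
    conn.close()
```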
The first thing that comes to mind is that server virtualization can be used for learning and testing purposes. For example, new applications or operating systems can be tested in a virtual environment before being put into production, without buying hardware specifically for this and without risking paralyzing the IT infrastructure if something goes wrong.

But beyond that, server virtualization can also be used in a production environment. There are many reasons for this.
Virtualization makes it possible to reduce the number of servers through consolidation: where several servers used to be required, you can now install one and run the necessary number of guest OSes in a virtual environment. This saves on hardware acquisition and reduces power consumption, and hence the heat output of the system, which in turn allows less powerful and therefore cheaper cooling. But this medal has a downside, and more than one. When implementing a virtualization-based solution, you will most likely have to buy new servers: virtual machines consume the hardware resources of the physical server, so you will need more powerful processors, more RAM, a faster disk subsystem, and most likely more of all of these. In addition, some virtualization systems (in particular, MS Hyper-V) require processor support for hardware virtualization (Intel VT or AMD-V) and certain other CPU features. Many processors sold until recently, in particular all 32-bit x86 chips, do not meet these requirements, so older, though perfectly working, servers will have to be given up. However, one more powerful server is likely to be much cheaper than several weaker ones, and the old servers are probably due for replacement anyway because of obsolescence.
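On a Linux machine, by the way, you can check whether a processor offers the required hardware virtualization support by looking for the vmx (Intel VT) or svm (AMD-V) flag in /proc/cpuinfo. A short Python sketch (Linux-only, since it reads /proc):

```python
def hw_virt_support() -> str:
    """Return which hardware virtualization the CPU advertises, if any."""
    with open("/proc/cpuinfo") as f:
        flags = f.read()
    if " vmx" in flags:
        return "Intel VT"
    if " svm" in flags:
        return "AMD-V"
    return ""

if __name__ == "__main__":
    print(hw_virt_support() or "No hardware virtualization support detected")
```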

There is another very important point: server virtualization makes it possible to simplify infrastructure administration dramatically. The main advantage, which all system administrators will appreciate, is remote access to the virtual server's console at the "hardware" level, or more precisely at the "virtual-hardware" level, regardless of the installed guest OS and its state. So to restart a hung server you no longer need to run to the server room or buy expensive equipment such as IP-KVM switches: just open the virtual server's console and click the "Reset" button. In addition, virtual servers support snapshot technology (see my previous article about it), and backup and recovery of virtual systems are much simpler.
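Continuing the libvirt illustration from above, the "Reset button" and a snapshot look roughly like this through the API (the VM name web01 is made up; in Hyper-V the same operations are performed through its own management console or WMI):

```python
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("web01")  # hypothetical VM name

# Hard reset, equivalent to pressing Reset on a physical box.
dom.reset(0)

# Take a named snapshot of the VM's current state.
snapshot_xml = "<domainsnapshot><name>before-update</name></domainsnapshot>"
dom.snapshotCreateXML(snapshot_xml, 0)

conn.close()
```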

Another distinct advantage is that an OS running inside a virtual machine (the guest OS) has no idea what hardware is installed in the physical server on which it runs (the host). So when replacing hardware, upgrading, or even moving to a new server, drivers need to be updated only in the host operating system; the guest OSes will work as before, since they only "see" virtual devices.

Also, I would like to remind you that special software licensing rules may apply in a virtual environment (in particular, a license for Microsoft Windows Server 2008 Enterprise allows you to run four guest copies of the OS for free, and Microsoft Windows Server 2008 Datacenter allows an unlimited number of guest OSes, provided all processors are licensed).

Fault-tolerance technologies also deserve a mention. The physical servers that run virtual machines can be joined into a cluster, and if one of the servers fails, its virtual machines automatically "move" to another. Full fault tolerance is not always achieved (in particular, in MS Hyper-V such a sudden move looks the same, and has the same possible consequences, as a sudden power loss on a server), but downtime is greatly reduced: the "move" takes a few minutes, whereas repairing or replacing the server itself may take hours or even days. And if the relocation of virtual machines happens in planned mode, it can go completely unnoticed by users. Different vendors call such technologies differently: for example, Microsoft calls it Live Migration, while VMware calls it vMotion. These technologies make it possible to carry out work that requires shutting a server down (for example, replacing hardware components or rebooting the OS after installing critical updates) during working hours, without driving users out of their favorite applications. In addition, if the infrastructure is built properly, running virtual machines can be moved automatically to less loaded servers or, conversely, away from the most loaded ones; in an infrastructure based on Microsoft technologies, System Center Virtual Machine Manager and Operations Manager are used for this.
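For illustration, here is what a planned "move" of a running VM looks like through the libvirt API (the host names and VM name are made up; in a Microsoft infrastructure this is done through Failover Clustering and SCVMM rather than with code like this):

```python
import libvirt

src = libvirt.open("qemu+ssh://host-a/system")  # source host
dst = libvirt.open("qemu+ssh://host-b/system")  # destination host

dom = src.lookupByName("web01")  # hypothetical VM name

# Migrate the running VM without shutting it down; users keep working.
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST
dom.migrate(dst, flags, None, None, 0)

src.close()
dst.close()
```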

To conclude the topic of server virtualization, I will note that virtualization is not always equally useful. In particular, it is not always a good idea to move heavily loaded servers into a virtual environment, especially ones that load the disk subsystem hard: "heavy" DBMSes, Exchange Server (particularly the Mailbox Server role), and other high-load applications. Servers with lighter loads (AD domain controllers, WSUS, the various System Center * Manager servers, web servers) can, and even should, be virtualized. I will note, by the way, that for domain controllers specifically it is highly desirable that at least one controller remain "iron", that is, non-virtual. This is necessary because, for the whole infrastructure to work correctly, at least one DC should be available on the network while all the other servers are coming up.

Summary



So let's summarize: when each kind of virtualization can be useful, and what its pros and cons are.
If you have a lot of users working with the same set of software and the system is highly distributed geographically, you should think about using presentation virtualization, that is, terminal services.

Advantages of such a system:

- low requirements for client hardware: thin clients, old PCs, or even smartphones will do;
- low network bandwidth requirements (a 15-20 kbit/s link is enough);
- increased security: users run only the published applications, and copying to and from external media can be prohibited;
- simpler administration: applications are updated once on the server, and support staff can connect to any user's session remotely.

Disadvantages:

- more powerful (and more expensive) servers are required;
- the terminal server becomes a single point of failure (clusters and farms mitigate this, at extra cost).


If you have many applications that work incorrectly in a new OS or conflict with each other, or if you need to run several versions of the same program on the same computer, then you need application-level virtualization.

Advantages:

- isolation: a virtualized application cannot harm the OS or other applications;
- centralized updating of all virtualized applications from a single source;
- the ability to run conflicting applications, or several versions of the same application, on one PC.

Disadvantages:


If you need to free up rack space, reduce power consumption, and get rid of the "server zoo", then your solution is server virtualization.

Advantages of such a solution:

- consolidation: fewer physical servers, lower power consumption and cooling costs;
- simpler administration: console access at the "virtual hardware" level, snapshots, easier backup and recovery;
- guest OSes are independent of the physical hardware: when it changes, drivers are updated only on the host;
- favorable licensing terms (for example, the guest OS rights of Windows Server 2008 Enterprise and Datacenter);
- fault tolerance and live migration of virtual machines between cluster nodes.

The disadvantages are, in principle, the same as for terminal solutions:

- more powerful servers are needed, with hardware virtualization support (Intel VT or AMD-V);
- the virtualization host becomes a single point of failure, which is addressed with clusters, again at extra cost.


I hope my article will be useful to someone. As always, thanks and constructive criticism can be expressed in the comments.

Source: https://habr.com/ru/post/91503/

