
Real virtuality, or “640 kilobytes of memory is enough for everyone”

With this article I am opening a series of posts devoted to working with memory in Windows Server 2008 R2 SP1 Hyper-V. In a nutshell, SP1 brings two interesting innovations for Hyper-V:

- RemoteFX
- Dynamic Memory

This series (there will be several articles, so stock up on beer ;) is about Dynamic Memory. That said, Dynamic Memory in Hyper-V itself will most likely be discussed only in the last article :)

Why was server virtualization invented in the first place? As I mentioned in the article "Why is virtualization necessary?", one of the reasons is higher consolidation and more rational use of hardware resources. You can either dedicate a separate server to every single task, with that server using its resources at half capacity at best, or consolidate all of those tasks as virtual machines on one physical server, driving its utilization close to 100% and eliminating the need to buy a pile of hardware.
So what am I getting at? The fact that many customers want to have their cake and eat it too, or, in more scientific language, to place as many virtual machines as possible on a single server without losing performance. Unfortunately, many of them make a mistake at the very first step: determining the server's resource requirements. A simple example: ask any acquaintance working in the field how much memory they specify when ordering a new server. Then follow up with more specific questions:

The questions can be refined further and further, which is why it is so hard to give a definite answer to "How much memory does the server need?". Estimating the required system resources is called "sizing" and has grown into a whole science. No wonder people most often act on one of the three options listed:

Of course, with any of the options above we end up with a not quite (or not at all) optimal solution: either we buy more memory than we need and waste money, or we buy less than we need and run into performance problems. It would be far more interesting if memory could be allocated flexibly, depending on the load, and used as efficiently as possible.

Memory overcommitment


Since we are talking about memory and virtualization, it is impossible not to mention a phrase that regularly sparks heated debates: memory overcommitment. Put into plain IT language, this expression means "allocating more of a resource (in our case, memory) to virtual machines than physically exists". Roughly speaking, giving three VMs 1 GB of memory each when the server physically has only 2.5 GB. The main reason the phrase "memory overcommitment" has caused such fierce battles between supporters of different virtualization platforms is that Microsoft does not provide such technology: if the server has 3 GB of memory, that is exactly how much you can hand out to virtual machines, and not a byte more. This is supposedly bad because it rules out so-called overselling.

What is overselling? Internet providers are a prime example. They sell Internet access at speeds of "up to 8 Mbit/s", while in reality the speed will most likely hover around 4-6 Mbit/s, because the provider's uplink is somewhat narrower than 8 Mbit/s multiplied by the number of subscribers. However, during certain phases of the moon, at midnight, when Mercury is in Aquarius and nobody but you is watching videos online, the speed may well reach the promised 8 Mbit/s. If you want exactly 8 Mbit/s and not a baud less, there are tariffs with guaranteed bandwidth at, oddly enough, a completely different price. So, with Hyper-V there is no overselling: each virtual machine is allocated a fixed amount of RAM, which can be changed, but only while the guest OS is shut down.
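To make the arithmetic behind overcommitment and overselling concrete, here is a minimal Python sketch of the ratio "promised resources vs. physically available resources". The VM and host numbers come from the example above; the subscriber count and uplink width in the ISP example are made-up numbers, used purely for illustration:

```python
# Illustrative arithmetic only -- no hypervisor or provider APIs involved.
# The VM/host figures match the example in the text; the ISP figures
# (100 subscribers, 500 Mbit/s uplink) are invented for illustration.

def overcommit_ratio(promised: float, physical: float) -> float:
    """How much resource has been promised relative to what physically exists."""
    return promised / physical

# Three virtual machines with 1 GB of RAM each on a host with 2.5 GB.
vm_memory_gb = [1, 1, 1]
host_memory_gb = 2.5
print(overcommit_ratio(sum(vm_memory_gb), host_memory_gb))  # 1.2 -> memory is overcommitted

# An ISP selling "up to 8 Mbit/s" to 100 subscribers over a 500 Mbit/s uplink.
promised_mbit = 8 * 100
uplink_mbit = 500
print(overcommit_ratio(promised_mbit, uplink_mbit))  # 1.6 -> bandwidth is oversold
```

Any ratio above 1.0 means the platform is promising more than it physically has and is betting that not all consumers will claim their full share at the same time.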
But we are (I hope) still techies and hardened materialists, so phases of the moon and other bio-energetics do not interest us. Let's see what actually stands behind the phrase "memory overcommitment".
Strangely enough, there is no single answer: there are as many as three technologies that can be described by these words, namely:

The next article is devoted to Page Sharing: it covers the page-sharing technique itself in detail, as well as the pros and cons of using it. You can read it here. In future articles I will try to talk about the other technologies listed, as well as about the Dynamic Memory feature itself in Windows Server 2008 R2 SP1, and why it does not fit the term "memory overcommitment".


Source: https://habr.com/ru/post/93241/

