Hi, Habr! Let me ask you a question, and you think about it: what was once wildly popular, and once thrilled you personally, but today is remembered only for nostalgia's sake? Someone will surely recall the Dendy or the Super Nintendo, someone else their pager. Why am I asking? There is a saying: "nothing lasts forever." In today's article we'll see whether that holds in IT: should you abandon VMware in favor of Hyper-V when it comes to virtualization? We will also touch on the advantages of both platforms and the process of moving from one to the other. Details under the cut!

Over to the author.
Disclaimer:
- This article is informational in nature; it is not an attempt to ride the hype, just a desire to share a story that may be useful to someone. Some things are purely individual, and the judgments are my own.
- No, Microsoft did not buy me. I simply spent a long time looking for articles of this kind as food for thought, found none, and had to write one myself.
- I do not have an invite to the MS blog; colleagues there liked the idea of the article and offered to publish it.
- There will be no product PR, just a story about live testing and implementation.
Lyrical digression
We live in an amazing time. And perhaps a frightening one, depending on how you look at it. Things I read about in science-fiction books literally 20 years ago are now possible, even though back then the future was set 200, 500, 1000 years ahead. Flights to other planets, leaving our solar system, "apple trees blooming on Mars" - all of it seemed distant and unattainable.
And now we (well, almost) have a nuclear-powered space engine, a plan to fly to Mars in 2024, and a probe beyond our solar system.
So, what am I getting at? All of this became possible thanks to (or in spite of) rapidly developing computer technology. Let's talk about one of those technologies.
Epigraph
There was once a company. Neither big nor small: a solidly mid-sized business. It lived quietly with several racks of equipment, old gear inherited from predecessors. The time came to refresh the whole estate. The team priced out new hardware, thought it over, and decided to implement virtualization. That was many years ago, and of the glorious family of general-purpose virtualization platforms only VMware existed back then. So VMware it was. Time passed, tasks changed, and other members of that glorious family grew up. And the time came to choose a champion again...
The main question of an IT professional is “Why?”
(or "What for?")
Let me introduce myself. My name is Anton, and I head the infrastructure solutions department at one of the large Russian retailers. Like any self-respecting organization, we use virtualization and, of course, our "beloved" 1C. We implemented VMware long ago and lived with it reasonably well on the whole (though there are plenty of stories about the gray hairs it gave me), but as with any technology, you occasionally have to look around and learn about the alternatives.
Our transition story began when I saw Hyper-V in the same corner of the Gartner Magic Quadrant as VMware. That got me thinking. The result of that thinking was a "for/against" table. Add to that VMware's notorious Changed Block Tracking (CBT) bugs, twice, in two different releases. Quite the show.
A minute of hype
Here a joke comes to mind:
"How do you tell that a man is an ardent vegan? You don't need to. He'll tell you himself."
It's the same here: how do you spot an ardent Linux zealot? You don't need to. He'll tell you himself that Linux is God's grace and Windows is a creation of the prince of darkness.
Haters will immediately take their stance and, spraying saliva, start telling you how Microsoft updates break the whole OS to hell. I won't argue with that; my friends have had their share of such cheerful gifts. But overall, in my opinion, Microsoft has the evolution process down to a fine art. Revolution is not their thing, but evolution is their strong suit. Everyone chooses what suits them.
Let me make a reservation right away: there was also a feature-by-feature comparison, but in real life nobody "of sound mind and sober memory" builds a cluster at the limit values. Besides, on paper the two look almost like twin brothers, and I see no fundamental difference in how many hundreds of cores you can assign to a single virtual machine.
A minute of holy war
Why are VMware's "killer features" largely just marketing?
Fault Tolerance. Seriously? Have you read the restrictions? Do you really use it in production? If yes, then I sincerely, humanly feel sorry for you... In all my time, I have never seen it genuinely come in handy for anyone.
USB and PCI device passthrough. Also a very debatable point. These features deprive a virtual machine of the main advantage of virtualization: free migration between hosts. We used PCI passthrough, but as soon as we could drop it, we breathed a sigh of relief. For USB passthrough, both software and hardware solutions were invented long ago. Much simpler.
Caching read data to local SSDs. Yes, when it came out I was very happy about this capability. But in practice I saw no gain even on synthetic tests. And in the production environment I occasionally caught wild hangs of this system (I am not saying the system is to blame; perhaps my hands were crooked). And the cherry on top: this system caches only blocks of a certain size, so you have to spend a lot of time gathering statistics on disk-request sizes and deciding which virtual machine should get priority for this technology.
But Hyper-V can shrink a virtual disk out of the box. Do you know how many times I dreamed of that in VMware? Far more than you can imagine.
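For reference, a shrink is a two-step operation: first shrink the partition inside the guest so the tail of the disk becomes unallocated, then shrink the VHDX itself. A minimal sketch with made-up paths and sizes (in Windows Server 2016 this also works online for a .vhdx attached to the virtual SCSI controller):

```powershell
# Inside the guest: shrink the NTFS partition first, freeing up
# space at the end of the virtual disk.
Resize-Partition -DriveLetter D -Size 80GB

# On the Hyper-V host: shrink the VHDX down toward the new partition size.
# Shrinking only works for the .vhdx format, not .vhd.
Resize-VHD -Path 'D:\VMs\data.vhdx' -SizeBytes 90GB
```

`Get-VHD -Path 'D:\VMs\data.vhdx'` reports a `MinimumSize` property, which tells you how far the file can actually be shrunk.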
One more thing. Switching to another hypervisor is an individual decision, but here is my list of stop factors; if any of them apply, then in my opinion you definitely should not move to Hyper-V. Or at least think things through and test very carefully.
- Most of your servers run Linux as the guest OS.
- You need to run something exotic.
- You need ready-made virtual appliances from vendors (I think this is just a matter of time).
- You do not like Microsoft.
- You got VMware for free with your hardware.
A table for reflection

| For moving to Hyper-V | Against moving to Hyper-V |
|---|---|
| Lower VMware licensing costs | VMware's platform is well known |
| Azure is built on the same platform | Distribution size (spoiler: Nano Server is not an analogue of ESXi; it has a slightly different ideology and positioning) |
| Interesting network virtualization | Simpler licensing scheme |
| Replication of VMs to other storage systems using built-in tools | Support for a large number of guest operating systems |
| Bonuses when buying a virtualization bundle (the Core Infrastructure Suite, which includes Windows Server Datacenter + System Center) | VMware already works |
| Various perks when deploying Windows servers | No support for the hypervisor as a separate product |
| You can shrink disks on the fly | VDI here is only fit for labs/tests, not for production |
| Faster support for new versions of Windows | Interesting turnkey virtualization bundles, where you buy hardware and software from one vendor and get one management console and one support window |
| This is Microsoft | This is Microsoft |
"Leap of faith"
I thought and pondered for a long time, but then the stars aligned and we refreshed the server fleet. The old servers remained, and they were not bad ones, just slow by today's standards and, moreover, morally obsolete. So the strategic decision was made to build a development farm on Hyper-V. We hauled the servers to the new site, updated all the server firmware, and off we went.
The test plan was simple:
- Take the server.
- Install ESXi on it. Change nothing; default settings.
- Deploy a virtual machine.
- Run the tests 5 times:
a) the Gilev test for 1C;
b) a custom SQL test script.
- Tune according to best practices.
- Run the tests 5 times:
a) the Gilev test for 1C;
b) a custom SQL test script.
- Install Hyper-V. Change nothing; default settings.
- Deploy a virtual machine.
- Run the tests 5 times:
a) the Gilev test for 1C;
b) a custom SQL test script.
- Tune according to best practices.
- Run the tests 5 times:
a) the Gilev test for 1C;
b) a custom SQL test script.
- Install Windows Server on the physical machine, tune it according to best practices, and run the same tests.
- Compare and think.
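The "5 runs per configuration" loop can be sketched roughly like this. `Invoke-GilevTest` and `Invoke-SqlTest` are hypothetical wrappers around the actual test scripts, not real cmdlets; only the aggregation logic is standard PowerShell:

```powershell
# Run both tests five times and collect the numbers per run
$results = 1..5 | ForEach-Object {
    [pscustomobject]@{
        Run   = $_
        Gilev = Invoke-GilevTest   # hypothetical: runs the 1C Gilev test, returns "parrots"
        Sql   = Invoke-SqlTest     # hypothetical: runs the SQL script, returns seconds
    }
}

# Average the five runs for the summary tables below
$results | Measure-Object -Property Gilev, Sql -Average
```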
Hardware: a Dell FC630 with two Intel Xeon E5-2643 v4 processors (chosen specifically for 1C) and 512 GB of memory.
Storage: a SAN based on Dell SC200 with read-intensive SSDs.

We got the following results:

| VMware without best practices | Gilev test | SQL test |
|---|---|---|
| 1 | 22.42 | 12.2 |
| 2 | 18.6 | 17.51 |
| 3 | 18.12 | 7.12 |
| 4 | 26.74 | 7.18 |
| 5 | 26.32 | 4.22 |

| VMware with best practices | Gilev test | SQL test |
|---|---|---|
| 1 | 26.46 | 4.28 |
| 2 | 26.6 | 6.38 |
| 3 | 26.46 | 4.22 |
| 4 | 26.46 | 6.56 |
| 5 | 26.6 | 4.2 |

| Hyper-V without best practices | Gilev test | SQL test |
|---|---|---|
| 1 | 27.17 | 4.32 |
| 2 | 26.46 | 6.08 |
| 3 | 26.04 | 4.24 |
| 4 | 26.18 | 5.58 |
| 5 | 25.91 | 6.01 |

| Hyper-V with best practices | Gilev test | SQL test |
|---|---|---|
| 1 | 26.18 | 6.02 |
| 2 | 27.62 | 6.04 |
| 3 | 26.46 | 6.2 |
| 4 | 26.74 | 4.23 |
| 5 | 26.74 | 6.02 |

| Physical machine | Gilev test | SQL test |
|---|---|---|
| 1 | 35.97 | 4.06 |
| 2 | 32.47 | 4.04 |
| 3 | 31.85 | 6.14 |
| 4 | 32.47 | 5.55 |
| 5 | 32.89 | 5.43 |
Legend
Gilev test: abstract "parrots"; higher is better.
SQL test: execution time; lower is better.
What we configured:
1. Steps to prepare the DELL PowerEdge 630 host.
1.1. Tune the host according to DELL's recommendations:
1.1.1. Enable Processor Settings -> Virtualization Technology - enabled.
1.1.2. Enable Processor Settings -> Logical Processor - enabled.
1.1.3. Enable System Profile Settings -> Turbo Boost (in Turbo Mode documentation) - enabled.
1.1.4. Disable Memory Settings -> Node Interleaving (disabling it leaves NUMA enabled) - disabled.
1.1.5. Enable Power Management -> Maximum Performance - appears to be enabled.
1.1.6. Disable unnecessary devices in Integrated Devices - did not touch.
1.1.7. Disable System Profile Settings -> C1E - disabled.
1.1.8. Enable Processor Settings -> Hardware Prefetcher - enabled.
1.1.9. Enable Processor Settings -> Adjacent Cache Line Prefetch - enabled.
1.1.10. Enable Processor Settings -> DCU Streamer Prefetcher - enabled.
1.1.11. Enable Processor Settings -> Data Reuse - not found.
1.1.12. Enable Processor Settings -> DRAM Prefetcher - not found.
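The same BIOS toggles can be scripted through iDRAC instead of clicking through Setup. A sketch using racadm; the attribute names below are as they appear on 13G PowerEdge servers such as the FC630, but they vary between generations and firmware versions, so treat them as illustrative:

```powershell
# Set the BIOS attributes via iDRAC
racadm set BIOS.ProcSettings.ProcVirtualization Enabled
racadm set BIOS.ProcSettings.LogicalProc Enabled
racadm set BIOS.MemSettings.NodeInterleave Disabled
racadm set BIOS.SysProfileSettings.SysProfile PerfOptimized

# Queue a BIOS configuration job and power-cycle so the changes apply
racadm jobqueue create BIOS.Setup.1-1 -r pwrcycle -s TIME_NOW
```

This is convenient when you have to prepare several identical hosts: the commands can be run against each iDRAC over the network instead of attending every boot.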
1.2. Configure the host according to the vendor's recommendations:
1.2.1 Configure Fiber Chanel HBA.
1.2.1.1 While the host is booting, enter QLogic Fast!UTIL (CTRL+Q).
1.2.1.2 Select the first port.
1.2.1.3 Reset Configuration Settings -> Restore Default Settings.
1.2.1.4 Enable Configuration Settings -> Adapter Settings -> Host Adapter BIOS -> Enable.
1.2.1.5 Enable Configuration Settings -> Adapter Settings -> Host Adapter BIOS -> Connection Options -> 1.
1.2.1.6 Enable Configuration Settings -> Advanced Adapter Settings -> Enable LIP Reset -> Yes.
1.2.1.7 Enable Configuration Settings -> Advanced Adapter Settings -> Enable LIP Full Login -> Yes.
1.2.1.8 Enable Configuration Settings -> Advanced Adapter Settings -> Login Retry Count -> 60.
1.2.1.9 Enable Configuration Settings -> Advanced Adapter Settings -> Port Down Retry Count -> 60.
1.2.1.10 Enable Configuration Settings -> Advanced Adapter Settings -> Link Down Timeout -> 30.
1.2.1.11. Configure the second port for items 1.2.1.3 - 1.2.1.10.
2. Testing steps on the VMware platform without best practices.
2.1 Install VMware ESXi 5.5 with all updates.
2.2 Apply the necessary settings on VMware (the host is not joined to a cluster; we test it separately).
2.3 Install Windows 2016 and all updates.
2.4 Install 1C:Enterprise. Configure if necessary; for now we use the defaults. 1C version: 8.3.10 (the latest).
2.5 On a separate machine, install Windows 2016 with SQL Server 2016, with all updates.
2.6 Perform tests (5 times).
3. Testing steps on the VMware platform according to best practices.
3.1 Configure the host according to the recommendations:
3.1.1 Place the swap file on the SSD: Cluster -> Swap file location -> Store the swap file in the same directory as the VM; Configuration -> VM Swapfile location -> Edit.
3.1.2 Enabling the vSphere Flash Infrastructure layer is recommended; I am not sure how feasible that is in our environment.
3.1.3 Configure SAN Multipathing via Host -> Configuration -> Storage -> Manage Paths -> Path Selection -> Round Robin.
3.1.4 Enable Host -> Configuration -> Power management -> Properties -> High Performance.
3.2 Configure the VM according to the recommendations:
3.2.1 Use paravirtual disks: VM -> SCSI Controller -> Change Type -> Paravirtual.
3.2.2 It is advisable to use Thick provision eager zeroed disks.
3.2.3 Turn on VM -> Options -> CPU / MMU Virtualization -> Use Intel VTx for instruction set and Intel EPT for MMU Virtualization.
3.2.4 Disable VM BIOS -> Legacy diskette and VM BIOS -> Primary Master CD-ROM.
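Most of the VM-level tweaks in 3.2 can also be applied from PowerCLI rather than the vSphere client. A sketch under the assumption of a vCenter at `vcenter.example.local` and a VM named `1c-test` (both made up); the cmdlets themselves are standard PowerCLI:

```powershell
Connect-VIServer -Server 'vcenter.example.local'

$vm = Get-VM -Name '1c-test'

# 3.2.1: switch the SCSI controller to the paravirtual type
Get-ScsiController -VM $vm | Set-ScsiController -Type ParaVirtual

# 3.2.2: add a thick, eager-zeroed data disk (size is illustrative)
New-HardDisk -VM $vm -CapacityGB 200 -StorageFormat EagerZeroedThick
```

The controller-type change requires the VM to be powered off, and the guest needs the PVSCSI driver (included in VMware Tools) before its boot disk is moved to that controller.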
4. Testing steps on the Windows Server (Hyper-V) platform without best practices:
4.1 Install Windows Server 2016 Datacenter on the host with all updates.
4.2 Apply the necessary settings on the host.
4.3 Install a virtual machine with Windows and all updates.
4.4 Install 1C:Enterprise. Configure if necessary; for now we use the defaults. 1C version: 8.3.10 (the latest).
4.5 On a separate machine, install Windows Server 2016 with SQL Server 2016 with all updates.
5. Testing steps on the Windows Server (Hyper-V) platform according to best practices.
Best practices are described here, here, and here.
5.1 Configure the host according to the recommendations:
5.1.1 Enable MPIO:
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO
(Get-WindowsOptionalFeature -Online -FeatureName "MultiPathIO").State
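With the feature enabled, you can also claim the SAN LUNs for MPIO and set round-robin as the default policy, mirroring the Round Robin path selection used on the VMware side. The cmdlets are from the built-in MPIO module; whether `-AllApplicable` picks up your particular array depends on the storage, so this is a sketch:

```powershell
# Claim all eligible FC/SAS devices for MPIO
New-MSDSMSupportedHW -AllApplicable

# Round-robin across all active paths by default
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```

A reboot is usually required before the new devices are claimed; afterwards `mpclaim -s -d` shows the disks and their policies.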
5.2 Configure the VM according to the recommendations:
5.2.1 Use Generation 2 VMs.
5.2.2 Use fixed-size disks in the VM.
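Put together, points 5.2.1 and 5.2.2 look roughly like this when creating the test VM from PowerShell. The names, sizes, and paths are made up; the cmdlets are the standard Hyper-V module:

```powershell
# 5.2.2: a fixed-size (fully pre-allocated) disk instead of a
# dynamically expanding one
New-VHD -Path 'D:\VMs\sql-test.vhdx' -SizeBytes 200GB -Fixed

# 5.2.1: a Generation 2 VM attached to that disk
New-VM -Name 'sql-test' -Generation 2 -MemoryStartupBytes 64GB `
       -VHDPath 'D:\VMs\sql-test.vhdx' -SwitchName 'LAN'
```

Note that Generation 2 implies UEFI boot, so the guest must be a 64-bit OS that supports it (Windows Server 2012 and later, or a suitably recent Linux).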
Is there life on Mars?
It seemed that life was good: the tests showed that the calculations and the bets were right, and now that long-desired nirvana would finally arrive... So I thought and hoped, until we set up a cluster for developers in test mode.
I will not lie: installation really is simple and unpretentious. The system itself checks everything it needs for happiness, and if something is missing, instead of sending you to the nearest store for it, it shows a detailed report on what is wrong and even advises how to fix the problem. In this respect I liked the Microsoft product much more.
I immediately remembered the story of a five-day correspondence with VMware technical support about a problem during the upgrade to 5.5. It turned out to be fun stuff: if you use a dedicated account on the SQL server for the vSphere connection, the password must be no longer than 14 characters (or 10, I do not remember now), because otherwise the system simply truncates it and throws away part of the password as unnecessary. Truly reasonable behavior.
But the real fun began later. One server crashed and refused to see its network card (in the end, the OS had nothing to do with it). Then the servers began to lose quorum. Then servers started randomly dropping out of the cluster. VMM barely worked and often simply could not connect to the farm. Then servers in the cluster started going into a paused state. Then, during migration, machines began showing up on two hosts at once. In short, the situation seemed close to disaster.
But, gathering our courage, we decided to fight on anyway. And you know what? Everything worked out. The network card problems turned out to be hardware; the cluster problem was resolved once the network was configured correctly. And after we reinstalled the host OS and VMM with the English versions, everything settled down. That made me sad: it is 2017, yet you still need to install English Windows to have fewer problems. An epic fail, in my opinion. As a bonus, though, searching the web for error messages became much easier.
As a result, the cluster came up, VMM works correctly, and we started handing out virtual machines to users.
By the way, a separate cauldron in hell is reserved for whoever came up with the VMM interface and its logic... To say it is incomprehensible is to say nothing. When I first opened it, I had the distinct feeling I was looking at the dashboard of an alien ship. The shapes seem familiar, but there is no understanding of what is where and why. Perhaps in a few years I will get used to it. Or just memorize the actions like a monkey.
So what is it like once you have finally started the tractor?
Overall, my emotions and impressions from the transition are positive. The templates and their capabilities for Microsoft operating systems are beyond comparison with their VMware counterparts: very convenient, with a huge number of bells and whistles, most of them quite sensible. For now we are running the developers' cluster and getting used to the new life.
I was also very, very pleasantly surprised by the migration of machines from VMware. At first I read forums, searched for software, and wondered how it would go. It turned out everything had already been invented for me. We connected vCenter to VMM using two accounts and, right from VMM, said "dear comrade, please hand this virtual machine over to the new hypervisor." And the funniest thing: it migrated. On the first try. Without a tambourine and without errors. In the end, a migration test for which I had planned a week fit into 40 minutes, 20 of which was the migration itself.
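For the curious, the conversion is driven by VMM's V2V cmdlets. A rough sketch of what that looks like from the VMM PowerShell console; the server and VM names are made up, and the exact parameter set varies between VMM versions, so check Get-Help on your build:

```powershell
# The vCenter server was added to VMM beforehand, so its
# inventory is visible through Get-SCVirtualMachine
$source = Get-SCVirtualMachine -Name 'old-vmware-vm'
$target = Get-SCVMHost -ComputerName 'hyperv-host-01'

# Convert the VMware VM into a Hyper-V VM on the target host
New-SCV2V -VM $source -VMHost $target -Path 'D:\VMs' -Name 'old-vmware-vm'
```

The source VM should be powered off for the conversion, and VMware Tools is best removed from the guest beforehand, since it cannot be cleanly uninstalled once the machine is no longer running on ESXi.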
What is missing:
- A small distribution tailored purely for virtualization (an ESXi analogue).
- A normal management console (the current one is inconvenient, especially after VMware's, but there is hope for Project Honolulu; judging by the technical preview, it should offer comparable ease of management).
- Technical support for the virtualization product. Yes, I know Premier Support exists, but that is not at all what I want.
Summing up (if you are too lazy to read the article):
- The performance of the two platforms is now about the same.
- 1C performance is identical.
- In Hyper-V, virtual disks can be both grown and shrunk. And online at that.
- Very, very simple migration from VMware.
- Trouble with support in its usual sense.
- VMM is extremely inconvenient, especially after vCenter. On the other hand, VMM is just a graphical shell over PowerShell scripts, so you can drive all of it through the usual PowerShell CLI.
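As an illustration of the "graphical shell over PowerShell" point: everything the console does is available from the VirtualMachineManager module. A couple of examples with a made-up VMM server and VM name:

```powershell
Import-Module VirtualMachineManager
Get-SCVMMServer -ComputerName 'vmm.example.local'

# The same actions the GUI wizards perform, one cmdlet each
Get-SCVirtualMachine -Name 'dev-vm-01' | Start-SCVirtualMachine
Get-SCVirtualMachine | Select-Object Name, Status, VMHost
```

The console even has a "View Script" button in most wizards that shows the exact PowerShell it is about to run, which is a decent way to learn the module.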
- The transition requires retraining and understanding the subtleties of Hyper-V. Many things and ideological approaches differ.
- Gorgeous Windows virtual machine templates. Amazingly convenient.
- Saving money.
- The Software-Defined Storage implementation is more interesting, in my opinion, but that is a matter of taste.
- Respect for the fact that all of Azure is built on the company's own technologies, which then arrive on-premises.
- Simple and very tight integration with the cloud.
- Good network virtualization, with many interesting touches.
- In my opinion, VDI is not about Microsoft and Hyper-V. On the other hand, application streaming (RemoteApp) is done quite soundly, and for most companies it will be little worse than the likes of Citrix.
- Weak third-party vendor support for ready-made Hyper-V virtual machine images (I assume this is temporary).
- A very strange new licensing policy (per core).
About the author

Anton Litvinov has spent the last 6 years at 585/Gold. He has gone from network engineer to head of the infrastructure solutions department and, as a result, combines Dr. Jekyll and Mr. Hyde: a full-stack engineer and a manager. Around 20 years in IT.