Open Source Virtualization: Report on Innovations and Approaches to Virtualization of Corporate Infrastructure
As promised, we are publishing reviews following HP Education Day. This is a transcript of a very solid talk. The first part covers enterprise virtualization in general: its different variants, what is new and interesting in corporate-network virtualization, updates in VMware 6, and so on. The second part covers how to manage this virtual infrastructure: a cloud based on OpenStack, the HP Helion solution, and many other interesting aspects of corporate virtualization.
Introduction

I prepared my talk for quite a long time, and still, everything I planned to cover had already been touched on by earlier speakers. So I decided to approach the topic from a different angle; I hope you find it interesting. What do I want to talk about? First, what is new and interesting in virtualization in the corporate network. Second, how we will manage this virtual infrastructure.
I will begin with the first. A few brief words about what virtualization means when used in corporations. At its core, virtualization is resource sharing, and nothing more; how that is achieved is no longer anyone's concern. But today the task of virtualization has gone even further: we now divide not only resources but also functional responsibilities. Where we used to say that we formed a storage block, a compute block, and a virtual network, we now also have a separate structure for managing the virtualization hardware, a separate one for working with users, a separate one for monitoring, and a separate one for analysis.
A few words about analyzing infrastructure capabilities: BSM has already been presented to you, and so has analytics, so I will talk about other things.
If we are talking about virtual infrastructure, we face fairly serious problems. Which ones? First, once we divide resources and divide responsibilities, we get a more complex structure: the more components we have, the more complex it is. The system also changes rapidly. You need a machine, you create it; you no longer need it, you destroy it; you move a machine from one data center to another, and you gain new programs and lose old ones. A lot of equipment now falls under shared responsibility, with several administrators who, as a rule, sit in different departments. And once you begin to divide areas of responsibility and share resources, you need a quality-of-service policy.
How do we manage this structure properly? We divide it into levels. Nothing better has been invented: 50 or 60 years ago, when IT infrastructure was only beginning, we started the same way. Divide the structure into levels, provision each level separately, and deal with each level separately. Let's look at these levels a little, and then move on to the overall structure.
The first level is, of course, compute resources. This includes the CPU, memory, some kind of data bus, host bus adapters. What is interesting here? Resource pools. We create resource pools, and we create clusters both for high availability and for resource management. Here we add rules for placing machines together or apart (affinity and anti-affinity), and we enable reservations and limits. Another plus: hardware virtualization. Of what? First, the CPU. The advantage of hardware virtualization is that we can devote more time to user tasks and less to the housekeeping of virtualization itself. There is hardware CPU virtualization, and there is hardware bus virtualization.
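As a rough illustration of the placement rules mentioned above, affinity and anti-affinity can be thought of as constraints a scheduler checks before starting a machine on a host. This is a toy sketch with hypothetical names, not VMware's actual DRS logic:

```python
# Toy placement checker illustrating affinity / anti-affinity rules.
# All names are hypothetical; real schedulers (e.g. VMware DRS) are far richer.

def placement_ok(vm, host, placements, affinity, anti_affinity):
    """placements maps VM name -> host name for already-running VMs."""
    for group in anti_affinity:          # VMs in the group must be on different hosts
        if vm in group:
            for other in group:
                if other != vm and placements.get(other) == host:
                    return False
    for group in affinity:               # VMs in the group must share a host
        if vm in group:
            for other in group:
                if other != vm and other in placements and placements[other] != host:
                    return False
    return True

placements = {"db1": "esx-a", "db2": "esx-b"}
anti_affinity = [{"db1", "db2", "db3"}]  # spread database replicas apart
affinity = [{"web1", "cache1"}]          # keep the web server next to its cache

print(placement_ok("db3", "esx-a", placements, affinity, anti_affinity))  # False
print(placement_ok("db3", "esx-c", placements, affinity, anti_affinity))  # True
```

A real scheduler would also weigh load, reservations, and limits; the point here is only that such rules constrain where a machine may be placed.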
Hardware bus virtualization, such as SR-IOV, gives our virtual machines almost direct access to the hardware platform. Where an ordinary, even fairly well-designed system worked through a virtual software layer, with bus virtualization we can address the hardware functions of our host bus adapters directly. The advantage is that the host CPU no longer handles these tasks.
The second block: the storage system. What does its virtualization consist of? We divide the storage system into small pieces and distribute them among all the virtual machines. The advantage of this approach is obvious; for a start, virtually all hardware vendors have adopted it.
And what is the obvious benefit? Flexibility. A disk that is partitioned and assembled into virtual volumes is much easier to grow, shrink, move, optimize, convert from thin to thick provisioning and back, back up, snapshot, and restore; in a word, it is easier to ensure the required quality of service. An additional capability of such a virtual storage system is integration with the hardware platform through various acceleration methods. In VMware, for example, there is hardware storage integration: commands are passed to the array instead of writing the actual data. The WRITE SAME and WRITE ZERO primitives, already mentioned today, are used in thin provisioning. The same technologies are used when creating backups from storage snapshots, during migration, and when building high-availability clusters: you switch a virtual machine from one host to another and at the same time transparently move it from one storage system to another.
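The thin-provisioning idea mentioned above can be sketched in a few lines: physical space is consumed only on first write, while unallocated blocks read back as zeros. This is a purely illustrative toy model; the real primitives (WRITE SAME and so on) operate at the SCSI level inside the array:

```python
# Toy model of a thin-provisioned disk: blocks consume physical space only
# when written, and unallocated blocks read back as zeros. Illustrative only;
# real VAAI-style offloads work at the SCSI command level.

class ThinDisk:
    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks
        self.allocated = {}              # block index -> data actually stored

    def write(self, block, data):
        self.allocated[block] = data     # physical space consumed on first write

    def read(self, block):
        return self.allocated.get(block, b"\x00")  # unallocated reads as zero

    def used(self):
        return len(self.allocated)       # physical blocks consumed

disk = ThinDisk(logical_blocks=1000)     # a 1000-block virtual disk
disk.write(5, b"hello")
print(disk.used())                        # 1: only one block physically allocated
print(disk.read(7))                       # b'\x00' returned without any allocation
```

This is also why zeroing offload pays off: the hypervisor can tell the array "these blocks are zero" instead of shipping gigabytes of zero data over the wire.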
Virtual network

If we start talking about a virtual network, this is about handling the packets transferred between our virtual machines. Here it is very important for us:

First, to provide all the protocols that we need in a normal network. Second, to provide communication with the hardware components of the network, the same switches that you install in a physical network. And third, to offload the CPU as much as possible.
How can this be done? With exactly the same approach that conventional switches and routers have used for quite some time. For example, Cisco Express Forwarding: two tables. One table is responsible for packet switching, and the second tracks the topology and updates the switching table as the topology changes. This scheme, for example, is built into OpenFlow technology, which today is being introduced into all virtualization options, including those actually offered by HP.
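The two-table approach described above can be sketched as follows: a control-plane table tracks the topology, and a fast-path forwarding table is recomputed from it whenever the topology changes. This is a hypothetical toy model, not the actual CEF or OpenFlow data structures:

```python
# Toy two-table forwarder: a topology (control) table and a forwarding table
# rebuilt from it, in the spirit of Cisco Express Forwarding or an OpenFlow
# controller pushing flow entries. Names and structure are illustrative only.

class Forwarder:
    def __init__(self):
        self.topology = {}   # destination prefix -> next hop (control plane)
        self.fib = {}        # destination prefix -> output port (fast path)
        self.ports = {}      # next hop -> physical port

    def link_up(self, next_hop, port):
        self.ports[next_hop] = port
        self._rebuild()

    def route_update(self, dest, next_hop):
        self.topology[dest] = next_hop
        self._rebuild()

    def _rebuild(self):
        # Recompute the forwarding table from the topology table.
        self.fib = {d: self.ports[nh]
                    for d, nh in self.topology.items() if nh in self.ports}

    def forward(self, dest):
        return self.fib.get(dest)        # O(1) lookup on the fast path

fw = Forwarder()
fw.link_up("router-b", port=2)
fw.route_update("10.0.0.0/24", "router-b")
print(fw.forward("10.0.0.0/24"))         # 2
```

The payoff is that the per-packet path never walks the topology: switching is a plain table lookup, and only topology changes trigger the rebuild.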
What does all this lead to? Most data centers are moving toward cloud technology. The natural question is: do you need it? The question is non-trivial, because everything depends on what kind of network you really have. If you have three ESXi hosts, you do not need a cloud yet. If you have 30, you might think about it. If you have 300, you probably should already be moving. What are the advantages of the cloud? You separate the system that manages the physical virtualization hosts from the system that creates new operating systems for new applications. You have two administrators: one keeps the hardware running, the other provides the interface for users. A user creating a new virtual machine has no idea what you really have at the hardware level.
Moreover, today we are moving to the next stage (some have already arrived, some are still thinking, some are working on it now), when we create not a virtual machine with an operating system but a ready application. You need a web server as a front end? You create it. You need a web server as a back end with Java? You create it. And you are not even interested in what the operating system is; you are interested in having a container on which your application will run. That is roughly how it works; whether you need it now, think it over. Why? Because that flexibility and simplicity turn into greater resource consumption and greater complexity of setup. If you do this all the time, it is better to set it up once, after which everything works. If you only need to create new machines every two or three weeks or once a month, it is not obvious the game is worth the candle. But keep an eye on this technology; you will need it too.
The cloud itself provides three service models (according to the standard from NIST, the American standards institute):
Level number one, the lowest: Infrastructure as a Service. This is what almost all large data centers have come to, and what I think you will also have to work with: providing resources on demand. You need several OSes, you need a switch, you need to give several machines access to the Internet. That is infrastructure as a service. You are not tied to any applications yet; you simply create, as needed, several new operating systems with the specified connection parameters.
Level number two: Platform as a Service. You use it for collaborative application development. The advantage is that you are no longer interested in the operating system. Your task, together with a large team (or a not very large one), is to develop and launch applications, with the ability to run them on a regular machine, inside your private virtual data center, or to move them to a classic public cloud, some Amazon or HP Helion, which we will also look at a little today.
The topmost level, the top of the whole pyramid, is Software as a Service. Here you are provided simply with the application and an API for interacting with it, which lets you use the application when you need it. You buy the application for the desired period of time. For example, do you need an office suite? You bought it. You are not worried about patches, updates, or keeping track of any of it. You get full access for interacting with this software. You can create a new machine with an office suite, with Apache, with a database, and remove it when you no longer need it. Activate and deactivate.
Who does all this? Here we come back to the question I started with. We have two levels. Level one is virtualization that runs on a real hardware platform. We will start with it, then move on to level number two.
What's new
Let's start with VMware. Why? Well, it is the most popular; how could we do without it? And the best supported. You know that the sixth version of the VMware hypervisor and the management system came out at the same time, and the sixth version brought quite a few interesting features. Where do we start? Increased maximums. It is hard to say where the big bottlenecks were, but overall the maximums have grown quite seriously. The most interesting increase concerns Fault Tolerance for machines. Anyone who took the VMware courses knows that Fault Tolerance was presented as a very interesting solution: the machine always runs in lockstep on two hosts. At the end, the instructor always said: "And you will not use it, because it is useless: only one vCPU." So, starting with the sixth version, you can have four: a Fault Tolerance machine can now run four virtual CPUs. Now such a machine will be useful for more than demonstrating that VMware can create machines that do not fail.
Second item. If you have very few hypervisors and do not plan to expand your network at all, which is quite normal for small companies, VMware now gives you the ability to create local users. This used to be a problem: you either had to have an external domain already, or add users via the command line and then work with them through the web client. Starting with the sixth version, you can create a new user through the standard VMware interface, and you do not need an external authorization system. True, there are quite a few minuses, but the point is that it is now possible.
A question from the audience: "And what is the most important minus? If you suddenly want to create such users, what should you fear first of all?"
If you have local users, the problem is working in a cluster. For that you need vCenter, and local users are not visible there at all: a local user has access to only one host. That is the main problem to fear. It does not scale, and that is the only real issue. Everything else, well, if you have a small network, it is a wonderful thing. I myself have had cases where you bring up two, three, or four machines and had to install vCenter, otherwise nothing would work, just to quickly create users, even though no cluster and no migration were needed. There are also additional features: blocking a user after failed logins, password-complexity rules, but that is less interesting.
What else? Improved drivers, and new ones, including a kernel module to support Intel and Nvidia graphics accelerators. You can now fully use them as accelerators for virtual machines, with one caveat: they cannot also serve as the console. If you already use one as the console, you can no longer use it in full acceleration mode.
Further, on migration. In version 5.5, if I am not mistaken, or 5.1, Enhanced vMotion appeared, which let us change the host and the storage at the same time, but it worked only through the web client and was not supported in a cluster. Today there are two more migration options. The first is cross-switch vMotion, which lets you change the switch at the same time. Remember, there was a restriction that when you migrate, the two hosts must have the same standard switch, or share one distributed switch. Now you can change the switch as well, but everything must be in the same L2 network. Roughly speaking, this is a change of port group, not even VLAN for VLAN, because the IP address does not change. You can move the machine and then reconfigure the interface so that the IP address changes, but it will not change automatically; the IP address stays the same.
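The L2 constraint described above can be expressed as a trivial pre-check: the switch may differ between source and destination, but the port groups must sit in the same L2 segment (same VLAN), because the machine keeps its IP address. All names here are hypothetical, for illustration only:

```python
# Toy pre-check for a cross-switch vMotion, per the constraint described in
# the talk: source and destination port groups must be in the same L2 network
# (same VLAN), since the VM's IP address does not change. Hypothetical names.

def can_cross_switch_vmotion(src_pg, dst_pg):
    # The switch itself may change; the L2 segment may not.
    return src_pg["vlan"] == dst_pg["vlan"]

src = {"name": "pg-prod",     "switch": "vss-1", "vlan": 100}
dst = {"name": "pg-prod-dvs", "switch": "dvs-1", "vlan": 100}
dmz = {"name": "pg-dmz",      "switch": "dvs-1", "vlan": 200}

print(can_cross_switch_vmotion(src, dst))  # True: switch differs, VLAN matches
print(can_cross_switch_vmotion(src, dmz))  # False: different L2 segment
```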
A question from the audience: "How does this work with DRS?" In my opinion, it does not yet. Not yet, no. With DRS, the same thing happened as with Enhanced vMotion: it exists, but on its own, and DRS does not use it. It is more of a manual mechanism.
The other option is Cross vCenter vMotion: a hot migration between two vCenters with a full change of switch, host, and storage, while keeping the single IP address. What else? You can create a resource pool on both the source and the destination and limit the resources of the virtual machine; even after moving, the machine will still obey the resource limits. But this, too, is manual only.
Another interesting feature is the Content Library. The decision was long overdue. It used to work like this: you had an NFS share, and all the ISO images were shared through it. What can be done now? A special service is created that runs on all vCenters: a single database that by default handles vApps, images, virtual machine templates, and scripts, plus arbitrary file types can be kept there without any additional functions. How does it work? Each vCenter has its own set of items, but you can say: vCenter number one publishes virtual machine templates to vCenter number two. vCenter number two by default stores only a link saying that the image it needs lives on vCenter number one, and when you need to deploy that machine, it pulls the template into its own storage. The result is a distributed database with a federated, on-demand repository: you download only what you need. The library itself can be public or private. That is, you either share it with everyone, or you define the circle of machines that will use it.
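The publish/subscribe flow described above can be sketched as follows: the subscriber keeps only a link and fetches an item on demand, at deploy time. The class and method names are hypothetical; the real vCenter Content Library API is different and much richer:

```python
# Toy model of a published/subscribed content library: the subscriber holds
# only links and pulls ("syncs") an item on demand, at deploy time.
# Names are hypothetical; the real vCenter Content Library API differs.

class Library:
    """The publishing library: actually stores the templates."""
    def __init__(self):
        self.items = {}                  # name -> template bytes

class SubscribedLibrary:
    """A subscriber: starts with links only, downloads on first use."""
    def __init__(self, publisher):
        self.publisher = publisher
        self.cache = {}                  # locally downloaded templates only

    def deploy(self, name):
        if name not in self.cache:       # on-demand sync: pull only when needed
            self.cache[name] = self.publisher.items[name]
        return f"deployed VM from {name}"

pub = Library()
pub.items["centos7-template"] = b"<ovf...>"
sub = SubscribedLibrary(pub)
print(len(sub.cache))                    # 0: just a link, nothing downloaded yet
print(sub.deploy("centos7-template"))    # template is fetched at deploy time
print(len(sub.cache))                    # 1
```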
A question from the audience: "Does transitivity work there? Number one publishes to number two, number two to number three. Does the third see the first share?"
Frankly, I do not even know, because if you share with many consumers, it is recommended to make the library public, and everyone sees a public one. A chain of private libraries I have not tried. I will look into it; right now I cannot say, but it is worth checking.
New capabilities: Network I/O Control version 3
Network I/O Control version 3 allows you to give a guaranteed bandwidth to each virtual machine interface, and reservations can again be set at the subgroup level. Moreover, since completely different machines can use the same switch, a multi-tenant environment is created. A tenant is a unit that includes several ports or subgroups, and for each such block you can create its own reservations, which will not affect the other subgroups. So where did VMware 6 go? They again went in the right direction: resource management. With virtualization itself, everything is clear; until something new appears in hardware, nothing fundamentally new will appear in software. But they have taken resource management quite seriously.
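The guaranteed-bandwidth idea above amounts to an admission check: a new reservation is accepted only if the total of all guarantees still fits the uplink's capacity, and each tenant's reservations are accounted separately. This is a toy sketch in the spirit of Network I/O Control v3, with hypothetical names, not the real mechanism:

```python
# Toy admission check for per-tenant bandwidth reservations on a shared
# uplink, in the spirit of Network I/O Control v3. Illustrative only.

class Uplink:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reservations = {}           # (tenant, vm_nic) -> reserved Mbps

    def reserve(self, tenant, vm_nic, mbps):
        if sum(self.reservations.values()) + mbps > self.capacity:
            return False                 # would overcommit guaranteed bandwidth
        self.reservations[(tenant, vm_nic)] = mbps
        return True

    def tenant_total(self, tenant):
        # One tenant's guarantees never depend on another tenant's usage.
        return sum(v for (t, _), v in self.reservations.items() if t == tenant)

up = Uplink(capacity_mbps=10_000)        # a 10 GbE uplink
print(up.reserve("tenant-a", "web1-nic0", 2_000))   # True
print(up.reserve("tenant-b", "db1-nic0", 4_000))    # True
print(up.reserve("tenant-b", "db2-nic0", 5_000))    # False: exceeds capacity
print(up.tenant_total("tenant-b"))                   # 4000
```

The key property is isolation: a rejected or greedy reservation in one tenant cannot eat into the bandwidth already guaranteed to another.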