
Xen Cloud Platform in Enterprise [1]

Among enterprise virtualization systems, XCP is the only one that is both free (as in freedom) and free of charge. The history of XCP goes back to XenServer, which, although based on an open-source hypervisor, was very much paid software. Citrix published the XenServer code under a free license, and from then on XenServer began to smoothly transform into the Xen Cloud Platform.

In this series of articles I will talk about using XCP under a single administrative authority, when the virtual machines and the virtualization infrastructure are managed by the same organization (that is, the typical scenario of virtualizing an enterprise's servers). There will be few examples and command-line switches in these articles (the administration guide published on the Citrix website covers those well enough); instead I will talk about concepts, terms and the relationships between objects.

From the user's point of view, the main difference between plain Xen (as shipped with most operating systems) and XCP is the installation process and the amount of preparation needed before going into production. XCP comes as an ISO image with a ready-made OS for dom0 (CentOS), adapted to serve the hypervisor and support the operation of hosts in a cloud. Xen usually comes as hypervisor + utilities, and it is assumed that the administrator will build everything else himself. Another bonus for those who have to deal with Microsoft products is signed Windows drivers (you can install them under plain Xen with some tricks, but in XCP they are native).
XCP is a rather peculiar platform. It is not “closed” in the sense that, say, Hyper-V is closed, but it comes as a complete OS, many aspects of whose configuration are controlled by the platform rather than by the OS. Take networking: you can hang an IP address on any interface with ifconfig, but the consequences will be sad - for managing networks and interfaces you should use the platform's own tools, as in the sketch below.
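For example, a minimal sketch of assigning an address the platform's way, with the xe CLI (the UUID and addresses here are hypothetical placeholders):

```bash
# find the physical interface (PIF) you want to configure
xe pif-list device=eth0

# assign a static IP through the platform, not through ifconfig
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static \
    IP=192.168.0.11 netmask=255.255.255.0 gateway=192.168.0.1
```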

XCP consists of several components that cover different aspects of the system: Xen, xapi, Open vSwitch, the xe CLI, stunnel, squeezed.

First, the system requirements:

1) If Windows virtualization is planned (that is, HVM domains), processors with Intel VT / AMD Pacifica support are mandatory.
2) If the cloud is planned with more than one server, network storage (iSCSI or NFS) is mandatory.
3) Hosts (if there is more than one) should be strictly identical - the same processor stepping, motherboard, etc.
4) Hosts must be in the same L2 segment (i.e. connected through a switch, not through a router).

Now, actually, to the point.

XCP terminology


Host - a server that does the virtualizing.
Pool - a union of identical hosts that allows migration.
SR - storage repository - the place where virtual machines are stored (either a local disk or NFS/iSCSI storage). To be precise, an SR is the information about the storage. Each host has its own PBD (physical block device) connecting the host to the SR. The presence of a PBD for the SR on each host is a precondition for machine migration.
VDI - virtual disk image - needs, I think, no explanation. Can be either a file or an LVM logical volume.
VM - a virtual machine.
VBD - virtual block device - an XCP-specific construct, a logical connection between a VDI and a block device inside a virtual machine.
network - a network (more precisely, a network record). Similarly to SRs, hosts connect to a network using a PIF (physical interface).
VIF - virtual interface - a logical construct connecting a network and a virtual machine. Unlike a VBD it is more “real”: it can be seen in the list of network interfaces while the virtual machine is turned on.
VLAN - a VLAN is a VLAN. If VLANs are used, they form a layer between the network and the PIF (one PIF can carry several VLANs, and the VLANs are included in networks).
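All of these are first-class objects in xapi and can be inspected with the xe CLI; a quick sketch of the corresponding list commands:

```bash
xe host-list          # hosts
xe pool-list          # the pool
xe sr-list            # storage repositories
xe pbd-list           # host-to-SR connectors
xe vdi-list           # disk images
xe vm-list            # virtual machines
xe vbd-list           # VDI-to-VM connections
xe network-list       # networks
xe pif-list           # physical interfaces
xe vif-list           # virtual interfaces
```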

Pools


A pool is an abstraction that unites hosts. A pool has a configuration (state) describing all (well, almost all) aspects of everything's configuration - hosts, the pool, networks, SRs, virtual machines, etc. Each host stores a full replica of the state, although there is only one master per pool. The master pushes changes out to all subscribers roughly once every 15 seconds (the subscribers being the hosts, and possibly external observers using the XenAPI). In addition, changes to specific components are announced “in real time”. The master role can be reassigned on the fly (almost without disturbing normal operation and with no effect on the virtual machines). If the master crashes, the hosts can be repointed to a new master on the fly. Adding a host to a pool, or removing one, requires rebooting it; moreover, all virtual machines on that host are lost (if the machines lived in a pool with several hosts and were stored on a shared SR, they remain available to run on the pool's other hosts; if they were local, they are destroyed). For housekeeping purposes, hosts in a pool can be disabled/enabled without removing them from the pool (in effect this is simply a ban on starting new machines on them).

If there is only one host, it forms a pool of its own. If a host joins someone else's pool, it “forgets” its own pool and accepts the other one. Hosts in a pool always belong to exactly one pool and know nothing about other pools (that is, there is always exactly one pool; it does have a unique identifier, but that is a mere formality).
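A sketch of the typical pool operations with the xe CLI (addresses, credentials and UUIDs below are hypothetical examples):

```bash
# join a host to an existing pool (run on the joining host; it will reboot)
xe pool-join master-address=192.168.0.10 master-username=root master-password=secret

# hand the master role to another host on the fly
xe pool-designate-new-master host-uuid=<host-uuid>

# take a host out of service without removing it from the pool
# (in effect: forbid starting new machines on it)
xe host-disable uuid=<host-uuid>
xe host-enable uuid=<host-uuid>
```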

Virtual machines


Virtual machines come in two kinds: hardware-virtualized (HVM) and paravirtualized (PV). Paravirtualized machines are always preferable to HVM, because a PV machine uses a special kernel that “assists” virtualization and issues hypervisor calls (hypercalls) directly, rather than going through the hypervisor's interception of privileged instructions (as happens in HVM). Windows can only run in HVM mode, because Microsoft has not published its kernel code under a license that would allow adapting it for efficient operation in PV mode.

A virtual machine in XCP is a considerably more complex entity than a domain in plain Xen. The virtual machine “exists” even while it is turned off. A virtual machine has a large set of attributes that are used to start and operate it (in fact, this config is the “virtual machine”).

Associated with a virtual machine are the concepts of VBD (virtual block device) and VIF (virtual network adapter). Both disks and network adapters can come in multiples (I have not tested the limits thoroughly, but eight are certainly possible, and the device numbering allows creating at least hundreds).

Among a virtual machine's important attributes: memory quotas, processor quotas, and the number of allowed cores (from 1 to 16 in the current configuration).

An important feature: XCP lets you change a virtual machine's memory size on the fly, but it does not allow any kind of overselling (i.e. telling a virtual machine it has more memory than actually exists). The maximum amount of memory that can be handed out to virtual machines equals the host's memory minus overhead (about 512 MB). Memory can be moved between machines on the fly, but the total cannot be exceeded. Each machine can have its own swap and use it as much as it likes.
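A sketch of resizing memory on the fly via dynamic memory ranges (the sizes and UUID here are hypothetical examples):

```bash
# let squeezed balance this VM between 512 MiB and 2 GiB on the fly
xe vm-memory-dynamic-range-set uuid=<vm-uuid> min=512MiB max=2GiB
```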

Processors can be attached and detached on the fly (this is something of a fiction: in reality particular virtual processors are simply allowed and/or forbidden). Not every program likes this (atop, for example, loses its mind if processors are poked on the go). A virtual machine can be given a quota (as a percentage of compute time) and/or a priority for cases of contention for the processor.

For particularly fine-tuned configurations you can give a virtual machine certain cores (processors) for exclusive use (vCPU pinning), as sketched below.
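A sketch of the corresponding xe parameters (cap, weight and mask are the documented VCPUs-params keys; the values and UUID are examples):

```bash
# cap: at most 50% of one physical CPU, even if the host is idle
xe vm-param-set uuid=<vm-uuid> VCPUs-params:cap=50

# weight: relative priority under contention (the default is 256)
xe vm-param-set uuid=<vm-uuid> VCPUs-params:weight=512

# mask: pin this VM's vCPUs to physical cores 0 and 2
xe vm-param-set uuid=<vm-uuid> VCPUs-params:mask=0,2
```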

Network


The network is the trickiest area of virtualization. To implement virtual networks, XCP uses Open vSwitch and OpenFlow technology. Describing this technology is well beyond the scope of this series; I will only say that it allows the switch's control “logic” to be pulled out into a separate application. A network can be attached to physical adapters, or it can be purely virtual. Unfortunately, purely virtual networks do not survive migration properly (for communication between virtual machines on different hosts you must use a network attached to the switch that connects the hosts). A newly created virtual network adapter is plugged into a virtual network. It can operate both in normal (unicast) mode and in promiscuous mode (listening to all of the network's traffic). In principle there is no limit on the number of a virtual machine's network adapters on a single network. In the current implementation this network does not support jumbo frames, but it does support offloading the CRC calculation of outgoing frames to the control domain (and, given capable hardware, TCP offload as well).

Of course, a network need not be associated with a physical adapter; it can be tied to a VLAN instead - in that case all of the network's traffic leaves the host tagged, over the trunk.
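A sketch of wiring this up with the xe CLI (names, VLAN tag and UUIDs are hypothetical):

```bash
# a network, a tagged VLAN on top of a physical interface, and a VM's VIF
xe network-create name-label=vlan42-net
xe vlan-create network-uuid=<net-uuid> pif-uuid=<pif-uuid> vlan=42
xe vif-create vm-uuid=<vm-uuid> network-uuid=<net-uuid> device=0 mac=random
xe vif-plug uuid=<vif-uuid>
```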

SR


One of XCP's fundamental features is the concept of the SR - storage repository. An SR is a storage for virtual machine disks (VDIs) and for ISO images (future CDs for virtual machines). SRs come in two kinds: local (uninteresting, since functionally it is an ordinary local disk, partition, directory, etc.) and shared. Shared SRs are XCP's main tool. The cloud (more precisely, the cloud manager) sees to it that every host has access to the SR. In a cloud with several hosts, creating a single SR automatically creates all the necessary connectors (PBDs - physical block devices) on all hosts and changes their configuration so that the storage is reattached automatically after a reboot.

A shared SR makes possible live migration of machines between hosts and launching a machine on any (first available) host; in general it is mandatory when a cloud has more than one host. Depending on the SR type, different functionality may be available: copy-on-write, thin provisioning, fast disk cloning, snapshots, etc.

Frankly, I will not list every SR type; among those available without special hardware are NFS and iSCSI. NFS is slightly more economical with disk space; iSCSI is faster.
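A sketch of creating both kinds of shared SR (server addresses, paths, IQN and SCSI id are hypothetical placeholders):

```bash
# shared NFS SR
xe sr-create name-label=nfs-sr shared=true type=nfs content-type=user \
    device-config:server=192.168.0.20 device-config:serverpath=/export/xcp

# shared iSCSI SR (LVM over iSCSI)
xe sr-create name-label=iscsi-sr shared=true type=lvmoiscsi \
    device-config:target=192.168.0.21 \
    device-config:targetIQN=iqn.2010-09.com.example:storage \
    device-config:SCSIid=<scsi-id>
```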

PBD


PBD - physical block device. An abstraction representing the method a host uses to access the place where virtual machine disks (VDIs) are stored. That can be an NFS share, an iSCSI LUN, FC, or some other solution from the storage-shelf vendors. The main idea of the PBD is uniformity: PBDs work the same regardless of what they are built on (the creation process and parameters differ for each type, but once created, all PBDs are served by the same means and, within certain limits, administered by common tools). Each host has its own PBD for every SR it is connected to.
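A sketch of inspecting and toggling them (UUIDs are placeholders):

```bash
# all the connectors binding hosts to a given SR
xe pbd-list sr-uuid=<sr-uuid>

xe pbd-unplug uuid=<pbd-uuid>   # detach the host from the storage
xe pbd-plug uuid=<pbd-uuid>     # attach it back
```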

PIF


A physical network interface, used to connect a host to a network. Most often it is a real network interface, but in the case of tagged VLANs it is an abstraction tied to a particular VLAN (in that case several VLANs sit on one physical interface, and the PIFs are built on top of those VLANs). All PIFs are plugged into the host's internal network, organized by means of Open vSwitch.

VDI


The VDI is the most valuable part of a virtual machine - its disk image (virtual disk image). It lives on an SR. A VDI is not, by itself, the property of a virtual machine; it is connected to one by means of a VBD (see below). VDIs come in several kinds, among them system (contains nothing valuable and can be wiped at will) and user (stores information and is the object of careful protection and care). VDIs can form snapshot chains, which in theory reduces disk usage. In practice this is not recommended, since processing the chain degrades the performance of disk operations.
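A sketch of creating one and snapshotting it (the size, names and UUIDs are examples):

```bash
# a 10 GiB user disk on a given SR
xe vdi-create sr-uuid=<sr-uuid> name-label=data-disk \
    type=user virtual-size=10GiB

# a snapshot (beware of growing the chain)
xe vdi-snapshot uuid=<vdi-uuid>
```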

VBD


An abstract device in a virtual machine that connects a disk in the virtual machine with a VDI. From the point of view of XCP's internals, a VBD is the “driver for accessing a VDI”. It may exist or not exist; this does not particularly affect the VDI's existence (the reverse is not true: a VBD cannot exist without a VDI). VBDs come in several kinds; in particular, one can emulate a CD drive (with ISO images attached). When a machine is migrated, its VBDs are recreated anew, while the VDI stays lying on the SR where it was.
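A sketch of wiring a VDI into a VM, plus the emulated-CD case (device numbers and names are examples):

```bash
# connect the VDI as the machine's first, bootable disk
xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<vdi-uuid> device=0 \
    bootable=true mode=RW type=Disk
xe vbd-plug uuid=<vbd-uuid>   # hot-plug into a running machine

# the emulated CD: attach an ISO from an ISO SR
xe vm-cd-add vm=<vm-name> cd-name=<iso-name> device=3
```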

VIF


A virtual network interface, used by virtual machines to access a network. From dom0's point of view, a vif is an interface exactly like any other, and it is plugged into the same virtual switch (there can be several switches).

Metrics


Virtual machines have metrics associated with them - an RRD database with relative load values for each of the accounted resources (memory, disk, processor, network). Metrics stand somewhat apart from the other object types, because they have to be enabled specially (owing to the overhead).
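A sketch of poking at them through the xe data-source commands (the data-source name cpu0 is an example):

```bash
xe vm-data-source-list uuid=<vm-uuid>                     # what can be measured
xe vm-data-source-record uuid=<vm-uuid> data-source=cpu0  # start recording into the RRD
xe vm-data-source-query uuid=<vm-uuid> data-source=cpu0   # read the current value
```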

(to be continued; coming up: migration, memory management, the concept of a domain, the difference between HVM and PV, consoles, attaching ISOs, processor and quota management, disk schedulers, monitoring, console and graphical management tools, the API)


Source: https://habr.com/ru/post/104025/

