
Installing and configuring KVM on CentOS 6

Greetings, Habrazhiteli!

Today I want to share one of my well-worn how-tos, polished by repeated use, about which I can confidently say "it just works" without any extra dancing with a tambourine.
The article is aimed more at novice system administrators than at gurus (there is nothing new here for them :)), and in it I will try to describe a working and reasonably quick way to deploy a virtual machine server, covering as many nuances and pitfalls as possible.

That said, I will be glad for the attention of knowledgeable and experienced admins, who may offer good advice and help correct any errors.
Disclaimer
Correct me if I am wrong, but I could not find a write-up of this task on CentOS with a detailed description of all the steps for beginners.
There is a good series of articles written by librarian, but they are for Debian.
Naturally, this is no problem for experienced admins, but I repeat: my goal is to give detailed instructions for newbies.

Question: there are plenty of guides on the Internet for installing QEMU/KVM on CentOS, you might argue, so what makes this article interesting?
Answer: it describes the complete cycle of installing and configuring the components needed for virtualization, installing guest virtual machines (VMs), setting up white (public) and gray (private) networking for the VMs, as well as some aspects that simplify VM management by forwarding graphics from the remote server to your PC and running virt-manager.


Remember the 7 steps?


To begin with: if you are reading this, you already have CentOS 6 installed (I used version 6.3). To be able to install guest VMs of either bitness (32 or 64), the host server (the physical server on which we will install KVM along with the VMs) must run a 64-bit OS.
All actions are performed as root.

So let's get down to the guide.

Step 1 - Preparation

Check if the CPU supports hardware virtualization:
# egrep '(vmx|svm)' /proc/cpuinfo 

If the output is not empty, then the processor supports hardware virtualization.
For those interested: all the actions were performed on an Intel Xeon Quad Core E3-1230 3.20 GHz / 8 GB RAM / 2x 1 TB configuration.
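As a couple of optional extra checks (not strictly required): you can also count how many logical CPUs expose the flag, and keep in mind that even when the flag is present, virtualization may still be disabled in the BIOS, in which case the kvm_intel module will refuse to load in the next step.
 # egrep -c '(vmx|svm)' /proc/cpuinfo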

Install KVM and virtualization libraries:
 # yum install kvm libvirt 

We start the libvirtd service:
 # service libvirtd start 

See if the KVM module is loaded.
 # lsmod | grep kvm 

You should get output like this:
 kvm_intel             52890  16
 kvm                  314739  1 kvm_intel

In this case, we see that the kvm_intel module is loaded, since the CPU vendor is Intel.

Check the KVM connection:
 # virsh sysinfo 

You should get output like this:
 <sysinfo type='smbios'>
   <bios>
     <entry name='vendor'>HP</entry>
     <entry name='version'>J01</entry>
 .....


Step 2 - Creating Storage for Virtual Machines (Storage Pool)

Different types of storage can be configured; descriptions of the other types are easy to find.
In this example a simple type of storage is used: for each VM, a new *.img file is created for its virtual hard disk (or disks, if you add several), and these files are placed in the /guest_images directory.
In our case this directory will be the mount point of a separate hard disk on the host server, set aside specifically for this purpose.
We will not cover storage reliability here (at a minimum you should build a mirrored RAID array so that a hard disk failure does not take your VM data with it); that is a separate topic.

Let's look at the list of physical disks on the host server:
 # fdisk -l 

The result was:
 Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
 ......
 Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
 ......

The OS is installed on the sda hard disk; we do not touch it. On sdb we create a partition spanning all the free space, with an ext4 file system.
(More details about the following operations can be found in the fdisk and mkfs documentation.)

We select the disk for editing
 # fdisk /dev/sdb 

Create a new partition
 Command (m for help): n
 Command action
    e   extended
    p   primary partition (1-4)
 p
 Partition number (1-4): 1

Save changes
 Command (m for help): w
 The partition table has been altered!

Create an ext4 file system on the new partition /dev/sdb1, which spans all the free space of the disk
 # mkfs.ext4 /dev/sdb1 

Create a mount point for our hard disk for virtual machine files:
 # mkdir /guest_images
 # chmod 700 /guest_images
 # ls -la /guest_images
 total 8
 drwx------.  2 root root 4096 May 28 13:57 .
 dr-xr-xr-x. 26 root root 4096 May 28 13:57 ..

Many people advise disabling SELinux altogether, but we will take a different path and configure it correctly.
 # semanage fcontext -a -t virt_image_t /guest_images 

If the execution of this command is not successful, you need to install an additional package. First, find out which package provides this command.
 # yum provides /usr/sbin/semanage 

We get the output:
 Loaded plugins: rhnplugin
 policycoreutils-python-2.0.83-19.8.el6_0.x86_64 : SELinux policy core python utilities
 Repo        : rhel-x86_64-server-6
 Matched from:
 Filename    : /usr/sbin/semanage

 policycoreutils-python-2.0.83-19.1.el6.x86_64 : SELinux policy core python utilities
 Repo        : rhel-x86_64-server-6
 Matched from:
 Filename    : /usr/sbin/semanage

Install policycoreutils-python
 # yum -y install policycoreutils-python 

After that again:
 # semanage fcontext -a -t virt_image_t /guest_images 
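Note that semanage only records the labeling rule; to actually apply the SELinux context to the directory (and anything already inside it) and verify the result, you can additionally run:
 # restorecon -R -v /guest_images
 # ls -ldZ /guest_images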

Mount the /dev/sdb1 partition to /guest_images
 # mount -t ext4 /dev/sdb1 /guest_images 

Edit the /etc/fstab file so that the partition with the VMs is mounted automatically when the host server is rebooted
 # vi /etc/fstab 

Add a line following the example of those already in the file.
 /dev/sdb1 /guest_images ext4 defaults 1 1 
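Optionally (a variant of mine, not from the original how-to), you can mount by UUID instead of by device name, so the entry keeps working even if the disks are ever re-ordered; <your-uuid> below is a placeholder for the value blkid prints:
 # blkid /dev/sdb1
 /dev/sdb1: UUID="<your-uuid>" TYPE="ext4"

 UUID=<your-uuid> /guest_images ext4 defaults 1 1   # <your-uuid> is a placeholder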

Save the file and proceed to create the storage pool:
 # virsh pool-define-as guest_images_dir dir - - - - "/guest_images"
 Pool guest_images_dir defined

Check if it was created:
 # virsh pool-list --all
 Name                 State      Autostart
 -----------------------------------------
 default              active     yes
 guest_images_dir     inactive   no

Next, build the pool:
 # virsh pool-build guest_images_dir
 Pool guest_images_dir built

We start the storage pool:
 # virsh pool-start guest_images_dir
 Pool guest_images_dir started
 # virsh pool-list --all
 Name                 State      Autostart
 -----------------------------------------
 default              active     yes
 guest_images_dir     active     no

Add it to autostart:
 # virsh pool-autostart guest_images_dir
 Pool guest_images_dir marked as autostarted
 # virsh pool-list --all
 Name                 State      Autostart
 -----------------------------------------
 default              active     yes
 guest_images_dir     active     yes

Checking:
 # virsh pool-info guest_images_dir 
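The output should have roughly the following form (the values below are illustrative placeholders; if the previous commands succeeded, State will be running and Autostart yes):
 Name:           guest_images_dir
 UUID:           ...
 State:          running
 Persistent:     yes
 Autostart:      yes
 Capacity:       ...
 Allocation:     ...
 Available:      ...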


Step 3 - Setting up the network on the host server

!!! IMPORTANT!!!
Before performing this step, you need to make sure that the host server has the bridge-utils package installed,
 # rpm -qa | grep bridge-utils 

otherwise, when performing network operations, you risk losing the connection to the server, which is especially painful if it is remote and you have no physical access to it. If the output of the previous command is empty, then:
 # yum -y install bridge-utils 

Suppose that the eth0 interface was used for access to the outside world and is configured accordingly.
It has the IP address 10.110.10.15 in a /24 network, netmask 255.255.255.0, gateway 10.110.10.1.
Next we create a "bridge" network interface on the host server:
 # vi /etc/sysconfig/network-scripts/ifcfg-br0 

File contents
 DEVICE="br0"
 NM_CONTROLLED="no"
 ONBOOT="yes"
 TYPE="Bridge"
 BOOTPROTO="static"
 IPADDR="10.110.10.15"
 GATEWAY="10.110.10.1"
 DNS1="8.8.8.8"
 DNS2="8.8.4.4"
 MTU="1500"
 NETMASK="255.255.255.0"
 DEFROUTE="yes"
 IPV4_FAILURE_FATAL="yes"
 IPV6INIT="no"
 NAME="System br0"

Bring the main network interface, which was used for access to the outside world, to the following form:
 # vi /etc/sysconfig/network-scripts/ifcfg-eth0

 DEVICE="eth0"
 BOOTPROTO="none"
 HOSTNAME="localhost.localdomain"
 HWADDR="00:9C:02:97:86:70"
 IPV6INIT="no"
 MTU="1500"
 NM_CONTROLLED="no"
 ONBOOT="yes"
 TYPE="Ethernet"
 NAME="System eth0"
 BRIDGE="br0"

!!! Important!!!
DEVICE="eth0" - the interface name must remain the same as it was in the system. If you use the eth1 interface for Internet access, edit that one instead.
HWADDR="00:2C:C2:85:29:A3" - the MAC address must also remain the same as it was in the system.
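If you are not sure of the interface name or its MAC address, they can be looked up before editing the files, for example:
 # ip link show eth0
 # cat /sys/class/net/eth0/address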

When everything is checked, restart the network:
 # service network restart 

Check the status of the bridge connection:
 # brctl show 

We get something like this
 bridge name     bridge id               STP enabled     interfaces
 br0             8000.002cc28529a3       no              eth0
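At this point it is worth making sure that the IP address has indeed moved to br0 and that the gateway is still reachable (a quick sanity check using the addresses from the example above):
 # ip addr show br0
 # ping -c 3 10.110.10.1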

We configure iptables so that VM traffic passes through the bridge connection:
 # iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
 # service iptables save
 # service iptables restart

Optionally: you can improve the speed of the bridge connection by adjusting the settings in /etc/sysctl.conf
 net.bridge.bridge-nf-call-ip6tables = 0
 net.bridge.bridge-nf-call-iptables = 0
 net.bridge.bridge-nf-call-arptables = 0

After that:
 # sysctl -p /etc/sysctl.conf
 # service libvirtd reload


Step 4 - Installing a new virtual machine

Installing CentOS on a guest VM:
 virt-install -n VMName_2 --ram 1024 --arch=x86_64 \
 --vcpus=1 --cpu host --check-cpu \
 --extra-args="vnc sshd=1 sshpw=secret ip=static reboot=b selinux=0" \
 --os-type linux --os-variant=rhel6 --boot cdrom,hd,menu=on \
 --disk pool=guest_images_dir,size=50,bus=virtio \
 --network=bridge:br0,model=virtio \
 --graphics vnc,listen=0.0.0.0,keymap=ru,password=some.password.here \
 --noautoconsole --watchdog default,action=reset --virt-type=kvm \
 --autostart --location http://mirror.yandex.ru/centos/6.3/os/x86_64/

Note 1:
VMName_2 - the name of the new virtual machine
--ram 1024 - amount of RAM for the VM, in MB
--arch=x86_64 - architecture of the guest OS
--vcpus=1 - number of virtual processors
--os-type linux - type of the guest OS
--disk pool=guest_images_dir,size=50 - storage pool to use and the size of the virtual disk, in GB
--network=bridge:br0 - network connection via the br0 bridge

Note 2:
If the VM needs a "white" (public) network, set
--network=bridge:br0
If the VM needs a "gray" (private) network, set
--network=bridge:virbr0
In this case the VM will be assigned a gray IP via DHCP from the host server.
--graphics vnc,listen=0.0.0.0,keymap=ru,password=some.password.here
Here we specify the password for connecting to the VM over VNC.
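If you plan to use the gray network, you can check in advance that libvirt's default NAT network (the one behind virbr0) exists and is active - a small optional check:
 # virsh net-list --all
 # virsh net-info default
If it turns out to be inactive, virsh net-start default and virsh net-autostart default will start it and enable its autostart.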

Installing Windows on a guest VM:
 virt-install --connect qemu:///system --arch=x86_64 \
 -n VMName_1 -r 1024 --vcpus=1 \
 --disk pool=guest_images_dir,size=50,bus=virtio,cache=none \
 -c /iso/Windows2008R2RU.ISO --graphics vnc,listen=0.0.0.0,keymap=ru,password=some.password.here \
 --noautoconsole --os-type windows --os-variant win2k8 \
 --network=bridge:br0,model=e1000 --disk path=/iso/virtio-win.iso,device=cdrom,perms=ro

Note:
The parameters are the same as in the CentOS installation example, but there are differences.
During installation Windows will not see the virtual hard disk, so you need to attach an additional virtual CD-ROM with the virtio drivers: /iso/virtio-win.iso is the location of the ISO file with the virtual disk drivers. It can be downloaded from the Fedora virtio-win project.

We execute the command to install the new VM, then connect via vnc to the host server to continue installing the OS. In order to find out the port for the connection, perform:
 # netstat -nltp | grep q
 tcp   0   0 0.0.0.0:5900   0.0.0.0:*   LISTEN   64141/qemu-kvm
 tcp   0   0 0.0.0.0:5903   0.0.0.0:*   LISTEN   63620/qemu-kvm
 tcp   0   0 0.0.0.0:5904   0.0.0.0:*   LISTEN   6971/qemu-kvm
 tcp   0   0 0.0.0.0:5905   0.0.0.0:*   LISTEN   57780/qemu-kvm

When a new VM is installed, the VNC server port is incremented by 1. When a VM is removed, its port is released
and can later be given to a new VM, so the port number of the most recently created VM is not necessarily the highest of the 59xx ports.
To find out which VNC port a VM with a specific name uses, enter:
 # virsh vncdisplay VMName_1
 :3

where VMName_1 is the name of the VM and :3 is the display number, counted from port 5900; that is, you need to connect to port 5903, although in the UltraVNC program connecting to 10.110.10.15:3 will also work.
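If there are many VMs, a small one-liner like this (just a sketch that parses the virsh list output) prints the VNC display of every running VM at once:
 # for vm in $(virsh list | awk 'NR>2 && $2 {print $2}'); do echo -n "$vm: "; virsh vncdisplay "$vm"; done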


Note
If creating a VM fails with a Permission denied error (kvm cannot open the VM disk file *.img),
you need to allow qemu-kvm to run as root (by default it is assumed that
VMs are run under a user created specifically for this purpose, for example libvirt). We, however, will manage them as the root user.

We fix the config:
 # vi /etc/libvirt/qemu.conf 

We find and uncomment the lines in it:
 # The user ID for QEMU processes run by the system instance.
 user = "root"

 # The group ID for QEMU processes run by the system instance.
 group = "root"
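After changing qemu.conf, restart the libvirt daemon so that the new user/group settings take effect for VMs started from then on:
 # service libvirtd restart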


Good to know:
The VM configs are located in /etc/libvirt/qemu/
To edit a VM's parameters (add a processor, RAM, or something else),
find the config of the VM with the desired name and edit it:
 # vi /etc/libvirt/qemu/VMName_1.xml 

For example, you can specify a static vnc port for a specific VM in order to always connect to the correct port.
 <graphics type='vnc' port='5914' autoport='no' listen='0.0.0.0' passwd='some.password.here'>
   <listen type='address' address='0.0.0.0'/>
 </graphics>

Now this VM has VNC port 5914. Do not forget to reload libvirtd to apply the changes. The VM itself must also be restarted, so change the VM configuration file while it is powered off, then run service libvirtd reload, then start the VM.
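As an alternative to editing the XML file directly, virsh edit opens the config in your editor, validates the XML on save and notifies libvirt of the change itself, while virsh dumpxml shows the currently active configuration:
 # virsh edit VMName_1
 # virsh dumpxml VMName_1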

Commands for managing VMs:
 virsh -c qemu:///system help - list the available virsh commands
 virsh -c qemu:///system list --all - list all VMs (running and stopped)
 virsh -c qemu:///system start vsrv1 - start the VM vsrv1
 virsh -c qemu:///system shutdown vsrv1 - gracefully shut down the VM vsrv1
 virsh -c qemu:///system destroy vsrv1 - forcibly power off the VM vsrv1
 virsh -c qemu:///system undefine vsrv1 - remove the VM vsrv1 definition


Step 5 - Network configuration when the VM has a "gray" IP address

If in step 4 you chose a gray network for the new VM (--network=bridge:virbr0), then you need to perform the following actions (on the host server!) to forward traffic to the VM.
Allow traffic forwarding at the kernel level:
 # sysctl net.ipv4.ip_forward=1
 # iptables -I FORWARD -j ACCEPT
 # iptables -t nat -I PREROUTING -p tcp -d 10.110.10.15 --dport 5910 -j DNAT --to-destination 192.168.122.170:5901

Here 10.110.10.15 is the white (external) IP of the host server and 192.168.122.170 is the gray IP address of the guest OS. Now add the reverse (SNAT) rule:
 # iptables -t nat -I POSTROUTING -p tcp -s 192.168.122.170 --sport 5901 -j SNAT --to-source 10.110.10.15:5910 

Take, for example, installing CentOS on the guest machine: when the installation switches to graphical mode, it offers to connect to local port 5901 of the guest OS.
From the PC you are sitting at, connect over VNC to 10.110.10.15:5910 (in UltraVNC, 10.110.10.15:10 will also work).

By the same principle you can forward the standard RDP port (3389) or SSH port (22) to the guest OS.
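For example, a pair of forwarding rules for RDP might look roughly like this (a sketch following the same pattern as above; the external port 13389 is arbitrary, chosen just for this example, and the gray IP is the one used earlier):
 # iptables -t nat -I PREROUTING -p tcp -d 10.110.10.15 --dport 13389 -j DNAT --to-destination 192.168.122.170:3389   # 13389 is an arbitrary example port
 # iptables -t nat -I POSTROUTING -p tcp -s 192.168.122.170 --sport 3389 -j SNAT --to-source 10.110.10.15:13389
 # service iptables save
An RDP client pointed at 10.110.10.15:13389 should then end up on the guest.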

Step 6 - Preparing to manage the remote server's virtual machines with a convenient graphical interface (using virt-manager)

There are many ways to bring the graphical output of a remote server to the PC from which you administer it. We will focus on SSH tunneling.
Suppose you are working from a local PC running Windows (on a Linux PC with a graphical shell this is much simpler: provided X11 forwarding is enabled on the remote server, a single command like ssh -X username@12.34.56.78 is enough). In that case we need:

1. The well-known PuTTY,
2. An X server ported to Windows - Xming,
3. In the PuTTY settings, enable "Enable X11 Forwarding".
Configure it as shown in the picture:


Xming should already be running at the moment you connect to the remote server.
On the CentOS host server, enable X11 forwarding for SSH; to do this, edit the sshd_config file:
 # vi /etc/ssh/sshd_config

 X11Forwarding yes
 X11DisplayOffset 10
 X11UseLocalhost yes

After that:
 # /etc/init.d/sshd restart 

Install virt-manager on the host server:
 # yum install virt-manager 

And one more component:
 # yum -y install xorg-x11-xauth 

And, so that windows are displayed without garbled characters:
 # yum install liberation-sans-fonts 
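Before launching virt-manager in the next step, it is worth a quick check that X11 forwarding actually works: after reconnecting over SSH (with Xming running and "Enable X11 Forwarding" ticked in PuTTY), the DISPLAY variable should be set; with the sshd settings above it will typically look something like this:
 # echo $DISPLAY
 localhost:10.0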


Step 7 - Launching virt-manager

After that, reconnect to the remote server over SSH. Xming must already be running.
We start the graphical virtual machine management utility:
 # virt-manager 

The virt-manager window opens.


VM Management Console


VM configuration and its change


I hope the reader liked the article. Personally, I would have loved to read something like this back in the day: it would have dramatically cut the time spent shoveling through piles of manuals from different admins for different operating systems, and saved a lot of googling every time yet another nuance came up.

I would be glad to hear comments and suggestions on this topic.

Source: https://habr.com/ru/post/168791/

