
Our experience testing LXC (Linux Containers), using Debian Wheezy as an example



We at Centos-admin follow the arrival of new technologies, test them and, of course, put them to use. Almost all of our servers run OpenVZ container virtualization. To broaden the set of tools we use in our work, we decided to study and test LXC, the native Linux container virtualization.
Below you will find a small overview of the technology, a brief how-to on using LXC in Debian Wheezy, and our conclusions.

The technology has been in active development for a long time. The current stable version is 0.9, and a 1.0 release is being prepared for next year; it will ship in Ubuntu 14.04 LTS. However, the mainstream Ubuntu kernel currently lacks User Namespace support, so this article looks at Linux Containers using Debian Wheezy as the example.
Should I start using LXC now? Let's try to figure it out.

LXC (Linux Containers) is nothing other than an operating-system-level virtualization technology.
Strictly speaking, LXC cannot be called full virtualization; it is rather a technology for isolating and partitioning a machine's resources.
LXC is a logical continuation of two earlier technologies, Linux-VServer and OpenVZ; unlike them, however, it is developed within the "vanilla" kernel branch, starting from kernel version 2.6.29.

What is LXC? LXC is a set of utilities that, through a common API, uses the capabilities of the Linux kernel to create and manage isolated operating system containers. To make all this possible, the kernel provides several features, chiefly namespaces (isolating process IDs, networking, mount points, hostnames and IPC between containers) and control groups (cgroups) for limiting and accounting resource usage.
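To get a feel for these mechanisms before installing anything, you can inspect them directly through /proc. A quick, read-only check (the exact set of namespace links depends on the kernel version):

 # each symlink here is one namespace the current shell belongs to
 ls -l /proc/$$/ns
 # list the cgroup controllers available in the running kernel
 cat /proc/cgroups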

Like any container virtualization technology, LXC is useful for web hosting, for development, and for testing and debugging web projects.

Installing and configuring LXC on Debian 7

As mentioned earlier, LXC relies on cgroups, and to start working with containers you need to mount the cgroup file system. The default mount point is /sys/fs/cgroup, but you can mount it at an arbitrary point.
Fix up fstab by adding a cgroup entry:
vi /etc/fstab 

 ...
 cgroup  /sys/fs/cgroup  cgroup  defaults  0  0
 ...

And mount the cgroup virtual file system:
 mount /sys/fs/cgroup 
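A quick check that the file system is really mounted; the output should contain a line along these lines:

 mount | grep cgroup
 # cgroup on /sys/fs/cgroup type cgroup (rw,relatime)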

Containers are administered with the lxc toolkit.
Install the lxc package; the system will pull in the rest of the dependencies:
 apt-get install lxc 

By default, containers are stored in /var/lib/lxc.
After installing the utilities, make sure the system is fully ready to work with containers:
 lxc-checkconfig 

 root@lxc-debian:~# lxc-checkconfig
 Kernel config /proc/config.gz not found, looking in other places...
 Found kernel config file /boot/config-3.2.0-4-amd64
 --- Namespaces ---
 Namespaces: enabled
 Utsname namespace: enabled
 Ipc namespace: enabled
 Pid namespace: enabled
 User namespace: enabled
 Network namespace: enabled
 Multiple /dev/pts instances: enabled
 --- Control groups ---
 Cgroup: enabled
 Cgroup clone_children flag: enabled
 Cgroup device: enabled
 Cgroup sched: enabled
 Cgroup cpu account: enabled
 Cgroup memory controller: enabled
 Cgroup cpuset: enabled
 --- Misc ---
 Veth pair device: enabled
 Macvlan: enabled
 Vlan: enabled
 File capabilities: enabled

 Note : Before booting a new kernel, you can check its configuration
 usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

If everything is in order, you can try to create the first container.

Container Management

 lxc-create -n test -t debian 

where '-n test' is the container name and '-t debian' is the OS template for the container being created.

An lxc template is itself a bash script that creates the root file system folders and a minimal set of config files, and also pulls fresh packages from the repositories. After the first run of a template script, the downloaded packages are cached on disk. The script is easy to customize; for example, you can add the set of packages you need, as in the sketch below. This approach to templates is, in my humble opinion, somewhat more convenient than in OpenVZ.
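As an illustration of how little it takes to customize a template: in the lxc-debian script the package set handed to debootstrap sits in a plain shell variable, so extending it is a one-line change (a sketch; the variable name and the default list vary between lxc versions, and the additions below are just examples):

 # in /usr/share/lxc/templates/lxc-debian: append the tools you want in every container
 packages=ifupdown,locales,dialog,netbase,net-tools,iproute,openssh-server,vim,less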
Debian also ships templates for Archlinux, Altlinux, Fedora, Opensuse and Ubuntu-Cloud. If the existing set of templates is not enough, you can try writing the missing template yourself.
By default, Debian launches a wizard that helps you create a container in a few steps. That would all be fine, but the Wheezy template is "broken", and I do not really feel like using Squeeze. So you either have to build your own template or find a working one on the Internet.

Download the new template, for example, for CentOS 6:
 cd /usr/share/lxc/templates 

 wget https://gist.github.com/hagix9/3514296/raw/7f6bb4e291fad1dad59a49a5c02f78642bb99a45/lxc-centos 

 chmod +x lxc-centos 

You will also need the yum package manager for CentOS.
 apt-get install yum 

Create a new container using the CentOS template:
 lxc-create -n test -t centos 

The freshly created container is a completely bare system: there is practically nothing in it, not even a text editor.
There are two ways to get to the container console.
The first: when you start the container in the foreground, you land in its console, where you need to log in (in the Debian template the default root password is root).
 lxc-start -n test 

If you are using a third-party template, the password can be looked up in the template file.
The second option is to connect to an already running container:
 lxc-console -n test 

When you first connect to the container, the following notice appears:
 Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself 
In practice this escape sequence does not work, or does not always work; perhaps it has been fixed in more recent releases.
So use screen instead, or start the container with the -d flag and reach it over ssh (once the network is configured).
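If the escape sequence misbehaves, wrapping the console in screen is the simplest workaround (a sketch; the session name is arbitrary):

 # run the container console inside a detachable screen session
 screen -S lxc-test lxc-start -n test
 # detach with Ctrl+a d, reattach later with:
 screen -r lxc-test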
By default the container shares the host machine's network stack. That is fine for testing, but not for web projects, so below we look at how to give a container an isolated network stack.

Container removal is performed by the lxc-destroy utility:
 lxc-destroy -n test 
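A few more commands from the same toolkit that cover day-to-day management (all of them shipped with lxc 0.9, to the best of my knowledge):

 lxc-ls                 # list existing containers
 lxc-info -n test       # show the state and PID of a container
 lxc-stop -n test       # stop a running container
 lxc-freeze -n test     # suspend all processes in a container via the cgroup freezer
 lxc-unfreeze -n test   # resume them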

Network configuration
Consider two options:
- We have a server and many public ("white") IP addresses. Each container can get its own address and communicate with the Internet freely.
- We have a server and only a few, or even a single, public IP address. In that case most of the containers will most likely sit behind NAT.
In both cases, a network bridge, a DHCP server and iptables come in handy:
 apt-get install bridge-utils isc-dhcp-server 

Before configuring the container network, we set up two network bridges on the host machine: one for the public addresses, the other for private ("gray") addresses.
The network adapter configuration file will look like this:
 vi /etc/network/interfaces 

 # bridge for the public addresses
 auto br0
 iface br0 inet static
     bridge_ports eth0
     bridge_fd 0
     address 192.168.0.100
     netmask 255.255.255.0
     gateway 192.168.0.1
     dns-nameservers 192.168.0.1 8.8.8.8

 # bridge for the private addresses
 auto lxcbr0
 iface lxcbr0 inet static
     bridge_ports none
     bridge_fd 0
     address 10.0.0.1
     netmask 255.255.255.0
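Bring the new bridges up and make sure they are there (assuming ifupdown manages the interfaces; brctl comes with the bridge-utils package installed above):

 ifup br0 lxcbr0
 brctl show   # both bridges should be listed, br0 with eth0 attached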

First option
Let's describe the network adapter in the container config.
By default containers live in /var/lib/lxc; there we find the folder named after our container (test), which contains the config file we are going to edit.
Add a block with network settings to the end of the file:
 vi /var/lib/lxc/test/config 

 ...
 # networking
 lxc.utsname = centos                      # container hostname
 lxc.network.type = veth                   # interface type: virtual ethernet pair
 lxc.network.flags = up                    # bring the interface up on start
 lxc.network.link = br0                    # host bridge to attach to
 lxc.network.name = eth0                   # interface name inside the container
 lxc.network.veth.pair = veth0             # host-side name of the veth interface
 lxc.network.ipv4 = 192.168.0.101/24       # container IP address
 lxc.network.ipv4.gateway = 192.168.0.1    # default gateway
 lxc.network.hwaddr = 00:1E:2D:F7:E3:4F    # MAC address of the interface

Now you can run the container and try to connect via ssh:
 lxc-start -n test -d && ssh root@192.168.0.101 

Second option
The second option differs from the first in that we use the other network bridge, and the containers can reach the Internet only through NAT.
Let's adjust the CentOS container's configuration accordingly:
 vi /var/lib/lxc/test/config 

 ...
 # networking
 lxc.utsname = centos
 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = lxcbr0
 lxc.network.name = eth0
 lxc.network.veth.pair = veth1
 lxc.network.ipv4 = 10.0.0.10/24
 lxc.network.ipv4.gateway = 10.0.0.1
 lxc.network.hwaddr = 00:1E:2D:F7:E3:4E

So that the containers can obtain addresses automatically, we configure the DHCP server:
 vi /etc/dhcp/dhcpd.conf 

 ...
 # this DHCP server is authoritative for the local network
 authoritative;
 ...
 # subnet handed out to the containers
 subnet 10.0.0.0 netmask 255.255.255.0 {
     range 10.0.0.10 10.0.0.50;
     option domain-name-servers 192.168.0.1, 8.8.8.8;
     option domain-name "somehost.com";
     option routers 10.0.0.1;
     default-lease-time 600;
     max-lease-time 7200;
 }

We also need to tell the DHCP server which network interface to listen on:
 vi /etc/default/isc-dhcp-server 

 ...
 INTERFACES="lxcbr0"
 ...
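Restart the DHCP server so the new settings take effect (Debian 7 uses sysvinit service scripts):

 service isc-dhcp-server restart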

You need to enable packet forwarding on the host machine:
 vi /etc/sysctl.conf

 ... net.ipv4.ip_forward=1 ... 

 sysctl -p 
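A quick check that forwarding is really enabled:

 sysctl net.ipv4.ip_forward
 # net.ipv4.ip_forward = 1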

To provide access to the Internet from containers, and access to containers from the Internet, you need to configure the firewall using the following iptables rules:
 vi /etc/network/iptables.up.rules 

 *nat
 :PREROUTING ACCEPT [0:0]
 :OUTPUT ACCEPT [0:0]
 :POSTROUTING ACCEPT [0:0]
 # NAT for container traffic leaving through the external bridge
 -A POSTROUTING -s 10.0.0.0/24 -o br0 -j MASQUERADE
 # if a container needs a fixed source address, use SNAT instead
 # -A POSTROUTING -s 10.0.0.10/32 -j SNAT --to-source 192.168.0.100
 # forward SSH from the outside to the container
 -A PREROUTING -p tcp -m tcp -d 192.168.0.100 --dport 5678 -j DNAT --to-destination 10.0.0.10:22
 COMMIT
 *mangle
 :PREROUTING ACCEPT [0:0]
 :INPUT ACCEPT [0:0]
 :FORWARD ACCEPT [0:0]
 :OUTPUT ACCEPT [0:0]
 :POSTROUTING ACCEPT [0:0]
 COMMIT
 *filter
 :FORWARD ACCEPT [0:0]
 :INPUT DROP [0:0]
 :OUTPUT ACCEPT [0:0]
 -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 5678 -j ACCEPT
 COMMIT

Apply the rules:
 iptables-restore < /etc/network/iptables.up.rules 
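To confirm the rules are in place, list them with counters; the numbers should grow as traffic flows:

 iptables -t nat -L -n -v   # the MASQUERADE and DNAT rules
 iptables -L -n -v          # the filter chains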

We also make the rules load automatically when the server boots:
 echo "post-up iptables-restore < /etc/network/iptables.up.rules" >> /etc/network/interfaces

Run the container and check the Internet connection:
 lxc-start -n test -d
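Assuming the container received the 10.0.0.10 address from its config, a quick sanity check looks like this:

 ssh root@10.0.0.10   # from the host, over the lxcbr0 bridge
 ping -c 3 8.8.8.8    # from inside the container: verifies NAT to the Internet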


Backup, Clone, Restore
Containers are backed up with the lxc-backup utility. Do not expect a vzdump analogue: it uses rsync to copy the container's files and, in effect, does nothing beyond copying them into an adjacent folder. Its practical value is questionable; your own script running the same rsync to the destination of your choice would do just as well.
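Since lxc-backup adds little over plain rsync, a home-grown replacement fits in a few lines. A minimal sketch, assuming the container can be stopped briefly and that /backup/lxc is where you keep backups (both are illustrative choices):

 #!/bin/sh
 # back up a container's config and root filesystem with rsync
 NAME=test
 SRC=/var/lib/lxc/$NAME
 DST=/backup/lxc/$NAME-$(date +%F)

 lxc-stop -n "$NAME"                # stop for a consistent copy
 mkdir -p "$DST"
 rsync -a --numeric-ids "$SRC"/ "$DST"/
 lxc-start -n "$NAME" -d            # bring the container back up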
Containers are cloned with the lxc-clone utility, and lxc-restore restores them from a backup. These utilities cannot boast rich functionality, but the necessary minimum is there.

What is the result?
The technology has come a long way: 39 releases and 1989 commits (as of 2013-11-14), and by now it has taken on a fairly complete shape. It is probably still too early to use LXC for shared web hosting in its current form, but the technology is quite suitable for private projects.

The container-management utilities can arguably still be improved, and such work is actively under way. At the same time, their current functionality is quite sufficient for deploying and administering Linux containers.

A few numbers in closing
Benchmarking is a topic for a separate post, so I will not go into details. I will only say that the tests were run on the same machine using Phoronix Test Suite 3.8 (yes, I know it is old), with CentOS 6.4 as the OS in the container / virtual machine. Everything was done in haste, but here is what came out:



Useful articles:
Debian Wiki LXC
Setting up LXC containers in 30 minutes
Ubuntu Server Guide LXC
LXC: Linux Container Utilities
Secure Linux Containers Guide

Source: https://habr.com/ru/post/202482/

