
lxc - native Linux containers

Linux currently has several widely known container implementations, but they all require applying their own patches to the kernel to obtain the necessary functionality. Unlike them, lxc has needed no kernel patches since kernel 2.6.29: isolation is implemented with the namespaces already present in the kernel, and resource management with Control Groups (cgroups). This lets you create not only fully isolated environments but also isolate individual applications. To start working with lxc you need a running Linux system with kernel 2.6.29 or later and the following options enabled:

* General setup
** Control Group support
---> Namespace cgroup subsystem
---> Freezer cgroup subsystem
---> Cpuset support
---> Simple CPU accounting cgroup subsystem
---> Resource counters
----> Memory resource controllers for Control Groups
** Group CPU scheduler
---> Basis for grouping tasks (Control Groups)
** Namespaces support
---> UTS namespace
---> IPC namespace
---> User namespace
---> Pid namespace
---> Network namespace
* Security options
---> File POSIX Capabilities
* Device Drivers
** Network device support
---> Virtual ethernet pair device
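Whether a running kernel already provides these pieces can be checked without a rebuild; a minimal sketch, assuming /proc is mounted (the /proc/config.gz path is an assumption - on many distributions the config lives in /boot/config-$(uname -r) instead):

```shell
# Sanity check before building: "cgroup" must appear in the list of
# filesystems the kernel knows about, and if the kernel config is
# exposed via /proc/config.gz the options can be grepped directly.
grep -w cgroup /proc/filesystems
if [ -r /proc/config.gz ]; then
    zcat /proc/config.gz \
        | grep -E 'CONFIG_(CGROUPS|CPUSETS|NAMESPACES|UTS_NS|IPC_NS|PID_NS|NET_NS|VETH)='
fi
```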

If all of these options are enabled, mount the cgroup filesystem:
mkdir -p /var/lxc/cgroup
mount -t cgroup cgroup /var/lxc/cgroup

Download lxc, then build and install it:
./configure --prefix=/
make
make install

Then check which version of iproute2 is installed: you need a version newer than 2.6.26, which can manage virtual network devices and configure network namespaces. In addition, if you plan to use the network inside the container, you will need to reconfigure the host networking to use a bridge. To do this, bring the network interface down:
ifconfig eth0 down

Create a br0 bridge:
brctl addbr br0
brctl setfd br0 0

Connect your network interface to it:
brctl addif br0 eth0
ifconfig eth0 0.0.0.0 up

Set the required address on br0 and add the default gateway:
ifconfig br0 192.168.1.2/24 up
route add default gw 192.168.1.1
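The ifconfig and brctl commands above do not survive a reboot. On a Debian-style system the same bridge can be made permanent in /etc/network/interfaces; a sketch, assuming the bridge-utils package is installed and the same names and addresses as above:

```
auto br0
iface br0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0
    bridge_fd 0
```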

When the container starts, a special virtual device will be created connecting the bridge with the container's virtual interface. Now you need a system image for the container. The easiest way is to use ready-made OpenVZ templates; I used the CentOS template. Download it and extract it into the /var/lxc/centos/rootfs directory. After that you will need to modify the template slightly, since it is designed to work with OpenVZ. To do this, perform the following steps.

Go to the /var/lxc/centos/rootfs/etc/rc.d directory and comment out the following lines in the rc.sysinit file:
/sbin/start_udev
mount -n /dev/pts >/dev/null 2>&1

Instead, the /dev directory will be bind-mounted from the host system.

Then open the fstab file in the /var/lxc/centos/rootfs/etc/ directory and comment out the line:
none /dev/pts devpts rw 0 0

After that, go to the /var/lxc/centos/rootfs/etc/sysconfig/network-scripts directory and create an ifcfg-eth0 file along these lines:
DEVICE=eth0
IPADDR=192.168.1.102
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
NAME=eth0

Next, go to the /var/lxc/centos/rootfs/etc/sysconfig/ directory and create the network file:
NETWORKING="yes"
GATEWAY="192.168.1.1"
HOSTNAME="centos_ssh"

Now it remains to change the root password. To do this, chroot into the system image and call passwd:
chroot /var/lxc/centos/rootfs
passwd

System preparation is now complete. Next, create the container configuration files: in the /var/lxc/centos directory, create lxc-centos.conf and fstab as follows:
lxc-centos.conf
lxc.utsname = centos_ssh
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 192.168.1.101/24
lxc.network.name = eth0
lxc.mount = /var/lxc/centos/fstab
lxc.rootfs = /var/lxc/centos/rootfs

fstab
/dev /var/lxc/centos/rootfs/dev none bind 0 0
/dev/pts /var/lxc/centos/rootfs/dev/pts none bind 0 0

Now you can create the container, specifying the name centos and the configuration file:
lxc-create -n centos -f /var/lxc/centos/lxc-centos.conf

Check if the container has been created:
lxc-info -n centos
'centos' is STOPPED

The container has been created but is not running yet. Start it:
lxc-start -n centos

CentOS will begin booting. As soon as you see:
INIT: no more processes left in this runlevel

booting is complete. Open another console and try pinging the address assigned to the container. As soon as it responds, you can log in via ssh.

But in addition to full containers, lxc lets you create application containers. To do this, create the following directory structure in the /var/lxc/simple directory:
rootfs
|-- bin
|-- dev
|   |-- pts
|   `-- shm
|       `-- network
|-- etc
|-- lib
|-- proc
|-- root
|-- sbin
|-- sys
|-- usr
`-- var
    |-- empty
    |-- lib
    |   `-- empty
    `-- run
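The skeleton above can be created in one pass; a sketch, assuming the same /var/lxc/simple base directory:

```shell
# Create the rootfs skeleton for the application container.
root=/var/lxc/simple/rootfs
for d in bin dev/pts dev/shm/network etc lib proc root sbin sys usr \
         var/empty var/lib/empty var/run; do
    mkdir -p "$root/$d"
done
ls "$root"
```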

Then create lxc-simple.conf :
lxc.utsname = simple
lxc.mount = /var/lxc/simple/fstab
lxc.rootfs = /var/lxc/simple/rootfs

and fstab :
/lib /var/lxc/simple/rootfs/lib none ro,bind 0 0
/bin /var/lxc/simple/rootfs/bin none ro,bind 0 0
/usr /var/lxc/simple/rootfs/usr none ro,bind 0 0
/sbin /var/lxc/simple/rootfs/sbin none ro,bind 0 0

Next, create a container:
lxc-create -n simple -f /var/lxc/simple/lxc-simple.conf


And run the application:
lxc-execute -n simple /bin/ls


As you can see, creating an application container is in some ways simpler and in others trickier than creating a full-fledged container. You now have one running container and one application container in the stopped state. But besides isolation, containers should also allow resource limits, and for this lxc provides lxc-cgroup. At the moment it lets you specify which processor cores the container may use, how much processor time it is allocated, a limit on available memory, and the traffic class of network traffic leaving the container for further shaping. All settings are based on cgroups; for a detailed description, refer to the Documentation/cgroups directory of the kernel documentation.

Here are a few examples. Pinning the container to the first processor core:
lxc-cgroup -n centos cpuset.cpus 0


Limiting the container's memory to 128 MB:
lxc-cgroup -n centos memory.limit_in_bytes 128M
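If you want these limits applied automatically on every start, the same cgroup parameters can go into lxc-centos.conf using the lxc.cgroup prefix; a sketch, with the same values as the commands above:

```
lxc.cgroup.cpuset.cpus = 0
lxc.cgroup.memory.limit_in_bytes = 128M
```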


In addition, there are various accounting options. All of this can also be viewed directly, without lxc-cgroup, in the /var/lxc/cgroup/centos directory.

If you don’t need the container, you can stop it:
lxc-stop -n centos


And delete:
lxc-destroy -n centos

Note that although the container will be removed, the system image will remain on disk.

You can view running processes using lxc-ps:
lxc-ps --lxc
centos 7480 ? 00:00:00 init
centos 7719 ? 00:00:00 syslogd
centos 7736 ? 00:00:00 sshd


lxc-info shows the state of the container:
lxc-info -n centos
'centos' is RUNNING


lxc-freeze suspends all processes in the container until lxc-unfreeze is called:
lxc-freeze -n centos


lxc-unfreeze resumes all processes in the container:
lxc-unfreeze -n centos


lxc is an interesting technology, but at the moment it is not ready for production use. Isolation is clearly insufficient: top inside the container shows all the host's processors and memory, mount displays mount points made outside the container, and setting the time from inside changes it outside as well. In addition, there is no disk-space quota and no hard limit on processor usage. Work on quotas is currently under way, so I hope that in the near future it will no longer be necessary to patch the kernel in order to create containers.

Source: https://habr.com/ru/post/74808/

