
Linux containers at home: why and how




Motivation


At the mention of the phrase "container virtualization", many people immediately think of Virtuozzo and OpenVZ, as well as Docker. All of this is associated, first of all, with hosting, VPS and similar things.

At home, on personal computers, many people use virtual machines: mostly, perhaps, VirtualBox. As a rule, in order to have Windows at hand while working under Linux, or vice versa. However, when what you need is a number of closely related Linux systems, I began to notice that using virtual machines is, to put it mildly, irrational.

First of all, disk space is consumed very quickly. Each virtual machine needs its own space, even if several of them differ only in their configs. This is especially critical on a laptop with a small SSD. In principle, VirtualBox can work with raw devices and, in theory, machines could be given writable LVM snapshots, but then there are issues with resizing the file system later, and with automating cloning, moving, deletion and the like.
Secondly, there is the greater consumption of RAM. Thirdly, the not-so-convenient interaction tools...

Hence the idea to try container virtualization at home. I dismissed OpenVZ immediately because of the need to mess with a custom kernel. The choice fell on LXC, shipped in the stable Debian repository.

So what are containers, and how does all this differ from virtualization? With containers, no virtual hardware environment is created; instead, an isolated process space and network stack are used. Roughly speaking, you get a chroot with advanced features.
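This isolation is visible directly in /proc: every process belongs to a set of kernel namespaces, and a containerized process simply gets fresh ones. A small illustrative check, using nothing LXC-specific:

```shell
# Each namespace a process belongs to is exposed as a symlink under
# /proc/<pid>/ns/; two processes in the same namespace see the same id,
# while a container's processes get different ids from the host's.
readlink /proc/self/ns/pid   # e.g. pid:[4026531836]
readlink /proc/self/ns/net   # e.g. net:[4026531992]
```

The exact inode numbers vary from system to system; what matters is that processes inside a container report different ones from the host.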

Why do you need it:

- Building software without the urge to clutter the main working system with assorted *-dev packages.
- The need for another distribution to run specific programs or, again, to build them.
- Isolating potentially unsafe software, like the same Skype, which performs various unclear actions in the user's home directory, and all sorts of dubious web technologies: the vulnerabilities in Flash, in Java, in PDF handlers are just what floats on the surface.
- Anonymity. It takes one slip to stay logged in to your favorite social network, forget to clean up cookies, or be unfamiliar with yet another new web technology like WebRTC. You can, of course, keep several browser profiles, but that will not protect you from the holes and technologies listed above.

So, consider the pros and cons of LXC:

+ Runs on the vanilla kernel
+ Easy forwarding of host devices and directories, since it all works through cgroups
+ Very undemanding of resources, unlike virtual machines such as VirtualBox or qemu

- The containers run on the same kernel as the host, although this is rather a property of container virtualization as a whole.
- Somewhat unpolished bundled utilities.

Deploy and configure the container



First of all, we install the lxc package and all the necessary utilities:
sudo apt-get install lxc bridge-utils 


We look at the available LVM volume groups:
$ sudo vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  nethack-vg   1   6   0 wz--n- 119,00g 7,36g


Now we create the container:
sudo lxc-create -t debian -B lvm --vgname nethack-vg --fssize 2G -n deb_test




Here we specify LVM as the storage backend, the volume group (in my case, nethack-vg) and a size of 2 gigabytes; otherwise a one-gigabyte volume would be created by default. Although, if it ever gets cramped, you can always lvresize.

We look:



$ sudo lvs
  LV       VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  deb_test nethack-vg -wi-ao----   2,00g
  home     nethack-vg -wi-ao----  93,09g
  root     nethack-vg -wi-ao----   8,38g
  tmp      nethack-vg -wi-ao---- 380,00m
  var      nethack-vg -wi-ao----   2,79g
  vm       nethack-vg -wi-ao----   5,00g


We see that we have a deb_test volume.

Typical configuration created by the script:
/var/lib/lxc/deb_test/config

# Template used to create this container: /usr/share/lxc/templates/lxc-debian
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)
lxc.rootfs = /dev/nethack-vg/deb_test

# Common configuration
lxc.include = /usr/share/lxc/config/debian.common.conf

# Container specific configuration
lxc.mount = /var/lib/lxc/deb_test/fstab
lxc.utsname = deb_test
lxc.arch = amd64
lxc.autodev = 1
lxc.kmsg = 0




We start:
 sudo lxc-start -n deb_test 



Log in with the password set during creation. To run the container in headless mode, use the -d switch; a root console can then be obtained with the command

 sudo lxc-attach -n deb_test 


So far we have neither a network nor the programs needed for work. To fix the former, we raise a bridge on the host, assign it an IP, masquerade traffic from the virtual subnet, and destroy the bridge when the interface goes down.

On the host, write in /etc/network/interfaces:
auto lo br0

iface br0 inet static
    address 172.20.0.1
    netmask 255.255.255.0
    pre-up /sbin/brctl addbr br0
    post-up /sbin/brctl setfd br0 0
    post-up iptables -t nat -A POSTROUTING -s 172.20.0.0/24 -j MASQUERADE
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    pre-down /sbin/brctl delbr br0


In the configuration of the container we add:
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 00:01:02:03:04:05


The MAC address is, of course, arbitrary: pick any to your taste.
To get a working network and the ability to install packages with apt right away, we add
lxc.network.ipv4 = 172.20.0.3
lxc.network.ipv4.gateway = 172.20.0.1

And execute
 echo "nameserver 192.168.18.1">/etc/resolv.conf 


Naturally, 192.168.18.1 is the IP of my DNS server.

Install the necessary packages:
 #apt-get install vim openvpn zsh iftop 


Further, either from the host or from another working container, you can fetch the list of installed packages and install them all in our new container:
scp user@172.20.0.2:/etc/apt/sources.list /etc/apt/
scp -r user@172.20.0.2:/etc/apt/sources.list.d /etc/apt/
apt-get update
ssh user@172.20.0.2 'dpkg --get-selections|grep -v deinstall'|dpkg --set-selections
apt-get dselect-upgrade
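The key step in that pipeline is the grep: dpkg --get-selections prints one "package<TAB>state" pair per line, and dropping the deinstall entries leaves only the packages to be reinstalled. A stand-alone illustration with fake data (the package names here are just placeholders for real dpkg output):

```shell
# Simulated `dpkg --get-selections` output; the real thing has the same
# two-column "package<TAB>state" shape. grep -v removes removed packages.
printf 'vim\tinstall\nskype\tdeinstall\nzsh\tinstall\n' | grep -v deinstall
```

Only the vim and zsh lines survive, which is exactly what dpkg --set-selections then consumes on the new container.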


Now you can configure the network interface in the container with your favorite text editor:

/etc/network/interfaces:
auto lo eth0

iface lo inet loopback

iface eth0 inet static
    address 172.20.0.3
    netmask 255.255.255.0
    gateway 172.20.0.1
    dns-nameservers 192.168.18.1


However, this could also be done from the host system, for example by mounting the logical volume. There are many ways.

In principle, you can specify any public DNS, if you do not fear for your privacy. For example, Google's 8.8.8.8 and 8.8.4.4.

As for access to the host system's devices, I stick to the policy of "everything not explicitly allowed is forbidden". To that end, add the following line to the config:
 lxc.cgroup.devices.deny = a 


And delete the line:
 lxc.include = /usr/share/lxc/config/debian.common.conf 


Let's try to connect via OpenVPN. We immediately get an error:
Thu Oct 15 16:39:33 2015 ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
Thu Oct 15 16:39:33 2015 Exiting due to fatal error




The system reports that the TUN/TAP device is unavailable because it does not exist. Obviously, we need to allow the guest system to use the host device. Open the container configuration file, /var/lib/lxc/deb_test/config, and add this line:
 lxc.cgroup.devices.allow = c 10:200 rwm 


In the container we execute:
 root@deb_test:/# mkdir /dev/net; mknod /dev/net/tun c 10 200 





Pay attention to 10:200: these are the device's major and minor numbers. If we run on the host:
$ ls -l /dev/net/tun
crw-rw-rw- 1 root root 10, 200  13 10:30 /dev/net/tun


we will see the identifiers 10, 200. These are what we go by when allowing access to a device, for example the camera, video0:

 lxc.cgroup.devices.allow = c 81:* rwm 
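If eyeballing ls output gets tedious, the same numbers can be read with stat; a small runnable sketch (GNU coreutils assumed; %t and %T print the major and minor in hex, hence the conversion to decimal):

```shell
# Print the major:minor pair of a device node in decimal.
dev=/dev/null
maj=$(stat -c '%t' "$dev")   # major number, in hex
min=$(stat -c '%T' "$dev")   # minor number, in hex
printf '%d:%d\n' "0x$maj" "0x$min"   # for /dev/null: 1:3
```

The printed pair is exactly what goes into an lxc.cgroup.devices.allow line.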


In the same way we add other necessary devices:

# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rm
# usb passthrough
lxc.cgroup.devices.allow = c 189:* rwm
# video
lxc.cgroup.devices.allow = c 81:* rwm
# sound
lxc.cgroup.devices.allow = c 116:* rwm
lxc.cgroup.devices.allow = c 14:* rwm
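The major numbers used in such an allow-list can be cross-checked against the kernel's own registry of character devices, /proc/devices. For instance:

```shell
# Show the first few registered character-device majors; the entry
# "1 mem" confirms that major 1 covers /dev/null, /dev/zero, /dev/urandom.
awk '/^Character devices:/{f=1} f' /proc/devices | head -5
```

The same file lists block-device majors further down, should you ever want to pass a block device through.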


For X applications to work and to be forwarded over ssh, you must add a mount point for the X socket:
 lxc.mount.entry = /tmp/.X11-unix/X0 tmp/.X11-unix/X0 none bind,optional,create=file 


By analogy, you can mount other needed directories and files:
lxc.mount.entry = /home/user/.vim home/user/.vim none bind,optional,create=dir 0 0
lxc.mount.entry = /home/user/.vimrc home/user/.vimrc none bind,optional,create=file 0 0


To get sound, you can allow access to the sound device, provided the card is multi-stream (with a single-stream dmix setup, locking problems arise):
lxc.cgroup.devices.allow = c 116:* rwm
lxc.cgroup.devices.allow = c 14:* rwm
lxc.mount.entry = /dev/snd dev/snd none bind,optional,create=dir 0 0


Alternatively, you can set up pulseaudio to play audio over the network, as described here. Briefly:

Edit /etc/pulse/default.pa on the host by adding there:
 load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1;172.20.0.3 auth-anonymous=1 


As a result, we get this config:
/var/lib/lxc/deb_test/config

lxc.rootfs = /dev/nethack-vg/deb_test
lxc.mount = /var/lib/lxc/deb_test/fstab
lxc.utsname = deb_test
lxc.arch = amd64
lxc.autodev = 1
lxc.kmsg = 0
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 00:01:02:03:04:05
lxc.network.ipv4 = 172.20.0.3
lxc.network.ipv4.gateway = 172.20.0.1

# deny access to all devices
lxc.cgroup.devices.deny = a

# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rm
# sound
lxc.cgroup.devices.allow = c 116:* rwm
lxc.mount.entry = /dev/snd dev/snd none bind,optional,create=dir 0 0
# tun/tap adapters
lxc.cgroup.devices.allow = c 10:200 rwm
# video0
lxc.cgroup.devices.allow = c 81:* rwm
lxc.mount.entry = /dev/video0 dev/video0 none bind,optional,create=file
lxc.mount.entry = /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry = /tmp/.X11-unix/X0 tmp/.X11-unix/X0 none bind,optional,create=file




The container is ready to use.

Usage



Install, for example, i2p with Tor, if you have not done so before, and configure privoxy right away:
wget -q https://geti2p.net/_static/i2p-debian-repo.key.asc -O- | sudo apt-key add -
echo "deb http://deb.i2p2.no/ jessie main" >/etc/apt/sources.list.d/i2p.list
echo "deb-src http://deb.i2p2.no/ jessie main" >>/etc/apt/sources.list.d/i2p.list
apt-get update
apt-get install privoxy i2p tor

/etc/privoxy/config

user-manual /usr/share/doc/privoxy/user-manual
confdir /etc/privoxy
logdir /var/log/privoxy
actionsfile user.action  # User customizations
filterfile default.filter
filterfile user.filter   # User customizations
logfile logfile
listen-address localhost:8118
toggle 1
enable-remote-toggle 1
enable-remote-http-toggle 1
enable-edit-actions 1
enforce-blocks 0
buffer-limit 4096
enable-proxy-authentication-forwarding 0
forwarded-connect-retries 0
accept-intercepted-requests 0
allow-cgi-request-crunching 0
split-large-forms 0
keep-alive-timeout 5
tolerate-pipelining 1
socket-timeout 300
forward .i2p localhost:4444
forward-socks5 .onion localhost:9050 .




It is most convenient to launch graphical applications, such as a browser, via ssh:
 ssh -Y 172.20.0.2 "PULSE_SERVER=172.20.0.1 http_proxy=127.0.0.1:8118 chromium" 
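To avoid retyping the environment variables every time, a tiny wrapper can help. The function name run_in_container is my own invention, and the echo makes this a dry run that only prints the command; drop the echo to actually execute it:

```shell
# Build (and here just print) the ssh command line for launching a GUI
# app inside the container with networked sound and the proxy wired up.
run_in_container() {
    ip="$1"; shift
    echo ssh -Y "$ip" "PULSE_SERVER=172.20.0.1 http_proxy=127.0.0.1:8118 $*"
}
run_in_container 172.20.0.2 chromium
```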





Also, of course, LXC provides tools for cloning containers and taking snapshots.

So, for example, you can clone a container whose file system will be an LVM snapshot with the command:
 sudo lxc-clone -s -H -o deb_test -L 200M --new deb_test2 




This creates a deb_test2 container with its file system hosted on a 200MB LVM snapshot (for storing the diffs). It will be an exact copy of deb_test, on which you can run a couple of experiments and then, for example, painlessly delete it.

But lxc-snapshot with LVM storage, for some reason, does not work in version lxc-1.0.6:
$ sudo lxc-snapshot -n deb_test
lxc_container: deb_test's backing store cannot be backed up.
lxc_container: Your container must use another backing store type.
lxc_container: Error creating a snapshot


The problem is described and discussed here. So for now, snapshots have to be taken the old-fashioned way:
 sudo lvcreate -L100M -s -n deb_test_before_rm_rf -pr /dev/nethack-vg/deb_test 


Here we created a read-only snapshot named deb_test_before_rm_rf with a size of 100MB. What can be done with it next? For example, you can dump it with dd, transfer it to another machine together with the container configs, create a volume of the required size there and pour it back with the same dd (cp, cat, etc.): a kind of "live migration".
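The dd round-trip itself can be sketched with ordinary files standing in for the LVM volumes (the mktemp paths are purely for illustration; in reality if= would be the snapshot device, of= the target volume, with ssh in between):

```shell
# Dump a source "volume" and restore it elsewhere byte-for-byte.
src=$(mktemp); dst=$(mktemp)
echo "container data" > "$src"
dd if="$src" of="$dst" bs=4M status=none
cmp -s "$src" "$dst" && echo identical
```

Since dd copies raw bytes, the target volume only needs to be at least as large as the source; the file system inside can be grown afterwards with resize2fs.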

As stated above, one can find a host of uses for containers. But the main one at home, in my opinion, is application isolation.

That's all for now.

Source: https://habr.com/ru/post/269423/

