
Second order virtualization

UPDATE (2016-01-28): these days there is Docker for this.

What do you do when you need a bunch of small and cheap servers for testing different versions of different sites? You could buy a dedicated box and put OpenVZ on it. Although OpenVZ feels a bit too small-time for that much memory. Better put XEN on it. Or KVM. Or even VMware.
And are we going to administer all of that ourselves? - Of course not.

Virtualization, my love


I'll start from afar - it's my first post, after all. More than six years ago I swapped one huge and expensive physical server for several small virtual ones. Back then Masterhost was still great, and it offered virtual servers only under Virtuozzo (as it still does, actually). That's how we met. Time passed, and container technology won me over with its simplicity: the file system lives in an ordinary folder, full backups take one click, stable kernels come from the hoster and, most importantly, there are no hardware headaches. A dream come true for someone who has never assembled a server in his life.
But as soon as you need to step outside the two or three convenient plans your hoster offers, the grief begins. In my case, I needed a lot of small servers created on the fly, running a different version of the site for each branch in the repository. They need a bit of memory, a little disk, and they touch the CPU and network only occasionally. Most importantly, the servers have to be creatable from a script. And ideally, quickly.

And that is where the difficulties start. First, the serious guys with an API (Linode, Amazon) offer XEN starting at 20 bucks for a server stuffed with things I don't need. Not an option. Second, cheap hosters constantly lag and fall over, even if you do manage to automate server creation with them. Also not an option. All of which is perfectly logical: expensive hosters are expensive precisely because they can do everything and run stably.

So what if we could get cheap containers on an expensive hoster? Starting with Ubuntu 12.04 this is as easy as pie, thanks to Linux Containers, which have lived in the vanilla kernel for several years now. On a Linode server it comes down to a couple of apt-gets, a small patch, a reboot and 300 MB of disk space.

Vanilla kernel


Let's start:
 sudo apt-get update && sudo apt-get upgrade 

Install the LXC scripts:
 sudo apt-get install lxc 

And check for Linux Containers kernel support:
 sudo lxc-checkconfig 

The kernel Linode ships by default supports containers only very partially. Install the vanilla kernel from Ubuntu:
 sudo apt-get install linux-virtual 

The -virtual flavor, because we need XEN support.

Moving on according to the PV-GRUB instructions:
 sudo apt-get install grub-legacy-ec2 

When the GRUB installer asked where to install itself, I chose xvda.

Next we patch `/boot/grub/menu.lst` (towards the end of the file):
 - # defoptions=console=hvc0
 + # defoptions=console=hvc0 rootflags=nobarrier

and update the GRUB config:
 sudo update-grub-legacy-ec2 
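If you are scripting the whole setup, the same edit can be applied non-interactively. A minimal sketch, assuming menu.lst contains the commented defoptions line exactly as in the diff above:

# patch menu.lst and regenerate the GRUB config
sudo sed -i 's|^# defoptions=console=hvc0$|# defoptions=console=hvc0 rootflags=nobarrier|' /boot/grub/menu.lst
sudo update-grub-legacy-ec2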


That's it, the Linode is ready to boot the vanilla kernel. All that remains is to ask it to boot according to the GRUB config. To do this, go to the virtual server's profile editor, check the kernel bitness (mine is Latest 32 bit) and select the matching pv-grub-* (pv-grub-x86_32 in my case). Then save the profile and restart the server.
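After the reboot it's worth checking that the vanilla kernel actually came up. The exact version string will vary; on 12.04 I would expect something along these lines:

uname -r
# something like 3.2.0-NN-virtual rather than the Linode-supplied kernel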

Container


Now you can check the LXC support in the kernel:
 sudo lxc-checkconfig 

and enjoy seeing everything in green.

A new network interface will appear in the system:
 ifconfig | grep lxc 

which bridges all the containers together and lets them out onto the Internet.

Finally, create the first container:
 sudo lxc-create -t ubuntu -n demo1 

The first time, this process takes about five minutes, since lxc-create assembles a fresh Ubuntu in /var/cache/lxc/ and then copies it into /var/lib/lxc/demo1/rootfs/ , where our new container will live.
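Since later on we will be proxying to the container by its internal IP, it helps if that IP is predictable. One way is to pin it in /var/lib/lxc/demo1/config. A sketch, not exactly what the ubuntu template generates; the bridge name and the 10.3.0.2/24 address are only the example used below and must match your bridge's subnet:

# network section of /var/lib/lxc/demo1/config (roughly)
lxc.network.type = veth
lxc.network.link = lxcbr0        # the bridge created by the lxc package
lxc.network.flags = up
lxc.network.ipv4 = 10.3.0.2/24   # static address instead of relying on DHCP

If you pin the address here, you may also want to switch the interface to manual in the container's own /etc/network/interfaces so that DHCP doesn't fight with it.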

You can launch the container and drop into its console like this (login and password are both ubuntu ):
 sudo lxc-start -n demo1 

You are supposed to exit by pressing Ctrl+A Q . From lxc-start it doesn't work that way, but you can simply close the terminal window. Later on you can open the container's console like this (same login and password):
 sudo lxc-console -n demo1 

It remains only to make the container start at boot:
 sudo ln -s /var/lib/lxc/demo1/config /etc/lxc/auto/demo1.conf 
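And since the whole point was to create these servers from a script, here is a rough sketch of a "container per branch" helper. The naming scheme and the branch argument are just an example, not a fixed convention:

#!/bin/sh
# sketch: create, register and start a container for one branch
BRANCH="$1"
NAME="demo-$BRANCH"

sudo lxc-create -t ubuntu -n "$NAME"                                # slow only the first time, the template is cached
sudo ln -s "/var/lib/lxc/$NAME/config" "/etc/lxc/auto/$NAME.conf"   # start at boot
sudo lxc-start -n "$NAME" -d                                        # -d: run in the background, no console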


A window into the container


Now we need to somehow reach the site inside the container. Since it has no public IP, we'll have to get creative.
We could fiddle with iptables, since the containers live on the same machine anyway. Or disable network virtualization altogether, so that all containers share the virtual machine's network interface. But today I'm not ready to smoke manuals that strong. Instead we'll do the same thing as with any other private cluster: proxy HTTP requests to the internal nodes. So far I only have experience proxying with nginx, so that's what I'll describe. But if your site uses WebSockets, you'll have to proxy with HAProxy, since nginx is only planning to add support for them in early 2013.

So, the config for nginx:
user nobody nogroup;
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    keepalive_timeout 90;
    access_log off;

    server {
        server_name demo.example.com;

        location / {
            proxy_pass http://10.3.0.2;
            proxy_redirect http://localhost/ http://$host:$server_port/;
            proxy_buffering off;
        }
    }
}

Everything is simple: nginx receives a request on port 80 of the virtual machine and forwards it to port 80 of the container at 10.3.0.2. We don't buffer the response, because otherwise nginx likes to write pictures to disk first and only then send them out to the network. The server section has to be repeated for every site/container pair, changing server_name and proxy_pass to match your situation. I'm sure everyone can write a script for that; a rough sketch follows.
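One possible version of such a script, purely as a sketch: it assumes you add include /etc/nginx/conf.d/*.conf; to the http block above, and the demoN.example.com naming is just an example.

#!/bin/sh
# sketch: generate an nginx server block for one site/container pair
NAME="$1"   # e.g. demo1
IP="$2"     # the container's internal address, e.g. 10.3.0.2

cat <<EOF | sudo tee "/etc/nginx/conf.d/$NAME.conf" > /dev/null
server {
    server_name $NAME.example.com;

    location / {
        proxy_pass http://$IP;
        proxy_redirect http://localhost/ http://\$host:\$server_port/;
        proxy_buffering off;
    }
}
EOF

sudo nginx -s reload   # pick up the new server block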

Performance


On the one hand, what is there to talk about? This is a server for demos and developer experiments. On the other hand, containers are good precisely because they don't add heavy abstractions. Everyone gets the CPU in its raw form, and memory too. A little is probably spent on the filesystem isolation logic, and even that only when opening a file. The network, yes, is virtual, but only at the IP level; nobody emulates a network card. In the comments inkvizitor68sl mentions a 1-2% loss. I believe that for an ordinary site it would be even less. Of course, if you turn on quotas, a lot changes. But accounting for and limiting resource consumption always costs something. That's why my containers aren't limited in anything; they're simply isolated, so that it's more convenient to deploy a bunch of small sites from a script. That's all.
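Purely for illustration, since I deliberately leave my containers unlimited: if you did want quotas, LXC exposes the cgroup knobs right in the container config. A sketch; the values are arbitrary, and on some kernels the memory controller has to be enabled with extra boot parameters first:

# in /var/lib/lxc/demo1/config
lxc.cgroup.memory.limit_in_bytes = 256M   # hard memory cap
lxc.cgroup.cpu.shares = 512               # relative CPU weight (default 1024)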

Deep conclusions


Cool!

Oh yes. Solaris and FreeBSD have been able to do this out of the box for ages. And the same thing can be done on OpenVZ (even inside LXC).

Source: https://habr.com/ru/post/162105/

