
We use Docker and don't worry about vendor lock-in

Docker has significantly changed the approach to setting up servers and to supporting and delivering applications. Developers are starting to ask whether their application architecture can be split into smaller components running in isolated containers, which can improve speed, parallelism and reliability. Docker also solves the important problem of cloud vendor lock-in and makes it easy to migrate configured applications between your own servers and clouds. All that a server needs to run Docker is a reasonably modern Linux OS with kernel 3.8 or higher.

In this article we will talk about how easy it is to use Docker and what advantages it gives to system administrators and developers. Forget about problems with dependencies, run software that requires different Linux distributions on a single server, and do not be afraid of "polluting" the system with careless actions. And share your work with the community. Docker solves many pressing problems and helps make IaaS much more like PaaS, without vendor lock-in.

InfoboxCloud Docker
On the cloud VPS from Infobox, we have prepared a ready-made Ubuntu 14.04 image with Docker. Get a free trial (the "Test 10 days" button) and start using Docker right now! Do not forget to tick the "Allow OS kernel management" checkbox when creating the server; this is required for Docker to work. Other operating systems with Docker inside are coming very soon.

Under the cut, you will find out that Docker so inspired the author of this article that within a couple of days he moved the cloud servers that automate parts of his development process into Docker containers.

What is Docker?


Docker is an open-source engine that automates the deployment of applications into lightweight, portable, self-contained containers that can be moved between servers without changes.

The same container that the developer creates and tests on a laptop can be easily transferred to the production server in the cloud and just as easily migrated to another region if necessary.

Basic Docker Uses:

Fifteen years ago, almost all applications were developed on well-known technology stacks and deployed to a single, monolithic, proprietary server. Today, developers create and distribute applications using the best available services and technologies and must prepare applications for deployment to various places: to physical servers, to public and private clouds. The criteria for choosing a cloud are quality of service, security, reliability and availability, while vendor lock-in is becoming a thing of the past.

You can draw a good analogy from freight shipping. Until the 1960s, most goods were transported as mixed cargo. Carriers had to worry about the impact of one type of cargo on another (for example, if anvils were suddenly placed on top of bags of bananas). Changing the mode of transport, say from train to ship, was also an ordeal for the cargo. Loading, unloading and reloading took up to half of the travel time, and damage to the cargo caused heavy losses along the way.

The solution was the standard shipping container. Any type of cargo (from tomatoes to cars) could now be packed into containers, which were not opened until the end of the trip. It was easy to arrange containers efficiently on transport and, when necessary, to transfer them with automated cranes without unloading the containers themselves. Containers changed the world of freight. Today, 18 million standard containers in circulation account for 90% of world trade.


Containers for sea freight in the port of Qingdao, China.

Docker can be thought of as exactly such a container for computer code. Virtually any application can be packaged in a lightweight container, which makes automation possible. Such containers are designed to run on virtually any Linux server (with kernel 3.8 or higher).

In other words, developers can package their applications once and be sure that the application runs in exactly the configuration they tested. The work of system administrators is also greatly simplified: there is less and less need to worry about supporting the software environment.

Docker components


Client and server

Docker is a client-server application. Clients talk to the server (a daemon), which does all the actual work. The Docker command-line utility and a RESTful API can be used to manage Docker. You can run the client and server on the same host, or connect to a remote Docker server.
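As a small sketch of the client-server model, the daemon can be queried directly over its Unix socket; `/version` is a real endpoint of Docker's remote API, but note the assumptions: `curl` must be version 7.40 or newer for `--unix-socket`, and your user needs read access to the Docker socket.

```shell
# Ask the Docker daemon for its version over the REST API
# (assumes curl >= 7.40 and access to /var/run/docker.sock)
curl --unix-socket /var/run/docker.sock http://localhost/version

# The CLI talks to the same daemon through the same API:
docker version
```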

Docker Images

The user launches containers from images, which are the product of the container build process. Images use AUFS to transparently stack file systems. The container boots from bootfs, which is then unmounted to free memory. On top of it sits rootfs (from Debian, Ubuntu, etc.); in Docker, rootfs is mounted read-only. When a container is launched from an image, a writable file system layer is mounted on top of the read-only layers below.



Registries

Docker stores the images you create in registries. There are two types of registries: public and private. The official registry is called Docker Hub. By creating an account there, you can save your images and share them with other users.

The Docker Hub has more than 10,000 images with various operating systems and software. You can also keep private images on Docker Hub and use them within your organization. Using Docker Hub is optional: you can create your own registries outside the Docker infrastructure (for example, on your corporate cloud servers).
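A minimal sketch of working with the public registry using the standard `search` and `pull` commands; the image tag is just an example:

```shell
# Search Docker Hub for publicly available Ubuntu images
docker search ubuntu

# Download a specific tagged image from the registry to the local host
docker pull ubuntu:14.04

# The pulled image now appears in the local image list
docker images
```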

Containers

Docker helps you create and deploy containers within which you can run your applications and services. Containers run from images.

When Docker starts a container, the write layer is empty. Changes are recorded in this layer: for example, when a file is modified, it is copied into the writable layer (copy-on-write). The read-only copy still exists, but is hidden. When creating a container, Docker thus assembles a stack of read-only image layers and adds a writable layer on top.
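The copy-on-write behaviour can be observed with the standard `docker diff` command, which lists what a container has added (A), changed (C) or deleted (D) relative to its image; the container name below is hypothetical:

```shell
# Create a container that writes one file into its writable layer
docker run --name cow-demo ubuntu /bin/bash -c "echo hello > /demo.txt"

# Show the differences between the container and its image:
# only the file we created shows up in the writable layer
docker diff cow-demo
```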

Create an interactive container


After creating a virtual machine with Docker, you can start creating containers. Basic information about the installation is available via the docker info command.


A complete list of available commands can be obtained with the docker help command.

Let's build a container with Ubuntu.
sudo docker run -i -t ubuntu /bin/bash 

The -i flag leaves STDIN open even when you are not attached to the container. The -t flag allocates a pseudo-TTY for the container. Together they give an interactive interface to the container. We also specify the name of the image (ubuntu, a base image) and the shell /bin/bash.

Let's install the nano into a container.
 apt-get update
 apt-get install -y nano 

You can exit the container with the exit command.

The docker ps command shows a list of all running containers, and docker ps -a lists all, including those that have been stopped.


The list of running containers is empty: when you exited the container, it stopped. In the screenshot above (docker ps -a) you can see the container's name. When you create a container, a name is generated automatically; you can specify a different one when creating the container:
 docker run --name habrahabr -t -i ubuntu 

The container can be accessed not only by ID, but also by name.
Let's run the container:
 docker start stupefied_lovelace 

To connect to the container, you must use the attach command:
 docker attach stupefied_lovelace 

(you may need to press Enter before the prompt appears).

Create a daemonized container


Of course, you can create long-lived containers suitable for running applications and services. Such containers do not have an interactive session.
 docker run --name city -d ubuntu /bin/bash -c "while true; do echo hello world; sleep 1; done" 
where city is the name of the container.
You can see what is happening inside the container with the docker logs <container name> command.
You can stop a container with docker stop <container name>. If the container is then started again with docker start <container name>, the while loop inside it will resume.

You can see the details of a container with the docker inspect <container name> command.
To remove a container, use docker rm <container name>.
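`docker inspect` also accepts a Go-template `--format` flag for extracting a single field from the full JSON description; a short sketch using the `city` container created above:

```shell
# Full JSON description of the container
docker inspect city

# Extract just one field with a Go template, e.g. whether it is running
docker inspect --format '{{.State.Running}}' city

# Remove a stopped container when it is no longer needed
docker rm city
```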

How to get data in and out?


To copy data into a container or out of it, use the command
 docker cp <container name>:<source path> <destination path> 

You can mount a host folder into the container when creating it:
 docker run -v /tmp:/root -t -i <image name> 
where /tmp is the path to the folder on the host and /root is the path to the folder inside the container. This way you can work with the data on the host from inside the container, eliminating the need to copy data in both directions.
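A short sketch of the round trip through a mounted folder (the file name is illustrative):

```shell
# Put a file into the host folder that will be mounted
echo "hello from host" > /tmp/shared.txt

# Mount /tmp from the host as /root inside the container;
# the file is immediately visible there without any copying
docker run -v /tmp:/root -t -i ubuntu /bin/bash -c "cat /root/shared.txt"
```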

We work with images


Let's see a list of all our Docker images with the docker images command.



Changes to an existing container can be committed to an image for future use.
 docker commit <container id> <image name> 

Transferring the image to another host


Finally, the main thing. Let's say you have set up your application in Docker and committed it to an image. Now you can save the image to a file.
 docker save <image name> > ~/transfer.tar 

Copy this image to another host, for example using scp, and import it into Docker.
 docker load < /tmp/transfer.tar 

That's it: you can easily move your applications between hosts, clouds and your own servers. No vendor lock-in. For this alone it is worth using Docker! (If you saved data to a mounted file system, do not forget to transfer it as well.)
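Putting the steps above together, a transfer between two hosts might look like this; the image name `myapp`, the user and the host name are placeholders for illustration:

```shell
# On the source host: dump the image to a tar archive
docker save myapp > ~/transfer.tar

# Copy the archive to the target host
scp ~/transfer.tar user@target-host:/tmp/

# On the target host: load the image back into Docker
ssh user@target-host "docker load < /tmp/transfer.tar"
```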

Install nginx in Docker


As an example, let's install nginx in Docker and configure it to start automatically. Of course, you could simply download a ready-made nginx image for Docker, but here we will see how it is done from scratch.

Create a clean container with Ubuntu 14.04 with ports 80 and 443 open:
 docker run -i -t -p 80:80 -p 443:443 --name nginx ubuntu:trusty 

Add the official repository of the stable version of nginx to /etc/apt/sources.list:
 deb http://nginx.org/packages/ubuntu/ trusty nginx
 deb-src http://nginx.org/packages/ubuntu/ trusty nginx 


Install nginx:
 apt-key update
 apt-get update
 apt-get install nginx 


You can verify that nginx starts by running:
 /etc/init.d/nginx start 

We will see the welcome page by visiting the server's IP on port 80:


The nginx configuration will differ between applications, so it makes sense to save the nginx container as an image named <your login on hub.docker.com>/nginx:
 docker commit nginx trukhinyuri/nginx 

Here we meet Docker Hub for the first time. It is time to create an account in this service and log in with the docker login command.

Now you can share the image with other users or simply keep it for reuse on other hosts. Note that we saved the image in the format <username>/<image name>. An attempt to push an image named differently will fail: for example, if you try to push an image named simply nginx, you will be politely told that only selected users can publish images to the root repository.

Let's push our trukhinyuri/nginx image to Docker Hub for reuse on other servers in the future (here trukhinyuri is the author's repository name):
 docker push trukhinyuri/nginx 

To make nginx start when the host starts, add an upstart initialization script at /etc/init/nginx.conf:
 description "Nginx"
 author "Me"
 start on filesystem and started docker
 stop on runlevel [!2345]
 respawn
 script
     /usr/bin/docker start -a nginx
 end script 

Conclusion


In this article, you tried Docker and saw how easy it is to package an application and move it between hosts. This is only the tip of the iceberg; much remains behind the scenes and will be covered later. For further reading, we recommend The Docker Book.

You can try the image with Docker on cloud VPS from Infobox in Amsterdam by getting a free trial (the "Test 10 days" button).

UPD: Access to the trial version of cloud VPS is temporarily restricted. Ordering is still available. We are testing a new technology to make the service even faster. Follow the news.

If you find an error in the article, the author will gladly correct it. Please write a private message or an email about it.
If you cannot leave comments on Habr, write a comment on the article in the InfoboxCloud Community.

Enjoy using Docker!

Source: https://habr.com/ru/post/237405/

