
Engineer's opinion: Why you shouldn't use Docker always and everywhere



On our blog we write a lot about the development of the 1cloud cloud service and about promising technologies such as Docker. Containers have become a real hit over the past year, but that popularity has a downside. In a post on his blog, engineer Nick Barrett asks why Docker containers are now being used even for tasks the tool is not suited to.

Barrett says he loves Docker: by his own account, he spent a long time mastering Docker and Kubernetes. Combined with stateless containers, they provide fantastic scalability, straightforward service rollout, and almost instant application deployment (aside from building the initial image).
But today Docker containers are being used for literally everything, and this, Barrett admits, baffles him.

To illustrate the point, the engineer considers running the Docker image store itself, Docker Registry (v2), as a service.

This is a one-off box that has no place in a Kubernetes cluster, and in this case the engineer has no interest in Docker's scaling capabilities either. So the whole thing could just as well run directly on bare metal.

The trouble is that the installation documentation says nothing about how to do this: in essence, the "official" way is to use the Docker image. Fortunately, a Dockerfile is little more than a restricted shell script, so you can follow the chain docker / distribution -> Registry Image -> Dockerfile and reconstruct the steps by hand. Barrett arrived at this experimentally.
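The end of that chain is a binary plus a config file, which can run on the host with no Docker at all. A minimal sketch of such a registry config (paths and values here are illustrative, not taken from the article):

```yaml
# Roughly the minimal config the official registry image ships with.
# After building the `registry` binary from github.com/docker/distribution,
# run it directly on the host:
#   registry serve /etc/docker/registry/config.yml
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
```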

There are other cases where working outside Docker makes sense, namely datastores (stateful storage). Say you need to run an Elasticsearch or Galera cluster. Docker containers promise a quick installation, which looks quite tempting.

But don't rush. How do you configure these services for multiple environments (test and prod clusters)? They do not read environment variables, and they know nothing about whatever internal service-discovery tooling you use. Systems of this kind have their own configs, be it elasticsearch.yml or my.cnf. In this and similar cases, Barrett says, a Dockerfile is hopelessly useless.
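To make the problem concrete: since the service won't read environment variables itself, something has to translate per-environment values into the config file format it does read. A minimal sketch in Python (variable names and defaults are illustrative):

```python
from string import Template

# The service reads elasticsearch.yml, not environment variables,
# so per-environment values must be rendered into the file upfront.
TEMPLATE = Template("""\
cluster.name: $cluster_name
network.host: $bind_host
""")


def render_config(env: dict) -> str:
    """Substitute per-environment values (test vs prod) into the template."""
    return TEMPLATE.substitute(
        cluster_name=env.get("ES_CLUSTER_NAME", "es-test"),
        bind_host=env.get("ES_BIND_HOST", "127.0.0.1"),
    )


if __name__ == "__main__":
    # The same template, rendered for a hypothetical prod cluster:
    print(render_config({"ES_CLUSTER_NAME": "es-prod", "ES_BIND_HOST": "0.0.0.0"}))
```

The question Barrett raises is where this rendering step should live, not how to write it.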

The engineer believes the most common workaround is to install extra utilities inside the image that fetch the configuration before the service starts. In his view, this violates the very idea of containers as minimal, single-purpose environments. Tools like pyinfra and Ansible are far better suited to this kind of work: they don't drag unnecessary junk into the image just to generate a config.
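With a tool like Ansible, the config is rendered on the host by the provisioning system, so nothing config-related needs to be baked into the image. A hypothetical task along these lines (file paths and handler name are illustrative):

```yaml
# Render the environment-specific config on the host itself,
# instead of shipping config-generation tooling inside the image.
- name: Deploy elasticsearch.yml for this environment
  template:
    src: templates/elasticsearch.yml.j2
    dest: /etc/elasticsearch/elasticsearch.yml
  notify: restart elasticsearch
```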

At the same time, easily spun-up throwaway instances of Elasticsearch, Galera, and the like are genuinely useful during product development. The ability to instantly launch a single Elasticsearch instance tied to a specific application branch saves a lot of time, and for stateless applications Docker is arguably the best deployment tool ever created. So Barrett's advice is simply not to bother building clusters or complex third-party software out of Docker containers.
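That throwaway, branch-linked use case is exactly where a container shines. A hypothetical docker-compose.yml for a disposable single-node Elasticsearch (image tag and port are illustrative):

```yaml
# One-command, disposable Elasticsearch for a feature branch;
# discovery.type=single-node skips cluster bootstrapping entirely.
services:
  elasticsearch:
    image: elasticsearch:7.17.9
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
```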

Source: https://habr.com/ru/post/275601/

