
How to link Docker containers without forcing the application to read environment variables

Docker, in case anyone has managed not to hear about it yet, is an open-source framework for managing container virtualization. It is fast, convenient, well thought out and fashionable. In effect, it changes the rules of the game in the noble craft of managing server configuration, building applications, running server code, managing dependencies, and much more.

The architecture that Docker encourages is one of isolated containers, each of which runs a single command. These containers only need to know how to find each other; in other words, all you need to know about a container is its FQDN and port, or its IP and port, that is, nothing more than you would know about any external service.

The recommended way to communicate such coordinates to the process running inside Docker is through environment variables. A typical example of this approach, not specific to Docker, is DATABASE_URL, adopted in the Rails framework, or NODE_ENV, adopted in Node.js.
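For illustration, such a variable packs all of the database's coordinates into a single string; the value below is made up, but its shape is typical:

 DATABASE_URL=postgres://user:secret@db.example.com:5432/myapp_production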
So environment variables let the application inside the container find its database conveniently and easily. But for that, the person writing the application has to know about them. And although configuring an application through environment variables is good and proper, sometimes applications are written poorly, yet they still need to be run somehow.

Docker, environment variables and links


Docker comes to our rescue when we want to link two containers, giving us the Docker links mechanism. You can read more about it in the manual on the Docker site itself, but in brief it looks like this:

  1. Give the container a name at startup: docker run -d --name db training/postgres . Now we can refer to this container by the name db .
  2. Start the second container, linking it to the first: docker run -d -P --name web --link db:db training/webapp python app.py . The most interesting part of this line is --link name:alias . name is the name of the container, alias is the name under which this container will be known to the container being launched.
  3. This has two consequences: first, a set of environment variables pointing at the db container will appear in the web container; second, an alias db pointing at the IP on which we launched the database container will appear in the web container's /etc/hosts . The set of environment variables available in the web container is:


 DB_NAME=/web/db
 DB_PORT=tcp://172.17.0.5:5432
 DB_PORT_5432_TCP=tcp://172.17.0.5:5432
 DB_PORT_5432_TCP_PROTO=tcp
 DB_PORT_5432_TCP_PORT=5432
 DB_PORT_5432_TCP_ADDR=172.17.0.5
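If you want to see these variables with your own eyes, a quick sketch (assuming the db container from step 1 is running) is to start a disposable linked container that simply prints its environment:

 $ docker run --rm --link db:db training/webapp env | grep DB_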


And if the application is desperately unprepared to read such variables, the console utility socat comes to the rescue.

socat


socat is a Unix utility for port forwarding. The idea is that with its help we will create the illusion inside the container that, for example, the database is running in the same container, on the same host and on its standard port, just as it does on the developer's machine. socat, like all low-level Unix tools, is very lightweight and does not burden the container's main process.
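To make this concrete, here is roughly what a single such forwarding looks like when done by hand, using the database container's address from the example above (the script below will generate calls of exactly this shape):

 # listen on local port 5432 and relay each connection to the linked container
 socat TCP4-LISTEN:5432,fork,reuseaddr TCP4:172.17.0.5:5432 &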

Let's take a closer look at the environment variables that the link mechanism injects into the container. We are particularly interested in one of them: DB_PORT_5432_TCP=tcp://172.17.0.5:5432 . This variable contains all the data we need: the port to listen on at localhost (the 5432 in DB_PORT_5432_TCP ) and the coordinates of the database itself (172.17.0.5:5432).

A variable like this is injected into the container for every link passed in: a database, Redis, an auxiliary service.

We will write a script that wraps any command as follows: it scans the list of environment variables looking for the ones we are interested in, launches a socat for each of them, then runs the command it was given and hands over control. When the script terminates, it must shut down all the socat processes.

Script


The standard set -e header instructs the shell to abort the script at the first error, that is, to behave the way a programmer usually expects.

 #!/bin/bash
 set -e

Since we will be spawning additional socat processes, we need to keep track of them so that we can later terminate them and wait for them to finish.

 store_pid() {
   pids=("${pids[@]}" "$1")
 }

Now we can write a function that spawns child processes and remembers their PIDs.

 start_command() {
   echo "Running $1"
   bash -c "$1" &
   pid="$!"
   store_pid "$pid"
 }

 start_commands() {
   while read cmd; do
     start_command "$cmd"
   done
 }

The core of the idea is to extract (port, host, port) triples from the environment variables ending in _TCP and turn them into a set of socat launch commands.

 to_link_tuple() {
   sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/\1,\2,\3/'
 }

 to_socat_call() {
   sed 's/\(.*\),\(.*\),\(.*\)/socat -ls TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3/'
 }

 env | grep '_TCP=' | to_link_tuple | sort | uniq | to_socat_call | start_commands

env prints the list of environment variables, grep keeps only the ones we need, to_link_tuple pulls out the triples, sort | uniq prevents launching two socat processes for the same service, and to_socat_call builds the command we need.
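To check what the pipeline produces, you can feed the example variable through the two sed filters by hand (a quick sanity check, assuming the functions above are already defined in the current shell):

 $ echo 'DB_PORT_5432_TCP=tcp://172.17.0.5:5432' | to_link_tuple | to_socat_call
 socat -ls TCP4-LISTEN:5432,fork,reuseaddr TCP4:172.17.0.5:5432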

We also want the socat child processes to be terminated when the main process finishes. We will do this by sending them SIGTERM.

 onexit() {
   echo Exiting
   echo sending SIGTERM to all processes
   kill ${pids[*]} &>/dev/null
 }

 trap onexit EXIT

We start the main process with the exec command. Control is then handed over to it: we see its STDOUT, and it receives STDIN and signals as well.

 exec "$@" 

The whole script can be viewed in one piece.
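For convenience, here it is assembled from the fragments above:

 #!/bin/bash
 set -e

 # remember the PIDs of spawned background processes
 store_pid() {
   pids=("${pids[@]}" "$1")
 }

 start_command() {
   echo "Running $1"
   bash -c "$1" &
   pid="$!"
   store_pid "$pid"
 }

 start_commands() {
   while read cmd; do
     start_command "$cmd"
   done
 }

 # turn FOO_PORT_1234_TCP=tcp://host:port into "1234,host,port"
 to_link_tuple() {
   sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/\1,\2,\3/'
 }

 # turn "1234,host,port" into a socat forwarding command
 to_socat_call() {
   sed 's/\(.*\),\(.*\),\(.*\)/socat -ls TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3/'
 }

 # terminate all spawned socat processes on exit
 onexit() {
   echo Exiting
   echo sending SIGTERM to all processes
   kill ${pids[*]} &>/dev/null
 }
 trap onexit EXIT

 env | grep '_TCP=' | to_link_tuple | sort | uniq | to_socat_call | start_commands

 exec "$@"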

So what?


We put this script into the container, for example at /run/links.sh , and now start the container like this:

 $ docker run -d -P --name web --link db:db training/webapp /run/links.sh python app.py 

Voila! Inside the container, our Postgres is now available on 127.0.0.1, port 5432.

Entrypoint


So that we don't have to remember our script, it can be set as the image's entry point with the ENTRYPOINT directive in the Dockerfile . As a result, any command launched in such an image will first be prefixed with this entry point.

Add to your Dockerfile :

 ADD ./links.sh /run/links.sh
 ENTRYPOINT ["/run/links.sh"]

and the container can once again be started by simply passing it a command, with the assurance that services from the linked containers will be visible to the application as if they were running on localhost.
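For example (a sketch, assuming the image built from the Dockerfile above was tagged webapp-linked), the command no longer needs to mention the script at all; the entry point prepends it automatically:

 $ docker run -d -P --name web --link db:db webapp-linked python app.py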

And if there is no access to the image?


The above raises an interesting problem: how do we get the same convenient proxying of services if we have no access to the inside of the image? That is, we are handed an image and assured that socat is inside, but our script is not there and we cannot add it. We can, however, make the launch command arbitrarily complex. So how do we get our wrapper inside?

The ability to mount a piece of the host file system inside the container comes to the rescue. In other words, we can, for example, create a /usr/local/docker_bin folder on the host file system, put links.sh there, and run the container like this:

 $ docker run -d -P \
     --name web \
     --link db:db \
     -v /usr/local/docker_bin:/run:ro \
     training/webapp \
     /run/links.sh python app.py

As a result, any scripts we put into /usr/local/docker_bin will be available inside the container for running.

Note that we used the ro flag, which prevents the container from writing to the /run folder.

An alternative would be to inherit from the image and simply add the files there.
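Such a derived image could look roughly like this (a sketch, assuming the original image is the training/webapp used throughout):

 FROM training/webapp
 ADD ./links.sh /run/links.sh
 ENTRYPOINT ["/run/links.sh"]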

Summary


With socat and a kind word you can achieve a much more convenient way for containers to communicate than with a kind word alone.

Instead of an afterword


An attentive and sophisticated reader will probably notice that the ambassadord library already does essentially the same thing, and that reader will be absolutely right. A user who just needs to get their system working will probably prefer a ready-made and proven solution, but Habr is not exactly dominated by such users. That is why this piece appeared: like a good joke, it not only states the obvious but also teaches.

Thank you for your attention.

Source: https://habr.com/ru/post/260053/

