
Docker-compose. How to wait for container readiness

Introduction


There are many articles about running containers and writing docker-compose.yml . But for a long time it was not clear to me what to do when a container must not start until another container is ready to serve its requests, or has finished some amount of work.

This question became relevant once we started actively using docker-compose instead of launching individual Docker containers.

What is it for


Indeed, suppose the application in container B depends on a service in container A, and at startup that service is not yet available. What should the application do?

There are two options: either the application itself keeps retrying the connection, or it fails and the container dies.

After container B dies, docker-compose (depending on the configuration, of course) restarts it, and the application in container B again tries to reach the service in container A.

This continues until the service in container A is ready to respond to requests, or until we notice that the container is constantly restarting.
And in fact, this is the normal path for a multi-container architecture.
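The restart behaviour described above is driven by the restart policy in docker-compose.yml . A minimal sketch (the service name here is illustrative, not from this article's project):

```yaml
services:
  app_b:
    # recreate container B every time its process exits with a non-zero code,
    # so it keeps retrying until the dependency in container A is reachable
    restart: on-failure
```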

But we ran into a situation where, in particular, container A starts up and prepares data for container B. The application in container B does not know how to check whether the data is ready or not; it starts working with it immediately. So we have to produce and handle a data-readiness signal ourselves.
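One way such a data-readiness signal can work is a marker file on a shared volume: container A touches the marker when the data is complete, and container B's entrypoint polls for it. This is a hypothetical sketch, not the article's code; DATA_DIR , data.csv and the ready marker are all illustrative names.

```shell
#!/bin/bash
# Simulated in one script for clarity; in real life the two halves would run
# in container A and container B, sharing a volume mounted at $DATA_DIR.
DATA_DIR="/tmp/shared_demo"
rm -rf "$DATA_DIR" && mkdir -p "$DATA_DIR"

# "Container A": produce the data first, then touch the marker after a delay.
( sleep 2; echo "payload" > "$DATA_DIR/data.csv"; touch "$DATA_DIR/ready" ) &

# "Container B": wait for the marker, then start consuming the data.
until [ -f "$DATA_DIR/ready" ]; do
  echo "data is not ready yet - sleeping"
  sleep 1
done
echo "data is ready: $(cat "$DATA_DIR/data.csv")"
```

Writing the marker only after the data file guarantees container B never sees a half-written data.csv .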

I am sure you can come up with a few more use cases. But most importantly, you must understand exactly why you are doing this. Otherwise it is better to stick to the standard docker-compose tools.

Some ideology


If you read the documentation carefully, it is all written there. Namely: each container is an independent unit, and it must itself make sure that all the services it intends to work with are available to it.

So the question is not whether to start the container or not, but to check, inside the container, that all the required services are ready, and only then hand control over to the container's application.

How it is implemented


Two things helped me a great deal in solving this problem: the docker-compose documentation and an article on the proper use of ENTRYPOINT and CMD .

The official documentation offers two ways to solve this problem.

The first is to write your own entrypoint in a container that will perform all the checks, and then launch the working application.

The second is to use the ready-made script wait-for-it.sh .
We tried both ways.

Writing your own entrypoint


What is an entrypoint ?

This is simply an executable file that you specify in the ENTRYPOINT instruction of the Dockerfile when building the container. This file, as already mentioned, performs the checks and then launches the container's main application.
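The skeleton of that pattern is very small (a sketch of the idiom, not the article's actual file): run the checks, then replace the shell with the CMD via exec , so the application becomes the container's main process and receives signals directly.

```shell
#!/bin/bash
# Write the skeleton entrypoint to a temp path so the sketch is runnable
# outside a container; in a real image this would be /usr/bin/entrypoint.sh.
cat > /tmp/entrypoint_demo.sh <<'EOF'
#!/bin/bash
set -e
# ... readiness checks would go here ...
exec "$@"   # hand control to whatever command (CMD) the container was given
EOF
chmod +x /tmp/entrypoint_demo.sh

# Emulate `docker run` passing the CMD as arguments to the entrypoint:
/tmp/entrypoint_demo.sh echo "application started"
```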

So, what we get:

Create an Entrypoint folder.

It has two subfolders, conteiner_A and conteiner_B , in which we will build our containers.

For container A we take a simple HTTP server in Python. Once started, it responds to GET requests on port 8000.

To make the experiment more visible, we add a 15-second delay before starting the server.

This gives us the following Dockerfile for container A :

 FROM python:3
 EXPOSE 8000
 CMD sleep 15 && python3 -m http.server --cgi

For container B, we create the following Dockerfile:

 FROM ubuntu:18.04
 RUN apt-get update
 RUN apt-get install -y curl
 COPY ./entrypoint.sh /usr/bin/entrypoint.sh
 ENTRYPOINT [ "entrypoint.sh" ]
 CMD ["echo", "!!!!!!!! Container_A is available now !!!!!!!!"]

And we put our executable file entrypoint.sh in the same folder. It looks like this:

 #!/bin/bash
 set -e

 host="conteiner_a"
 port="8000"
 cmd="$@"

 >&2 echo "!!!!!!!! Check conteiner_a for available !!!!!!!!"
 until curl http://"$host":"$port"; do
   >&2 echo "Conteiner_A is unavailable - sleeping"
   sleep 1
 done

 >&2 echo "Conteiner_A is up - executing command"
 exec $cmd
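The heart of the script is the until loop: `until CMD; do ...; done` keeps looping while CMD exits non-zero, which is exactly how the curl check behaves while container A is still down. A self-contained sketch of those semantics (the check function is an illustrative stand-in, not part of the article's script):

```shell
#!/bin/bash
attempts=0
check() {                 # stand-in for `curl http://conteiner_a:8000`
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # the "service" starts answering on the 3rd try
}

# Loop exactly like the entrypoint does: retry until the check succeeds.
until check; do
  echo "service is unavailable - sleeping"
  sleep 0.1
done
echo "service is up after $attempts attempts"
```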

What happens in container B: the script polls container A with curl until it gets a response, and only then execs the command that was passed to the container (the CMD from the Dockerfile).


We will run everything using docker-compose .

Our docker-compose.yml looks like this:

 version: '3'

 networks:
   waiting_for_conteiner:

 services:
   conteiner_a:
     build: ./conteiner_A
     container_name: conteiner_a
     image: conteiner_a
     restart: unless-stopped
     networks:
       - waiting_for_conteiner
     ports:
       - 8000:8000

   conteiner_b:
     build: ./conteiner_B
     container_name: conteiner_b
     image: waiting_for_conteiner.entrypoint.conteiner_b
     restart: "no"
     networks:
       - waiting_for_conteiner

Strictly speaking, it is not necessary to publish ports: - 8000:8000 for conteiner_a . This is done only so that the HTTP server running inside it can also be tested from outside.

Also, container B is not restarted after it finishes its work.

Run:

 docker-compose up --build

We see that for 15 seconds there is a message about the inaccessibility of container A, and then

 conteiner_b | Conteiner_A is unavailable - sleeping
 conteiner_b |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
 conteiner_b |                                  Dload  Upload   Total   Spent    Left  Speed
   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
 conteiner_b | <html>
 conteiner_b | <head>
 conteiner_b | <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
 conteiner_b | <title>Directory listing for /</title>
 conteiner_b | </head>
 conteiner_b | <body>
 conteiner_b | <h1>Directory listing for /</h1>
 conteiner_b | <hr>
 conteiner_b | <ul>
 conteiner_b | <li><a href=".dockerenv">.dockerenv</a></li>
 conteiner_b | <li><a href="bin/">bin/</a></li>
 conteiner_b | <li><a href="boot/">boot/</a></li>
 conteiner_b | <li><a href="dev/">dev/</a></li>
 conteiner_b | <li><a href="etc/">etc/</a></li>
 conteiner_b | <li><a href="home/">home/</a></li>
 conteiner_b | <li><a href="lib/">lib/</a></li>
 conteiner_b | <li><a href="lib64/">lib64/</a></li>
 conteiner_b | <li><a href="media/">media/</a></li>
 conteiner_b | <li><a href="mnt/">mnt/</a></li>
 conteiner_b | <li><a href="opt/">opt/</a></li>
 conteiner_b | <li><a href="proc/">proc/</a></li>
 conteiner_b | <li><a href="root/">root/</a></li>
 conteiner_b | <li><a href="run/">run/</a></li>
 conteiner_b | <li><a href="sbin/">sbin/</a></li>
 conteiner_b | <li><a href="srv/">srv/</a></li>
 conteiner_b | <li><a href="sys/">sys/</a></li>
 conteiner_b | <li><a href="tmp/">tmp/</a></li>
 conteiner_b | <li><a href="usr/">usr/</a></li>
 conteiner_b | <li><a href="var/">var/</a></li>
 conteiner_b | </ul>
 conteiner_b | <hr>
 conteiner_b | </body>
 conteiner_b | </html>
 100   987  100   987    0     0  98700      0 --:--:-- --:--:-- --:--:--  107k
 conteiner_b | Conteiner_A is up - executing command
 conteiner_b | !!!!!!!! Container_A is available now !!!!!!!!

We get a response to the request, print !!!!!!!! Container_A is available now !!!!!!!! , and exit.

Using wait-for-it.sh


It should be said right away that this approach did not work for us as described in the documentation.
Namely, it is known that if both ENTRYPOINT and CMD are specified in the Dockerfile, then when the container starts, the command from ENTRYPOINT is executed and the contents of CMD are passed to it as arguments.

It is also known that the ENTRYPOINT and CMD specified in the Dockerfile can be overridden in docker-compose.yml .
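As an illustration of how the two instructions combine at run time (a sketch; the echo command is just a placeholder):

```dockerfile
ENTRYPOINT ["wait-for-it.sh", "conteiner_a:8000", "--"]
CMD ["echo", "ready"]
# at container start Docker effectively runs:
#   wait-for-it.sh conteiner_a:8000 -- echo ready
```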

The invocation format of wait-for-it.sh is as follows:

 wait-for-it.sh host:port [-s] [-t timeout] [-- command args]

Then, as the article states, we can define a new entrypoint in docker-compose.yml , and the CMD will be taken from the Dockerfile .

So, we get:

The Dockerfile for container A remains unchanged:

 FROM python:3
 EXPOSE 8000
 CMD sleep 15 && python3 -m http.server --cgi

The Dockerfile for container B:

 FROM ubuntu:18.04
 COPY ./wait-for-it.sh /usr/bin/wait-for-it.sh
 CMD ["echo", "!!!!!!!! Container_A is available now !!!!!!!!"]

Docker-compose.yml looks like this:

 version: '3'

 networks:
   waiting_for_conteiner:

 services:
   conteiner_a:
     build: ./conteiner_A
     container_name: conteiner_a
     image: conteiner_a
     restart: unless-stopped
     networks:
       - waiting_for_conteiner
     ports:
       - 8000:8000

   conteiner_b:
     build: ./conteiner_B
     container_name: conteiner_b
     image: waiting_for_conteiner.wait_for_it.conteiner_b
     restart: "no"
     networks:
       - waiting_for_conteiner
     entrypoint: ["wait-for-it.sh", "-s", "-t", "20", "conteiner_a:8000", "--"]

We start wait-for-it, tell it to wait up to 20 seconds for container A to come alive, and pass one more parameter, "--", which separates wait-for-it's own arguments from the program it should launch after it finishes.

We try!
And unfortunately, we get nothing.

If we check which arguments wait-for-it is actually started with, we see that it receives only what we specified in entrypoint ; the CMD from the image is not appended. This matches the documented Compose behavior: overriding entrypoint in docker-compose.yml also resets the image's default CMD.

Working option


Then there is only one option left: what we specified in CMD in the Dockerfile must be moved to command in docker-compose.yml .

Then, leave container B's Dockerfile unchanged, and docker-compose.yml will look like this:

 version: '3'

 networks:
   waiting_for_conteiner:

 services:
   conteiner_a:
     build: ./conteiner_A
     container_name: conteiner_a
     image: conteiner_a
     restart: unless-stopped
     networks:
       - waiting_for_conteiner
     ports:
       - 8000:8000

   conteiner_b:
     build: ./conteiner_B
     container_name: conteiner_b
     image: waiting_for_conteiner.wait_for_it.conteiner_b
     restart: "no"
     networks:
       - waiting_for_conteiner
     entrypoint: ["wait-for-it.sh", "-s", "-t", "20", "conteiner_a:8000", "--"]
     command: ["echo", "!!!!!!!! Container_A is available now !!!!!!!!"]

And in this version it works.

In conclusion, we should say that, in our opinion, the first way is the right one. It is the most universal and allows you to implement the readiness check in any way you need. wait-for-it is just a useful utility that can be used both on its own and embedded in your entrypoint.sh .

Source: https://habr.com/ru/post/454552/
