
Running cron inside a Docker container


As it turns out, running cron inside a Docker container is a rather peculiar task, not to say a tricky one. The internet is full of solutions and ideas on this topic. Here is one of the most popular (and simplest) ways to start it:
cron -f 

But this solution (and most of the others, too) has a number of drawbacks that are hard to work around right away; they are covered in the sections below.


Logs


The problem of viewing logs with standard Docker tools is relatively easy to fix. It is enough to decide which file your cron job logs will be written to. Suppose it is /var/log/cron.log:
* * * * * www-data task.sh >> /var/log/cron.log 2>&1

After that, starting the container with the command:
 cron && tail -f /var/log/cron.log 

we can always see the results of the jobs with docker logs.

A similar effect can be achieved by redirecting /var/log/cron.log to the standard container output:
 ln -sf /dev/stdout /var/log/cron.log 

UPD: it turns out this approach will not work.
If the cron jobs write their logs to different files, it is probably preferable to use tail, which can “follow” several logs at once:
 cron && tail -f /var/log/task1.log /var/log/task2.log 

UPD: it is more convenient to create the log file(s) as named pipes (FIFO). This avoids accumulating unnecessary data inside the container and hands log rotation over to Docker. Example:
 mkfifo --mode 0666 /var/log/cron.log 


Environment variables


While studying how environment variables are assigned to cron jobs, I found out that cron can use so-called pluggable authentication modules (PAM). At first glance this seems beside the point. But PAM can define and redefine any environment variables for the services (or, more precisely, for the authentication modules) that use it, cron included. All configuration is done in the /etc/security/pam_env.conf file (in the case of Debian/Ubuntu). That is, any variable described in this file automatically ends up in the environment of every cron job.
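
To check that cron is actually wired up to pam_env on a given image, it is enough to look at cron's PAM configuration. A minimal check, assuming a Debian/Ubuntu-based image (the exact contents of the file vary between distributions):

 grep pam_env /etc/pam.d/cron 

On Debian/Ubuntu this typically prints a line like "session required pam_env.so", which is what makes the pam_env.conf trick possible.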

But there is a problem, or rather two. First, the syntax of the file (or rather its documentation) can be baffling at first glance. Second, it is not obvious how to get the environment variables passed at container start into pam_env.conf.

Experienced Docker users will probably say right away that the second problem can be solved with a well-known trick called docker-entrypoint.sh, and they will be right. The idea is to write a special script that runs when the container starts and acts as the entry point for the parameters listed in CMD or passed on the command line. The script can be registered in the Dockerfile, for example, like this:
 ENTRYPOINT ["/docker-entrypoint.sh"] 

And its code should be written in a special way:
docker-entrypoint.sh
#!/usr/bin/env bash
set -e

# here we will fill /etc/security/pam_env.conf with environment variables

exec "$@"


Let's return to passing environment variables a bit later and focus for now on the syntax of the pam_env.conf file. A variable's value in this file can be set with two directives: DEFAULT and OVERRIDE. The first specifies the default value of a variable (used if it is not defined in the current environment at all), and the second overrides the variable's value (used if the variable is present in the current environment). Besides these two cases, the file's examples describe more complicated ones, but we are mostly interested in DEFAULT only. So, to set the value of an environment variable that will then be used in cron, you can use a line like the following:
 VAR DEFAULT="value" 

Please note that value in this case should not contain variable names (for example, $VAR), because the file is processed inside the target environment, where the referenced variables are missing (or have different values).
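
For completeness, both directives can appear on the same line. A hypothetical example (the variable name and values here are purely illustrative) following the syntax described above:

 LANG DEFAULT="en_US.UTF-8" OVERRIDE="C.UTF-8" 

Here LANG gets en_US.UTF-8 if it was not set in the current environment, and is forced to C.UTF-8 if it was.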

But it can be done even more simply (and for some reason this method is not described in the pam_env.conf examples). If you are happy for the variable to have the specified value in the target environment regardless of whether it is already defined there or not, then instead of the line above you can simply write:
 VAR="value" 

One warning here: you cannot override $PWD, $USER and $PATH for cron jobs whenever you like, because cron assigns the values of these variables based on its own convictions. You can, of course, resort to various hacks, some of which do work, but that is at your discretion; one of them is shown below.
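
One such workaround, not specific to Docker, is to set the variable directly at the top of the crontab file, since cron honors plain variable assignments there. A hypothetical variant of the print_env job used later in the article:

 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
 * * * * * www-data env >> /var/log/cron.log 2>&1 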

Finally, if you need to pass all current environment variables into the cron jobs' environment, you can use the following script:
docker-entrypoint.sh
#!/usr/bin/env bash
set -e

# copy all current environment variables into pam_env.conf
env | while read -r LINE; do  # for every line of the 'env' output
    # split the line into a name and a value on the first "=" (see IFS)
    IFS="=" read VAR VAL <<< ${LINE}
    # delete any previous definition of this variable, ignoring errors
    sed --in-place "/^${VAR}/d" /etc/security/pam_env.conf || true
    # append the variable definition to the end of pam_env.conf
    echo "${VAR} DEFAULT=\"${VAL}\"" >> /etc/security/pam_env.conf
done

exec "$@"


By placing a print_env script in the /etc/cron.d folder inside the image and running the container (see the Dockerfile below), we can make sure that this solution works:
print_env
 * * * * * www-data env >> /var/log/cron.log 2>&1 


Dockerfile
FROM debian:jessie

RUN apt-get clean && apt-get update && apt-get install -y cron
RUN rm -rf /var/lib/apt/lists/*
RUN mkfifo --mode 0666 /var/log/cron.log

COPY docker-entrypoint.sh /
COPY print_env /etc/cron.d

ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["/bin/bash", "-c", "cron && tail -f /var/log/cron.log"]


Container launch
docker build --tag cron_test .
docker run --detach --name cron --env "CUSTOM_ENV=custom_value" cron_test
docker logs -f cron  # watch the container logs
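
A quick way to confirm that the custom variable actually reached the cron job's environment (the print_env job runs once a minute, so the first output appears within a minute):

 docker logs cron | grep CUSTOM_ENV 

The output is expected to contain a line like CUSTOM_ENV=custom_value.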



Graceful shutdown


Speaking about why the container with cron described above cannot be stopped normally, we should mention how the Docker daemon communicates with the service running inside it. Any such service (process) is started with PID = 1, and it is only this PID that Docker works with. That is, every time Docker sends a control signal to a container, it addresses the process with PID = 1. In the case of docker stop this is SIGTERM and, if the process keeps running, SIGKILL after 10 seconds. Since "/bin/bash -c" is used for startup (in the case of "CMD cron && tail -f /var/log/cron.log" Docker still uses "/bin/bash -c", just implicitly), PID = 1 goes to the /bin/bash process, while cron and tail get other PIDs, which for obvious reasons cannot be predicted.

So when we execute the docker stop cron command, SIGTERM is received by the "/bin/bash -c" process, and in this mode it ignores any signal it receives (except SIGKILL, of course).
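
This is easy to observe from the outside: docker stop hangs for the full 10-second timeout, and the container exits with a non-zero code. A small sketch (137 corresponds to 128 + SIGKILL):

 time docker stop cron 
 docker inspect --format '{{.State.ExitCode}}' cron 

The first command takes about 10 seconds, and the second typically prints 137.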

The first thought in this situation is usually that the tail process needs to be “killed” somehow. That is easy enough to do:
 docker exec cron killall -HUP tail 

Great, the container stops immediately. There are some doubts, though, about how graceful this really is. And the exit code is still non-zero. Following this path, I could not get any further with the problem.

By the way, launching the container with the cron -f command does not give the desired result either: in that case cron simply refuses to respond to any signals at all.

True graceful shutdown with zero exit code


Only one thing remains: to write a separate startup script for the cron daemon that can react correctly to control signals. This is relatively easy; even if you have never had to write bash before, you can find out that signal handling can be programmed in it (using the trap command). Here is what such a script might look like, for example:
start-cron
#!/usr/bin/env bash

# start the cron service
service cron start

# stop cron and exit on SIGINT or SIGTERM
trap "service cron stop; exit" SIGINT SIGTERM


This script would solve the problem if we could somehow make it run indefinitely (until a signal arrives). And here one more trick, spotted elsewhere, comes to the rescue, namely adding the following line to the end of our script:
 tail -f /var/log/cron.log & wait $! 

Or, if cron jobs write logs to different files:
 tail -f /var/log/task1.log /var/log/task2.log & wait $! 
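
Putting the pieces together, a complete start-cron might look roughly like this (a sketch assembled from the fragments above; the Dockerfile would then need to COPY the script into the image, make it executable and run it from CMD instead of the inline cron && tail command):

 #!/usr/bin/env bash

 # start the cron service
 service cron start

 # stop cron and exit on SIGINT or SIGTERM
 trap "service cron stop; exit" SIGINT SIGTERM

 # follow the cron log in the background and wait for it,
 # so that the wait can be interrupted by the trap when a signal arrives
 tail -f /var/log/cron.log & wait $!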


Conclusion


The result is a workable solution for running cron inside a Docker container that bypasses the limitations of the former while following the rules of the latter, and allows the container to be stopped and restarted normally.

To wrap up, here is a link to a Docker image that contains everything described in this article: renskiy/cron.

Source: https://habr.com/ru/post/305364/

