It is better to start reading this series of notes from the beginning, here:
habrahabr.ru/post/267441
Set up locally
In this article I assume that the Docker service is running on the same machine where the commands are executed and that it has read access to the current folder. I also assume that you know how to configure a PHP-FPM + Nginx pair.
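A quick sanity check that the daemon is actually reachable from this shell (my own addition, not from the original notes):

$ docker version
$ docker info

If both commands print server details without errors, the assumptions above hold.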
I pull the Nginx and PHP 7 images:
~$ docker pull nginx
...
~$ docker pull php:7-fpm
...
Status: Downloaded newer image for php:7-fpm
Now I have two third-party classes that need to be tied together through dependency injection. The easiest way to add dependencies to someone else's code is, of course, monkeypatching! First I create the containers. Mindful of the second hard problem of programming, I give the containers intelligible names; they will be needed so that the containers can talk to each other.
~$ docker create --name=php7 php:7-fpm
~$ docker create --name=nginx nginx
PHP
I'll start with PHP, since it is the trickier one to configure. Where PHP keeps its configs can be seen in its Dockerfile:
ENV PHP_INI_DIR /usr/local/etc/php
    --with-config-file-scan-dir="$PHP_INI_DIR/conf.d" \
WORKDIR /var/www/html
COPY php-fpm.conf /usr/local/etc/
I copy the contents of the PHP configuration directory out of the container:
~$ mkdir monkeypatch
~$ cd monkeypatch/
$ docker cp php7:/usr/local/etc localetc
$ ls localetc/
pear.conf  php  php-fpm.conf  php-fpm.conf.default  php-fpm.d
$ ls localetc/php
conf.d
The maintainers put php-fpm.conf into the image but left out a default php.ini, so it has to be taken from the PHP sources.
$ docker cp "$PHP7:/usr/src/php/php.ini-development" localetc/php/php.ini
I edit the configs as usual. Which folder does PHP look for extensions in? That can be found out by running php, for example, in a temporary container.
$ docker run --rm php:7-fpm php -i | grep extension_dir
extension_dir => /usr/local/lib/php/extensions/no-debug-non-zts-20141001 => /usr/local/lib/php/extensions/no-debug-non-zts-20141001
$ docker run --rm php:7-fpm ls /usr/local/lib/php/extensions/no-debug-non-zts-20141001
opcache.a  opcache.so
The only extension there is opcache, so let's enable it.
$ echo extension_dir = "/usr/local/lib/php/extensions/no-debug-non-zts-20141001" >> localetc/php/php.ini
$ echo zend_extension = opcache.so >> localetc/php/php.ini
I recreate the php container and mount the config folder into it. The path to the mounted folder must be absolute: the daemon does not know which folder the docker client was called from.
$ docker rm php7
php7
$ docker run -v "$(pwd)/localetc:/usr/local/etc" --name=php7 php:7-fpm php -i | grep Configuration
Configuration File (php.ini) Path => /usr/local/etc/php
Loaded Configuration File => /usr/local/etc/php/php.ini
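While the config folder is mounted like this, one can also double-check that opcache really gets loaded; this extra check is mine, not from the original notes:

$ docker run --rm -v "$(pwd)/localetc:/usr/local/etc" php:7-fpm php -m | grep -i opcache

If the extension is wired up correctly, the list of modules should include Zend OPcache.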
Now the php7 container can be recreated with a test PHP application. The image authors did not set php-fpm up to run as a daemon, so I have to background it myself without letting go of its standard I/O streams.
$ docker rm php7
$ mkdir scripts
$ echo "<?php print 'Hello world! '.phpversion();" > scripts/test.php
$ docker run -v "$(pwd)/localetc:/usr/local/etc" \
    -v "$(pwd)/scripts:/scripts" \
    --name=php7 php:7-fpm &
[29-Aug-2015 15:19:25] NOTICE: fpm is running, pid 1
[29-Aug-2015 15:19:25] NOTICE: ready to handle connections
For now, for ease of debugging, I leave the php-fpm container's output attached to my console.
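Since the container has a name, the same output can also be read later with docker logs, even once this terminal session is gone (a convenience note of mine, not from the original):

$ docker logs -f php7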
Nginx
With Nginx, everything is simple and standard. I copy the configs folder to disk:
$ docker cp nginx:/etc/nginx .
In the nginx/ folder you need to edit nginx.conf and fastcgi_params to taste and create a configuration file for your site in nginx/conf.d/. The key point for nginx-to-php communication is to use the php container's name as the host in fastcgi_pass, while the root and SCRIPT_FILENAME directives must point to a path that php will understand inside its php7 container (a fuller sketch of such a site config follows below).
location ~ \.php$ {
    fastcgi_pass php7:9000;
    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
}
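For orientation, the whole site config in nginx/conf.d/site.ru.conf could look roughly like the sketch below; the listen port, server_name and root are placeholders of mine, only the fastcgi location is taken from the article:

$ cat > nginx/conf.d/site.ru.conf <<'EOF'
server {
    listen 80;
    server_name localhost;   # placeholder, adjust to taste
    # root, like SCRIPT_FILENAME, refers to a path inside the php7 container
    root /scripts;

    location ~ \.php$ {
        fastcgi_pass   php7:9000;
        include        fastcgi_params;
        fastcgi_param  SCRIPT_FILENAME /scripts$fastcgi_script_name;
    }
}
EOF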
I mount the configs into the nginx container and run it, mapping container port 80 to local port 8080.
$ docker rm nginx
$ docker run -v "$(pwd)/nginx:/etc/nginx" -p 8080:80 --name=nginx nginx &
$ curl 127.0.0.1:8080/test.php
172.17.0.65 - 29/Aug/2015:15:50:29 +0000 "GET /test.php" 200
Hello world! 7.0.0RC1
Rock'n'Roll!
With docker 1.7 you need to pass the --link parameter to docker run so that the other container's name resolves inside the container (see the sketch below); with 1.8.1 everything works without this parameter.
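With docker 1.7 the nginx run command from above would therefore need that flag added, roughly like this (everything else stays as in the original command):

$ docker run -v "$(pwd)/nginx:/etc/nginx" -p 8080:80 --link php7:php7 --name=nginx nginx &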
Logs
The maintainers of the php image decided to send all fpm logs to /proc/self/fd/2, i.e. STDERR: both error_log and access.log. However, requests will be logged by nginx, and from php I am only interested in errors, so I suggest editing localetc/php-fpm.conf and writing something more familiar:
error_log = /var/log/php/php-fpm.error.log
;access.log = /proc/self/fd/2
With nginx the maintainers did not improvise like that, so I simply enable the access log in the site config nginx/conf.d/site.ru.conf:
access_log /var/log/nginx/host.access.log main;
Now I create a folder for the logs with write access for the docker daemon and mount it into the containers. The output of the containers themselves can be written into the same folder as well, and the containers can now be detached:
$ mkdir log
$ sudo chgrp docker log/
$ sudo chmod g+rwx log/
$ docker stop nginx php7
$ docker rm nginx php7
$ docker run -d --name=php7 \
    -v "$(pwd)/localetc:/usr/local/etc" \
    -v "$(pwd)/scripts:/scripts" \
    -v "$(pwd)/log:/var/log/php" \
    php:7-fpm >>log/docker.php.log 2>&1
$ docker run -d --name=nginx \
    -v "$(pwd)/nginx:/etc/nginx" \
    -v "$(pwd)/log:/var/log/nginx" \
    -p 8080:80 \
    nginx >>log/docker.nginx.log 2>&1
$ curl 127.0.0.1:8080/test.php
Hello world! 7.0.0RC1
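After a request or two the log files should appear in the mounted folder under the names configured above, so they can be read straight from the host; this quick check is my own addition:

$ ls log/
$ tail log/host.access.log log/php-fpm.error.log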
When the configuration needs to change, php and nginx can be told to reload it:
$ docker exec php7 pkill -o -USR2 php-fpm
$ docker exec nginx service nginx reload
Reloading nginx: nginx.
Once PHP 7 makes it into the Debian distribution, an init script will appear in the php:7 image. If desired, you can add one yourself from the distribution of your choice.
The continuation can be found here.
Supplement from Sep 23
Given the amount of butthurt my notes have caused, let me clarify: we have different goals. In my case the full stack of applications, databases, libraries and all the configs is brought up from a single archive a few megabytes in size. I can send it by mail, upload it to Google Drive, hand it to the customer as the result of the work, attach it to a ticket and deploy it on servers.
Admins want to set up a registry, tie development and deployment to it, back it up, monitor it, bring it back up when it falls over, and be indispensable. That is fine, but it can be done better.
When I have the application together with its database and configs in one small file, the whole stack comes up with a single command, all services run independently, I have no SPOF, and backups can even go straight into git.
For transport I can use a registry, git, rsync, ssh or vagrant; I am not limited, and that is a separate topic.
If everything is baked into one big image, there is no choice left but to pray for the registry.
I do not want the application to depend on the PHP version. Rebuild the whole image and push it to the registry on every change to any part of the stack? That does not suit me. If I update PHP, I do not rebuild the application; if I update MySQL, I do not touch PHP.
Yes, this breaks the usual mold; it was not done this way before. This article is for those who need a modular system design. If you believe there is only one correct way, this article is not for you.