
Our experience with Docker

Instead of the preface





Last night I had a dream: I had been shrunk down to a few kilobytes,
plugged into some socket, and run in a container.
I was allocated transport in the overlay network and allowed
to test services in other containers...
`docker rm` has not been run yet.

Not so long ago I was lucky enough to join a very cool team at Centos-admin.ru, where I met people like me: like-minded enthusiasts with a passion for new technologies, and just great guys. On my second working day, a colleague and I were put on a project where we had to "dockerize everything that could be dockerized", and where high availability of services was critically important.

I will say right away that before this I was an ordinary Linux admin: I ran uptime, apt-get installed packages, edited configs, restarted services, and tailed logs. In general, I had no particularly outstanding practical skills, knew absolutely nothing about the Pets vs. Cattle concept, was barely familiar with Docker, and had a very poor idea of the possibilities it hides. As for automation tools, I had only used Ansible for server setup, plus assorted bash scripts.



I would like to share a bit of the experience we gained while working on this project.



What tasks our dockerized cluster should solve:


- dynamic infrastructure;
- quick rollout of changes;
- simplified application deployment.

Tools used:


- Docker
- Docker Swarm (agent + manage)
- Consul
- Registrator
- Consul Template
- Docker compose
- our own hands

Description of tools:



Docker





Plenty has already been written about Docker, including on Habr, so I don't think it needs a detailed description here.
It is a tool that makes life easier for everyone: developers, testers, system administrators, architects.
Docker lets us create, run, and deploy almost any application on almost any platform.
Docker can be compared with git, but in the context of working with the application as a whole rather than with its code.

A lot could be said about the charms of this wonderful product.

Docker swarm





Swarm logically combines all our hosts (nodes) into a single cluster.
It works in such a way that we don't have to think about which node a given container should be launched on; Swarm does that for us. We just want to run the application "somewhere out there".
When working with Swarm, we work with a pool of containers. Swarm uses the Docker API to manage them.

Usually, when working on the command line, it is convenient to specify a variable.
export DOCKER_HOST=tcp://<my_swarm_ip>:3375 

and use the docker commands as usual, but already working not with the local node, but with the cluster as a whole.

Note the --label option. With it, we can assign labels to a node. For example, suppose we have a machine with SSD disks, and we need to start a PostgreSQL container not just "somewhere out there" in the cluster, but specifically on a node with fast disks.

Assign daemon tag label:
 docker daemon --label com.example.storage="ssd" 

Run PostgreSQL with a filter for the specified tag:
 docker run -d -e constraint:com.example.storage="ssd" postgres 


More about filters

It is also worth looking at the strategy parameter of a Swarm cluster, which allows the load to be distributed between the cluster nodes more efficiently.
strategy can take one of three values:

- spread
Used by default, unless another strategy is specified. Swarm launches a new container on the node that is running fewer containers than the others. This parameter does not take the state of the containers into account: they may all be stopped, yet such a node will still not be chosen for the new container.

- binpack
With this parameter, on the contrary, Swarm tries to pack each node with containers to capacity before moving on to the next one. Stopped containers are counted as well.

- random
The name speaks for itself.
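A toy illustration of the difference (this is not Swarm code; the node names and container counts are made up): given a list of nodes with their container counts, spread picks the least-loaded node, binpack the most-loaded one that still has room:

```shell
# Made-up inventory: node name, number of containers on it
NODES="node1 3
node2 1
node3 2"

# spread: the node with the fewest containers
spread_choice=$(echo "$NODES" | sort -k2 -n | head -n1 | awk '{print $1}')
# binpack: the node with the most containers
binpack_choice=$(echo "$NODES" | sort -k2 -rn | head -n1 | awk '{print $1}')

echo "spread would pick:  $spread_choice"
echo "binpack would pick: $binpack_choice"
```

With the counts above, spread would schedule onto node2 and binpack onto node1.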

Consul





Consul is another great product from Mitchell Hashimoto's gang at Hashicorp, which delights us with wonderful tools such as Vagrant and many others.
Consul plays the role of a distributed, consistent configuration store, which Registrator keeps up to date.
It consists of agents and servers (a server quorum of N/2 + 1). Agents run on the cluster nodes, register services, execute check scripts, and report the results to the Consul server.
Consul can also be used as a key-value store for more flexible configuration of container links.
In addition, Consul performs health checks based on its check list, which Registrator also maintains.
There is a web UI in which you can view the status of services, checks, and nodes, and, of course, there is a REST API.

A few words about checks:

Script

A check script must return one of the following status codes:

- Exit code 0 - the check is in passing status (i.e. the service is fine)
- Exit code 1 - the check is in warning status
- Any other code - the check is in failing status

Example:
#!/usr/bin/with-contenv sh
RESULT=`redis-cli ping`
if [ "$RESULT" = "PONG" ]; then
  exit 0
fi
exit 2
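For instance, a hypothetical disk-usage check following the same convention might look like this (the 80%/90% thresholds and the / mount point are arbitrary):

```shell
# Map disk usage on / to Consul's check convention:
# < 80% -> 0 (passing), < 90% -> 1 (warning), otherwise 2 (failing)
usage=$(df -P / | awk 'NR==2 { sub("%", "", $5); print $5 }')
if [ "$usage" -lt 80 ]; then
  status=0
elif [ "$usage" -lt 90 ]; then
  status=1
else
  status=2
fi
echo "disk usage ${usage}%, check status $status"
# a real Consul check script would end with: exit $status
```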


The documentation also gives examples of using something like nagios-plugins.
{
  "check": {
    "id": "mem-util",
    "name": "Memory utilization",
    "script": "/usr/local/bin/check_mem.py",
    "interval": "10s"
  }
}


gist.github.com/mtchavez/e367db8b69aeba363d21

TCP

Connects to the socket at the specified hostname/IP address and port. Example:
{
  "id": "ssh",
  "name": "SSH TCP on port 22",
  "tcp": "127.0.0.1:22",
  "interval": "10s",
  "timeout": "1s"
}


HTTP

An example of a standard HTTP check (analogous to the TCP check above):
{
  "id": "web",
  "name": "HTTP on port 80",
  "http": "http://127.0.0.1:80/",
  "interval": "10s",
  "timeout": "1s"
}

Besides registering checks through Consul's REST API, checks can be attached when a container is started, using the -l (label) argument.
For example, I'll run a container with django + uwsgi inside:
docker run -p 8088:3000 -d --name uwsgi-worker --link consul:consul \
    -l "SERVICE_NAME=uwsgi-worker" -l "SERVICE_TAGS=django" \
    -l "SERVICE_3000_CHECK_HTTP=/" -l "SERVICE_3000_CHECK_INTERVAL=15s" \
    -l "SERVICE_3000_CHECK_TIMEOUT=1s" uwsgi-worker


In Consul's UI we will see the title of the standard Django page and that the check status is passing, which means the service is OK.



Or we can query the REST API:
 curl http://<consul_ip>:8500/v1/health/service/uwsgi-worker | jq . 

[
  {
    "Node": {
      "Node": "docker0",
      "Address": "127.0.0.1",
      "CreateIndex": 370,
      "ModifyIndex": 159636
    },
    "Service": {
      "ID": "docker0:uwsgi-worker:3000",
      "Service": "uwsgi-worker",
      "Tags": [
        "django"
      ],
      "Address": "127.0.0.1",
      "Port": 8088,
      "EnableTagOverride": false,
      "CreateIndex": 159631,
      "ModifyIndex": 159636
    },
    "Checks": [
      {
        "Node": "docker0",
        "CheckID": "serfHealth",
        "Name": "Serf Health Status",
        "Status": "passing",
        "Notes": "",
        "Output": "Agent alive and reachable",
        "ServiceID": "",
        "ServiceName": "",
        "CreateIndex": 370,
        "ModifyIndex": 370
      },
      {
        "Node": "docker0",
        "CheckID": "service:docker1:uwsgi-worker:3000",
        "Name": "Service 'uwsgi-worker' check",
        "Status": "passing",
        "Notes": "",
        "Output": "",
        "ServiceID": "docker0:uwsgi-worker:3000",
        "ServiceName": "uwsgi-worker",
        "CreateIndex": 159631,
        "ModifyIndex": 159636
      }
    ]
  }
]


As long as the HTTP service returns a 2xx response status, Consul considers it alive and well. If the response code is 429 (Too Many Requests), the check goes into the warning state; any other code puts the check into the failing state, and Consul marks the service as failed.
The default HTTP check interval is 10 seconds; it can be changed with the interval parameter, and the response timeout with the timeout parameter.
Consul Template, in turn, generates a configuration file for the balancer based on the check results, listing the N healthy workers, and the balancer sends requests to them.
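The response-code rules described above can be sketched as a small standalone helper (the function name is our own, not part of Consul):

```shell
# Translate an HTTP status code into Consul's check exit-code convention
http_status_to_check_rc() {
  case "$1" in
    2[0-9][0-9]) return 0 ;;  # passing
    429)         return 1 ;;  # warning (Too Many Requests)
    *)           return 2 ;;  # failing
  esac
}

http_status_to_check_rc 200; echo "200 -> $?"
http_status_to_check_rc 429; echo "429 -> $?"
http_status_to_check_rc 503; echo "503 -> $?"
```

In a real check, the status code would come from something like `curl -s -o /dev/null -w '%{http_code}' <url>`.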

Register a new check in the consul:
 curl -XPUT -d @ssh_check.json http://<consul_ip>:8500/v1/agent/check/register 


Where in the ssh_check.json file the check parameters are specified:
 { "id": "ssh", "name": "SSH TCP on port 22", "tcp": "<your_ip>:22", "interval": "10s", "timeout": "1s" } 


Deregister the check:
 curl http://<consul_ip>:8500/v1/agent/check/deregister/ssh 


Consul's capabilities are vast and, unfortunately, it is hard to cover them all in one article.
Those interested can refer to the official documentation, which contains many examples and describes everything quite well.

Registrator



Registrator keeps Consul informed about changes to running Docker containers. It watches the container list and makes the corresponding changes in Consul when containers start or stop; newly created containers are immediately reflected in Consul's service list.
It also adds health-check entries to Consul based on container metadata.
For example, when starting a container with the command:
docker run --restart=unless-stopped -v /root/html:/usr/share/nginx/html:ro \
    --link consul:consul -l "SERVICE_NAME=nginx" -l "SERVICE_TAGS=web" \
    -l "SERVICE_CHECK_HTTP=/" -l "SERVICE_CHECK_INTERVAL=15s" \
    -l "SERVICE_CHECK_TIMEOUT=1s" -p 8080:80 -d nginx


Registrator will add the nginx service to Consul and create an HTTP check for it.
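For illustration, the service entry that ends up in Consul looks roughly like this (the exact values depend on the node; the docker0 host name follows the earlier examples, and the ID follows Registrator's host:name:port scheme):

```json
{
  "ID": "docker0:nginx:80",
  "Service": "nginx",
  "Tags": ["web"],
  "Address": "127.0.0.1",
  "Port": 8080
}
```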

Read more

Consul Template



Another great tool from the Hashicorp guys. It connects to Consul and, depending on the state of the parameters/values stored there, can generate the contents of files from templates, for example inside a container. Consul Template can also execute arbitrary commands when data in Consul is updated.
Example:

Nginx:

Create a server.conf.ctmpl file
upstream fpm {
    least_conn;
    {{range service "php"}}server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;
    {{else}}server 127.0.0.1:65535{{end}}
}

server {
    listen 80;
    root /var/www/html;
    index index.php index.html index.htm;
    server_name your.domain.com;
    sendfile off;

    location / {
    }

    location ~ \.php$ {
        fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass fpm;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

and run Consul Template:
 consul-template -consul <your_consul_ip>:8500 -template server.conf.ctmpl -once -dry 

The -dry parameter prints the resulting config to stdout; the -once parameter runs consul-template a single time.
upstream fpm {
    least_conn;
    server 127.0.0.1:9000 max_fails=3 fail_timeout=60 weight=1;
}

server {
    listen 80;
    root /var/www/html;
    index index.php index.html index.htm;
    server_name your.domain.com;
    sendfile off;

    location / {
    }

    location ~ \.php$ {
        fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass fpm;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}

As we can see, it queries Consul for the IP addresses and ports of the services named php and renders the resulting configuration file.
We can also keep the nginx configuration file continuously up to date:
 consul-template -consul <your_consul_ip>:8500 -template "server.conf.ctmpl:/etc/nginx/conf.d/server.conf:service nginx reload" 
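Note the {{else}} branch in server.conf.ctmpl: when no healthy "php" services are left, the upstream renders with a single dead address rather than becoming empty (nginx will not accept an empty upstream block):

```nginx
upstream fpm {
    least_conn;
    server 127.0.0.1:65535;
}
```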


Thus, Consul Template will monitor the services and write them into the nginx config. If a service dies or its port changes, Consul Template will update the configuration file and reload nginx.

Consul Template is very convenient for a balancer (nginx, haproxy).
But that is just one of the use cases for this wonderful tool.

Read more about Consul Template

Practice





So, we have four virtual machines with Debian 8 Jessie installed (kernel version > 3.16), plus the time and desire to dig into this technology stack and try running a web application in a cluster.

We will raise a simple WordPress blog on them.

(We will omit configuring TLS between the Swarm and Consul nodes here.)

Setting up the environment on the nodes



Add the Docker repository to each virtual machine (hereinafter, node):
 echo "deb http://apt.dockerproject.org/repo debian-jessie main" > /etc/apt/sources.list.d/docker.list 

And install the necessary packages for our environment.
apt-get update
apt-get install ca-certificates
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
apt-get update
apt-get install docker-engine aufs-tools

Start the supporting services on the primary node:
docker run --restart=unless-stopped -d -h `hostname` --name consul -v /mnt:/data \
    -p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8300:8300 \
    -p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8301:8301 \
    -p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8301:8301/udp \
    -p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8302:8302 \
    -p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8302:8302/udp \
    -p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8400:8400 \
    -p 8500:8500 \
    -p 172.17.0.1:53:53/udp \
    gliderlabs/consul-server -server -rejoin -advertise `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'` -bootstrap

The --restart=unless-stopped option keeps the container running even across docker daemon restarts, unless it has been stopped manually.
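The long `ifconfig | grep | cut | awk` pipeline repeated in these commands can be factored into a small helper. Here it is shown against a canned ifconfig line (the 10.0.0.5 address is made up) so the parsing is easy to verify:

```shell
# Extract the IPv4 address from old-style (net-tools) ifconfig output
extract_ip() {
  grep 'inet addr:' | cut -d: -f2 | awk '{ print $1 }'
}

SAMPLE="          inet addr:10.0.0.5  Bcast:10.0.0.255  Mask:255.255.255.0"
NODE_IP=$(echo "$SAMPLE" | extract_ip)
echo "$NODE_IP"
```

On a real node you would pipe the actual output, e.g. `NODE_IP=$(ifconfig eth0 | extract_ip)`. Note that this parses the old net-tools format ("inet addr:..."); newer ifconfig versions print "inet ..." instead, so adjust accordingly.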

After starting Consul, you need to adjust the docker daemon startup parameters in systemd.
In the /etc/systemd/system/multi-user.target.wants/docker.service file, bring the ExecStart line to the following form:
 ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://<your_ip>:2375 --storage-driver=aufs --cluster-store=consul://<your_ip>:8500 --cluster-advertise <your_ip>:2375 

And then restart the daemon:
systemctl daemon-reload
service docker restart

Check that Consul is up and running:
 docker ps 

Now run the swarm-manager on the primary node.
docker run --restart=unless-stopped -d \
    -p 3375:2375 \
    swarm manage \
    --replication \
    --advertise `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:3375 \
    consul://`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8500/

The manage command will start the swarm manager on the node.
The --replication parameter enables replication between the primary and secondary nodes of the cluster.
docker run --restart=unless-stopped -d \
    swarm join \
    --advertise=`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:2375 \
    consul://`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8500/

The join command adds to the Swarm cluster a node on which we will run containerized applications.
By passing the Consul address, we enable service discovery.

And, of course, the Registrator:
docker run --restart=unless-stopped -d \
    --name=registrator \
    --net=host \
    --volume=/var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest \
    consul://`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8500

Now we proceed to the rest of the nodes.

Run Consul:
docker run --restart=unless-stopped -d -h `hostname` --name consul -v /mnt:/data \
    -p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8300:8300 \
    -p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8301:8301 \
    -p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8301:8301/udp \
    -p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8302:8302 \
    -p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8302:8302/udp \
    -p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8400:8400 \
    -p `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8500:8500 \
    -p 172.17.0.1:53:53/udp \
    gliderlabs/consul-server -server -rejoin -advertise `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'` -join <primary_node_ip>

Here, in the -join parameter, you must specify the address of our primary-node, which we configured above.

Swarm manager:
docker run --restart=unless-stopped -d \
    -p 3375:2375 \
    swarm manage \
    --advertise `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:3375 \
    consul://`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8500/

Attach the node to the cluster:
docker run --restart=unless-stopped -d \
    swarm join \
    --advertise=`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:2375 \
    consul://`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8500/

And the Registrator for registering services in Consul.
docker run --restart=unless-stopped -d \
    --name=registrator \
    --net=host \
    --volume=/var/run/docker.sock:/tmp/docker.sock \
    gliderlabs/registrator:latest \
    -ip `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'` \
    consul://`ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`:8500


A few "quick commands"


Restart all containers
 docker stop $(docker ps -aq);docker start $(docker ps -aq) 

Remove all containers
 docker stop $(docker ps -aq);docker rm $(docker ps -aq) 

Delete all inactive containers:
 docker stop $(docker ps -a | grep 'Exited' | awk '{print $1}') && docker rm $(docker ps -a | grep 'Exited' | awk '{print $1}') 

Delete all volumes (volumes in use are not deleted):
 docker volume rm $(docker volume ls -q); 

Delete all images (images in use are not removed):
 docker rmi $(docker images -q); 


Frontend



So, our cluster is ready for battle. Let's go back to our primary node and start the front-end balancer.
As I mentioned above, when working on the command line it is convenient to set a variable:
 export DOCKER_HOST=tcp://<my_swarm_ip>:3375 

and use the docker commands as usual, but already working not with the local node, but with the cluster as a whole.

We will use the phusion-baseimage image and modify it a bit along the way. We need to add Consul Template to it so that it keeps the nginx configuration file up to date with the list of live, working workers. Create an nginx-lb folder and a Dockerfile in it with roughly the following content:

Hidden text
FROM phusion/baseimage:0.9.18

ENV NGINX_VERSION 1.8.1-1~trusty
ENV DEBIAN_FRONTEND=noninteractive

# Avoid ERROR: invoke-rc.d: policy-rc.d denied execution of start.
RUN echo "#!/bin/sh\nexit 0" > /usr/sbin/policy-rc.d

RUN curl -sS http://nginx.org/keys/nginx_signing.key | sudo apt-key add - && \
    echo 'deb http://nginx.org/packages/ubuntu/ trusty nginx' >> /etc/apt/sources.list && \
    echo 'deb-src http://nginx.org/packages/ubuntu/ trusty nginx' >> /etc/apt/sources.list && \
    apt-get update -qq && apt-get install -y unzip ca-certificates nginx=${NGINX_VERSION} && \
    rm -rf /var/lib/apt/lists/* && \
    ln -sf /dev/stdout /var/log/nginx/access.log && \
    ln -sf /dev/stderr /var/log/nginx/error.log

EXPOSE 80

# Download and unpack Consul Template
ADD https://releases.hashicorp.com/consul-template/0.12.2/consul-template_0.12.2_linux_amd64.zip /usr/bin/
RUN unzip /usr/bin/consul-template_0.12.2_linux_amd64.zip -d /usr/local/bin

ADD nginx.service /etc/service/nginx/run
RUN chmod a+x /etc/service/nginx/run
ADD consul-template.service /etc/service/consul-template/run
RUN chmod a+x /etc/service/consul-template/run

RUN rm -v /etc/nginx/conf.d/*.conf
ADD app.conf.ctmpl /etc/consul-templates/app.conf.ctmpl

CMD ["/sbin/my_init"]



Now we need an nginx startup script. Create the nginx.service file:
#!/bin/sh
/usr/sbin/nginx -c /etc/nginx/nginx.conf -t && \
exec /usr/sbin/nginx -c /etc/nginx/nginx.conf -g "daemon off;"

And the Consul Template start script:
#!/bin/sh
exec /usr/local/bin/consul-template \
    -consul consul:8500 \
    -template "/etc/consul-templates/app.conf.ctmpl:/etc/nginx/conf.d/app.conf:sv hup nginx || true"

Fine. Now we need an nginx configuration template for Consul Template. Create app.conf.ctmpl:

Hidden text
upstream fpm {
    least_conn;
    {{range service "fpm"}}server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;
    {{else}}server 127.0.0.1:65535{{end}}
}

server {
    listen 80;
    root /var/www/html;
    index index.php index.html index.htm;
    server_name domain.example.com;
    sendfile off;

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location /doc/ {
        alias /usr/share/doc/;
        autoindex on;
        allow 127.0.0.1;
        allow ::1;
        deny all;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/www;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass fpm;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}



Now let's build the modified image:
 docker build -t nginx-lb . 

We have two options: build this image by hand on each node of the cluster, or push it to the free Docker Hub cloud, from where it can be pulled at any time and from anywhere without extra effort. Or to your own private Docker Registry.
Working with Docker Hub is described in great detail in the documentation.

Now is the time to see what happened. We start the container:
docker run -p 80:80 -v /mnt/storage/www:/var/www/html -d --name balancer \
    --link consul:consul -l "SERVICE_NAME=balancer" -l "SERVICE_TAGS=balancer" \
    -l "SERVICE_CHECK_HTTP=/" -l "SERVICE_CHECK_INTERVAL=15s" \
    -l "SERVICE_CHECK_TIMEOUT=1s" nginx-lb

Check it in the browser. Yes, it will return Bad Gateway, because we have neither static files nor a backend yet.

Backend



Great, we've dealt with the front end. Now someone has to process the PHP code; the WordPress image with FPM will help us here.
We also need to tweak this image a little, namely add Consul Template to discover MySQL servers. We don't want to figure out each time which node the database server is running on and specify its address by hand when launching the image, do we? It doesn't actually take much time, but we are lazy, and "laziness is the engine of progress" (c).

Dockerfile
FROM php:5.6-fpm

# install the PHP extensions we need
RUN apt-get update && apt-get install -y unzip libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
    && docker-php-ext-install gd mysqli opcache

# set recommended PHP.ini settings
# see https://secure.php.net/manual/en/opcache.installation.php
RUN { \
        echo 'opcache.memory_consumption=128'; \
        echo 'opcache.interned_strings_buffer=8'; \
        echo 'opcache.max_accelerated_files=4000'; \
        echo 'opcache.revalidate_freq=60'; \
        echo 'opcache.fast_shutdown=1'; \
        echo 'opcache.enable_cli=1'; \
    } > /usr/local/etc/php/conf.d/opcache-recommended.ini

VOLUME /var/www/html

ENV WORDPRESS_VERSION 4.4.2
ENV WORDPRESS_SHA1 7444099fec298b599eb026e83227462bcdf312a6

# upstream tarballs include ./wordpress/ so this gives us /usr/src/wordpress
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_VERSION}.tar.gz \
    && echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
    && tar -xzf wordpress.tar.gz -C /usr/src/ \
    && rm wordpress.tar.gz \
    && chown -R www-data:www-data /usr/src/wordpress

ADD https://releases.hashicorp.com/consul-template/0.12.2/consul-template_0.12.2_linux_amd64.zip /usr/bin/
RUN unzip /usr/bin/consul-template_0.12.2_linux_amd64.zip -d /usr/local/bin

# database connection settings template
ADD db.conf.php.ctmpl /db.conf.php.ctmpl
# consul-template startup script
ADD consul-template.sh /usr/local/bin/consul-template.sh
# template for discovering the MySQL server for WP
ADD mysql.ctmpl /tmp/mysql.ctmpl

COPY docker-entrypoint.sh /entrypoint.sh
# grr, ENTRYPOINT resets CMD now
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm"]



Create a MySQL settings template db.conf.php.ctmpl:
<?php
{{range service "mysql"}}
define('DB_HOST', '{{.Address}}');
{{else}}
define('DB_HOST', 'mysql');
{{end}}
?>
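For example, if Consul has one healthy service named mysql at the made-up address 10.0.0.7, the rendered db.conf.php would be:

```php
<?php
define('DB_HOST', '10.0.0.7');
?>
```

With no healthy mysql service, the {{else}} branch renders `define('DB_HOST', 'mysql');` instead, falling back to the linked container's hostname.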

And the consul-template.sh startup script:
#!/bin/sh
echo "Starting Consul Template"
exec /usr/local/bin/consul-template \
    -consul consul:8500 \
    -template "/db.conf.php.ctmpl:/var/www/html/db.conf.php"


The mysql.ctmpl template for discovering the MySQL server:
 {{range service "mysql"}}{{.Address}} {{.Port}} {{end}} 

In the docker-entrypoint.sh script we need to fix a few things: hook up Consul Template to discover the MySQL server, and move fpm to 0.0.0.0, because by default it listens only on 127.0.0.1:

Hidden text
#!/bin/bash
set -e

# Get the MySQL server address discovered via Consul Template
WORDPRESS_DB_HOST="$(/usr/local/bin/consul-template --template=/tmp/mysql-master.ctmpl --consul=consul:8500 --dry -once | awk '{print $1}' | tail -1)"
# Get the MySQL server port discovered via Consul Template
WORDPRESS_DB_PORT="$(/usr/local/bin/consul-template --template=/tmp/mysql-master.ctmpl --consul=consul:8500 --dry -once | awk '{print $2}' | tail -1)"

if [[ "$1" == apache2* ]] || [ "$1" == php-fpm ]; then
	if [ -n "$MYSQL_PORT_3306_TCP" ]; then
		if [ -z "$WORDPRESS_DB_HOST" ]; then
			WORDPRESS_DB_HOST='mysql'
		else
			echo >&2 'warning: both WORDPRESS_DB_HOST and MYSQL_PORT_3306_TCP found'
			echo >&2 "  Connecting to WORDPRESS_DB_HOST ($WORDPRESS_DB_HOST)"
			echo >&2 '  instead of the linked mysql container'
		fi
	fi

	if [ -z "$WORDPRESS_DB_HOST" ]; then
		echo >&2 'error: missing WORDPRESS_DB_HOST and MYSQL_PORT_3306_TCP environment variables'
		echo >&2 '  Did you forget to --link some_mysql_container:mysql or set an external db'
		echo >&2 '  with -e WORDPRESS_DB_HOST=hostname:port?'
		exit 1
	fi

	# if we're linked to MySQL and thus have credentials already, let's use them
	: ${WORDPRESS_DB_USER:=${MYSQL_ENV_MYSQL_USER:-root}}
	if [ "$WORDPRESS_DB_USER" = 'root' ]; then
		: ${WORDPRESS_DB_PASSWORD:=$MYSQL_ENV_MYSQL_ROOT_PASSWORD}
	fi
	: ${WORDPRESS_DB_PASSWORD:=$MYSQL_ENV_MYSQL_PASSWORD}
	: ${WORDPRESS_DB_NAME:=${MYSQL_ENV_MYSQL_DATABASE:-wordpress}}

	if [ -z "$WORDPRESS_DB_PASSWORD" ]; then
		echo >&2 'error: missing required WORDPRESS_DB_PASSWORD environment variable'
		echo >&2 '  Did you forget to -e WORDPRESS_DB_PASSWORD=... ?'
		echo >&2
		echo >&2 '  (Also of interest might be WORDPRESS_DB_USER and WORDPRESS_DB_NAME.)'
		exit 1
	fi

	if ! [ -e index.php -a -e wp-includes/version.php ]; then
		echo >&2 "WordPress not found in $(pwd) - copying now..."
		if [ "$(ls -A)" ]; then
			echo >&2 "WARNING: $(pwd) is not empty - press Ctrl+C now if this is an error!"
			( set -x; ls -A; sleep 10 )
		fi
		tar cf - --one-file-system -C /usr/src/wordpress . | tar xf -
		echo >&2 "Complete! WordPress has been successfully copied to $(pwd)"
		if [ ! -e .htaccess ]; then
			# NOTE: The "Indexes" option is disabled in the php:apache base image
			cat > .htaccess <<-'EOF'
				# BEGIN WordPress
				<IfModule mod_rewrite.c>
				RewriteEngine On
				RewriteBase /
				RewriteRule ^index\.php$ - [L]
				RewriteCond %{REQUEST_FILENAME} !-f
				RewriteCond %{REQUEST_FILENAME} !-d
				RewriteRule . /index.php [L]
				</IfModule>
				# END WordPress
			EOF
			chown www-data:www-data .htaccess
		fi
	fi

	# TODO handle WordPress upgrades magically in the same way, but only if wp-includes/version.php's $wp_version is less than /usr/src/wordpress/wp-includes/version.php's $wp_version

	# version 4.4.1 decided to switch to windows line endings, that breaks our seds and awks
	# https://github.com/docker-library/wordpress/issues/116
	# https://github.com/WordPress/WordPress/commit/1acedc542fba2482bab88ec70d4bea4b997a92e4
	sed -ri 's/\r\n|\r/\n/g' wp-config*

	# Make FPM listen on 0.0.0.0
	sed -i 's/listen = 127.0.0.1:9000/listen = 0.0.0.0:9000/g' /usr/local/etc/php-fpm.d/www.conf

	if [ ! -e wp-config.php ]; then
		awk '/^\/\*.*stop editing.*\*\/$/ && c == 0 { c = 1; system("cat") } { print }' wp-config-sample.php > wp-config.php <<'EOPHP'
// If we're behind a proxy server and using HTTPS, we need to alert Wordpress of that fact
// see also http://codex.wordpress.org/Administration_Over_SSL#Using_a_Reverse_Proxy
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
	$_SERVER['HTTPS'] = 'on';
}
EOPHP

		# Replace DB_HOST in wp-config.php with the file generated by Consul Template
		DB_HOST_PRE=$(grep 'DB_HOST' wp-config.php)
		sed -i "s/$DB_HOST_PRE/include 'db.conf.php';/g" wp-config.php
		chown www-data:www-data wp-config.php
	fi

	# see http://stackoverflow.com/a/2705678/433558
	sed_escape_lhs() {
		echo "$@" | sed 's/[]\/$*.^|[]/\\&/g'
	}
	sed_escape_rhs() {
		echo "$@" | sed 's/[\/&]/\\&/g'
	}
	php_escape() {
		php -r 'var_export(('$2') $argv[1]);' "$1"
	}
	set_config() {
		key="$1"
		value="$2"
		var_type="${3:-string}"
		start="(['\"])$(sed_escape_lhs "$key")\2\s*,"
		end="\);"
		if [ "${key:0:1}" = '$' ]; then
			start="^(\s*)$(sed_escape_lhs "$key")\s*="
			end=";"
		fi
		sed -ri "s/($start\s*).*($end)$/\1$(sed_escape_rhs "$(php_escape "$value" "$var_type")")\3/" wp-config.php
	}

	set_config 'DB_HOST' "$WORDPRESS_DB_HOST"
	set_config 'DB_USER' "$WORDPRESS_DB_USER"
	set_config 'DB_PASSWORD' "$WORDPRESS_DB_PASSWORD"
	set_config 'DB_NAME' "$WORDPRESS_DB_NAME"

	# allow any of these "Authentication Unique Keys and Salts." to be specified via
	# environment variables with a "WORDPRESS_" prefix (ie, "WORDPRESS_AUTH_KEY")
	UNIQUES=(
		AUTH_KEY
		SECURE_AUTH_KEY
		LOGGED_IN_KEY
		NONCE_KEY
		AUTH_SALT
		SECURE_AUTH_SALT
		LOGGED_IN_SALT
		NONCE_SALT
	)
	for unique in "${UNIQUES[@]}"; do
		eval unique_value=\$WORDPRESS_$unique
		if [ "$unique_value" ]; then
			set_config "$unique" "$unique_value"
		else
			# if not specified, let's generate a random value
			current_set="$(sed -rn "s/define\((([\'\"])$unique\2\s*,\s*)(['\"])(.*)\3\);/\4/p" wp-config.php)"
			if [ "$current_set" = 'put your unique phrase here' ]; then
				set_config "$unique" "$(head -c1M /dev/urandom | sha1sum | cut -d' ' -f1)"
			fi
		fi
	done

	if [ "$WORDPRESS_TABLE_PREFIX" ]; then
		set_config '$table_prefix' "$WORDPRESS_TABLE_PREFIX"
	fi

	if [ "$WORDPRESS_DEBUG" ]; then
		set_config 'WP_DEBUG' 1 boolean
	fi

	TERM=dumb php -- "$WORDPRESS_DB_HOST" "$WORDPRESS_DB_USER" "$WORDPRESS_DB_PASSWORD" "$WORDPRESS_DB_NAME" <<'EOPHP'
<?php
// database might not exist, so let's try creating it (just to be safe)
$stderr = fopen('php://stderr', 'w');
list($host, $port) = explode(':', $argv[1], 2);
$maxTries = 10;
do {
	$mysql = new mysqli($host, $argv[2], $argv[3], '', (int)$port);
	if ($mysql->connect_error) {
		fwrite($stderr, "\n" . 'MySQL Connection Error: (' . $mysql->connect_errno . ') ' . $mysql->connect_error . "\n");
		--$maxTries;
		if ($maxTries <= 0) {
			exit(1);
		}
		sleep(3);
	}
} while ($mysql->connect_error);
if (!$mysql->query('CREATE DATABASE IF NOT EXISTS `' . $mysql->real_escape_string($argv[4]) . '`')) {
	fwrite($stderr, "\n" . 'MySQL "CREATE DATABASE" Error: ' . $mysql->error . "\n");
	$mysql->close();
	exit(1);
}
$mysql->close();
EOPHP
fi

# Start php-fpm in the background and hand control to consul-template
exec /usr/local/sbin/php-fpm &
exec /usr/local/bin/consul-template.sh

exec "$@"



Well, now let's build the image:
 docker build -t fpm . 

We won't run it just yet, since we don't yet have a database server for WordPress to fully work with. The launch command will look like this:
docker run --name fpm.0 -d -v /mnt/storage/www:/var/www/html \
    -e WORDPRESS_DB_NAME=wordpressp -e WORDPRESS_DB_USER=wordpress -e WORDPRESS_DB_PASSWORD=wordpress \
    --link consul:consul -l "SERVICE_NAME=php-fpm" -l "SERVICE_PORT=9000" -p 9000:9000 fpm


Database:



Master



As a database, we use the MySQL 5.7 image .

We also need to tweak it a little: make two images, one for the master and one for the slave.
Let's start with the master image.

Our Dockerfile
FROM debian:jessie

# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mysql && useradd -r -g mysql mysql

RUN mkdir /docker-entrypoint-initdb.d

# FATAL ERROR: please install the following Perl modules before executing /usr/local/mysql/scripts/mysql_install_db:
# File::Basename
# File::Copy
# Sys::Hostname
# Data::Dumper
RUN apt-get update && apt-get install -y perl pwgen --no-install-recommends && rm -rf /var/lib/apt/lists/*

# gpg: key 5072E1F5: public key "MySQL Release Engineering <mysql-build@oss.oracle.com>" imported
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys A4A9406876FCBD3C456770C88C718D3B5072E1F5

ENV MYSQL_MAJOR 5.7
ENV MYSQL_VERSION 5.7.11-1debian8

RUN echo "deb http://repo.mysql.com/apt/debian/ jessie mysql-${MYSQL_MAJOR}" > /etc/apt/sources.list.d/mysql.list

# the "/var/lib/mysql" stuff here is because the mysql-server postinst doesn't have an explicit way to disable the mysql_install_db codepath besides having a database already "configured" (ie, stuff in /var/lib/mysql/mysql)
# also, we set debconf keys to make APT a little quieter
RUN { \
        echo mysql-community-server mysql-community-server/data-dir select ''; \
        echo mysql-community-server mysql-community-server/root-pass password ''; \
        echo mysql-community-server mysql-community-server/re-root-pass password ''; \
        echo mysql-community-server mysql-community-server/remove-test-db select false; \
    } | debconf-set-selections \
    && apt-get update && apt-get install -y mysql-server="${MYSQL_VERSION}" && rm -rf /var/lib/apt/lists/* \
    && rm -rf /var/lib/mysql && mkdir -p /var/lib/mysql

# comment out a few problematic configuration values
# don't reverse lookup hostnames, they are usually another container
RUN sed -Ei 's/^(bind-address|log)/#&/' /etc/mysql/my.cnf \
    && echo 'skip-host-cache\nskip-name-resolve' | awk '{ print } $1 == "[mysqld]" && c == 0 { c = 1; system("cat") }' /etc/mysql/my.cnf > /tmp/my.cnf \
    && mv /tmp/my.cnf /etc/mysql/my.cnf

VOLUME /var/lib/mysql

COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

EXPOSE 3306
CMD ["mysqld"]



MySQL startup script:

docker-entrypoint.sh
 #!/bin/bash
 set -eo pipefail

 # if command starts with an option, prepend mysqld
 if [ "${1:0:1}" = '-' ]; then
 	set -- mysqld "$@"
 fi

 if [ "$1" = 'mysqld' ]; then
 	# Get config
 	DATADIR="$("$@" --verbose --help 2>/dev/null | awk '$1 == "datadir" { print $2; exit }')"

 	if [ ! -d "$DATADIR/mysql" ]; then
 		if [ -z "$MYSQL_ROOT_PASSWORD" -a -z "$MYSQL_ALLOW_EMPTY_PASSWORD" -a -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
 			echo >&2 'error: database is uninitialized and password option is not specified '
 			echo >&2 '  You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD'
 			exit 1
 		fi

 		mkdir -p "$DATADIR"
 		chown -R mysql:mysql "$DATADIR"

 		echo 'Initializing database'
 		"$@" --initialize-insecure
 		echo 'Database initialized'

 		"$@" --skip-networking &
 		pid="$!"

 		mysql=( mysql --protocol=socket -uroot )

 		for i in {30..0}; do
 			if echo 'SELECT 1' | "${mysql[@]}" &> /dev/null; then
 				break
 			fi
 			echo 'MySQL init process in progress...'
 			sleep 1
 		done
 		if [ "$i" = 0 ]; then
 			echo >&2 'MySQL init process failed.'
 			exit 1
 		fi

 		# Set MySQL REPLICATION - MASTER
 		if [ -n "${REPLICATION_MASTER}" ]; then
 			echo "=> Configuring MySQL replication as master (1/2) ..."
 			if [ ! -f /replication_set.1 ]; then
 				echo "=> Writing configuration file /etc/mysql/my.cnf with server-id=1"
 				echo 'server-id = 1' >> /etc/mysql/my.cnf
 				echo 'log-bin = mysql-bin' >> /etc/mysql/my.cnf
 				touch /replication_set.1
 			else
 				echo "=> MySQL replication master already configured, skip"
 			fi
 		fi

 		# Set MySQL REPLICATION - SLAVE
 		if [ -n "${REPLICATION_SLAVE}" ]; then
 			echo "=> Configuring MySQL replication as slave (1/2) ..."
 			if [ -n "${MYSQL_PORT_3306_TCP_ADDR}" ] && [ -n "${MYSQL_PORT_3306_TCP_PORT}" ]; then
 				if [ ! -f /replication_set.1 ]; then
 					echo "=> Writing configuration file /etc/mysql/my.cnf with server-id=2"
 					echo 'server-id = 2' >> /etc/mysql/my.cnf
 					echo 'log-bin = mysql-bin' >> /etc/mysql/my.cnf
 					echo 'log-bin=slave-bin' >> /etc/mysql/my.cnf
 					touch /replication_set.1
 				else
 					echo "=> MySQL replication slave already configured, skip"
 				fi
 			else
 				echo "=> Cannot configure slave, please link it to another MySQL container with alias as 'mysql'"
 				exit 1
 			fi
 		fi

 		# Set MySQL REPLICATION - SLAVE
 		if [ -n "${REPLICATION_SLAVE}" ]; then
 			echo "=> Configuring MySQL replication as slave (2/2) ..."
 			if [ -n "${MYSQL_PORT_3306_TCP_ADDR}" ] && [ -n "${MYSQL_PORT_3306_TCP_PORT}" ]; then
 				if [ ! -f /replication_set.2 ]; then
 					echo "=> Setting master connection info on slave"
 					"${mysql[@]}" <<-EOSQL
 						-- What's done in this file shouldn't be replicated
 						-- or products like mysql-fabric won't work
 						SET @@SESSION.SQL_LOG_BIN=0;
 						CHANGE MASTER TO MASTER_HOST='${MYSQL_PORT_3306_TCP_ADDR}', MASTER_USER='${REPLICATION_USER}', MASTER_PASSWORD='${REPLICATION_PASS}', MASTER_PORT=${MYSQL_PORT_3306_TCP_PORT}, MASTER_CONNECT_RETRY=30;
 						START SLAVE ;
 					EOSQL
 					echo "=> Done!"
 					touch /replication_set.2
 				else
 					echo "=> MySQL replication slave already configured, skip"
 				fi
 			else
 				echo "=> Cannot configure slave, please link it to another MySQL container with alias as 'mysql'"
 				exit 1
 			fi
 		fi

 		if [ -z "$MYSQL_INITDB_SKIP_TZINFO" ]; then
 			# sed is for https://bugs.mysql.com/bug.php?id=20545
 			mysql_tzinfo_to_sql /usr/share/zoneinfo | sed 's/Local time zone must be set--see zic manual page/FCTY/' | "${mysql[@]}" mysql
 		fi

 		if [ ! -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
 			MYSQL_ROOT_PASSWORD="$(pwgen -1 32)"
 			echo "GENERATED ROOT PASSWORD: $MYSQL_ROOT_PASSWORD"
 		fi

 		"${mysql[@]}" <<-EOSQL
 			-- What's done in this file shouldn't be replicated
 			-- or products like mysql-fabric won't work
 			SET @@SESSION.SQL_LOG_BIN=0;
 			DELETE FROM mysql.user ;
 			CREATE USER 'root'@'%' IDENTIFIED BY '${MYSQL_ROOT_PASSWORD}' ;
 			GRANT ALL ON *.* TO 'root'@'%' WITH GRANT OPTION ;
 			DROP DATABASE IF EXISTS test ;
 			FLUSH PRIVILEGES ;
 		EOSQL

 		if [ ! -z "$MYSQL_ROOT_PASSWORD" ]; then
 			mysql+=( -p"${MYSQL_ROOT_PASSWORD}" )
 		fi

 		# Set MySQL REPLICATION - MASTER
 		if [ -n "${REPLICATION_MASTER}" ]; then
 			echo "=> Configuring MySQL replication as master (2/2) ..."
 			if [ ! -f /replication_set.2 ]; then
 				echo "=> Creating a log user ${REPLICATION_USER}:${REPLICATION_PASS}"
 				"${mysql[@]}" <<-EOSQL
 					-- What's done in this file shouldn't be replicated
 					-- or products like mysql-fabric won't work
 					SET @@SESSION.SQL_LOG_BIN=0;
 					CREATE USER '${REPLICATION_USER}'@'%' IDENTIFIED BY '${REPLICATION_PASS}';
 					GRANT REPLICATION SLAVE ON *.* TO '${REPLICATION_USER}'@'%' ;
 					FLUSH PRIVILEGES ;
 					RESET MASTER ;
 				EOSQL
 				echo "=> Done!"
 				touch /replication_set.2
 			else
 				echo "=> MySQL replication master already configured, skip"
 			fi
 		fi

 		if [ "$MYSQL_DATABASE" ]; then
 			echo "CREATE DATABASE IF NOT EXISTS \`$MYSQL_DATABASE\` ;" | "${mysql[@]}"
 			mysql+=( "$MYSQL_DATABASE" )
 		fi

 		if [ "$MYSQL_USER" -a "$MYSQL_PASSWORD" ]; then
 			echo "CREATE USER '$MYSQL_USER'@'%' IDENTIFIED BY '$MYSQL_PASSWORD' ;" | "${mysql[@]}"
 			if [ "$MYSQL_DATABASE" ]; then
 				echo "GRANT ALL ON \`$MYSQL_DATABASE\`.* TO '$MYSQL_USER'@'%' ;" | "${mysql[@]}"
 			fi
 			echo 'FLUSH PRIVILEGES ;' | "${mysql[@]}"
 		fi

 		echo
 		for f in /docker-entrypoint-initdb.d/*; do
 			case "$f" in
 				*.sh)     echo "$0: running $f"; . "$f" ;;
 				*.sql)    echo "$0: running $f"; "${mysql[@]}" < "$f"; echo ;;
 				*.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
 				*)        echo "$0: ignoring $f" ;;
 			esac
 			echo
 		done

 		if [ ! -z "$MYSQL_ONETIME_PASSWORD" ]; then
 			"${mysql[@]}" <<-EOSQL
 				ALTER USER 'root'@'%' PASSWORD EXPIRE;
 			EOSQL
 		fi

 		if ! kill -s TERM "$pid" || ! wait "$pid"; then
 			echo >&2 'MySQL init process failed.'
 			exit 1
 		fi

 		echo
 		echo 'MySQL init process done. Ready for start up.'
 		echo
 	fi

 	chown -R mysql:mysql "$DATADIR"
 fi

 exec "$@"



And build:
 docker build -t mysql-master . 

 docker run --name mysql-master.0 \
 	-v /mnt/volumes/master:/var/lib/mysql \
 	-e MYSQL_ROOT_PASSWORD=rootpass \
 	-e MYSQL_DATABASE=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wordpress \
 	-e REPLICATION_MASTER=true -e REPLICATION_USER=replica -e REPLICATION_PASS=replica \
 	--link consul:consul \
 	-l "SERVICE_NAME=master" -l "SERVICE_PORT=3306" \
 	-p 3306:3306 -d mysql-master


As you may have noticed, we added the ability to pass startup parameters to the script that configure MySQL replication (REPLICATION_USER, REPLICATION_PASS, REPLICATION_MASTER, REPLICATION_SLAVE).
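As a minimal illustration of how these variables steer the entrypoint, the role selection boils down to the sketch below. The `mysql_role` helper is ours, purely for demonstration; the real entrypoint does the equivalent checks inline:

```shell
#!/bin/bash
# Hypothetical helper mirroring the entrypoint's branching on the
# REPLICATION_* variables: any non-empty value selects the role.
mysql_role() {
	if [ -n "${REPLICATION_MASTER}" ]; then
		echo "master"      # entrypoint writes server-id=1 and creates the replication user
	elif [ -n "${REPLICATION_SLAVE}" ]; then
		echo "slave"       # entrypoint writes server-id=2 and runs CHANGE MASTER TO
	else
		echo "standalone"  # no replication statements are executed
	fi
}

REPLICATION_MASTER=true mysql_role
```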

Slave



We will build the Slave image in such a way that MySQL itself finds the Master server and enables replication. Here again Consul Template comes to our rescue:

Dockerfile
 FROM debian:jessie

 # add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
 RUN groupadd -r mysql && useradd -r -g mysql mysql

 RUN mkdir /docker-entrypoint-initdb.d

 # FATAL ERROR: please install the following Perl modules before executing /usr/local/mysql/scripts/mysql_install_db:
 # File::Basename
 # File::Copy
 # Sys::Hostname
 # Data::Dumper
 RUN apt-get update && apt-get install -y perl pwgen unzip --no-install-recommends && rm -rf /var/lib/apt/lists/*

 # gpg: key 5072E1F5: public key "MySQL Release Engineering <mysql-build@oss.oracle.com>" imported
 RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys A4A9406876FCBD3C456770C88C718D3B5072E1F5

 ENV MYSQL_MAJOR 5.7
 ENV MYSQL_VERSION 5.7.11-1debian8

 RUN echo "deb http://repo.mysql.com/apt/debian/ jessie mysql-${MYSQL_MAJOR}" > /etc/apt/sources.list.d/mysql.list

 # the "/var/lib/mysql" stuff here is because the mysql-server postinst doesn't have an explicit way to disable the mysql_install_db codepath besides having a database already "configured" (ie, stuff in /var/lib/mysql/mysql)
 # also, we set debconf keys to make APT a little quieter
 RUN { \
 		echo mysql-community-server mysql-community-server/data-dir select ''; \
 		echo mysql-community-server mysql-community-server/root-pass password ''; \
 		echo mysql-community-server mysql-community-server/re-root-pass password ''; \
 		echo mysql-community-server mysql-community-server/remove-test-db select false; \
 	} | debconf-set-selections \
 	&& apt-get update && apt-get install -y mysql-server="${MYSQL_VERSION}" && rm -rf /var/lib/apt/lists/* \
 	&& rm -rf /var/lib/mysql && mkdir -p /var/lib/mysql

 # comment out a few problematic configuration values
 # don't reverse lookup hostnames, they are usually another container
 RUN sed -Ei 's/^(bind-address|log)/#&/' /etc/mysql/my.cnf \
 	&& echo 'skip-host-cache\nskip-name-resolve' | awk '{ print } $1 == "[mysqld]" && c == 0 { c = 1; system("cat") }' /etc/mysql/my.cnf > /tmp/my.cnf \
 	&& mv /tmp/my.cnf /etc/mysql/my.cnf

 ADD https://releases.hashicorp.com/consul-template/0.12.2/consul-template_0.12.2_linux_amd64.zip /usr/bin/
 RUN unzip /usr/bin/consul-template_0.12.2_linux_amd64.zip -d /usr/bin
 ADD mysql-master.ctmpl /tmp/mysql-master.ctmpl

 VOLUME /var/lib/mysql

 COPY docker-entrypoint.sh /entrypoint.sh
 ENTRYPOINT ["/entrypoint.sh"]

 EXPOSE 3306
 CMD ["mysqld"]



docker-entrypoint.sh
 #!/bin/bash
 set -eo pipefail

 # Query Consul for the address and port of the master MySQL service
 MYSQL_PORT_3306_TCP_ADDR="$(/usr/bin/consul-template --template=/tmp/mysql-master.ctmpl --consul=consul:8500 --dry -once | awk '{print $1}' | tail -1)"
 MYSQL_PORT_3306_TCP_PORT="$(/usr/bin/consul-template --template=/tmp/mysql-master.ctmpl --consul=consul:8500 --dry -once | awk '{print $2}' | tail -1)"

 # if command starts with an option, prepend mysqld
 if [ "${1:0:1}" = '-' ]; then
 	set -- mysqld "$@"
 fi

 if [ "$1" = 'mysqld' ]; then
 	# Get config
 	DATADIR="$("$@" --verbose --help 2>/dev/null | awk '$1 == "datadir" { print $2; exit }')"

 	if [ ! -d "$DATADIR/mysql" ]; then
 		if [ -z "$MYSQL_ROOT_PASSWORD" -a -z "$MYSQL_ALLOW_EMPTY_PASSWORD" -a -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
 			echo >&2 'error: database is uninitialized and password option is not specified '
 			echo >&2 '  You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD'
 			exit 1
 		fi

 		mkdir -p "$DATADIR"
 		chown -R mysql:mysql "$DATADIR"

 		echo 'Initializing database'
 		"$@" --initialize-insecure
 		echo 'Database initialized'

 		"$@" --skip-networking &
 		pid="$!"

 		mysql=( mysql --protocol=socket -uroot )

 		for i in {30..0}; do
 			if echo 'SELECT 1' | "${mysql[@]}" &> /dev/null; then
 				break
 			fi
 			echo 'MySQL init process in progress...'
 			sleep 1
 		done
 		if [ "$i" = 0 ]; then
 			echo >&2 'MySQL init process failed.'
 			exit 1
 		fi

 		# Set MySQL REPLICATION - MASTER
 		if [ -n "${REPLICATION_MASTER}" ]; then
 			echo "=> Configuring MySQL replication as master (1/2) ..."
 			if [ ! -f /replication_set.1 ]; then
 				echo "=> Writing configuration file /etc/mysql/my.cnf with server-id=1"
 				echo 'server-id = 1' >> /etc/mysql/my.cnf
 				echo 'log-bin = mysql-bin' >> /etc/mysql/my.cnf
 				touch /replication_set.1
 			else
 				echo "=> MySQL replication master already configured, skip"
 			fi
 		fi

 		# Set MySQL REPLICATION - SLAVE
 		if [ -n "${REPLICATION_SLAVE}" ]; then
 			echo "=> Configuring MySQL replication as slave (1/2) ..."
 			if [ -n "${MYSQL_PORT_3306_TCP_ADDR}" ] && [ -n "${MYSQL_PORT_3306_TCP_PORT}" ]; then
 				if [ ! -f /replication_set.1 ]; then
 					echo "=> Writing configuration file /etc/mysql/my.cnf with server-id=2"
 					echo 'server-id = 2' >> /etc/mysql/my.cnf
 					echo 'log-bin = mysql-bin' >> /etc/mysql/my.cnf
 					echo 'log-bin=slave-bin' >> /etc/mysql/my.cnf
 					touch /replication_set.1
 				else
 					echo "=> MySQL replication slave already configured, skip"
 				fi
 			else
 				echo "=> Cannot configure slave, please link it to another MySQL container with alias as 'mysql'"
 				exit 1
 			fi
 		fi

 		# Set MySQL REPLICATION - SLAVE
 		if [ -n "${REPLICATION_SLAVE}" ]; then
 			echo "=> Configuring MySQL replication as slave (2/2) ..."
 			if [ -n "${MYSQL_PORT_3306_TCP_ADDR}" ] && [ -n "${MYSQL_PORT_3306_TCP_PORT}" ]; then
 				if [ ! -f /replication_set.2 ]; then
 					echo "=> Setting master connection info on slave"
 					"${mysql[@]}" <<-EOSQL
 						-- What's done in this file shouldn't be replicated
 						-- or products like mysql-fabric won't work
 						SET @@SESSION.SQL_LOG_BIN=0;
 						CHANGE MASTER TO MASTER_HOST='${MYSQL_PORT_3306_TCP_ADDR}', MASTER_USER='${REPLICATION_USER}', MASTER_PASSWORD='${REPLICATION_PASS}', MASTER_PORT=${MYSQL_PORT_3306_TCP_PORT}, MASTER_CONNECT_RETRY=30;
 						START SLAVE ;
 					EOSQL
 					echo "=> Done!"
 					touch /replication_set.2
 				else
 					echo "=> MySQL replication slave already configured, skip"
 				fi
 			else
 				echo "=> Cannot configure slave, please link it to another MySQL container with alias as 'mysql'"
 				exit 1
 			fi
 		fi

 		if [ -z "$MYSQL_INITDB_SKIP_TZINFO" ]; then
 			# sed is for https://bugs.mysql.com/bug.php?id=20545
 			mysql_tzinfo_to_sql /usr/share/zoneinfo | sed 's/Local time zone must be set--see zic manual page/FCTY/' | "${mysql[@]}" mysql
 		fi

 		if [ ! -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
 			MYSQL_ROOT_PASSWORD="$(pwgen -1 32)"
 			echo "GENERATED ROOT PASSWORD: $MYSQL_ROOT_PASSWORD"
 		fi

 		"${mysql[@]}" <<-EOSQL
 			-- What's done in this file shouldn't be replicated
 			-- or products like mysql-fabric won't work
 			SET @@SESSION.SQL_LOG_BIN=0;
 			DELETE FROM mysql.user ;
 			CREATE USER 'root'@'%' IDENTIFIED BY '${MYSQL_ROOT_PASSWORD}' ;
 			GRANT ALL ON *.* TO 'root'@'%' WITH GRANT OPTION ;
 			DROP DATABASE IF EXISTS test ;
 			FLUSH PRIVILEGES ;
 		EOSQL

 		if [ ! -z "$MYSQL_ROOT_PASSWORD" ]; then
 			mysql+=( -p"${MYSQL_ROOT_PASSWORD}" )
 		fi

 		# Set MySQL REPLICATION - MASTER
 		if [ -n "${REPLICATION_MASTER}" ]; then
 			echo "=> Configuring MySQL replication as master (2/2) ..."
 			if [ ! -f /replication_set.2 ]; then
 				echo "=> Creating a log user ${REPLICATION_USER}:${REPLICATION_PASS}"
 				"${mysql[@]}" <<-EOSQL
 					-- What's done in this file shouldn't be replicated
 					-- or products like mysql-fabric won't work
 					SET @@SESSION.SQL_LOG_BIN=0;
 					CREATE USER '${REPLICATION_USER}'@'%' IDENTIFIED BY '${REPLICATION_PASS}';
 					GRANT REPLICATION SLAVE ON *.* TO '${REPLICATION_USER}'@'%' ;
 					FLUSH PRIVILEGES ;
 					RESET MASTER ;
 				EOSQL
 				echo "=> Done!"
 				touch /replication_set.2
 			else
 				echo "=> MySQL replication master already configured, skip"
 			fi
 		fi

 		if [ "$MYSQL_DATABASE" ]; then
 			echo "CREATE DATABASE IF NOT EXISTS \`$MYSQL_DATABASE\` ;" | "${mysql[@]}"
 			mysql+=( "$MYSQL_DATABASE" )
 		fi

 		if [ "$MYSQL_USER" -a "$MYSQL_PASSWORD" ]; then
 			echo "CREATE USER '$MYSQL_USER'@'%' IDENTIFIED BY '$MYSQL_PASSWORD' ;" | "${mysql[@]}"
 			if [ "$MYSQL_DATABASE" ]; then
 				echo "GRANT ALL ON \`$MYSQL_DATABASE\`.* TO '$MYSQL_USER'@'%' ;" | "${mysql[@]}"
 			fi
 			echo 'FLUSH PRIVILEGES ;' | "${mysql[@]}"
 		fi

 		echo
 		for f in /docker-entrypoint-initdb.d/*; do
 			case "$f" in
 				*.sh)     echo "$0: running $f"; . "$f" ;;
 				*.sql)    echo "$0: running $f"; "${mysql[@]}" < "$f"; echo ;;
 				*.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
 				*)        echo "$0: ignoring $f" ;;
 			esac
 			echo
 		done

 		if [ ! -z "$MYSQL_ONETIME_PASSWORD" ]; then
 			"${mysql[@]}" <<-EOSQL
 				ALTER USER 'root'@'%' PASSWORD EXPIRE;
 			EOSQL
 		fi

 		if ! kill -s TERM "$pid" || ! wait "$pid"; then
 			echo >&2 'MySQL init process failed.'
 			exit 1
 		fi

 		echo
 		echo 'MySQL init process done. Ready for start up.'
 		echo
 	fi

 	chown -R mysql:mysql "$DATADIR"
 fi

 exec "$@"



And the template for Consul Template, mysql-master.ctmpl:
 {{range service "master"}}{{.Address}} {{.Port}} {{end}} 
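The slave's entrypoint feeds this template to consul-template with `--dry -once` and splits the last rendered line into an address and a port. Here is a small sketch of that parsing, run against a simulated rendering (the address and port values are made up for illustration; a live setup would pipe the real consul-template output instead):

```shell
#!/bin/bash
# Simulated output of:
#   consul-template --template=/tmp/mysql-master.ctmpl --consul=consul:8500 --dry -once
# The template renders "{{.Address}} {{.Port}}" for each instance of the
# "master" service; the entrypoint keeps only the last line.
rendered="10.0.1.5 3306"

MYSQL_PORT_3306_TCP_ADDR="$(echo "$rendered" | awk '{print $1}' | tail -1)"
MYSQL_PORT_3306_TCP_PORT="$(echo "$rendered" | awk '{print $2}' | tail -1)"

echo "master at ${MYSQL_PORT_3306_TCP_ADDR}:${MYSQL_PORT_3306_TCP_PORT}"
```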

Build the image:
 docker build -t mysql-slave . 

Run:
 docker run --name mysql-slave.0 \
 	-v /mnt/volumes/slave:/var/lib/mysql \
 	-e MYSQL_ROOT_PASSWORD=rootpass \
 	-e REPLICATION_SLAVE=true -e REPLICATION_USER=replica -e REPLICATION_PASS=replica \
 	--link consul:consul \
 	-l "SERVICE_NAME=slave" -l "SERVICE_PORT=3307" \
 	-p 3307:3306 -d mysql-slave

Now it is time to launch our backend.
 docker run --name fpm.0 -d -v /mnt/storage/www:/var/www/html \
 	-e WORDPRESS_DB_NAME=wordpress -e WORDPRESS_DB_USER=wordpress -e WORDPRESS_DB_PASSWORD=wordpress \
 	--link consul:consul \
 	-l "SERVICE_NAME=php-fpm" -l "SERVICE_PORT=9000" -l "SERVICE_TAGS=worker" \
 	-p 9000:9000 fpm

If everything went well, then, opening the address of our balancer in the browser, we will see the WordPress welcome page offering to install it.
Otherwise, we look at the logs.
 docker logs <container_name> 


Docker-compose.



We have built images with all the services our application needs and can run them anytime, anywhere. But why should we have to memorize so many commands, launch parameters and variables just to start the containers? Here another great tool comes to our aid: docker-compose .
This tool is designed for running applications that span multiple containers. Docker-compose is driven by a declarative YAML file that specifies the parameters and variables each container starts with. Such files are easy to read and maintain.

Let's write a simple docker-compose.yml that starts everything our web application needs in several containers.

docker-compose.yml
 mysql-master:
   image: mysql-master
   ports:
     - "3306:3306"
   environment:
     - "MYSQL_DATABASE=wp"
     - "MYSQL_USER=wordpress"
     - "MYSQL_PASSWORD=wordpress"
     - "REPLICATION_MASTER=true"
     - "REPLICATION_USER=replica"
     - "REPLICATION_PASS=replica"
   external_links:
     - consul:consul
   labels:
     - "SERVICE_NAME=mysql-master"
     - "SERVICE_PORT=3306"
     - "SERVICE_TAGS=db"
   volumes:
     - '/mnt/storage/master:/var/lib/mysql'

 mysql-slave:
   image: mysql-slave
   ports:
     - "3307:3306"
   environment:
     - "REPLICATION_SLAVE=true"
     - "REPLICATION_USER=replica"
     - "REPLICATION_PASS=replica"
   external_links:
     - consul:consul
   labels:
     - "SERVICE_NAME=mysql-slave"
     - "SERVICE_PORT=3307"
     - "SERVICE_TAGS=db"
   volumes:
     - '/mnt/storage/slave:/var/lib/mysql'

 wordpress:
   image: fpm
   ports:
     - "9000:9000"
   environment:
     - "WORDPRESS_DB_NAME=wp"
     - "WORDPRESS_DB_USER=wordpress"
     - "WORDPRESS_DB_PASSWORD=wordpress"
   external_links:
     - consul:consul
   labels:
     - "SERVICE_NAME=php-fpm"
     - "SERVICE_PORT=9000"
     - "SERVICE_TAGS=worker"
   volumes:
     - '/mnt/storage/www:/var/www/html'



Now it only remains to run a single command to launch our "dockerized" application, sit back and admire the result.
 docker-compose up 
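Before bringing the stack up, it can be handy to check that the credentials the compose file hard-codes are all present. Below is a tiny hypothetical pre-flight helper, not part of docker-compose itself; the `preflight` name and the variable list are ours, chosen to match the environment section of the compose file above:

```shell
#!/bin/bash
# Hypothetical pre-flight check before `docker-compose up`: verify that
# every variable the MySQL image expects has a non-empty value.
preflight() {
	local var missing=0
	for var in MYSQL_DATABASE MYSQL_USER MYSQL_PASSWORD; do
		if [ -z "${!var}" ]; then   # ${!var} is bash indirect expansion
			echo "missing: $var"
			missing=1
		fi
	done
	return "$missing"
}

MYSQL_DATABASE=wp MYSQL_USER=wordpress MYSQL_PASSWORD=wordpress preflight \
	&& echo "ok, safe to run: docker-compose up -d"
```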


Conclusion



Advantages



- Distributed application architecture.
Swarm copes with load balancing. We can run as many copies of the application as the nodes' resources allow, and do it "in one click."

- Easy scaling.
As you can see, adding a new node to the cluster is quite simple: connect the node and start the service. If desired, this procedure can be automated further.

- Dynamic application infrastructure.
Each service can easily receive information about where it is and with whom it needs to interact.

- Run the application in one command.
The docker-compose script allows us to deploy the entire infrastructure and link applications literally at the touch of a button.

Disadvantages



Persistent data.
It has been said more than once that things are not so smooth in Docker with stateful services. We tried Flocker, but it seemed very raw: the plugin kept "falling off" for unknown reasons.
To synchronize persistent data we used first GlusterFS, then lsyncd. GlusterFS seems to cope with its task quite well, but we have not yet dared to use it in production.

Perhaps you know a more elegant way to solve this problem; it would be great to hear about it.




Source: https://habr.com/ru/post/278939/

