
Docker implementation for a small project in Production, part 3


In the previous parts we prepared the server for running containers:

Part 1. Installing CoreOS
Part 2. Basic configuration and security settings for SSH
In this part we will learn how to work with containers and deploy the application stack in a matter of seconds. But first, a small digression prompted by the comments and remarks I received.

One of the important questions was how CoreOS handles swap. The answer: it handles it just fine, and you are about to see that for yourself. So let's get down to setting it up. As always, we need an SSH connection to the server and a text editor. By the way, I said earlier that there is no vim editor in the system; I was wrong, it is there, so you can use vi or vim, whichever you prefer. To enable swap we do not need to repartition the disk; an existing ext4 partition is enough. Swap will be started as a systemd service. The advantage of this approach is that swap can be disabled and enabled like any other service, and its size can be adjusted in the service description file.

So, on the command line, do the following:

sudo vi /etc/systemd/system/swap.service 

Add the following content to the file:

[Unit]
Description=Turn on swap partition

[Service]
Type=oneshot
Environment="SWAP_PATH=/var/vm" "SWAP_FILE=swapfile1"
ExecStartPre=-/usr/bin/rm -rf ${SWAP_PATH}
ExecStartPre=/usr/bin/mkdir -p ${SWAP_PATH}
ExecStartPre=/usr/bin/touch ${SWAP_PATH}/${SWAP_FILE}
ExecStartPre=/bin/bash -c "fallocate -l 2048m ${SWAP_PATH}/${SWAP_FILE}"
ExecStartPre=/usr/bin/chmod 600 ${SWAP_PATH}/${SWAP_FILE}
ExecStartPre=/usr/sbin/mkswap ${SWAP_PATH}/${SWAP_FILE}
ExecStartPre=/usr/sbin/sysctl vm.swappiness=10
ExecStart=/sbin/swapon ${SWAP_PATH}/${SWAP_FILE}
ExecStop=/sbin/swapoff ${SWAP_PATH}/${SWAP_FILE}
ExecStopPost=-/usr/bin/rm -rf ${SWAP_PATH}
RemainAfterExit=true

[Install]
WantedBy=multi-user.target

For convenience, the swap parameters are set through environment variables:

Environment="SWAP_PATH=/var/vm" "SWAP_FILE=swapfile1"

They specify the path and the file name used throughout the unit. The following line:

 ExecStartPre=/bin/bash -c "fallocate -l 2048m ${SWAP_PATH}/${SWAP_FILE}" 

It sets the swap file size to 2048 megabytes, i.e. 2 GB, which should be more than enough for our system.

This part of the unit is responsible for actually enabling and disabling the swap file; I think no additional explanation is needed, it is perfectly readable as is.

ExecStart=/sbin/swapon ${SWAP_PATH}/${SWAP_FILE}
ExecStop=/sbin/swapoff ${SWAP_PATH}/${SWAP_FILE}

After that, save the file and leave the editor. Now we need to enable the service and activate the swap.

To do this, as usual, use the console and enter the command:

 sudo systemctl enable --now /etc/systemd/system/swap.service 

Once the service has started, swap is enabled. To check that it is active, a simple command will help:

free -hm

It shows memory usage in a human-readable form. You will see that the Swap line now reports the size we configured, namely 2 GB; how much of it is used depends on the load on the system, and since we have not loaded it yet, it will show zeroes.
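For reference, the output should look roughly like the sketch below; the numbers are purely illustrative and depend on your machine (free -h alone is sufficient on most systems), the point is the 2 GB on the Swap line:

$ free -h
              total        used        free      shared  buff/cache   available
Mem:           2.0G        120M        1.5G        8.0M        400M        1.7G
Swap:          2.0G          0B        2.0G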

Great, now it is time to prepare everything we need to deploy our site. Since this is a WordPress site, it needs a web server (Apache or Nginx), PHP and MySQL. Personally, I will use the following stack: PHP 7.1, Nginx 1.11.9 and MariaDB 10.1.21. Why this one? Simply because I like it. I consider it the most productive combination for running WordPress, and on top of it we will also add Varnish 5 and Memcached 1.4.34 so that our blog works even faster. We start with the simplest part and move on to the more complex. First we launch memcached; as usual, we run the following command in the console:

 docker run -d -p 11211:11211 --restart=always --log-driver=syslog --name=memcached memcached 

It is all quite simple: we tell the container to restart automatically, send the logs to syslog, and give it a recognizable name. The default settings are well balanced and, if I am not mistaken, 64 megabytes of memory are allocated, which is enough for our blog. To make use of it in WordPress you need to install a plugin; I personally use WP Total Cache, and there are plenty of manuals on how to configure it, so we will not dwell on that.
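To make sure the cache is actually answering, you can send it a quick stats request. A small hedged check: nc is not part of stock CoreOS, so run it from another machine, from the toolbox, or from a throwaway container:

# memcached speaks a plain text protocol; a few STAT lines in response mean it is alive
printf 'stats\nquit\n' | nc 127.0.0.1 11211 | head -n 5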

Next, we will launch our database server, this is also a trivial task and does not require much time and settings, the command to start looks like this:

 docker run -d -ti -p local_ip:3306:3306 --log-driver=syslog -v /cloud/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=PASSWORD --restart=always --name=mariadb mariadb 

Let me explain the options we pass at startup. In local_ip:3306:3306 you specify the IP address on which the database will be reachable: if you have a private cloud, specify the address of the interface that faces the cloud's internal network; if you simply have a dedicated virtual host, specify 127.0.0.1. You should not publish the database server on an external IP, for the sake of security. This way, all software that uses the database connects to it via our internal network address. Logs go to syslog as usual, while the -v /cloud/mysql:/var/lib/mysql parameter mounts a local folder into the container; this is done so that our databases stay safe and sound when the container is destroyed. We also have to pass the MYSQL_ROOT_PASSWORD environment variable into the container; without it the container will not start. It sets the root password for our database: the more complex the better, and do not skimp on its length.
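One step this article deliberately leaves aside: WordPress should get its own database and user rather than working as root. A minimal hedged sketch using the mysql client bundled in the official image; the database name, user name and password here are arbitrary examples:

# create a dedicated database and user for the blog (names and password are examples)
docker exec -it mariadb mysql -uroot -p -e "
    CREATE DATABASE wordpress CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
    CREATE USER 'wp_user'@'%' IDENTIFIED BY 'CHANGE_ME';
    GRANT ALL PRIVILEGES ON wordpress.* TO 'wp_user'@'%';
    FLUSH PRIVILEGES;"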

You will rarely need to enter this root password anyway. The remaining parameters are already familiar to us. I just want to point out that I use the official images of these applications from the hub without specifying a tag, so the latest version is taken by default. However, if an image is already present locally, the locally stored version is used, so to guarantee that the freshest version comes from the hub, pull the images explicitly:

docker pull memcached
docker pull mariadb
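Keep in mind that pulling a newer image does not update a container that is already running from the old one; the container has to be recreated. A minimal sketch of the update cycle, using the memcached container from above as an example (the run line is the same one we used earlier):

docker pull memcached        # fetch the latest image from the hub
docker stop memcached        # stop the running container
docker rm memcached          # remove it; memcached keeps no data we would lose
docker run -d -p 11211:11211 --restart=always --log-driver=syslog --name=memcached memcached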

For those who want the time inside containers to match the time zone we set at the installation stage, it makes sense to add the following option to the launch command of each container:

 -v /etc/localtime:/etc/localtime 

It mounts the host file with the time zone setting into the container. Next, we launch our web server with the following command:

 docker run -d -p 80:80 -p 443:443 -p 81:81 -v /cloud/run/php-fpm:/sock -v /cloud/etc/nginx:/etc/nginx -v /cloud/etc/letsencrypt/:/etc/letsencrypt/ --log-driver=syslog -v /cloud/data/www/:/var/www/html --restart=always --name=nginx nginx 

If you look closely, we now pass a lot more parameters to the container at launch, so let's go through them. First, we publish ports 80, 443 and 81. The first two are self-explanatory, but 81 raises questions: we will use it for the Varnish backend, more on that later. The scheme is quite simple: since Varnish cannot handle SSL, the client first hits port 80 on Nginx, which redirects to port 443 (we will use Let's Encrypt certificates); from 443 the request is proxied to Varnish, which returns the result from its cache if it finds one, and otherwise goes to the backend on port 81, which is again served by Nginx. Example configuration files are posted below. For our PHP container to process scripts, we mount the directory with the php-fpm socket into the web server container: -v /cloud/run/php-fpm:/sock. We also mount the configuration with -v /cloud/etc/nginx:/etc/nginx, the certificates with -v /cloud/etc/letsencrypt/:/etc/letsencrypt/, set up logging, and mount our directory with the sites: -v /cloud/data/www/:/var/www/html. That is all for launching the container, but for it to actually start we need to prepare the configuration files. An example, as promised, is below.

nginx.conf
user nginx;
worker_processes 1;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    server_tokens off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /dev/stdout;
    sendfile on;
    sendfile_max_chunk 128k;
    keepalive_timeout 65;
    keepalive_requests 10;
    client_body_buffer_size 1K;
    client_header_buffer_size 2k;
    large_client_header_buffers 2 1k;
    client_max_body_size 32m;
    fastcgi_buffers 64 16K;
    fastcgi_buffer_size 64k;
    client_body_timeout 10;
    client_header_timeout 10;
    reset_timedout_connection on;
    send_timeout 1;
    tcp_nopush on;
    tcp_nodelay on;
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
    include /etc/nginx/sites-enabled/*.conf;
}

Site-0001.conf

# frontend configuration section
# listen based 80 http
server {
    listen 80 default_server;
    server_name www.your_site.ru;
    location /.well-known {
        root /var/www/html;
    }
    return 301 https://$host$request_uri;
}

# listen based 80 http
server {
    listen 80;
    server_name your_site.ru;
    location /.well-known {
        root /var/www/html;
    }
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name www.your_site.ru;
    location /.well-known {
        root /var/www/html;
    }
    ssl on;
    ssl_certificate /etc/letsencrypt/live/your_site.ru/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your_site.ru/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/your_site.ru/chain.pem;
    return 301 https://your_site.ru$request_uri;
}

server {
    listen 443 ssl http2 default_server;
    server_name your_site.ru;
    ssl on;
    ssl_stapling on;
    ssl_certificate /etc/letsencrypt/live/your_site.ru/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your_site.ru/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/your_site.ru/chain.pem;
    root /var/www/html/your_site.ru;
    rewrite /wp-admin$ $scheme://$host$uri/ permanent;
    keepalive_timeout 60 60;
    gzip on;
    gzip_comp_level 1;
    gzip_min_length 512;
    gzip_buffers 8 64k;
    gzip_types text/plain;
    gzip_proxied any;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:ssl_session_cache:10m;
    ssl_session_timeout 2m;
    ssl_dhparam /etc/nginx/ssl/dh2048.pem;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA512:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:ECDH+AESGCM:ECDH+AES256:DH+AESGCM:DH+AES256:RSA+AESGCM:!aNULL:!eNULL:!LOW:!RC4:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS;
    add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';

    location / {
        location = /wp-login.php {
            auth_basic "Restricted";
            auth_basic_user_file /etc/nginx/.htpasswd/passwd;
            proxy_pass http://your_internal_ip:81;
        }
        location ~* /wp-admin/~^.*\$ {
            auth_basic "Authorization Required";
            auth_basic_user_file /etc/nginx/.htpasswd/passwd;
            proxy_pass http://your_internal_ip:81;
        }
        proxy_pass http://your_internal_ip:6081/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
    }
}
# end of frontend configuration section

# backend configuration
server {
    listen 81;
    root /var/www/html/your_site.ru;
    gzip on;
    gzip_comp_level 7;
    gzip_min_length 512;
    gzip_buffers 8 64k;
    gzip_types text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript image/svg+xml;
    gzip_proxied any;
    server_name your_site.ru;
    index index.html index.php;

    location / {
        if ($host !~ ^(your_site.ru)$ ) {
            return 444;
        }
        try_files $uri $uri/ /index.php?$args;
        error_page 404 /404.html;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location ~ /\.ht {
            deny all;
        }
        location ~* /(?:uploads|files)/.*\.php$ {
            deny all; # deny scripts in upload directories
        }
        location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
            access_log off;
            log_not_found off;
            expires max; # cache for static files
        }
        location = /favicon.ico {
            log_not_found off;
            access_log off;
        }
        location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }
        location = /xmlrpc.php {
            deny all;
        }
        # deny by referer
        if ( $http_referer ~* (babes|forsale|girl|jewelry|love|nudit|organic|poker|porn|sex|teen) ) {
            return 403;
        }
        if ($http_user_agent ~* LWP::Simple|BBBike|wget) {
            return 403;
        }
        if ($http_user_agent ~* msnbot|scrapbot) {
            return 403;
        }
    }

    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        include fastcgi_params;
        fastcgi_param HTTPS on;
        fastcgi_ignore_client_abort off;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/sock/php-fpm.sock;
    }
}


I will not explain the values in the configs; there are articles on Habr that I used to set up these services myself, and you can easily find them if you are interested.

Now let's get to the most interesting part: building our PHP 7 image. The base image does not include all the extensions we need, so we will build the image ourselves. To make almost any engine with any theme work, we describe the image with the following Dockerfile:

FROM php:7-fpm
RUN apt-get update \
    && apt-get -y install \
        libmagickwand-dev \
        libmcrypt-dev \
        libpng12-dev \
        libjpeg62-turbo-dev \
        libfreetype6-dev \
        libmemcached-dev \
        libicu-dev \
        --no-install-recommends \
    && pecl install imagick \
    && docker-php-ext-enable imagick \
    && curl -L -o /tmp/memcached.tar.gz "https://github.com/php-memcached-dev/php-memcached/archive/php7.tar.gz" \
    && mkdir -p /usr/src/php/ext/memcached \
    && tar -C /usr/src/php/ext/memcached -zxvf /tmp/memcached.tar.gz --strip 1 \
    && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-configure memcached \
    && docker-php-ext-install gd mcrypt mysqli pdo_mysql zip calendar opcache memcached exif intl sockets \
    && rm -rf /tmp/* /var/cache/apk/* /var/lib/apt/lists/*

Since the base image is built on Debian Jessie, we will not break with tradition and will build ours on top of it, simply adding the extensions we need. After that only the small things remain: put the config in place and run the container.
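The build command itself is not shown in the article, so here is a minimal sketch; the tag local/php7 is simply the name used in the next step, and the Dockerfile is assumed to live in the current directory:

docker build -t local/php7 .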

After building the image, we need to pull the default configuration files out of it for editing; suppose we tagged the image local/php7 during the build. Let's proceed to the configuration:

docker create --name=php7 local/php7
docker cp php7:/usr/local/etc /cloud/etc/php-fpm

That's it: we have copied the default configuration files into our directory, and it only remains to tweak them a bit.

docker-php-custom-user.ini
default_charset = "UTF-8"
file_uploads = On
max_file_uploads = 20
date.timezone = "Europe/Moscow"
cgi.fix_pathinfo=1
display_errors = Off
log_errors = On
log_errors_max_len = 1024
html_errors = On
register_globals = Off
short_open_tag = Off
safe_mode = Off
output_buffering = Off
zlib.output_compression = Off
implicit_flush = Off
allow_call_time_pass_reference = Off
max_execution_time = 30
max_input_time = 60
max_input_vars = 10000
variables_order = "EGPCS"
register_argc_argv = Off
magic_quotes_gpc = Off
magic_quotes_runtime = Off
magic_quotes_sybase = Off
session.use_cookies = 1
magic_quotes_gpc = Off;
default_charset = UTF-8;
memory_limit = 64M;
max_execution_time = 36000;
upload_max_filesize = 999M;
mysql.connect_timeout = 20;
session.auto_start = Off;
session.use_only_cookies = On;
session.use_cookies = On;
session.use_trans_sid = Off;
session.cookie_httponly = On;
session.gc_maxlifetime = 3600;
allow_url_fopen = on;


Next, in the same folder, we create a file for each PHP extension, named after the extension and with the .ini suffix, with the following content:

 extension=imagick.so 

This is, for example, the content of the docker-php-ext-imagick.ini file.

And so on for each extension. I will not list them all; I think the principle is clear. For the particularly lazy, I laid it all out here.
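If you prefer, the per-extension files can be generated in one go. A hedged sketch: the extension list mirrors the Dockerfile above (opcache is left out because, as a Zend extension, it would need a zend_extension= line rather than extension=), and the conf.d path is the one that results from the docker cp command we ran earlier:

cd /cloud/etc/php-fpm/etc/php/conf.d
for ext in imagick memcached mysqli pdo_mysql gd mcrypt zip calendar exif intl sockets; do
    echo "extension=${ext}.so" > "docker-php-ext-${ext}.ini"   # one ini file per extension
done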

The most important setting is in the zz-docker.conf file.

[global]
daemonize = no

[www]
listen = /sock/php-fpm.sock

Do not forget to set this, otherwise php-fpm will not work through a unix socket. If you leave the default, php-fpm will listen on port 9000 and you will have to change the upstream in nginx from the socket to TCP; that option is for those who want to move php-fpm to a separate machine in the future, which within a virtual private cloud is not only easy but also safe.

Everything is ready, now we start the container by typing the command in the console:

 docker run -d -v /cloud/run/php-fpm:/sock -v /cloud/etc/php-fpm/etc:/usr/local/etc -v /cloud/data/www:/var/www/html -v /cloud/log/php-fpm:/var/log/php-fpm --log-driver=syslog --restart=always --name=php7 visman/php7.1 

I will not describe the parameters of the launch command again; I will only note that I mount the folder with the web server files, -v /cloud/data/www:/var/www/html, into this container as well, and I think you can guess why. Note that the launch line uses my pre-built image visman/php7.1; if you built your own image in the previous step, substitute its tag (for example, local/php7).
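Before moving on, it is worth a quick check that everything started so far is actually up; a small hedged sanity check using the container names from this article:

docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'   # memcached, mariadb, nginx and php7 should all be Up
docker exec php7 php -v                                          # confirm the PHP version inside the container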

Let's summarize: we have Nginx ready to accept connections, PHP-FPM 7.1 to process our PHP files, a database, and the beginnings of caching in the form of memcached. Now we need to set up Varnish. But first we will build our image; as usual, the Dockerfile is below.

FROM debian:jessie

RUN export DEBIAN_FRONTEND=noninteractive && \
    apt-get update -y -q && \
    apt-get install -y -q apt-transport-https curl && \
    rm -rf /var/lib/apt/lists/*

RUN curl -k https://repo.varnish-cache.org/GPG-key.txt | apt-key add - && \
    echo "deb https://repo.varnish-cache.org/debian/ jessie varnish-4.1" | tee -a /etc/apt/sources.list.d/varnish-cache.list && \
    apt-get update -y -q && \
    apt-get install -y -q gcc libjemalloc1 libedit2 && \
    curl -O https://repo.varnish-cache.org/pkg/5.0.0/varnish_5.0.0-1_amd64.deb && \
    dpkg -i varnish_5.0.0-1_amd64.deb && \
    rm varnish_5.0.0-1_amd64.deb && \
    apt-get install -y -q varnish-agent && \
    rm -rf /var/lib/apt/lists/*

ADD docker-entrypoint.sh /usr/bin/entrypoint.sh
ADD varnish /etc/default/varnish
RUN chmod +x /usr/bin/entrypoint.sh

EXPOSE 6081 6082 6085

ENTRYPOINT ["/usr/bin/entrypoint.sh"]

Please note that I install the latest version 5 not from the repository but manually, since it came out not long ago and has not yet made it into the repos. I also add Varnish Agent; unfortunately it is version 4.1, but that does not hurt us. What it is for, we will see later.

entrypoint.sh
#!/bin/bash
set -e

service varnish start
varnish-agent -c 6085 -H /var/www/html/varnish-dashboard/
tailf /etc/varnish/default.vcl

The last line keeps printing our Varnish caching rules; it is not strictly required, but during debugging it is convenient to see whether the necessary rules have been picked up, and it also gives the container a foreground process so it does not exit.

varnish
RELOAD_VCL=1
START=yes

# Maximum number of open files (for ulimit -n)
NFILES=131072

# Maximum locked memory size (for ulimit -l)
# Used for locking the shared memory log in memory. If you increase log size,
# you need to increase this number as well
MEMLOCK=82000

DAEMON_OPTS="-a :6081 \
             -T :6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"

This is the settings file; I allocate 256 MB for the cache, which is enough for me personally. Next, we need to set up the Varnish Dashboard; how that is done is described here. I will not go through the process, but if anyone needs help, write in the comments or in a private message and I will definitely help. For nginx, the configuration file looks like this:

server {
    listen 80;
    server_name varnish.your_site.ru;
    return 301 https://$host$request_uri;

    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    } # managed by Certbot
}

server {
    listen 443 ssl http2;
    server_name varnish.your_site.ru;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/varnish.your_site.ru/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/varnish.your_site.ru/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/varnish.your_site.ru/chain.pem;

    location /.well-known {
        root /var/www/html;
    }

    location / {
        proxy_pass http://internal_ip:6085;
    }
}

Now run the container.

 docker run -d -ti -p 6082:6082 -p 6081:6081 -p 6085:6085 -v /cloud/data/www/varnish-dashboard:/var/www/html/varnish-dashboard -v /cloud/etc/varnish:/etc/varnish -v /etc/localtime:/etc/localtime --log-driver=syslog --name=varnish visman/d_varnish:5 

For those who are too lazy to build their own image, the launch line already references my previously built image; you only need to put the configs in place. I deliberately did not post the default.vcl file, since everyone's is different, and there is already a good article on how to configure Varnish. And with that, we have finished installing the stack needed to run a blog on WordPress.

I deliberately skipped the MariaDB tuning part, since the Internet is full of instructions on how to do it. I do not use docker-compose because it is not shipped with CoreOS, although that is solvable; besides, launching individual services seems more convenient to me. Moreover, all the commands can be wrapped into a single script, put into cloud-init, or driven by Ansible...
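For illustration, here is a minimal sketch of such a wrapper script. It simply replays the launch commands from this article in dependency order; the paths, container names, local_ip and PASSWORD placeholders are the ones used above, and the PHP and Varnish image tags are my images, so substitute your own:

#!/bin/bash
# deploy-stack.sh: bring up the whole blog stack on a fresh host
set -euo pipefail

docker run -d -p 11211:11211 --restart=always --log-driver=syslog \
  --name=memcached memcached

docker run -d -ti -p local_ip:3306:3306 --log-driver=syslog \
  -v /cloud/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=PASSWORD \
  --restart=always --name=mariadb mariadb

docker run -d -v /cloud/run/php-fpm:/sock -v /cloud/etc/php-fpm/etc:/usr/local/etc \
  -v /cloud/data/www:/var/www/html -v /cloud/log/php-fpm:/var/log/php-fpm \
  --log-driver=syslog --restart=always --name=php7 visman/php7.1

docker run -d -ti -p 6082:6082 -p 6081:6081 -p 6085:6085 \
  -v /cloud/data/www/varnish-dashboard:/var/www/html/varnish-dashboard \
  -v /cloud/etc/varnish:/etc/varnish -v /etc/localtime:/etc/localtime \
  --log-driver=syslog --name=varnish visman/d_varnish:5

docker run -d -p 80:80 -p 443:443 -p 81:81 -v /cloud/run/php-fpm:/sock \
  -v /cloud/etc/nginx:/etc/nginx -v /cloud/etc/letsencrypt/:/etc/letsencrypt/ \
  -v /cloud/data/www/:/var/www/html --log-driver=syslog \
  --restart=always --name=nginx nginx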

Thank you for your attention; to be continued. If I have forgotten anything, I apologize, as I am writing this article in between work.

Source: https://habr.com/ru/post/320872/

