
Docker for Symfony 4: from local environment to production

Background


One fine day I needed to set up a development environment for my project. Vagrant had already worn out its welcome, and I wanted a single development environment for all project participants that would be identical to the production server. Having heard a lot about the hip Docker, I decided to give it a try. Below I will try to describe in as much detail as possible all the steps from installing Docker locally to deploying the product on KVM hosting.

Initial technology stack:

- Docker
- Symfony 4
- nginx
- php-fpm
- postgresql
- elasticsearch
- rabbitmq
- jenkins
Hardware:

- a laptop running Ubuntu 16.04
- production server on KVM hosting

Why did I list the hardware alongside the technology stack?

If you have never worked with Docker before, you may run into a number of problems related specifically to the hardware, the operating system of your laptop, or the type of virtualization your hosting uses.

The first and probably most important aspect when you start working with Docker is the operating system on your laptop. Docker is easiest to work with on Linux systems. If you are on Windows or Mac, you are guaranteed to run into some difficulties; they will not be critical, and googling a fix will not be a problem.

The second question is hosting. Why do we need hosting with KVM virtualization? The reason is that container-based VPS virtualization (OpenVZ and the like) is very different from KVM: you usually cannot install Docker on such a VPS yourself, because the VPS allocates server resources dynamically.

Interim conclusion: for the fastest start with Docker, it is most reasonable to choose Ubuntu as the local OS and KVM hosting (or your own server). The rest of the story relies on these two components.

Docker Compose for the local environment


Installation


First you need to install Docker locally. Installation instructions are in the official documentation for Ubuntu (you need to install both docker and docker-compose), or you can run this command in the console:

curl -sSL https://get.docker.com/ | sh

This script installs Docker itself; if docker-compose does not come along with it, install it separately following the official documentation. After that, check the Docker version with:

 docker --version 

I run all of this on Docker version 18.06.0-ce.

Installation is complete!

Understanding the basics


To work with something more or less successfully, you need an idea of how it works. If you have previously worked only with Vagrant or something similar, Docker will feel extremely unusual and confusing at first, but only at first.

I will try to draw an analogy with Vagrant. Many may say that comparing Vagrant and Docker is fundamentally wrong. I agree, and I am not going to compare them; I will only try to explain the Docker system to newcomers who have worked only with Vagrant, by appealing to what they already know.

My back-of-the-napkin view of a container is this: each container is a tiny isolated world. You can imagine each container as a tiny Vagrant box with only one tool installed, for example nginx or PHP. Initially, containers are isolated from everything around them, but with a few simple manipulations you can configure them to communicate with each other and work together. This does not mean that each container is a separate virtual machine, not at all, but it is easier for the initial understanding, it seems to me.

Vagrant simply bites off a chunk of your computer's resources, creates a virtual machine, installs an operating system on it, installs libraries, and installs everything you have described in the script after vagrant up. In the end, it looks roughly like this:

(diagram: Vagrant reserves resources and runs a full virtual machine)

Docker, in turn, works radically differently. It does not create virtual machines. Docker creates containers (for now you can think of them as micro virtual machines) with their own operating system, often Alpine, and the 1-3 libraries needed for the application to work, for example PHP or nginx. At the same time, Docker does not reserve your system's resources; it simply uses them as needed. In the end, it looks something like this:

(diagram: Docker containers share the host system's kernel and resources)

Each container is created from an image. The overwhelming majority of images are extensions of another image, for example Ubuntu Xenial, Alpine or Debian, on top of which additional drivers and other components are layered.

My first image was for php-fpm. It extends the official image php:7.2-fpm-alpine3.6. That is, it takes the official image and adds the components I need, for example pdo_pgsql, imagick, zip and so on. This way you can create an image that suits your project. If you wish, you can use mine (otezvikentiy/php7.2-fpm, the one referenced in the compose file below).
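
If you want to build such an extension yourself, the Dockerfile boils down to a few lines. Here is a minimal sketch, not the exact recipe behind my image: the package list is illustrative, and extensions like imagick are installed through PECL rather than docker-php-ext-install.

# A minimal, illustrative sketch of extending the official php-fpm Alpine image
FROM php:7.2-fpm-alpine3.6

# pdo_pgsql needs the postgres headers at build time
RUN apk add --no-cache postgresql-dev \
    && docker-php-ext-install pdo_pgsql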

Creating images is quite simple, in my opinion, if they are based on Xenial, for example, but a little painful if they are based on Alpine. Before working with Docker I had not even heard of Alpine, since I had always worked with Vagrant on Ubuntu Xenial. Alpine is a minimal Linux distribution that has almost nothing in it out of the box. So at first it is extremely inconvenient to work with: there is no familiar apt-get install, only apk add with a rather modest set of packages. The big plus of Alpine is its weight: if Xenial weighs (abstractly) 500 MB, then Alpine is about 78 MB. What does this affect? The speed of the build and the final weight of all the images that will end up stored on your server. Suppose you have 5 different containers, all based on Xenial: their total weight will be more than 2.5 GB, while on Alpine it is only about 500 MB. So ideally you should strive to keep containers as thin as possible. (A useful link for finding packages in Alpine: the Alpine packages index.)

Docker Hub pages always describe how to start a container with docker run, and for some reason rarely how to run it through docker-compose, even though docker-compose is what you will use most of the time: few people want to start every container, network and port mapping by hand. From the user's point of view, docker-compose is a yaml file with settings. It describes each of the services that need to be launched. My build for the local environment looks like this:

version: '3.1'
services:
    php-fpm:
        image: otezvikentiy/php7.2-fpm:0.0.11
        ports:
            - '9000:9000'
        volumes:
            - ../:/app
        working_dir: /app
        container_name: 'php-fpm'
    nginx:
        image: nginx:1.15.0
        container_name: 'nginx'
        working_dir: /app
        ports:
            - '7777:80'
        volumes:
            - ../:/app
            - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    postgres:
        image: postgres:9.6
        ports:
            - '5432:5432'
        container_name: 'postgresql'
        working_dir: /app
        restart: always
        environment:
            POSTGRES_DB: 'db_name'
            POSTGRES_USER: 'db_user'
            POSTGRES_PASSWORD: 'db_pass'
        volumes:
            - ./data/dump:/app/dump
            - ./data/postgresql:/var/lib/postgresql/data
    rabbitmq:
        image: rabbitmq:3.7.5-management
        working_dir: /app
        hostname: rabbit-mq
        container_name: 'rabbit-mq'
        ports:
            - '15672:15672'
            - '5672:5672'
        environment:
            RABBITMQ_DEFAULT_USER: user
            RABBITMQ_DEFAULT_PASS: password
            RABBITMQ_DEFAULT_VHOST: my_vhost
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:6.3.0
        container_name: 'elastic-search'
        environment:
            - discovery.type=single-node
            - "discovery.zen.ping.unicast.hosts=elasticsearch"
            - bootstrap.memory_lock=true
            - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
        ports:
            - 9200:9200
            - 9300:9300
        working_dir: /app
        volumes:
            - ../:/app
            - ./data/elasticsearch:/usr/share/elasticsearch/data
volumes:
    elasticsearch:
    postgresql:

docker-compose.yaml for SF4 is a specific set of services: nginx, php-fpm, postgresql, rabbitmq (if you need it) and elasticsearch (if you need it). For the local environment this is enough. There is a minimum set of settings without which nothing will work; most often these are image, volumes, ports, environment, working_dir and container_name. Everything needed to launch an image is described in its documentation on hub.docker.com. There is not always a docker-compose example there, but that does not mean the image will not work with it: you just need to transfer all the input data from the docker run command into docker-compose, and it will work.

For example, there is an image for RabbitMQ on Docker Hub. The first time you see its page, it causes mixed feelings, but it is not as scary as it looks. The page lists tags. Tags usually represent different images: different versions of the application on top of different base images. For example, the tag 3.7.7-alpine means that this image is thinner than, say, 3.7.7, because it is built on Alpine. The tags also most often encode the version of the application itself. I usually pick the most recent stable version of the application and the alpine image.

Once you have studied the tags and selected one, you will often see something like this:

 docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=password rabbitmq:3-management 

And the first thought is: WTF? How do I transfer this to docker-compose?

It is actually not difficult. This line contains all the same parameters as the yaml file, only abbreviated. For example, -e passes environment variables (the environment section in yaml), and entries like -p are port mappings, called ports in yaml. Accordingly, to use an unfamiliar image properly, you just need to google the abbreviations used in the docker run command and apply their full names in the yaml file.
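
For instance, the docker run line for RabbitMQ above translates into roughly the following service. This is a sketch: the service key name is arbitrary, and -d has no yaml equivalent, since detaching is handled by docker-compose up -d.

version: '3.1'
services:
    rabbitmq:
        image: rabbitmq:3-management
        container_name: some-rabbit   # --name some-rabbit
        hostname: my-rabbit           # --hostname my-rabbit
        environment:                  # every -e KEY=value goes here
            RABBITMQ_DEFAULT_USER: user
            RABBITMQ_DEFAULT_PASS: password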

Now back to the docker-compose.yml I showed above.

This example uses my image for PHP 7.2, made as an extension of the official image php:7.2-fpm-alpine, but if you do not need that many additional libraries, you can build your own extension of the official image and use it. The rest of the images for the local environment I use completely unmodified and official.

image - which image to download, for example rabbitmq:3.7.7-management-alpine.

ports - the ports the container will use (see the image documentation). For example, nginx's default port is 80, so if you want to serve on port 80 you specify 80:80 and your site will be available at localhost. Or you can specify 7777:80, and then the site will be at localhost:7777. This makes it possible to run several projects on the same host.

volumes - shared directories. For example, your project lives in the ~/projects/my-sf4-app directory, and the php container is configured to work with the /app directory (analogous to /var/www/my-sf4-app). It is convenient for the container to have access to the project, so in volumes we write ~/projects/my-sf4-app:/app (in the docker-compose.yml above this is specified with the relative path ../:/app).

Thus the folder is shared with the container, and the container can perform actions like php bin/console doctrine:migrations:migrate. These directories are also convenient for persisting application data: for postgresql, for example, you can map the directory where the database stores its data, and then after re-creating the container you will not have to load a dump or fixtures again.
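
For example, running that migration inside the container can look like this (assuming the service is called php-fpm, as in the compose file above):

# cd into /app (the mounted project directory) and run the Symfony console there
docker-compose exec php-fpm sh -c "cd /app && php bin/console doctrine:migrations:migrate"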

working_dir - the container's working directory, in this case /app (by analogy with /var/www/my-sf4-app).

environment - the variables passed to the container. For rabbitmq, the username and password; for postgresql, the database name, username and password.

container_name - not a required field, but I prefer to set it for easy connection to the containers. If you do not, the names get hash suffixes assigned by default.

These are the basic parameters you have to specify; the rest are optional, for extra tuning, or dictated by the container documentation.

Now, to start all of this, run docker-compose up -d in the directory containing the docker-compose file.
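
A few commands I find handy right after that, run in the same directory as docker-compose.yml:

docker-compose up -d       # pull the images and start all services in the background
docker-compose ps          # check that every container is in the Up state
docker-compose logs nginx  # look at the logs of one service if something is off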

How and where is all this stored locally?


For the local environment I use a docker folder in the root of the project.


It contains a data folder where I store all the postgresql and elasticsearch data, so that when re-creating the project you do not have to load fixtures from scratch. There is also an nginx folder where I store the config for the local nginx container. I synchronize these folders in docker-compose.yml with the corresponding files and folders in the containers. I also find it very convenient to write bash scripts for working with Docker. For example, the start.sh script starts the containers, then runs composer install, cleans the cache and runs the migrations. For colleagues on the project this is just as convenient: they do not have to do anything, they just run the script and everything works.
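
To make it concrete, the layout looks roughly like this (my reconstruction based on the paths mounted in the compose file above, such as ./nginx/nginx.conf and ./data/postgresql):

my-sf4-app/
├── docker/
│   ├── docker-compose.yml
│   ├── start.sh              # and the other helper scripts
│   ├── nginx/
│   │   └── nginx.conf
│   └── data/
│       ├── postgresql/
│       └── elasticsearch/
└── ...                       # the Symfony application itself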

Example script start.sh

#!/usr/bin/env bash
green=$(tput setf 2)
toend=$(tput hpa $(tput cols))$(tput cub 6)

echo -n 'What is your name?: '
read name
echo "Hello, $name! Welcome to tutmesto.ru"
echo -n "$name, do you want to load the database dump? (y/n): "
read use_dump
echo 'Starting the containers!'
docker-compose up -d || exit
echo -en '\n'
echo -n "Containers are up! ${green}${toend}[OK]"
echo -en '\n'
echo 'Running composer install.'
./composer-install.sh
echo -en '\n'
echo -n "Composer install finished ${green}${toend}[OK]"
echo -en '\n'
echo 'Now we wait 40 seconds while the postgres container initializes'
sleep 5
echo '35 seconds left...'
sleep 5
echo '30 seconds left...'
sleep 5
echo '25 seconds left...'
sleep 5
echo '20 seconds left...'
sleep 5
echo '15 seconds left...'
sleep 5
echo '10 seconds left...'
sleep 5
echo '5 seconds left...'
sleep 5
echo 'Done. The postgres container should be ready to accept connections!'
case "$use_dump" in
y|Y)
    ./dump.sh
    echo -en '\n'
    echo -n "Dump loaded! ${green}${toend}[OK]"
    echo -en '\n'
    ;;
*)
    echo "$name, okay, we will do without the dump! =)"
    ;;
esac
echo 'Running the migrations!'
./migrations-migrate.sh
echo -en '\n'
echo -n "Migrations applied! ${green}${toend}[OK]"
echo -en '\n'
echo 'Clearing the cache!'
./php-fpm-command.sh rm -rf var/cache/*
./php-fpm-command.sh chmod 777 var/ -R
./cache-clear.sh
echo -en '\n'
echo -n "Cache cleared! ${green}${toend}[OK]"
echo -en '\n'
echo 'Setting up the env file!'
./env.sh
echo -en '\n'
echo -n "Env file is ready! ${green}${toend}[OK]"
echo -en '\n'
echo "Well, $name, everything is ready! Open localhost:7777 and enjoy!"
echo -en '\n'
echo "------------------------------------------------------------------------------"
echo -en '\n'
echo "Available helper scripts:"
echo "./cache-clear.sh | clears the Symfony 4 cache"
echo "./composer.sh [command(ex. install)] | runs a composer command"
echo "./composer-install.sh | composer install"
echo "./connect-to-php-fpm.sh | connects to the php container"
echo "./console.sh [command(ex. cache:clear)] | runs php bin/console"
echo "./destroy.sh | destroys the containers. All data will be lost."
echo "./dump.sh | loads the database dump (dump.sql)"
echo "./env.sh | sets up the env file"
echo "./migrations-migrate.sh | runs the migrations"
echo "./php-fpm-command.sh [command(ex. php -m)] | runs a command inside the php-fpm container"
echo "./start.sh | starts everything (this script)"
echo "./stop.sh | graceful shutdown of the containers"
echo -en '\n'
echo "Test users created for the project:"
echo "client@c.cc | QWEasd123"
echo "admin@a.aa | QWEasd123"
echo "moderator@m.mm | QWEasd123"
echo -en '\n'
echo "------------------------------------------------------------------------------"
echo -en '\n'
echo -en '\n'
echo 'OtezVikentiy brain corporation!'
echo -en '\n'
echo -en '\n'

Sample php-fpm-command.sh script

#!/usr/bin/env bash
cd "`dirname \"$0\"`" && \
docker-compose exec -T "php-fpm" sh -c "cd /app && $*"
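
Usage looks like this (the examples come from the help text that start.sh prints):

./php-fpm-command.sh php -m                       # list the PHP modules in the container
./php-fpm-command.sh php bin/console cache:clear  # any command runs inside /app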

Example script connect-to-php-fpm.sh

#!/usr/bin/env bash
docker exec -i -t --privileged php-fpm bash

That is all for the local development environment. Congratulations, you can now share the finished result with your colleagues! )

Production


Preparation


Suppose you have already written something locally and want to deploy it to a production or test server. You have hosting with KVM virtualization, or your own server in the next room with air conditioning.

To deploy the product or a beta, the server needs an operating system (ideally Linux) and Docker installed. Docker is installed exactly the same way as locally, no difference.

Docker in production differs a little from the local setup. First, you can no longer just write passwords and other sensitive data straight into docker-compose. Second, you cannot use docker-compose itself directly.

In production, Docker uses docker swarm and docker stack. Roughly speaking, this system differs only in its commands and in that docker swarm acts as the load balancer for the cluster (again a little abstract, but easier to understand this way).

PS: I advise you to practice setting up docker swarm on Vagrant (paradoxical as that sounds). A simple recipe for training: bring up an empty Vagrant box with the same operating system as production and configure everything on it from scratch.

To configure docker swarm, you just need to run a few commands:

docker swarm init --advertise-addr 192.168.***.** (the IP address of your server)
mkdir /app (create in advance all the directories the app container will need)
chown docker /app (give the docker user rights to this directory)
docker stack deploy -c docker-compose.yml my-first-sf4-docker-app

Now let's look at these commands in a little more detail.

docker swarm init --advertise-addr - launches docker swarm itself and prints a join command so that you can attach another server to this "swarm" and have them work as a cluster (see the sketch below).
mkdir /app && chown ... - all directories the containers need must be created in advance, so that the build does not complain about missing directories.
docker stack deploy -c docker-compose.yml my-first-sf4-docker-app - starts the build of your application itself; it is the docker swarm analogue of docker-compose up -d.
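
For reference, attaching a second server to the swarm looks roughly like this; the actual token and manager address are printed by docker swarm init on the first server (the values below are placeholders):

# run on the second server
docker swarm join --token SWMTKN-1-<token> 192.168.***.**:2377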

For the build to start, you need the same docker-compose.yaml, but slightly modified for production/beta.

version: '3.1'
services:
    php-fpm:
        image: otezvikentiy/php7.2-fpm:0.0.11
        ports:
            - '9000:9000'
        networks:
            - my-test-network
        depends_on:
            - postgres
            - rabbitmq
        volumes:
            - /app:/app
        working_dir: /app
        deploy:
            replicas: 1
            restart_policy:
                condition: on-failure
            placement:
                constraints: [node.role == manager]
    nginx:
        image: nginx:1.15.0
        networks:
            - my-test-network
        working_dir: /app
        ports:
            - '80:80'
        depends_on:
            - php-fpm
        volumes:
            - /app:/app
            - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
        deploy:
            replicas: 1
            restart_policy:
                condition: on-failure
            placement:
                constraints: [node.role == manager]
    postgres:
        image: postgres:9.6
        ports:
            - '5432:5432'
        working_dir: /app
        networks:
            - my-test-network
        secrets:
            - postgres_db
            - postgres_user
            - postgres_pass
        environment:
            POSTGRES_DB_FILE: /run/secrets/postgres_db
            POSTGRES_USER_FILE: /run/secrets/postgres_user
            POSTGRES_PASSWORD_FILE: /run/secrets/postgres_pass
        volumes:
            - ./data/dump:/app/dump
            - ./data/postgresql:/var/lib/postgresql/data
        deploy:
            replicas: 1
            restart_policy:
                condition: on-failure
            placement:
                constraints: [node.role == manager]
    rabbitmq:
        image: rabbitmq:3.7.5-management
        networks:
            - my-test-network
        working_dir: /app
        hostname: my-test-sf4-app-rabbit-mq
        volumes:
            - /app:/app
        ports:
            - '5672:5672'
            - '15672:15672'
        secrets:
            - rabbitmq_default_user
            - rabbitmq_default_pass
            - rabbitmq_default_vhost
        environment:
            RABBITMQ_DEFAULT_USER_FILE: /run/secrets/rabbitmq_default_user
            RABBITMQ_DEFAULT_PASS_FILE: /run/secrets/rabbitmq_default_pass
            RABBITMQ_DEFAULT_VHOST_FILE: /run/secrets/rabbitmq_default_vhost
        deploy:
            replicas: 1
            restart_policy:
                condition: on-failure
            placement:
                constraints: [node.role == manager]
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:6.3.0
        networks:
            - my-test-network
        depends_on:
            - postgres
        environment:
            - discovery.type=single-node
            - discovery.zen.ping.unicast.hosts=elasticsearch
            - bootstrap.memory_lock=true
            - ES_JAVA_OPTS=-Xms512m -Xmx512m
        ports:
            - 9200:9200
            - 9300:9300
        working_dir: /app
        volumes:
            - /app:/app
            - ./data/elasticsearch:/usr/share/elasticsearch/data
        deploy:
            replicas: 1
            restart_policy:
                condition: on-failure
            placement:
                constraints: [node.role == manager]
    jenkins:
        image: otezvikentiy/jenkins:0.0.2
        networks:
            - my-test-network
        ports:
            - '8080:8080'
            - '50000:50000'
        volumes:
            - /app:/app
            - ./data/jenkins:/var/jenkins_home
            - /var/run/docker.sock:/var/run/docker.sock
            - /usr/bin/docker:/usr/bin/docker
        deploy:
            replicas: 1
            restart_policy:
                condition: on-failure
            placement:
                constraints: [node.role == manager]
volumes:
    elasticsearch:
    postgresql:
    jenkins:
networks:
    my-test-network:
secrets:
    rabbitmq_default_user:
        file: ./secrets/rabbitmq_default_user
    rabbitmq_default_pass:
        file: ./secrets/rabbitmq_default_pass
    rabbitmq_default_vhost:
        file: ./secrets/rabbitmq_default_vhost
    postgres_db:
        file: ./secrets/postgres_db
    postgres_user:
        file: ./secrets/postgres_user
    postgres_pass:
        file: ./secrets/postgres_pass

As you can see, the production file differs slightly from the local one: secrets, deploy and networks have been added.

secrets - files for storing keys. Keys are quite simple to create: you create a file named after the key and write the value inside. After that you declare the secrets section in docker-compose.yml and pass it the whole list of key files (a sketch of creating them follows below). Details are in the official documentation.
networks - creates an internal network through which the containers communicate with each other. Locally this happens automatically, but in production it has to be declared by hand; you can also specify settings other than the defaults. Details are in the official documentation.
deploy - the main difference between the local and production/beta versions.

deploy:
    replicas: 1
    restart_policy:
        condition: on-failure
    placement:
        constraints: [node.role == manager]

The minimum working set:

replicas - the number of replicas to run (used when you have a cluster and rely on Docker's load balancer). For example, you have two servers joined through docker swarm; if you put 2 here, one instance is created on the first server and the second on the second, so the load on the servers is split in half.
restart_policy - the policy for automatically restarting a container if it goes down for some reason.
placement - where container instances run. For example, sometimes you want all instances of a container to run on exactly 1 of your 5 servers rather than be spread between them.

The full list of deploy options is in the official documentation.
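
Creating the secret files mentioned above is a one-time shell exercise. Here is a sketch with placeholder values; the file names match the secrets section of the compose file:

mkdir -p secrets
# each file holds nothing but the secret value itself
echo -n 'db_name'  > secrets/postgres_db
echo -n 'db_user'  > secrets/postgres_user
echo -n 'db_pass'  > secrets/postgres_pass
echo -n 'user'     > secrets/rabbitmq_default_user
echo -n 'password' > secrets/rabbitmq_default_pass
echo -n 'my_vhost' > secrets/rabbitmq_default_vhost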

So, we have more or less figured out how docker-compose.yaml for the local environment differs from the production/beta version. Now let's try to launch this thing.

Here, for example, is what practicing on a Vagrant box with this docker-compose.yml looks like:

sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get install -y language-pack-en-base
export LC_ALL=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
curl -sSL https://get.docker.com/ | sh
sudo usermod -aG docker ubuntu
sudo apt-get install git
sudo docker swarm init --advertise-addr 192.168.128.77
sudo mkdir /app
sudo chmod 777 /app -R
docker stack deploy -c /docker-compose.yml my-app
git clone git@bitbucket.org:JohnDoe/my-app.git /app
docker stack ps my-app
docker stack ls
docker stack services my-app

PS: do not judge the sudo everywhere and chmod 777 too harshly; you should not do this on a real production server.

What happens here:
- the system is updated and Docker is installed;
- the "swarm" is initialized (docker swarm);
- the directories the application needs are created;
- the SF4 project is cloned into /app;
- the last commands are for checking: ps, ls and services.

Things do not always start on the first try. If ps shows that some service has fallen, you need to find out why.

To find the culprit, first run docker stack ps my-app to see the state of the services. Then docker container ps -a lists all containers, including dead ones, and you can find the one you need by name, something like my-app_php-fpm.1.*hash*.

Then look at its logs with docker logs my-app_php-fpm.1.*hash*, fix the cause, and remove the broken stack:

 docker stack rm my-app 

This removes the stack from the swarm without touching the swarm itself. Once the problem is fixed, deploy again with docker stack deploy -c docker-compose.yml my-app.
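
Putting the debugging loop from the last few paragraphs together:

docker stack ps my-app                             # which service is failing?
docker logs my-app_php-fpm.1.*hash*                # why did it fail?
docker stack rm my-app                             # tear the stack down
# ...fix the problem...
docker stack deploy -c docker-compose.yml my-app   # deploy again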

Source: https://habr.com/ru/post/420673/

