
Full automation of the “development” environment using docker-compose

In this article we share our experience of automating the launch, testing and configuration of large projects using docker-compose. A few simple changes can help your team be more efficient and spend its time on important tasks instead of routine ones.


Docker in 2017


At the Dockercon 2016 conference, Docker's CEO said that the number of applications that are launched in Docker has grown by 3100% over the past two years. More than 460,000 applications worldwide are launched in Docker. This is incredible!


If you are still not using Docker, I would advise you to read a great article about how Docker is used around the world. Docker has completely changed the way we write applications and has become an integral tool for developers and DevOps teams. In this article we assume that you are already familiar with Docker, and we want to give you one more good reason to keep using it.


What's wrong?


From the beginning of my career as a web developer, launching an application in a development environment has always been a challenge. We had to do a lot of extra work, from installing the database to configuring the application, just to run it. Developers do not like to write documentation, and the steps to start a project are usually hidden in the heads of team members. As a result, launching the project becomes a painful task, especially for newcomers.


Many projects are simple at the beginning but grow over time. This leads to more external dependencies, such as databases and queues. With the growing popularity of microservices, many projects stop being monolithic and are split into several small parts. Any such change requires the attention of the whole team, because after it the project has to be launched differently. Usually the developers who made the fundamental change write an email, or create a wiki page describing the steps needed to get the project running again in the development environment. That usually works, but not always :) Once our team got into a situation where a developer on another continent made many changes to the project, wrote a long email and went to bed. I suppose you know what happened next. That's right: he forgot to mention a few important points. As a result, part of the team simply could not start the project the next day, and the day was lost.


As an engineer, I like to automate everything around me. I believe that launching, testing and deploying should always be one step. Then the team can focus on the important tasks: developing and improving the product. Ten years ago this was harder; now automation has become much easier, and in my opinion every team should devote time to it. The earlier, the better.


Quick start with docker-compose


Docker-compose is a simple tool that lets you configure and launch multiple containers with a single command. Before we dive deeper into docker-compose, we need to dwell briefly on the structure of the project. We use a "monorepo": the code of each service (frontend, api, worker, etc.) lives in its own directory and has its own Dockerfile. An example of the project structure can be found here.


The entire docker-compose configuration is described in docker-compose.yml, which usually lives at the root of the project. Let's start by automating a simple Node.JS application that works with a MongoDB database. Here is what the configuration file looks like:


 version: '2'
 services:
   web:
     build:
       context: ./web
       dockerfile: Dockerfile.dev
     volumes:
       - "./web/src:/web/src"
     ports:
       - "8080:8080"
   mongo:
     command: mongod
     image: mongo:3.2.0
     ports:
       - "27100:27017" # map to a non-standard port, to avoid conflicts with a locally installed mongodb
     volumes:
       - /var/run/docker.sock:/var/run/docker.sock

To run the project, we need one command:


 $ docker-compose up 

On the first start, all containers will be built or downloaded. If you have worked with Docker, the docker-compose configuration file should be more or less clear, but a few details deserve attention:


  1. context: ./web - this is the path to the service's directory (containing its Dockerfile) inside our repository.
  2. dockerfile: Dockerfile.dev - we use a separate Dockerfile.dev for development environments. For "production" environments we copy the code into the Docker image, while in development we mount the code as a "volume". With a "volume", you do not have to restart docker-compose every time the code changes.
  3. volumes: - "./web/src:/web/src" - this is how the code is mounted as a "volume" into the container.
  4. Docker-compose automatically links the containers, so services can reach each other by name. For example, from the web service you can connect to the MongoDB database at mongodb://mongo:27017
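The Dockerfile.dev mentioned above is not shown in this article; a minimal sketch of what it might look like for the web service follows (the base image, paths and commands here are assumptions for illustration, not taken from the example project):

```dockerfile
# Hypothetical Dockerfile.dev for the web service.
# The source code is NOT copied into the image: in development it is
# mounted as a volume by docker-compose, so code changes appear instantly.
FROM node:6

WORKDIR /web

# Install dependencies inside the image; they change rarely,
# so Docker can cache this layer between builds.
COPY package.json /web/
RUN npm install

EXPOSE 8080
CMD ["npm", "start"]
```

A production Dockerfile would differ mainly in one step: it would COPY the source code into the image instead of relying on a mounted volume.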

Always use --build


By default, docker-compose up will not rebuild images that already exist on the host. To force a rebuild, use the --build argument. This is usually needed when a project's third-party dependencies or its Dockerfile change. In our team we always use docker-compose up --build. Docker caches layers and will not rebuild a container if nothing has changed. With --build everywhere, you may lose a few seconds on every start of the application, but you will never run into the magical problems of launching a new version of the application with old dependencies.


Tip: You can wrap the startup command in a simple bash script:


 #!/bin/sh
 docker-compose up --build "$@"

This gives you the opportunity to change the arguments, or the way the application is launched as a whole, in one place. For the team, starting the project will always be simply: ./bin/start.sh


Partial start


In our docker-compose.yml, some services depend on others:


   api:
     build:
       context: ./api
       dockerfile: Dockerfile.dev
     volumes:
       - "./api/src:/app/src"
     ports:
       - "8081:8081"
     depends_on:
       - mongo

In this case, the api service needs a database to work. When running docker-compose, you can pass the name of a service to start only it and its dependencies: docker-compose up api . This command will start MongoDB first, and only then start api .


Large projects always contain parts that are needed only from time to time. Different team members may work on different parts of the application. A frontend developer working on the landing page has no need to launch the whole project; he can run only the parts he really needs.


Send annoying logs to /dev/null


Often we use tools that generate a lot of logs, distracting us from the useful logs of our own application. To disable logging for a particular service, set its logging driver to none:


   mongo:
     command: mongod
     image: mongo:3.2.0
     ports:
       - "27100:27017"
     volumes:
       - /var/run/docker.sock:/var/run/docker.sock
     logging:
       driver: none

Multiple docker-compose files


By default, when you run docker-compose up , docker-compose looks for the docker-compose.yml configuration file in the current directory. In some cases (let's talk about this in a minute), you will have to create several such configuration files. To do this, use the --file argument:


 docker-compose --file docker-compose.local-tests.yml up 

So why might you need several configuration files? The first use case is splitting a big project into several smaller ones. Interestingly, even if you run several separate docker-compose files, the services will still be able to communicate with each other by name. For example, you can split infrastructure containers (databases, queues, etc.) and application containers into separate docker-compose files.
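As a sketch of that split (the file name docker-compose.infra.yml is illustrative, not from the example project), the infrastructure file might contain only the database, while the main file keeps the application services and reaches mongo by name, provided both run under the same project name (see the Container Prefix section below):

```yaml
# docker-compose.infra.yml - infrastructure containers only.
# Start with: docker-compose --file docker-compose.infra.yml up
# The application's docker-compose.yml keeps api/web and can still
# connect to mongodb://mongo:27017 when both share a project name.
version: '2'
services:
  mongo:
    command: mongod
    image: mongo:3.2.0
    ports:
      - "27100:27017"
```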


Running tests


Our tests include various types: unit, integration, UI tests, and code style checks. Each service has its own set of tests. Integration and UI tests require the api and web frontend services in order to run.
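The test containers shown below invoke npm run test inside each service; this assumes that every service's package.json defines a test script, roughly like this (the mocha runner and file layout here are only an illustration, not taken from the example project):

```json
{
  "scripts": {
    "start": "node src/index.js",
    "test": "mocha src/**/*.spec.js"
  }
}
```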


At the very beginning, it seemed to us that we should run the tests on every docker-compose start. But we soon realized that this is not always convenient and takes too much time. In some cases we also wanted more control over which tests to run. For this, we use a separate docker-compose configuration file:


 version: '2'
 services:
   api-tests:
     image: app_api
     command: npm run test
     volumes:
       - "./api/src:/app/src"
   web-tests:
     image: app_web
     command: npm run test
     volumes:
       - "./web/src:/app/src"

To run the tests, the main docker-compose must be running. Integration tests use the running api service, and UI tests use the web frontend service. In essence, the tests simply reuse the images built by the main docker-compose. You can also run the tests for a specific service only, for example:


 docker-compose --file docker-compose.local-tests.yml up api-tests 

This command will only run tests for the api service.


Container Prefix


By default, all containers launched with docker-compose use the name of the current directory as a prefix. That directory name may differ between developers' machines. This prefix ( app_ ) is used when we want to refer to a container from the main docker-compose file. To fix the prefix, create a .env file next to the docker-compose configuration files, in the directory from which docker-compose is launched:


 COMPOSE_PROJECT_NAME=app 

This way, the prefix will be the same in all development environments.


Conclusion


Docker-compose is a very useful and flexible way to automate the launch of projects.


When new developers join our team, we give them a small task that they must complete by the end of their first working day. Everyone who has joined our team has managed it and was the happiest person on earth. From the very first minutes, new developers can focus on important tasks instead of wasting time launching the project. Our getting-started documentation consists of three points:


  1. Install Docker and docker-compose
  2. Clone the repository
  3. Run ./bin/start.sh in a terminal

To make this article easier to follow, we have an example project on GitHub. Share your experience and ask questions.


We hope that the article was useful and will help make your project better :)


You can read the English version here.



Source: https://habr.com/ru/post/322440/

