
Full development environment automation with docker-compose

According to data presented at DockerCon 2016 by Docker CEO Ben Golub, the number of applications running in Docker containers grew by 3100% over the preceding two years. Docker powers 460 thousand applications worldwide. This is incredible!


If you haven't started using Docker yet, read this impressive adoption story. Docker has changed the approach to building applications and has become an extremely important tool for developers and DevOps specialists. This article is intended for those who already use Docker, and aims to show one more reason to keep doing so.


We would like to share our experience of using docker-compose on large projects. By applying this tool to automate development, testing, and configuration tasks, we made our team more efficient in a few simple steps and freed it to focus directly on product development.


Problem


At the beginning of my career, when I was still a young C# and ASP.NET developer, setting up a development environment was not an easy task. I had to install the databases and tools the application needed, and edit the configuration files to match the settings of my local machine: specify ports, paths to local directories, and so on. These steps were usually poorly documented, so getting the development environment running took a huge amount of time.


Many products are simple at the start of development, but as new features are implemented they become harder and harder to deal with. New tools and subsystems are added, such as additional databases and message queues. With the growing popularity of microservices, large monolithic applications are increasingly being broken into many pieces. Such changes usually require the participation of the entire team working on the project. A developer who makes changes that break local environments usually writes a long email with the list of steps needed to set things up again. I remember a case when a specialist working overseas made a major change to the structure of the product, wrote an email with instructions on how to restore the local environment, and went to bed. I think you can guess what happened next. That's right: he forgot to mention a few important points. As a result, most of the team lost the next business day trying to get the updated code working in their local environments.


Developers really (don't) like writing documentation, and some of the steps needed to launch a project often live exclusively in their heads. As a result, setting up a working environment from scratch becomes a non-trivial task, especially for newcomers.


Like any engineer, I strive to automate everything around me. I am convinced that launching, testing, and deploying an application should each take a single step. This lets the team focus on the things that really matter: developing and improving the product. Ten years ago automating these tasks was much harder than it is now. Today everyone can and should do it, and the earlier you start, the better.


Quick start with docker-compose


Docker-compose is a simple tool that lets you start several Docker containers with a single command. Before diving into details, I have to tell you about the structure of the project. We use a monorepo, and the code base of each service (web application, API, background workers) is stored in its own root directory. Each service has a Dockerfile describing its dependencies. An example of such a structure can be seen in our demo project.


Let's start by automating a simple application that depends on MongoDB and a small Node.js service. The configuration for docker-compose lives in docker-compose.yml, which is usually placed in the root directory of the project.


```yaml
version: '2'
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile.dev
    volumes:
      - "./web/src:/web/src"
    ports:
      - "8080:8080"
  mongo:
    command: mongod
    image: mongo:3.2.0
    ports:
      - "27100:27017" # map to a non-standard port to avoid conflicts with a locally installed MongoDB
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```

To start the project you need to execute only one command:


```shell
$ docker-compose up
```

On the first run, all the necessary containers will be built or pulled. At first glance there is nothing complicated here, especially if you have worked with Docker before, but let's still discuss a few details:


  1. context: ./web — the path to the service's source code inside the monorepo.
  2. dockerfile: Dockerfile.dev — for development environments we use a separate Dockerfile.dev. In production the source code is copied into the container, while in development it is mounted as a volume, so there is no need to rebuild the container on every code change.
  3. volumes: - "./web/src:/web/src" — this mounts the directory with the code into the container as a volume.
  4. Docker-compose automatically links the containers to each other, so, for example, the web service can reach MongoDB by service name: mongodb://mongo:27017
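To illustrate the point about Dockerfile.dev, here is a minimal sketch of what such a file might look like for the Node.js web service. The exact contents are an assumption, not taken from the demo project; the key idea is that only third-party dependencies are baked into the image, while the source arrives through the volume mount:

```dockerfile
# Dockerfile.dev (hypothetical sketch): dependencies only, source comes in as a volume
FROM node:6

WORKDIR /web

# install third-party dependencies into a cached image layer
COPY package.json /web/
RUN npm install

EXPOSE 8080

# src/ is mounted from the host at runtime, so code changes need no rebuild
CMD ["npm", "start"]
```

Because package.json is copied in a separate step, npm install only reruns when dependencies change, not on every source edit.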

Always use the --build argument


By default, if the containers already exist on the host, docker-compose up does not re-create them. To force it to, use the --build flag. This is necessary when third-party dependencies or the Dockerfile itself change. We made it a rule to always run docker-compose up --build. Docker caches container layers well and will not rebuild them if nothing has changed. Using --build everywhere can slow startup by a few seconds, but it prevents unexpected problems caused by running the application with outdated third-party dependencies.


Tip: you can abstract a project launch using a simple script:


```shell
#!/bin/sh
docker-compose up --build "$@"
```

This technique makes it possible to change the tools and options used at startup later on without anyone noticing. Or you can simply run ./bin/start.sh.


Partial start


In the docker-compose.yml example, some services depend on others:


```yaml
api:
  build:
    context: ./api
    dockerfile: Dockerfile.dev
  volumes:
    - "./api/src:/app/src"
  ports:
    - "8081:8081"
  depends_on:
    - mongo
```

In this snippet, the api service requires a database. With docker-compose you can pass a service name to start only that service: docker-compose up api. This command starts MongoDB and then the API service. In large projects this capability comes in handy.


This functionality is useful when different developers need different parts of the system. For example, a frontend specialist working on a landing page does not need the whole project; the landing page itself is enough.
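One caveat worth knowing: depends_on only waits for the dependent container to start, not for the service inside it to become ready. If api must wait until MongoDB actually accepts connections, a healthcheck-based condition can be used (available starting with compose file format 2.1; the probe command below is an assumption, not part of the demo project):

```yaml
version: '2.1'
services:
  mongo:
    image: mongo:3.2.0
    healthcheck:
      # hypothetical readiness probe: succeeds once mongod answers queries
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      retries: 10
  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    depends_on:
      mongo:
        condition: service_healthy
```

With this form of depends_on, docker-compose up api holds the api container back until the mongo healthcheck passes.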


Redirecting unnecessary logs to /dev/null


Some programs generate too many logs. In most cases this information is useless and only distracting. In our demo repository we turned off the MongoDB logs by setting the log driver to none:


```yaml
mongo:
  command: mongod
  image: mongo:3.2.0
  ports:
    - "27100:27017"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  logging:
    driver: none
```
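If discarding logs entirely feels too drastic, an alternative (not used in the demo project, shown here as a sketch) is to keep the default json-file driver but cap its size so the logs cannot grow unboundedly:

```yaml
mongo:
  image: mongo:3.2.0
  logging:
    driver: json-file
    options:
      max-size: "10m"   # rotate the log file after 10 MB
      max-file: "3"     # keep at most three rotated files
```

This way docker logs still works for the occasional debugging session, but disk usage stays bounded.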

Multiple docker-compose files


By default, docker-compose up looks for a docker-compose.yml file in the current directory.


In some cases (we'll talk about this a bit later) you may need several docker-compose.yml files. To load a different configuration file, use the --file argument:


```shell
docker-compose --file docker-compose.local-tests.yml up
```

So why would you need several configuration files? First of all, to split a composite project into several subprojects. Conveniently, services from different compose files can still be linked together. For example, you can put infrastructure containers (databases, queues, etc.) in one docker-compose file and application containers in another.
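As a sketch of such a split (the file names and network name here are assumptions, not taken from the demo project), the two files can share an externally created Docker network so that application containers still reach the infrastructure by service name:

```yaml
# docker-compose.infra.yml -- infrastructure containers
version: '2'
services:
  mongo:
    image: mongo:3.2.0
    networks:
      - shared
networks:
  shared:
    external:
      name: app_shared   # created once with: docker network create app_shared

# --- docker-compose.yml -- application containers (separate file) ---
# version: '2'
# services:
#   api:
#     build:
#       context: ./api
#       dockerfile: Dockerfile.dev
#     networks:
#       - shared
# networks:
#   shared:
#     external:
#       name: app_shared
```

Because both files attach their services to the same external network, api can still connect to mongodb://mongo:27017 even though the two compose projects are started independently.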


Testing


We use several types of testing: unit, integration, UI, and linting. Each service has its own set of tests. For example, the integration and UI tests require the api and web services to be running.


At first we thought it would be better to run the tests every time the main compose file is launched, but we soon found that this takes too long. In some cases we also needed to run specific tests. For that, a separate compose file was created:


```yaml
version: '2'
services:
  api-tests:
    image: app_api
    command: npm run test
    volumes:
      - "./api/src:/app/src"
  web-tests:
    image: app_web
    command: npm run test
    volumes:
      - "./web/src:/app/src"
```

Our test compose file depends on the main docker-compose file: the integration tests connect to the development version of api, the UI tests to the web frontend. The test compose file only runs containers from images built by the main docker-compose file. If you need to run the tests of just one service, use a partial start:


```shell
docker-compose --file docker-compose.local-tests.yml up api-tests
```

This command runs the tests for api only.


Container Name Prefixes


By default, containers started with docker-compose are prefixed with the name of the parent directory: compose names containers <project>_<service>_<index> and tags built images <project>_<service>. That directory name can differ between development environments, and the test compose files we discussed above, which refer to images such as app_api by name, would then stop working. We use the app_ prefix for containers in the main docker-compose file. To make the configuration behave consistently across environments, we created a .env file in the directory from which we run docker-compose:


```
COMPOSE_PROJECT_NAME=app
```

This guarantees that containers get the same prefix in every environment, regardless of the name of the parent directory.


Conclusion


Docker-compose is a useful and flexible tool for running the software that a project consists of.


When new developers join us, we usually give them a task on their first day: implement a simple feature or fix a bug in production. Our getting started guide looks like this:


1) install Docker and docker-compose,
2) clone the GitHub repository,
3) run ./bin/start.sh in the terminal.


To better understand the concepts covered in this article, we recommend exploring the demo project posted on GitHub. Share your experience and ask questions.


We hope you found this article useful and the information obtained will help make your projects better :)


Original: Fully automated development environment with docker-compose



Source: https://habr.com/ru/post/325568/

