
When docker-compose is not enough

What will be discussed



Posts in which authors share their approaches to using Docker appear here from time to time. Well, here is another one. Below I will talk about our experience with a Docker-based development environment, the inconveniences we ran into, how we dealt with them, and what came of it. I will also share a small tool that has turned out to be very useful for us.






How we lived before



First, a little history. It so happens that we develop and support several projects at the same time, each to a different degree. They are all of different ages, have different requirements and, accordingly, run in different environments. This caused some inconvenience when deploying a local copy: switching to a project you have not worked on before means tinkering with both the project setup and the working environment. Within the team this could be resolved fairly quickly, but with occasionally involved freelance developers it was much harder. So we decided to move development into a Docker environment. Here we did not invent anything and went the standard way: each service runs in a separate container, and docker-compose ties them together.



Working on several projects in parallel requires installing all the services each of them needs. We started with a repository that held the docker-compose configuration file along with the configuration of the images required for operation. Everything got going fairly quickly, and for a while it suited us. As it turned out, this approach solved our problem only partially. As more projects joined the new ecosystem, the repository filled up with configuration files and assorted supporting scripts. As a result, when working on a single project a developer had to either pull in the dependencies of all projects or edit docker-compose.yml to disable the unneeded services. In the first case we ended up running extra containers, which did not seem like the best solution; in the second we had to know exactly which containers a given application requires. We wanted a more flexible solution that installs only the necessary components and, if it cannot eliminate manual work entirely, at least minimizes it. Here is what we arrived at.



ddk



ddk (Docker Development Kit) is a tool designed to simplify environment setup and automate the deployment of a development environment for projects running under Docker. That probably sounds grand. In reality, ddk is a kind of wrapper over git and docker that provides a number of extra commands for conveniently managing packages, configuration files and projects. In a sense, it is an environment dependency manager for docker-compose projects and services.



Under the hood, ddk is a set of Python scripts, but the end user gets a single executable file to work with. Now, in addition to installing docker itself and docker-compose, a developer needs to initialize ddk by creating a configuration file. This is done with the init command.



    cd /var/projects/ddk
    ddk init
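The article does not show the generated file itself; judging by the examples below, it holds at least the repository prefixes used to resolve packages and projects. A hypothetical minimal sketch (the project-repo-prefix key appears in an example further down; the package-repo-prefix key name is our guess):

    {
        "project-repo-prefix": ["git@github.com/vendor-name/"],
        "package-repo-prefix": ["git@github.com/vendor-name/"]
    }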


After that, connecting a new project looks like this:



    ddk project get my.project.ru
    ddk compose --up


Also, if necessary, we redirect the new domain to localhost.



    echo 127.0.0.1 my.project.ddk >> /etc/hosts


The first command clones the project and runs its initialization. The second generates the docker-compose configuration and starts the required services. All missing components are downloaded along the way. Once the build completes, the developer has a fully working local copy of the project, available at my.project.ddk.



A little about how it works.



When using ddk, the working directory is the one containing the configuration file generated by the init command. The executable itself can live wherever is convenient. The configuration search starts from the current directory, and ddk then climbs up the directory tree until it either finds the file or reaches the root of the file system; git and docker-compose behave the same way. Once the configuration file is found, ddk creates the directories for storing packages and project sources, then resolves and installs dependencies. Components are installed by simply cloning a git repository whose address is built by concatenating the component name with a prefix from the configuration file.



 # "project-repo-prefix": ["git@github.com/vendor-name/"] ddk project get my.project.ru git clone git@github.com/vendor-name/my.project.ru.git 


Of course, ddk is not just a shortcut for git clone; it has additional functionality, which is the whole reason it exists. How and why is covered just below; here I will only add that the end result is a single directory holding all the projects along with the configuration files they need to run. This directory can easily be moved somewhere else or to another machine.



ddk packages



The first thing I wanted to achieve was to make the whole environment as modular as possible. We extracted the description of each service into separate configuration files and moved them into independent repositories. A colleague dubbed them packages, and these packages became the foundation of our tool. When building docker-compose.yml, ddk walks through all the required packages and generates the final configuration file from them.



As a rule, there is no need to install individual packages yourself, since all missing components are loaded automatically during the build. It is, however, possible to install and update them manually.



    ddk package install package-name
    ddk package update


Now about the contents. At the root of a package there is always a ddk.json configuration file, which specifies the container name and the docker image used. Below is an example of a package with a minimal configuration.



 { "container_name": "memcached.ddk", "image": "memcached:latest" } 


As you have probably noticed, this is essentially a piece of docker-compose.yml configuration expressed in JSON. This approach makes it possible to set any parameter supported by docker-compose. Here is an example of a more complex package that uses its own Dockerfile and mounts directories.



 { "build": "${PACKAGE_PATH}", "container_name": "nginx.ddk", "volumes": [ "${SHARE_PATH}/var/www:/var/www", "${PACKAGE_PATH}/storage/etc/nginx/conf.d:/etc/nginx/conf.d:ro", "${PACKAGE_PATH}/storage/etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro", "${PACKAGE_PATH}/storage/var/log/nginx:/var/log/nginx" ] } 


The package directory listing:



    storage/
        etc/
            nginx/
                conf.d/
                    site.ddk.sample
                nginx.conf
    ddk.json
    Dockerfile


Keys prefixed with "ddk-" specify special directives. Currently the only supported key is "ddk-post-install", which holds a list of commands to execute after the package is installed or updated.



 { "ddk-post-install": [ "echo 'Done'" ] } 


One use for this directive is shown in the "Agreements" section.



Projects



Now let's look at how to use ddk with a specific project. To deploy an existing project, it is enough to call the get command.



    ddk project get project-id


This command clones the project into the share/var/www directory, then looks for the project's configuration file (in the project root by default) and runs all the commands from its on-init section. At this stage the project-specific setup takes place: generating .env, setting file permissions, configuring the database, and so on.



Besides the initialization commands, the ddk.json file lists the packages the project depends on. If any of them is missing, it will be installed automatically. Below is an example of a project configuration.



 { "packages": [ "mysql5.5", "memcached", "apache-php5.5" ], "on-init": [ "${PROJECT_PATH}/init.sh ${PACKAGES_PATH} ${PROJECT_DIR}" ] } 


Although the on-init section accepts several commands, we usually specify only one. In the example above, project initialization runs a deployment script that performs the basic setup. This approach turned out to be more convenient: it gives greater flexibility and makes it possible to add interactivity to the project initialization process.
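The deployment scripts themselves are project-specific and not shown in the article; a minimal hypothetical init.sh, doing the kind of work listed earlier (generating .env, setting permissions), might look like this:

    #!/bin/sh
    # Hypothetical deployment script; every path and step here is illustrative.
    PACKAGES_PATH=$1
    PROJECT_DIR=$2

    # Generate .env from the template shipped with the project
    cp .env.example .env

    # Make runtime directories writable for the web server
    chmod -R 775 storage

    echo "Project $PROJECT_DIR initialized"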



If you need to extend a package's configuration, you can do so by specifying the package as an object. The object must have a name attribute containing the package name; all other attributes are treated as configuration.



 { "packages": [ { "name": "nginx", "depends_on": [ "php-fpm7.1" ], "environment": [ "SOME_VAR=Hello" ] } ] } 


This gives us a way to influence how services run without touching the package's original configuration.
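For instance, assuming the nginx package shown earlier, the merge result in the generated docker-compose.yml would look roughly like this (a sketch, with the package's volumes elided):

    nginx:
        build: <package path>
        container_name: nginx.ddk
        volumes:
            - ...
        depends_on:
            - php-fpm7.1
        environment:
            - SOME_VAR=Hello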



Agreements



While working in the Docker environment, we have developed several conventions that we try to stick to.



First, when mounting any files and directories of a package or project, their layout must mirror the structure inside the container. That is, package-name/storage corresponds to the root directory of the package-name container, and the share directory likewise corresponds to the container root, which is why all projects live in share/var/www. You can trace this rule in the examples above.



Next, when installing packages whose containers are expected to modify the file system, a special user is created whose credentials match those of the host system user. In other words, we map the username, user ID and group ID from the host into the container, and it is recommended that all further commands inside the container be run under this account. This avoids permission problems when the files are accessed outside the container. If at least one project is configured this way, a share/home/<user-dir> directory is created, mounted into the container and used as the home directory. Below is how we implemented it.



 { "container_name": "php71-fpm.ddk", "command": "map-user.sh", "env_file": [ "${PACKAGE_PATH}/env/user.env" ], "ddk-post-install": [ "mkdir -p ${PACKAGE_PATH}/env", "echo USER_NAME=`whoami` > ${PACKAGE_PATH}/env/user.env", "echo USER_ID=`id -u` >> ${PACKAGE_PATH}/env/user.env", "echo GROUP_ID=`id -g` >> ${PACKAGE_PATH}/env/user.env" ] } 


As you can see, after the package is installed, a file with the user's data is generated. When the container starts, the map-user.sh script checks for the account and, if necessary, creates it from the data obtained.
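The script itself is not shown in the article, but given the variables from user.env, a sketch of what such a map-user.sh could look like (the final exec is an assumption for a php-fpm image):

    #!/bin/sh
    # Sketch of a possible map-user.sh; the real script may differ.
    # Create a user matching the host credentials if it does not exist yet.
    if ! id "$USER_NAME" >/dev/null 2>&1; then
        addgroup --gid "$GROUP_ID" "$USER_NAME"
        adduser --uid "$USER_ID" --gid "$GROUP_ID" --home "/home/$USER_NAME" \
                --disabled-password --gecos "" "$USER_NAME"
    fi
    # Hand control over to the main container process (assumed to be php-fpm)
    exec php-fpm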



A pinch of magic



All that remains is to start the required services with plain docker-compose. The compose command generates the startup parameters: when it is invoked, ddk walks through all active projects, collects the information about their packages and parameters, merges it with the configurations of the packages themselves, and from this data produces the final docker-compose.yml. That file is then used at startup.
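Schematically, the generation boils down to merging package configs with project-level overrides. A rough Python sketch of the idea (not the actual ddk source; variable substitution such as ${PACKAGE_PATH} is omitted):

    import json
    from pathlib import Path

    import yaml  # PyYAML

    def build_compose(project_dirs, packages_path):
        """Merge package ddk.json files and project overrides into compose config."""
        services = {}
        for project in project_dirs:
            config = json.loads((Path(project) / "ddk.json").read_text())
            for package in config.get("packages", []):
                # A package is either a plain name or an object with overrides
                if isinstance(package, str):
                    name, overrides = package, {}
                else:
                    name = package["name"]
                    overrides = {k: v for k, v in package.items() if k != "name"}
                base = json.loads((Path(packages_path) / name / "ddk.json").read_text())
                # "ddk-*" directives are for ddk itself, not for docker-compose
                service = {k: v for k, v in base.items() if not k.startswith("ddk-")}
                service.update(overrides)
                services[name] = service
        return yaml.safe_dump({"services": services})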



    ddk compose
    docker-compose up -d


If the corresponding option is specified during configuration, you can get by with one command.



    ddk compose --up


Hello, world



Those who want to see ddk in action can deploy a demo project.



Download the latest build:



    wget https://github.com/simbigo/ddk/raw/master/dist/ddk
    chmod +x ddk


Configure the future domain:



    echo 127.0.0.1 hello.ddk >> /etc/hosts


Deploy the project:



    ./ddk init
    ./ddk project get hello
    ./ddk compose --up


After all the images have been built successfully, the project is available at http://hello.ddk



Conclusion



What we have achieved:



  1. A modular environment.
  2. Configuration with a single command.
  3. No repetitive manual work.
  4. Minimal time to bring a developer into a project.


What still needs work:



  1. ddk has practically no error handling.
  2. We planned proper macOS support, but right now there is no Mac user on the team, so the tool has not been tested on that system. Most likely some quirks will surface and refinement will be needed.
  3. Repository addresses for packages and projects are passed as an array, but in fact only the first element is used. A proper check for repository existence and a search across the whole set of addresses still needs to be implemented.
  4. Removing leftover containers.
  5. There is quite a lot of duplicated code in the initialization scripts. It may make sense to move the common functions into ddk itself.


For those who have an irresistible urge to look at the code, improve it, or simply criticize it, here is the link to GitHub. We will be glad if the tool proves useful to someone besides us.



Visit github




Source: https://habr.com/ru/post/330452/


