
Setting up automatic deployment of independent development environments on one machine (Docker, Ansible, TeamCity)

In this post I will tell how we at TheQuestion realized our long-held dream: separate, automatically deployed development environments for each individual task.




From the very beginning, our development revolved around a single shared dev server.



It should be said that CI for the master branch is arranged in a fairly standard way:


  1. Push to GitHub
  2. TeamCity sees the new commit and runs make
  3. Automatic tests run
  4. Docker containers are built
  5. Ansible deploys the containers

We wanted to keep this sequence and these tools so as not to change too much at once.


The obvious disadvantage of a single dev server is that you can only look at one branch on it at a time: unfinished tasks interfere with each other and you constantly have to resolve conflicts. Our goal was this: as soon as a new branch is created on GitHub, a separate dev environment is created for it.


At first glance, the task is not hard: dig into our cloud platform's API and, before the first commit to the new branch makes its way through the pipeline, create a separate server for that branch. Everything is simple, deployment to a single machine already exists, thanks to Ansible!


But there is one significant problem: our database. A full restore from the compressed dump (which also has to be downloaded first) takes about two hours on a modest machine. We could of course use more powerful machines, or just wait, but we would still have to wrestle with the cloud API (and rewrite all of that when moving to another provider), and we did not want to pay an extra penny for every new machine. So our solution runs on a single medium-sized machine.


TeamCity


TeamCity is a great tool that needs almost no customization. The only thing required of it is to tell our scripts which branch it is working with.
So the single Build Step (a command line) changed from


cd clusters/dev
make

to


export branch_name=%teamcity.build.branch%
cd clusters/dev
make
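
On the script side, everything downstream simply reads this variable. A minimal sketch of the idea (the fallback to the current git branch is my assumption, for running the same targets locally):

#!/usr/bin/env bash
# sourced by the build scripts; on CI branch_name is already exported
# by TeamCity, locally we fall back to the current git branch (assumption)
branch_name=${branch_name:-$(git rev-parse --abbrev-ref HEAD)}
export branch_name
echo "building dev environment for branch: ${branch_name}"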

Docker


With a single dev environment, each part of the infrastructure, be it a piece of the application, Sphinx, Redis, Nginx, or PostgreSQL, ran in its own container. The containers were started with --network=host , so each container's ip:port coincided with localhost:port on the host machine.


As you can guess, this will not fly for several dev environments: first, containers should only communicate with the containers of the same branch; second, nginx needs to know the internal IP of every container it proxies to.


This is where Docker networks come to the rescue, and launching a container turns from


 docker run /path/to/Dockerfile 

into


docker network create ${branch_name} --opt com.docker.network.bridge.name=${branch_name}
docker run --network=${branch_name} -e branch_name=${branch_name} /path/to/Dockerfile

This gives us isolated per-branch networking: containers of one branch see only each other, and the bridge interface on the host is named after the branch (which will come in handy below).
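
For example, here is a quick check (a sketch with a made-up feature-x branch): containers inside one branch network resolve each other by name through Docker's embedded DNS, while containers of other branches simply cannot reach them:

docker network create feature-x --opt com.docker.network.bridge.name=feature-x
docker run -d --network=feature-x --name feature-x-redis redis
# name resolution works only inside the same network; prints PONG
docker run --rm --network=feature-x redis redis-cli -h feature-x-redis ping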



PostgreSQL


We run it in a container with --network=host , as before, so that there is a single DBMS, but each branch gets its own user and its own database.


The task of quickly spinning up a new database is neatly solved with templates; PostgreSQL copies a template database at the file level, which is far faster than replaying a dump:


 CREATE DATABASE db_name TEMPLATE template_name 

Plus, we would like a fresh copy of the production database every day, so that a newly created branch starts from it (the restore also runs in a separate container with --network=host ).


To do this, we keep two template databases. Every night we spend two hours restoring a fresh dump into one of them:


 pg_restore -v -Fc -c -d template_new dump_today.dump 

and if successful:


DROP DATABASE template_today;
CREATE DATABASE template_today TEMPLATE template_new;

As a result, every morning we have a fresh template, which survives even if the next dump arrives broken and fails to restore.
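
Put together, the nightly job looks roughly like this (a sketch that glosses over which container each command runs in; it assumes the dump is already downloaded and that nothing is connected to the template databases, which CREATE DATABASE ... TEMPLATE requires):

#!/usr/bin/env bash
# refresh template_today only if the restore into template_new succeeds,
# so yesterday's template survives a broken dump
if pg_restore -v -Fc -c -d template_new dump_today.dump; then
    docker exec -i postgresql gosu postgres psql <<-EOSQL
DROP DATABASE template_today;
CREATE DATABASE template_today TEMPLATE template_new;
EOSQL
fi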


When a new branch is created, its database is created from the template:


CREATE USER db_${branch_name};
CREATE DATABASE db_${branch_name} OWNER db_${branch_name} TEMPLATE template_today;

Thus, creating a separate database for a branch takes 20 minutes instead of 2 hours, and Docker containers connect to it via their eth0 interface, whose default gateway always points at the host's IP address.
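
So from inside a container, reaching the host's PostgreSQL can be as simple as this sketch (it assumes iproute2 is available in the image):

# the default gateway of the branch network is the host's address on
# that bridge, and that is where PostgreSQL listens
DB_HOST=$(ip route | awk '/^default/ { print $3 }')
psql -h ${DB_HOST} -U db_${branch_name} db_${branch_name}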


nginx


We also install it on the host machine and assemble its configuration using docker inspect : this command gives complete information about a container, of which we need just one thing, the IP address, which we substitute into the configuration template.
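
The template itself is not shown in the article; a hypothetical minimal version, with only the {{ ip }} and {{ branch_name }} placeholders taken from the real sed commands below, might look like this:

# write a fresh per-branch config from the template before running sed
cat > ${nginx_conf_path} <<'EOF'
server {
    listen 80;
    server_name {{ branch_name }};
    location / {
        proxy_pass http://{{ ip }};
    }
}
EOF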


And since the name of the network interface coincides with the name of the branch, one single script can generate the configs for all dev environments:


for network in $(ip -o -4 as | awk '{ print $2 }' | cut -d/ -f1); do
    if [ "${network}" == "eth0" ] || [ "${network}" == "lo" ] || [ "${network}" == "docker0" ]; then
        continue
    fi
    IP=$(docker inspect -f "{{.NetworkSettings.Networks.${network}.IPAddress}}" ${container_name})
    sed -i "s/{{ ip }}/${IP}/g" ${nginx_conf_path}
    sed -i "s/{{ branch_name }}/${network}.site.url/g" ${nginx_conf_path}
done

Deleting branches


Because every branch except master is short-lived, everything related to a branch needs to be deleted periodically: the nginx config, the containers, the database.


Unfortunately, I could not find a way to make TeamCity report that a branch has been deleted, so I had to get clever.


Whenever a new branch is added, a file with its name is created on the machine:


 touch /branches/${branch_name} 

This lets us remember not only all the branches we have, but also when each one was last built (that is the file's modification time). It is convenient to delete a branch not immediately, but a week after it stops being used. That looks something like this:


#!/usr/bin/env bash

MAX_BRANCH_AGE=7

branches_to_delete=()
for branch in $(find /branches -maxdepth 1 -mtime +${MAX_BRANCH_AGE}); do
    branch=$(basename ${branch})
    if [ ${branch} == "master" ]; then
        continue
    fi
    branches_to_delete+=(${branch})
done

dbs=()
for db in $(docker exec -i postgresql gosu postgres psql -c "select datname from pg_database" | \
    grep db_ | \
    cut -d'_' -f 2); do
    dbs+=(${db})
done

for branch in ${branches_to_delete[@]}; do
    for db in ${dbs[@]}; do
        if [ ${branch} != ${db} ]; then
            continue
        fi
        # branch file
        rm /branches/${branch}
        # nginx
        rm /etc/nginx/sites-enabled/${branch}
        # containers
        docker rm -f $(docker ps -a | grep ${branch}- | awk '{ print $1 }')
        # db (drop the database first, otherwise the owning user cannot be dropped)
        docker exec -i postgresql gosu postgres psql <<-EOSQL
DROP DATABASE db_${branch};
DROP USER db_${branch};
EOSQL
    done
done

service nginx reload
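
Both housekeeping jobs naturally live in cron; a hypothetical way to install them (the paths and times are mine):

# 'crontab -' reads entries from stdin; note it replaces the whole crontab
crontab - <<'EOF'
0 2 * * * /opt/scripts/refresh_template.sh
0 7 * * * /opt/scripts/cleanup_branches.sh
EOF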

Several pitfalls


As soon as everything worked and was merged into master, master itself stopped building. It turns out that the word master is a keyword for the iproute2 utility, so ifconfig came to be used instead to determine container IPs.


It was:


 ip -o -4 as ${branch_name} | awk '{ print $2 }' | cut -d/ -f1 

became:


 ifconfig ${branch_name} | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}' 

As soon as the thq-1308 branch was created (branches are named after Jira task numbers), it failed to build. All because of the dash: it gets in the way in several places, in PostgreSQL identifiers and in the docker inspect output template.
As a result, we get the container's IP like this:


 docker inspect -f "{{.NetworkSettings.IPAddress}}" ${network}-theq 
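
Another way around the dash in the template, for what it is worth, is the Go template index function, which accepts arbitrary string keys:

# index tolerates map keys that are not valid Go identifiers
docker inspect -f "{{ (index .NetworkSettings.Networks \"${network}\").IPAddress }}" ${network}-theq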

Change the owners of all tables in the new database (after copying from the template they still belong to the template's owner):


tables=`gosu postgres psql -h ${DB_HOST} -qAt -c "SELECT tablename FROM pg_tables WHERE schemaname = 'public';" "${DB_NAME}"`

for tbl in $tables; do
    gosu postgres psql -h ${DB_HOST} -d "${DB_NAME}" <<-EOSQL
ALTER TABLE $tbl OWNER TO "${DB_USER}";
EOSQL
done

That is about all. I have not included the complete commands, scripts (except perhaps the last one), or Ansible roles, since there is nothing special in them, but I hope I have not left out anything important. I am happy to answer questions in the comments.



Source: https://habr.com/ru/post/324142/

