
I would like to share our experience of setting up CI/CD in our company, and to hear your advice if you have a similar project structure.
Who this article may be useful for:
- your projects consist of several separate repositories with applications;
- you want to be sure that each repository passes its tests;
- you want to be sure that the versions of the repositories are compatible with each other;
- you have not gotten around to it yet, but you are planning to move your projects to Docker;
- you want to look at a couple of Ansible playbooks.
I highly recommend the course “Continuous Delivery Using Docker And Ansible”; we used it as a starting point when developing our solution.
Tasks for CI/CD
On average, one of our projects consists of 4-5 repositories that interact with each other via a REST API. Whether this counts as a microservice architecture or not, I don't know for sure, but with that in mind we set the following goals for CI/CD:
- each of the main branches of every repository should contain working (tested) code;
- every branch and every tag must be fully consistent across all repositories of the project;
- it should be possible to deploy the project locally for development, both as a whole and as any single repository;
- it should be possible to deploy the project to different environments: testing, staging, production.
So let's get started.
CI/CD setup
Preliminary step
- we switched to git-flow. It turned out that our custom VCS workflow was redundant and complicated compared to the “classics”, especially for newcomers;
- our weekly sprint is a new version of the product. Every task, whether a bug or a feature, is attached to a specific version in the task manager. At the end of the sprint each repository gets a tag with the new version, even if nothing was done in that particular repository; the only exception is when no repository of the project was touched during the sprint;
- we forbade pushing directly to the master, develop and release branches; changes go in only through pull requests;
- we added a hook on pull requests to the branches above that triggers a build and tests in Jenkins;
- we forbade merging pull requests without a successful Jenkins run and without code review approval.
We chose Jenkins as the CI tool; it runs the unit tests and the API integration tests.
As the CD tooling we use Ansible + Docker.
The first step. Setting up an individual repository
We changed the structure of each of our repositories within the project:
app
|-src
|-docker
| |-ci
| |-develop
| |-release
|-requirements
|-Jenkinsfile
|-Makefile
A hook configured on pull requests tells Jenkins that the repository needs to be tested. Jenkins looks up and executes the Jenkinsfile, which in turn calls Makefile targets for building the container and running the tests. The Makefile runs docker-compose commands from the ./docker/ci directory. Why didn't we put the docker-compose commands straight into the Jenkinsfile? To keep the Jenkinsfile identical for all repositories: different repositories need different docker-compose commands to build and run, and these differences are encapsulated in the Makefile, which always exposes the same build-and-test interface to the Jenkinsfile.
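For illustration, here is a minimal sketch of what a ./docker/ci/docker-compose.yml could look like in such a layout. The service names, Dockerfile path and test command are assumptions for the example, not the actual files from our repositories:

```yaml
# ./docker/ci/docker-compose.yml (hypothetical sketch)
# Builds the application image and runs its unit tests in an isolated container.
version: "2"

services:
  app:
    build:
      context: ../..                        # repository root with ./src and requirements
      dockerfile: docker/ci/Dockerfile
    command: python -m pytest src/tests     # assumed test runner
    depends_on:
      - db

  # a throwaway dependency needed only for the tests
  db:
    image: postgres:9.6
```

The Makefile target that Jenkins calls would then boil down to something like `docker-compose -f docker/ci/docker-compose.yml up --abort-on-container-exit` plus a check of the exit code.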
N.B. Links to example repositories are at the end of the article.
The Makefile also contains targets for building and running the repository locally in develop mode: the source files are mounted from the host machine into the Docker container, so to see new changes it is enough to restart docker-compose, which is also done through a make command. The Makefile together with ./docker/develop is responsible for this.
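As a rough sketch (paths and the service name are again invented for the example), the develop variant differs from the CI one mainly by the bind mount of the sources:

```yaml
# ./docker/develop/docker-compose.yml (hypothetical sketch)
version: "2"

services:
  app:
    build:
      context: ../..
      dockerfile: docker/develop/Dockerfile
    volumes:
      - ../../src:/app/src      # sources from the host are forwarded into the container
    ports:
      - "8000:8000"             # assumed application port
```

With a mount like this, the corresponding make target only needs to restart docker-compose for the changes to be picked up.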
The ./docker/release directory contains the build settings for the testing/staging environments, etc. These settings will be used later.
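A release compose file, by contrast, would bake the sources into the image and name it so that it can be pushed to a registry. Again, this is only an assumed sketch; the registry address and image name are placeholders:

```yaml
# ./docker/release/docker-compose.yml (hypothetical sketch)
version: "2"

services:
  app:
    build:
      context: ../..
      dockerfile: docker/release/Dockerfile
    image: registry.example.com/todo/app:${VER}   # the version tag is supplied by the build tooling
```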
The second step. Setting up an additional devops repository
The purpose of this shared repository is to keep the project consistent when deploying the repositories that belong to it, and to make integration testing possible.
Repository structure
devops
|-ansible
| |-plays
| |-roles
|-projects
| |-project_1
| | |-apps
| | | |-app_1
| | | |-app_2
| | | |-app_3
| | | |-...
| | |-docker
| | | |-ci-api
| | | |-ci-selenium-gherkin
| | | |-develop
| | | |-testing
| | | |-staging
| | | |-production
| | |-Makefile
|-requirements
|-Jenkinsfile
|-Makefile
First, how this repository performs integration testing. It is not the simplest thing, but I will try to explain.
As with an application repository, there are a Jenkinsfile and a Makefile that run the build and test commands on a pull request. The build settings are located in ./projects/PROJECT/docker/ci-api, where “PROJECT” is the name of the current project. The build clones each repository at the required branch/tag and runs the API-testing container.
The “required branch/tag” is what we want to test: either a branch common to all repositories (master, develop, release) or a version tag of the project. The tag must be placed in every repository. Then we create a branch in the devops repository whose name matches the “required” one, and after that we can open a pull request.
Jenkins tries to build the project at the selected tag/branch; if a repository cannot be found at it, testing fails. If the project builds successfully, we launch the “test framework”, in which we use Postman and its command-line runner, Newman. If the tests pass, we merge the pull request and put a “tested” tag on the devops repository. The presence of this tag indicates that this version of the project has been tested.
To run the Postman tests, we need a link to the shared collection, which we insert into the container's command.
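To give an idea of what the ci-api build might look like, here is a hedged sketch: the application services plus a Newman container that runs the shared Postman collection. The collection URL, service names and paths are placeholders, not our real configuration:

```yaml
# ./projects/PROJECT/docker/ci-api/docker-compose.yml (hypothetical sketch)
version: "2"

services:
  todo:
    build: ./todo        # repositories are cloned next to this file at the required branch/tag
  crm:
    build: ./crm
    depends_on:
      - todo

  # Newman runs the shared Postman collection against the services above
  newman:
    image: postman/newman:alpine
    command: run "https://api.getpostman.com/collections/COLLECTION_ID?apikey=API_KEY"
    depends_on:
      - todo
      - crm
```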
So far this is our only kind of integration testing; we will add Gherkin/Selenium testing a bit later, and at least the docker/ci-selenium-gherkin directory is already in place.
Now about the CD functions in this repository.
Here, in ./ansible, lives the control panel of the whole project for building images and delivering them to different servers and environments, namely:
- develop.yml - the settings for deploying the whole project locally;
- make-images.yml - building Docker images for a specific version of the project and pushing them to the Docker registry;
- deploy-and-run-images.yml - deploying the project to servers in different environments.
The name at the beginning of each item is the playbook that performs that scenario.
They are started with a command like:
$ ansible-playbook -i ../testing.ini make-images.yml -e 'project=todo ver=2017.1'
where:
- -i ../testing.ini is the inventory, i.e. the hosts and the target environment;
- make-images.yml is the playbook to run;
- -e 'project=todo ver=2017.1' are the extra variables passed to the playbook: the project name and the version.
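As an illustration of what such a playbook might contain, here is a rough sketch of make-images.yml; the task layout, the env variable (assumed to come from the inventory) and the paths are assumptions rather than our actual code:

```yaml
# ./ansible/plays/make-images.yml (hypothetical sketch)
# 'project' and 'ver' come from -e '...'; 'env' is assumed to be defined in the inventory (e.g. testing.ini)
- hosts: localhost
  connection: local
  tasks:
    - name: Build release images for every application of the project
      command: docker-compose build
      args:
        chdir: "../../projects/{{ project }}/docker/{{ env }}"
      environment:
        VER: "{{ ver }}"

    - name: Push the built images to the docker registry
      command: docker-compose push
      args:
        chdir: "../../projects/{{ project }}/docker/{{ env }}"
      environment:
        VER: "{{ ver }}"
```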
The file ./ansible/plays/group_vars/all.yml holds the project settings:
- which repositories belong to the project;
- which Docker registry to use and the login/password for it;
- individual settings for each environment, etc.
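A hedged sketch of what such an all.yml might hold; all keys, URLs and values here are invented for illustration:

```yaml
# ./ansible/plays/group_vars/all.yml (hypothetical sketch)
projects:
  todo:
    repos:
      - name: todo_todo
        url: git@github.com:example/todo_todo.git
      - name: todo_crm
        url: git@github.com:example/todo_crm.git

docker_registry: registry.example.com
docker_registry_user: "{{ vault_registry_user }}"         # assumed to be stored in an Ansible vault
docker_registry_password: "{{ vault_registry_password }}"

environments:
  testing:
    compose_dir: testing
  staging:
    compose_dir: staging
  production:
    compose_dir: production
```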
As you can see, even though this repository is devoted to a single project, we still pass the project name as a playbook parameter, and the project directory sits inside the projects directory. The reason is that this devops repository is a fork of a master devops repository, from which the devops repositories of the other projects are also forked. This structure lets us exchange the code for common settings and Ansible commands between the master and the forks, and between the forks themselves, without the risk of breaking anything. More precisely, the ansible directory is shared, and refactoring in it can easily be transferred from the master to a fork and vice versa, while all project-specific settings live in their own directory under projects. So a pull from the master or from a neighbouring devops repository will not conflict with the current one.
Let's go back to the docker/release directory in the application repositories, which holds the Dockerfile responsible for building for the testing, staging and production environments, i.e. for everything except develop. On its own, the release build of a single repository is of little use; it only makes sense together with the rest of the project's repositories. Ansible is configured so that for a develop build it takes the Dockerfile from each repository's docker/develop directory, and for a build targeting a release environment it takes the one from docker/release.
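In Ansible terms this choice can be expressed with a simple variable; the sketch below is an assumption of how it might look, with invented variable and path names:

```yaml
# hypothetical sketch of the tasks that pick the Dockerfile directory
- name: Decide which docker directory to build from
  set_fact:
    dockerfile_dir: "{{ 'develop' if env == 'develop' else 'release' }}"

- name: Build the application image
  command: docker-compose build
  args:
    chdir: "{{ app_src_dir }}/docker/{{ dockerfile_dir }}"
```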
In total, we ended up with:
- the ability to clone any repository and run its develop version;
- each repository is checked by Jenkins;
- a shared repository that runs integration tests across all repositories at a common version;
- an Ansible playbook that deploys and runs all project repositories locally in develop mode;
- an Ansible playbook that builds images for the selected environment and pushes them to the Docker registry;
- an Ansible playbook that configures the project on the server;
- an Ansible playbook that starts the application on the server.
Links to the demo applications:
- todo_todo - a fork of the todobackend.com project; we changed the structure and added tests. It creates todo items;
- todo_crm - creates users, sends a request to todo_todo, creates a todo item and binds it to the user;
- todo_ops - the devops repository with the configs.